alva

Build and deploy agentic finance applications on the Alva platform. Access 250+ financial data sources (crypto, equities, macro, on-chain, social), run cloud-side analytics, backtest trading strategies, and release interactive playbooks -- all from your AI agents.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "alva" with this command: npx skills add alva-ai/skills/alva-ai-skills-alva

Alva

What is Alva

Alva is an agentic finance platform. It provides unified access to 250+ financial data sources spanning crypto, equities, ETFs, macroeconomic indicators, on-chain analytics, and social sentiment -- including spot and futures OHLCV, funding rates, company fundamentals, price targets, insider and senator trades, earnings estimates, CPI, GDP, Treasury rates, exchange flows, DeFi metrics, news feeds, social media, and more.

What the Alva Skill Enables

The Alva skill connects any AI agent or IDE to the full Alva platform. With it you can:

  • Access financial data -- query any of Alva's 250+ data SDKs programmatically, or bring your own data via HTTP API or direct upload.
  • Run cloud-side analytics -- write JavaScript that executes on Alva Cloud in a secure runtime. No local compute, no dependencies, no infrastructure to manage.
  • Build agentic playbooks -- create data pipelines, trading strategies, and scheduled automations that run continuously on Alva Cloud.
  • Deploy trading strategies -- backtest with the Altra trading engine and run continuous live paper trading.
  • Release and share -- turn your work into a hosted playbook web app at https://alva.ai/<username>/playbooks/<playbook_id>, and share it with the world.

In short: turn your ideas into a forever-running finance agent that gets things done for you.

Capabilities & Common Workflows

1. ALFS (Alva FileSystem)

The foundation of the platform. ALFS is a cloud filesystem with per-user isolation. Every user has a private home directory; all paths are private by default and only accessible by the owning user. Public read access can be explicitly granted on specific paths via grant. Scripts, data feeds, playbook assets, and shared libraries all live on ALFS.

Key operations: read, write, mkdir, stat, readdir, remove, rename, copy, symlink, chmod, grant, revoke.

2. JS Runtime

Run JavaScript on Alva Cloud in a sandboxed V8 isolate. Code executed inside Alva's /api/v1/run runtime runs entirely on Alva's servers -- it cannot access the host machine's filesystem, environment variables, or processes. The runtime has access to ALFS, all 250+ SDKs, HTTP networking, LLM access, and the Feed SDK.

3. SDKHub

250+ built-in financial data SDKs. To find the right SDK for a task, use the two-step retrieval flow:

  1. Pick a partition from the index below.
  2. Call GET /api/v1/sdk/partitions/:partition/summary to see module summaries, then load the full doc for the chosen module.

SDK Partition Index

| Partition | Description |
| --- | --- |
| spot_market_price_and_volume | Spot OHLCV for crypto and equities. Price bars, volume, historical candles. |
| crypto_futures_data | Perpetual futures: OHLCV, funding rates, open interest, long/short ratio. |
| crypto_technical_metrics | Crypto technical & on-chain indicators: MA, EMA, RSI, MACD, Bollinger, MVRV, SOPR, NUPL, whale ratio, market cap, FDV, etc. (20 modules) |
| crypto_exchange_flow | Exchange inflow/outflow data for crypto assets. |
| crypto_fundamentals | Crypto market fundamentals: circulating supply, max supply, market dominance. |
| crypto_screener | Screen crypto assets by technical metrics over custom time ranges. |
| company_crypto_holdings | Public companies' crypto token holdings (e.g. MicroStrategy BTC). |
| equity_fundamentals | Stock fundamentals: income statements, balance sheets, cash flow, margins, PE, PB, ROE, ROA, EPS, market cap, dividend yield, enterprise value, etc. (31 modules) |
| equity_estimates_and_targets | Analyst price targets, consensus estimates, earnings guidance. |
| equity_events_calendar | Dividend calendar, stock split calendar. |
| equity_ownership_and_flow | Institutional holdings, insider trades, senator trading activity. |
| stock_screener | Screen stocks by sector, industry, country, exchange, IPO date, earnings date, financial & technical metrics. (9 modules) |
| stock_technical_metrics | Stock technical indicators: beta, volatility, Bollinger, EMA, MA, MACD, RSI-14, VWAP, avg daily dollar volume. |
| etf_fundamentals | ETF holdings breakdown. |
| macro_and_economics_data | CPI, GDP, unemployment, federal funds rate, Treasury rates, PPI, consumer sentiment, VIX, TIPS, nonfarm payroll, retail sales, recession probability, etc. (20 modules) |
| technical_indicator_calculation_helpers | 50+ pure calculation helpers: RSI, MACD, Bollinger Bands, ATR, VWAP, Ichimoku, Parabolic SAR, KDJ, OBV, etc. Input your own price arrays. |
| feed_widgets | Social & news data feeds: news, Twitter/X, YouTube, Reddit, podcasts, web search (Brave, Grok). |
| ask | General news and market articles. |

You can also bring your own data by uploading files to ALFS or fetching from external HTTP APIs within the runtime.

4. Altra (Alva Trading Engine)

A feed-based event-driven backtesting engine for quantitative trading strategies. A trading strategy IS a feed: all output data (targets, portfolio, orders, equity, metrics) lives under a single feed's ALFS path. Altra supports historical backtesting and continuous live paper trading, with custom indicators, portfolio simulation, and performance analytics.

5. Deploy on Alva Cloud

Once your data analytics scripts and feeds are ready, deploy them as scheduled cronjobs on Alva Cloud. They run continuously on your chosen schedule (e.g. every hour, every day). All data is private by default; grant public access to specific paths so anyone -- or any playbook page -- can read the data.

6. Build the Playbook Web App

After your data pipelines are deployed and producing data, build the playbook's web interface. Create HTML5 pages that read from Alva's data gateway and visualize the results. Follow the Alva Design System for styling, layout, and component guidelines.

7. Release

Three phases:

  1. Write HTML to ALFS: POST /api/v1/fs/write the playbook HTML to ~/playbooks/{name}/index.html.
  2. Create playbook draft: POST /api/v1/draft/playbook — creates DB records, writes draft files and playbook.json to ALFS automatically.
  3. Call release API: POST /api/v1/release/playbook — creates release DB records, uploads HTML to CDN, and writes release files to ALFS automatically. Returns playbook_id (numeric).

Once released, the playbook is accessible at https://alva.ai/<username>/playbooks/<playbook_id> — ready to share with the world. Use the playbook_id from the release response and the username from GET /api/v1/me to construct this URL.
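As a sketch, assembling that URL from the two responses is a one-liner (the helper name is illustrative, not part of any Alva SDK):

```javascript
// Illustrative helper (not an Alva API): build the public playbook URL
// from the username (GET /api/v1/me) and the numeric playbook_id
// returned by the release API.
function playbookUrl(username, playbookId) {
  return `https://alva.ai/${username}/playbooks/${playbookId}`;
}

console.log(playbookUrl("alice", 99)); // https://alva.ai/alice/playbooks/99
```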


Detailed sub-documents (read these for in-depth reference):

| Document | Contents |
| --- | --- |
| api-reference.md | Full REST API reference (filesystem, run, deploy, user info, time series paths) |
| jagent-runtime.md | Writing jagent scripts: module system, built-in modules, async model, constraints |
| feed-sdk.md | Feed SDK guide: creating data feeds, time series, upstreams, state management |
| altra-trading.md | Altra backtesting engine: strategies, features, signals, testing, debugging |
| deployment.md | Deploying scripts as cronjobs for scheduled execution |
| design-system.md | Alva Design System entry point: tokens, typography, layout; links to widget, component, and playbook specs |
| adk.md | Agent Development Kit: adk.agent() API, tool calling, ReAct loop, examples |

Setup

All configuration is done via environment variables.

| Variable | Required | Description |
| --- | --- | --- |
| ALVA_API_KEY | yes | Your API key (create and manage at alva.ai) |
| ALVA_ENDPOINT | no | Alva API base URL. Defaults to https://api-llm.prd.alva.ai if not set |

First-Time Setup

If ALVA_API_KEY is not set, ask the user whether they already have an API key. Then follow the matching path:

Path A — User already has a key:

Ask them to paste the key. Then set it up and verify on their behalf:

export ALVA_API_KEY="<the key they pasted>"
curl -s -H "X-Alva-Api-Key: $ALVA_API_KEY" https://api-llm.prd.alva.ai/api/v1/me

On success ({"id":...,"username":"..."}), suggest persisting the key in their shell profile (~/.zshrc, ~/.bashrc, etc.) so it's available in future sessions. Then ask what they want to do — offer concrete starting points like: build a playbook, explore financial data, backtest a trading strategy, or set up a data pipeline.

Path B — User does not have a key:

  1. Sign up at alva.ai (if no account yet).
  2. Log in → Settings → API Keys → Create New Key → copy the key.
  3. Paste it back — then set up and verify (same as Path A).

Making API Requests

All API examples in this skill use HTTP notation (METHOD /path). Every request requires the X-Alva-Api-Key header unless marked (public, no auth).

Curl templates for reference:

# Authenticated
curl -s -H "X-Alva-Api-Key: $ALVA_API_KEY" "$ALVA_ENDPOINT{path}"

# Authenticated + JSON body
curl -s -H "X-Alva-Api-Key: $ALVA_API_KEY" -H "Content-Type: application/json" \
  "$ALVA_ENDPOINT{path}" -d '{body}'

# Public read (no API key, absolute path)
curl -s "$ALVA_ENDPOINT{path}"

Discovering User Info

Retrieve your user_id and username:

GET /api/v1/me
→ {"id":1,"username":"alice"}

Quick API Reference

See api-reference.md for full details.

Filesystem (/api/v1/fs/)

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/v1/fs/read?path={path} | Read file content (raw bytes) or time series data |
| POST | /api/v1/fs/write | Write file (raw body or JSON with data field) |
| GET | /api/v1/fs/stat?path={path} | Get file/directory metadata |
| GET | /api/v1/fs/readdir?path={path} | List directory entries |
| POST | /api/v1/fs/mkdir | Create directory (recursive) |
| DELETE | /api/v1/fs/remove?path={path} | Remove file or directory |
| POST | /api/v1/fs/rename | Rename / move |
| POST | /api/v1/fs/copy | Copy file |
| POST | /api/v1/fs/symlink | Create symlink |
| GET | /api/v1/fs/readlink?path={path} | Read symlink target |
| POST | /api/v1/fs/chmod | Change permissions |
| POST | /api/v1/fs/grant | Grant read/write access to a path |
| POST | /api/v1/fs/revoke | Revoke access |

Paths: ~/data/file.json (home-relative) or /alva/home/<username>/... (absolute). Public reads use absolute paths without API key.
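A minimal sketch of the home-relative to absolute conversion described above (the helper is hypothetical, not an Alva API):

```javascript
// Hypothetical helper: expand a home-relative ALFS path into the
// absolute form required for unauthenticated public reads.
function toAbsolutePath(path, username) {
  return path.startsWith("~/")
    ? `/alva/home/${username}/${path.slice(2)}`
    : path; // already absolute
}

console.log(toAbsolutePath("~/data/file.json", "alice"));
// /alva/home/alice/data/file.json
```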

Run (/api/v1/run)

| Method | Endpoint | Description |
| --- | --- | --- |
| POST | /api/v1/run | Execute JavaScript (inline code or entry_path to a script on the filesystem) |

Deploy (/api/v1/deploy/)

| Method | Endpoint | Description |
| --- | --- | --- |
| POST | /api/v1/deploy/cronjob | Create a cronjob |
| GET | /api/v1/deploy/cronjobs | List cronjobs (paginated) |
| GET | /api/v1/deploy/cronjob/:id | Get cronjob details |
| PATCH | /api/v1/deploy/cronjob/:id | Update cronjob (name, cron, args) |
| DELETE | /api/v1/deploy/cronjob/:id | Delete cronjob |
| POST | /api/v1/deploy/cronjob/:id/pause | Pause cronjob |
| POST | /api/v1/deploy/cronjob/:id/resume | Resume cronjob |

Release (/api/v1/release/)

| Method | Endpoint | Description |
| --- | --- | --- |
| POST | /api/v1/release/feed | Register feed (DB + link to cronjob task). Call after deploying cronjob. |
| POST | /api/v1/release/playbook | Release playbook for public hosting. Call after writing playbook HTML. |

Name uniqueness: The name in both releaseFeed and releasePlaybook must be unique within your user space. Use GET /api/v1/fs/readdir?path=~/feeds or GET /api/v1/fs/readdir?path=~/playbooks to check existing names before releasing.

SDK Documentation (/api/v1/sdk/)

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/v1/sdk/doc?name={module_name} | Get full doc for a specific SDK module |
| GET | /api/v1/sdk/partitions | List all SDK partitions |
| GET | /api/v1/sdk/partitions/:partition/summary | Get one-line summaries of all modules in a partition |

SDK retrieval flow: pick a partition from the index above → call /partitions/:partition/summary to see module summaries → call /sdk/doc?name=... to load the full doc for the chosen module.

Trading Pair Search (/api/v1/trading-pairs/)

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/v1/trading-pairs/search?q={q} | Search trading pairs by base asset (fuzzy match) |

Search before writing code to confirm which symbols and exchanges Alva supports. The endpoint supports exact match and prefix fuzzy search by base asset or alias; use comma-separated queries to search multiple assets at once.

GET /api/v1/trading-pairs/search?q=BTC,ETH
→ {"trading_pairs":[{"base":"BTC","quote":"USDT","symbol":"BINANCE_PERP_BTC_USDT","exchange":"binance","type":"crypto-perp","fee_rate":0.001,...},...]}

User Info

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/v1/me | Get authenticated user's id and username |

Runtime Modules Quick Reference

Scripts executed via /api/v1/run run in a sandboxed V8 isolate on Alva's servers -- they cannot access the host machine's filesystem, environment variables, or shell. Host-agent permissions still apply. See jagent-runtime.md for full details.

| Module | require() | Description |
| --- | --- | --- |
| alfs | require("alfs") | Filesystem (uses absolute paths /alva/home/<username>/...) |
| env | require("env") | userId, username, args from request |
| net/http | require("net/http") | fetch(url, init) for async HTTP requests |
| @alva/algorithm | require("@alva/algorithm") | Statistics |
| @alva/feed | require("@alva/feed") | Feed SDK for persistent data pipelines + FeedAltra trading engine |
| @alva/adk | require("@alva/adk") | Agent SDK for LLM requests — agent() for LLM agents with tool calling |
| @test/suite | require("@test/suite") | Jest-style test framework (describe, it, expect, runTests) |

SDKHub: 250+ data modules available via require("@arrays/crypto/ohlcv:v1.0.0") etc. Version suffix is optional (defaults to v1.0.0). To discover function signatures and response shapes, use the SDK doc API (GET /api/v1/sdk/doc?name=...).

Key constraints: No top-level await (wrap script in (async () => { ... })();). No Node.js builtins (fs, path, http). Module exports are frozen.
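The async-IIFE constraint above looks like this in practice (a plain JavaScript sketch; the awaited value stands in for an SDK or ALFS call):

```javascript
// No top-level await: wrap all async work in an async IIFE, and keep a
// handle to the resulting promise if the value is needed afterwards.
const main = (async () => {
  const data = await Promise.resolve({ ok: true }); // stand-in for an async SDK call
  return data;
})();

main.then((d) => console.log(JSON.stringify(d))); // {"ok":true}
```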


Feed SDK Quick Reference

See feed-sdk.md for full details.

Feeds are persistent data pipelines that store time series data, readable via filesystem paths.

const { Feed, feedPath, makeDoc, num } = require("@alva/feed");
const { getCryptoKline } = require("@arrays/crypto/ohlcv:v1.0.0");
const { indicators } = require("@alva/algorithm");

const feed = new Feed({ path: feedPath("btc-ema") });

feed.def("metrics", {
  prices: makeDoc("BTC Prices", "Close + EMA10", [num("close"), num("ema10")]),
});

(async () => {
  await feed.run(async (ctx) => {
    const raw = await ctx.kv.load("lastDate");
    const lastDateMs = raw ? Number(raw) : 0;

    const now = Math.floor(Date.now() / 1000);
    const start =
      lastDateMs > 0 ? Math.floor(lastDateMs / 1000) : now - 30 * 86400;

    const bars = getCryptoKline({
      symbol: "BTCUSDT",
      start_time: start,
      end_time: now,
      interval: "1h",
    })
      .response.data.slice()
      .reverse();
    const closes = bars.map((b) => b.close);
    const ema10 = indicators.ema(closes, { period: 10 });

    const records = bars
      .map((b, i) => ({
        date: b.date,
        close: b.close,
        ema10: ema10[i] || null,
      }))
      .filter((r) => r.date > lastDateMs);

    if (records.length > 0) {
      await ctx.self.ts("metrics", "prices").append(records);
      await ctx.kv.put("lastDate", String(records[records.length - 1].date));
    }
  });
})();

Feed output is readable at: ~/feeds/btc-ema/v1/data/metrics/prices/@last/100


Data Modeling Patterns

All data produced by a feed should use feed.def() + ctx.self.ts().append(). Do not use alfs.writeFile() for feed output data.

Pattern A -- Snapshot (latest-wins): For data that represents current state (company detail, ratings, price target consensus). Use start-of-day as the date so re-runs overwrite.

const today = new Date();
today.setHours(0, 0, 0, 0);
await ctx.self
  .ts("info", "company")
  .append([
    { date: today.getTime(), name: company.name, sector: company.sector },
  ]);

Read @last/1 for current snapshot, @last/30 for 30-day history.
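The start-of-day trick can be isolated into a small pure helper (a sketch, not part of the Feed SDK) to see why same-day re-runs overwrite:

```javascript
// Normalize any timestamp to local start-of-day, so every run within
// the same day writes the same `date` key and the latest snapshot wins.
function startOfDayMs(ms) {
  const d = new Date(ms);
  d.setHours(0, 0, 0, 0);
  return d.getTime();
}

const morning = new Date(2025, 0, 15, 9, 30).getTime();
const evening = new Date(2025, 0, 15, 18, 0).getTime();
console.log(startOfDayMs(morning) === startOfDayMs(evening)); // true
```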

Pattern B -- Event log: For timestamped events (insider trades, news, senator trades). Each event uses its natural date. Same-date records are auto-grouped.

const records = trades.map((t) => ({
  date: new Date(t.transactionDate).getTime(),
  name: t.name,
  type: t.type,
  shares: t.shares,
}));
await ctx.self.ts("activity", "insiderTrades").append(records);

Pattern C -- Tabular (versioned batch): For data where the whole set refreshes each run (top holders, EPS estimates). Stamp all records with the same run timestamp; same-date grouping stores them as a batch.

const now = Date.now();
const records = holdings.map((h, i) => ({
  date: now,
  rank: i + 1,
  name: h.name,
  marketValue: h.value,
}));
await ctx.self.ts("research", "institutions").append(records);

| Data Type | Pattern | Date Strategy | Read Query |
| --- | --- | --- | --- |
| OHLCV, indicators | Time series (standard) | Bar timestamp | @last/252 |
| Company detail, ratings | Snapshot (A) | Start of day | @last/1 |
| Insider trades, news | Event log (B) | Event timestamp | @last/50 |
| Holdings, estimates | Tabular (C) | Run timestamp | @last/N |

See feed-sdk.md for detailed data modeling examples and deduplication behavior.


Deploying Feeds

Every feed follows a 6-step lifecycle:

  1. Write -- define schema + incremental logic with ctx.kv
  2. Upload -- write script to ~/feeds/<name>/v1/src/index.js
  3. Test -- POST /api/v1/run with entry_path to verify output
  4. Grant -- make feed public via POST /api/v1/fs/grant
  5. Deploy -- POST /api/v1/deploy/cronjob for scheduled execution
  6. Release -- POST /api/v1/release/feed to register the feed in the database (requires the task_id from the deploy step)

| Data Type | Recommended Schedule | Rationale |
| --- | --- | --- |
| Stock OHLCV + technicals | 0 */4 * * * (every 4h) | Markets update during trading hours |
| Company detail, price targets | 0 8 * * * (daily 8am) | Changes infrequently |
| Insider/senator trades | 0 8 * * * (daily 8am) | SEC filings are daily |
| Earnings estimates | 0 8 * * * (daily 8am) | Updated periodically |

See deployment.md for the full deployment guide and API reference.


Debugging Feeds

Resetting Feed Data (development only)

During development, use the REST API to clear stale or incorrect data. Do not use this in production.

# Clear a specific time series output
DELETE /api/v1/fs/remove?path=~/feeds/my-feed/v1/data/market/ohlcv&recursive=true

# Clear an entire group (all outputs under "market")
DELETE /api/v1/fs/remove?path=~/feeds/my-feed/v1/data/market&recursive=true

# Full reset: clear ALL data + KV state (removes the data mount, re-created on next run)
DELETE /api/v1/fs/remove?path=~/feeds/my-feed/v1/data&recursive=true

Inline Debug Snippets

Test SDK shapes before building a full feed:

POST /api/v1/run
{"code":"const { getCryptoKline } = require(\"@arrays/crypto/ohlcv:v1.0.0\"); JSON.stringify(Object.keys(getCryptoKline({ symbol: \"BTCUSDT\", start_time: 0, end_time: 0, interval: \"1h\" })));"}

Altra Trading Engine Quick Reference

See altra-trading.md for full details.

Altra is a feed-based event-driven backtesting engine. A trading strategy IS a feed: all output data lives under a single ALFS path. Decisions execute at bar CLOSE.

const { createOHLCVProvider } = require("@arrays/data/ohlcv-provider:v1.0.0");
const { FeedAltraModule } = require("@alva/feed");
const { FeedAltra, e, Amount } = FeedAltraModule;

const altra = new FeedAltra(
  {
    path: "~/feeds/my-strategy/v1",
    startDate: Date.parse("2025-01-01T00:00:00Z"),
    portfolioOptions: { initialCash: 1_000_000 },
    simOptions: { simTick: "1min", feeRate: 0.001 },
    perfOptions: { timezone: "UTC", marketType: "crypto" },
  },
  createOHLCVProvider(),
);

const dg = altra.getDataGraph();
dg.registerOhlcv("BINANCE_SPOT_BTC_USDT", "1d");
dg.registerFeature({ name: "rsi" /* ... */ });

altra.setStrategy(strategyFn, {
  trigger: { type: "events", expr: e.ohlcv("BINANCE_SPOT_BTC_USDT", "1d") },
  inputConfig: {
    ohlcvs: [{ id: { pair: "BINANCE_SPOT_BTC_USDT", interval: "1d" } }],
    features: [{ id: "rsi" }],
  },
  initialState: {},
});

(async () => {
  await altra.run(Date.now());
})();

ADK (Agent Development Kit) Quick Reference

See adk.md for full details.

ADK is a universal agent development kit that runs inside the Jagent V8 runtime. Use it to build LLM-powered agents that autonomously reason, call tools, and produce structured output — ideal for periodic research, insight generation, and document analysis feeds.

const adk = require("@alva/adk");

const result = await adk.agent({
  system: "You are a senior equity analyst...",
  prompt: "Analyze NVDA quarterly earnings trends.",
  tools: [/* tool definitions */],
  maxTurns: 5,
});

log(result.content);    // Final LLM text response
log(result.toolCalls);  // All tool invocations made

When to Use ADK

| Use Case | Description |
| --- | --- |
| Periodic research feeds | Scheduled agents that fetch data, reason over it, and produce structured insights (e.g. weekly earnings analysis, daily macro commentary) |
| Document / data analysis | Agents that read documents or datasets, extract key points, and output structured summaries |
| Multi-source synthesis | Agents that call multiple data SDKs, cross-reference findings, and produce a unified research note |
| Agentic data pipelines | Feed scripts where the "transform" step requires LLM reasoning (classification, sentiment, summarization) |

Core API

const result = await adk.agent({
  system,    // (optional) system prompt — define the agent's role and output format
  prompt,    // user query or task description
  tools,     // array of Tool objects the agent can invoke
  maxTurns,  // max ReAct loop iterations (default: 10)
});
// result: { content: string, turns: number, toolCalls: ToolCallRecord[] }

Tool Calling — What Tools Are For

Tools are the agent's hands. Use them to:

  • Query data: Fetch upstream data from Alva SDKs, external HTTP APIs, or ALFS files. The agent decides which data to retrieve based on its reasoning.
  • Collect context: Pull in multiple data sources (earnings, macro indicators, news) so the agent can cross-reference and synthesize.
  • Store / fetch memory: Read and write to ALFS or ctx.kv to persist state across runs — e.g. store a running summary, retrieve last analysis for comparison, or cache intermediate results.

// Example: tools for data query + memory
const tools = [
  {
    name: "getEarnings",
    description: "Fetch quarterly earnings for a stock symbol",
    parameters: { type: "object", properties: { symbol: { type: "string" } }, required: ["symbol"] },
    fn: async (args) => {
      const { getCompanyIncomeStatements } = require("@arrays/data/stock/company/income:v1.0.0");
      return getCompanyIncomeStatements({ symbol: args.symbol, period_type: "quarter",
        start_time: Date.parse("2023-01-01"), end_time: Date.now(), limit: 20 }).response.metrics;
    },
  },
  {
    name: "readMemory",
    description: "Read previous analysis from memory",
    parameters: { type: "object", properties: { key: { type: "string" } }, required: ["key"] },
    fn: async (args) => {
      const alfs = require("alfs");
      const env = require("env");
      try {
        return JSON.parse(await alfs.readFile(`/alva/home/${env.username}/data/memory/${args.key}.json`));
      } catch { return null; }
    },
  },
  {
    name: "writeMemory",
    description: "Store analysis result to memory for future runs",
    parameters: { type: "object", properties: { key: { type: "string" }, value: { type: "object" } }, required: ["key", "value"] },
    fn: async (args) => {
      const alfs = require("alfs");
      const env = require("env");
      await alfs.writeFile(`/alva/home/${env.username}/data/memory/${args.key}.json`, JSON.stringify(args.value));
      return { saved: true };
    },
  },
];

General Best Practices

  • Keep tools focused: Each tool should do one thing (fetch data, compute a metric, read/write memory). Let the agent compose them.
  • Combine with Feed SDK: Store agent output as time series via ctx.self.ts().append() for queryable, versioned research history.
  • Idempotent runs: Use ctx.kv to track last-processed dates, so re-runs don't duplicate insights.

Deployment Quick Reference

See deployment.md for full details.

Deploy feed scripts or tasks as cronjobs for scheduled execution:

POST /api/v1/deploy/cronjob
{"path":"~/feeds/btc-ema/v1/src/index.js","cron_expression":"0 */4 * * *","name":"BTC EMA Update"}

Cronjobs execute the script via the same jagent runtime as /api/v1/run. Max 20 cronjobs per user. Min interval: 1 minute.

After deploying a cronjob, register the feed, create a playbook draft, then release the playbook for public hosting. The playbook HTML must already be written to ALFS at ~/playbooks/{name}/index.html via fs/write before releasing.

Important: Feed names and playbook names must be unique within your user space. Before creating a new feed or playbook, use GET /api/v1/fs/readdir?path=~/feeds or GET /api/v1/fs/readdir?path=~/playbooks to check for existing names and avoid conflicts.

# 1. Release feed (register in DB, link to cronjob)
POST /api/v1/release/feed
{"name":"btc-ema","version":"1.0.0","task_id":42}
→ {"feed_id":100,"name":"btc-ema","feed_major":1}

# 2. Create playbook draft (creates DB record + ALFS draft files automatically)
POST /api/v1/draft/playbook
{"name":"btc-dashboard","description":"BTC market dashboard","feeds":[{"feed_id":100}]}
→ {"playbook_id":99,"playbook_version_id":200}

# 3. Release playbook (reads HTML from ALFS, uploads to CDN, writes release files automatically)
POST /api/v1/release/playbook
{"name":"btc-dashboard","version":"v1.0.0","feeds":[{"feed_id":100}]}
→ {"playbook_id":99,"version":"v1.0.0","published_url":"https://alice.playbook.alva.ai/btc-dashboard/v1.0.0/index.html"}

# After release, output the alva.ai playbook link to the user:
# https://alva.ai/<username>/playbooks/<playbook_id>
# e.g. https://alva.ai/alice/playbooks/99

Alva Design System

All Alva playbook pages, dashboards, and widgets must follow the Alva Design System. Start with design-system.md: it is the single global entry point for tokens, typography, page-level layout rules, and the reading path to the more detailed design references.

Read only what you need: design-system.md links out to the detailed widget, component, and playbook specs.


Filesystem Layout Convention

| Path | Purpose |
| --- | --- |
| ~/tasks/<name>/src/ | Task source code |
| ~/feeds/<name>/v1/src/ | Feed script source code |
| ~/feeds/<name>/v1/data/ | Feed synth mount (auto-created by Feed SDK) |
| ~/playbooks/<name>/ | Playbook web app assets |
| ~/data/ | General data storage |
| ~/library/ | Shared code modules |

Prefer using the Feed SDK for all data organization, including point-in-time snapshots. Store snapshots as single-record time series rather than raw JSON files via alfs.writeFile(). This keeps all data queryable through a single consistent read pattern (@last, @range, etc.).


Common Pitfalls

  • @last returns chronological (oldest-first) order, consistent with @first and @range. No manual sorting needed.
  • Time series reads return flat JSON records. Paths with @last, @range, etc. return JSON arrays of flat records like [{"date":...,"close":...,"ema10":...}]. Regular paths return file content with Content-Type: application/octet-stream.
  • last(N) limits unique timestamps, not records. When multiple records share a timestamp (grouped via append()), auto-flatten may return more than N individual records.
  • The data/ in feed paths is the synth mount. feedPath("my-feed") gives ~/feeds/my-feed/v1, and the Feed SDK mounts storage at <feedPath>/data/. Don't name your group "data" or you'll get data/data/....
  • Public reads require absolute paths. Unauthenticated reads must use /alva/home/<username>/... (not ~/...). Discover your username via GET /api/v1/me.
  • Top-level await is not supported. Wrap async code in (async () => { ... })();.
  • require("alfs") uses absolute paths. Inside the V8 runtime, alfs.readFile() needs full paths like /alva/home/alice/.... Get your username from require("env").username.
  • No Node.js builtins. require("fs"), require("path"), require("http") do not exist. Use require("alfs") for files, require("net/http") for HTTP.
  • Altra run() is async. FeedAltra.run() returns a Promise<RunResult>. Always await it: const result = await altra.run(endDate);
  • Altra decisions happen at bar CLOSE. Feature timestamps must use bar.endTime, not bar.date. Using bar.date introduces look-ahead bias.
  • Altra lookback: feature vs strategy. Feature lookback controls how many bars the feature computation sees. Strategy lookback controls how many feature outputs the strategy function sees. They are independent.
  • Cronjob path must point to an existing script. The deploy API validates the entry_path exists via filesystem stat before creating the cronjob.
  • Always create a draft before releasing. POST /api/v1/release/playbook requires the playbook to already exist (created via POST /api/v1/draft/playbook).
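The last(N) pitfall above can be illustrated with a small sketch (this models the documented behavior for intuition; it is not Alva's implementation):

```javascript
// last(N) keeps the N most recent unique timestamps; grouped records at
// those timestamps are then flattened, so more than N records can come back.
function lastN(records, n) {
  const dates = [...new Set(records.map((r) => r.date))].sort((a, b) => a - b);
  const keep = new Set(dates.slice(-n));
  return records.filter((r) => keep.has(r.date));
}

const records = [
  { date: 1, rank: 1 }, { date: 1, rank: 2 }, // batch appended at t=1
  { date: 2, rank: 1 }, { date: 2, rank: 2 }, // batch appended at t=2
];
console.log(lastN(records, 1).length); // 2 -- one timestamp, two records
```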

Resource Limits

| Resource | Limit |
| --- | --- |
| V8 heap per execution | 2 GB |
| Write payload | 10 MB max per request |
| HTTP response body | 128 MB max |
| Max cronjobs per user | 20 |
| Min cron interval | 1 minute |

Error Responses

All errors return: {"error":{"code":"...","message":"..."}}

| HTTP Status | Code | Meaning |
| --- | --- | --- |
| 400 | INVALID_ARGUMENT | Bad request or invalid path |
| 401 | UNAUTHENTICATED | Missing or invalid API key |
| 403 | PERMISSION_DENIED | Access denied |
| 404 | NOT_FOUND | File/directory not found |
| 429 | RATE_LIMITED | Rate limit / runner pool exhausted |
| 500 | INTERNAL | Server error |
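A minimal client-side sketch for this envelope (the helper is illustrative and assumes only the documented shape):

```javascript
// Parse an Alva error response body into a readable string, or return
// null when the payload is not the documented error envelope.
function formatAlvaError(body) {
  const parsed = JSON.parse(body);
  return parsed.error ? `${parsed.error.code}: ${parsed.error.message}` : null;
}

const sample = '{"error":{"code":"NOT_FOUND","message":"File/directory not found"}}';
console.log(formatAlvaError(sample)); // NOT_FOUND: File/directory not found
```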

