weave-integration

Add W&B Weave observability and LLM tracing to any project. Use when instrumenting LLM calls for token visibility, latency tracking, cost analysis, or debugging. Supports TypeScript/Node.js and Python projects. Weave provides automatic tracing for OpenAI, Anthropic, and 20+ LLM providers with minimal code changes.

Safety Notice

This listing is imported from the skills.sh public index metadata. Review the upstream SKILL.md and repository scripts before running.

Copy this and send it to your AI assistant to install the skill:

Install skill "weave-integration" with this command: npx skills add altryne/weavify-skill/altryne-weavify-skill-weave-integration

Weave Integration

Add W&B Weave observability to any LLM project. Weave traces all LLM calls, captures tokens/latency/costs, and provides a UI for debugging and evaluation.

Quick Start

1. Install

# TypeScript/Node.js
npm install weave

# Python
pip install weave

2. Get API Key

Set the WANDB_API_KEY environment variable. Get your key from wandb.ai/settings.

export WANDB_API_KEY="your-key-here"

3. Initialize

// TypeScript
import * as weave from 'weave';
await weave.init('your-team/project-name');

# Python
import weave
weave.init('your-team/project-name')

4. Trace LLM Calls

Auto-patching (supported providers traced automatically):

// TypeScript - CommonJS: works out of the box
import OpenAI from 'openai';
import * as weave from 'weave';

await weave.init('my-project');
const client = new OpenAI();
// All calls now traced automatically

Manual wrapping (for custom functions or unsupported libs):

// TypeScript
const myFunction = weave.op(async (input: string) => {
  // your code here
  return input;
});

# Python
@weave.op()
def my_function(input: str) -> str:
    # your code here
    return input

TypeScript Setup Details

See references/typescript.md for:

  • ESM configuration (--import=weave/instrument; example below)
  • Bundler compatibility (Next.js, Vite)
  • Manual patching fallback
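
For example, an ESM entry point is launched with the instrumentation hook preloaded (the file path here is illustrative):

node --import=weave/instrument dist/index.js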

Supported Providers (Auto-traced)

OpenAI, Anthropic, Cohere, Mistral, Google, Groq, Together AI, LiteLLM, Azure, Bedrock, Cerebras, HuggingFace, OpenRouter, NVIDIA NIM, and more.

Full list: https://docs.wandb.ai/weave/guides/integrations

Integration Workflow

When adding Weave to a project (a combined sketch follows the list):

  1. Find LLM call sites — search for OpenAI/Anthropic client usage
  2. Add weave.init() — early in app startup, before any LLM calls
  3. Verify auto-patching — check traces appear in W&B UI
  4. Wrap custom functions — use weave.op() for additional visibility
  5. Add cost tracking — Weave tracks tokens automatically for supported providers
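
Putting steps 2-4 together, a minimal sketch for an OpenAI-based app (the project name and the custom function are illustrative, not part of the Weave API):

// TypeScript
import OpenAI from 'openai';
import * as weave from 'weave';

// Step 2: initialize early, before any LLM calls
await weave.init('my-team/my-project');

// Step 3: client calls are auto-patched from here on
const client = new OpenAI();

// Step 4: wrap a custom pipeline step for additional visibility
const rerank = weave.op(async (query: string, docs: string[]) => {
  return docs.filter((doc) => doc.includes(query));
});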

Viewing Traces

After running your app:

  • Open wandb.ai → Your project → Weave tab
  • See all traces with inputs, outputs, latency, token usage, costs
  • Filter, search, and export call data

Environment Variables

  • WANDB_API_KEY: authentication (required)
  • WEAVE_IMPLICITLY_PATCH_INTEGRATIONS: set to false to disable auto-patching
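
As a defensive pattern, an app can verify the required variable at startup; this guard is a sketch, not part of the Weave API:

// TypeScript
if (!process.env.WANDB_API_KEY) {
  throw new Error('WANDB_API_KEY is not set; Weave cannot authenticate');
}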

Common Patterns

Wrap Existing Client

import { wrapOpenAI } from 'weave';
import OpenAI from 'openai';

const client = wrapOpenAI(new OpenAI());
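
Calls through the wrapped client are then recorded like any auto-patched call, assuming weave.init() has already run (the model name is illustrative):

const completion = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: 'Hello!' }],
});
// Inputs, outputs, latency, and token usage appear in the Weave UI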

Trace Class Methods

class MyAgent {
  @weave.op
  async predict(prompt: string) {
    return "response";
  }
}

Add Display Names

const myOp = weave.op(myFunction, {
  callDisplayName: (input) => `Custom Name: ${input}`
});
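
Invoking the op then labels each trace with the computed name; for example (the input value is illustrative):

await myOp('user-42'); // trace appears as "Custom Name: user-42"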

Clawdbot-Specific Integration

For Clawdbot/similar Node.js agents (a sketch follows these steps):

  1. Locate the LLM client initialization (usually Anthropic/OpenAI SDK)
  2. Add weave.init() in the main entry point
  3. For ESM, add --import=weave/instrument to the node invocation
  4. All provider calls will be traced to W&B
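
A hedged sketch of such an entry point, assuming the agent uses the Anthropic SDK (the project and model names are illustrative):

// TypeScript (agent entry point)
import * as weave from 'weave';
import Anthropic from '@anthropic-ai/sdk';

// Initialize before the agent loop starts
await weave.init('my-team/agent-traces');

const anthropic = new Anthropic();
// Provider calls from here on are traced to W&B
const msg = await anthropic.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'ping' }],
});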

Troubleshooting

  • No traces appearing: Check WANDB_API_KEY is set
  • ESM not patching: Use --import=weave/instrument flag
  • Bundler issues: Mark LLM libs as external in config
  • Manual fallback: Use wrapOpenAI() or explicit weave.op() wrappers

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Coding

openclaw-version-monitor

Monitor OpenClaw GitHub releases, fetch the latest release notes, translate them into Chinese, and push them to Telegram and Feishu. Use for: (1) scheduled checks for new versions (2) pushing version update notifications (3) generating Chinese-language release notes

Archived Source · Recently Updated
Coding

ask-claude

Delegate a task to Claude Code CLI and immediately report the result back in chat. Supports persistent sessions with full context memory. Safe execution: no data exfiltration, no external calls, file operations confined to workspace. Use when the user asks to run Claude, delegate a coding task, continue a previous Claude session, or any task benefiting from Claude Code's tools (file editing, code analysis, bash, etc.).

Archived Source · Recently Updated
Coding

ai-dating

This skill enables dating and matchmaking workflows. Use it when a user asks to make friends, find a partner, run matchmaking, or provide dating preferences/profile updates. The skill should execute `dating-cli` commands to complete profile setup, task creation/update, match checking, contact reveal, and review.

Archived Source · Recently Updated
Coding

clawhub-rate-limited-publisher

Queue and publish local skills to ClawHub with a strict 5-per-hour cap using the local clawhub CLI and host scheduler.

Archived Source · Recently Updated