rig

Build LLM-powered applications with Rig, the Rust AI framework. Use when creating agents, RAG pipelines, tool-calling workflows, structured extraction, or streaming completions. Covers all providers with a unified API.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "rig" with this command: npx skills add 0xplaygrounds/rig/0xplaygrounds-rig-rig

Building with Rig

Rig is a Rust library for building LLM-powered applications with a provider-agnostic API. All patterns use the builder pattern and async/await via tokio.

Quick Start

use rig::completion::Prompt;
use rig::providers::openai;

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    let client = openai::Client::from_env();

    let agent = client
        .agent(openai::GPT_4O)
        .preamble("You are a helpful assistant.")
        .build();

    let response = agent.prompt("Hello!").await?;
    println!("{}", response);
    Ok(())
}

Core Patterns

1. Simple Agent

let agent = client.agent(openai::GPT_4O)
    .preamble("System prompt")
    .temperature(0.7)
    .max_tokens(2000)
    .build();

let response = agent.prompt("Your question").await?;

2. Agent with Tools

Define a tool by implementing the Tool trait, then attach it:

let agent = client.agent(openai::GPT_4O)
    .preamble("You can use tools.")
    .tool(MyTool)
    .build();

See references/tools.md for the full Tool trait signature.
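A minimal Tool implementation might look like the following sketch. The trait shape (const NAME, associated Args/Output/Error types, async definition and call methods) follows rig-core's examples; the adder logic and names here are illustrative, and the exact trait bounds may differ by version:

```rust
use rig::completion::ToolDefinition;
use rig::tool::Tool;
use serde::Deserialize;
use serde_json::json;

#[derive(Deserialize)]
struct AddArgs {
    a: i64,
    b: i64,
}

struct Adder;

impl Tool for Adder {
    const NAME: &'static str = "add";
    type Error = std::convert::Infallible;
    type Args = AddArgs;
    type Output = i64;

    // Describes the tool to the model: name, description, JSON Schema args.
    async fn definition(&self, _prompt: String) -> ToolDefinition {
        ToolDefinition {
            name: Self::NAME.to_string(),
            description: "Add two numbers".to_string(),
            parameters: json!({
                "type": "object",
                "properties": {
                    "a": { "type": "number" },
                    "b": { "type": "number" }
                },
                "required": ["a", "b"]
            }),
        }
    }

    // Invoked when the model decides to call the tool.
    async fn call(&self, args: Self::Args) -> Result<Self::Output, Self::Error> {
        Ok(args.a + args.b)
    }
}
```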

3. RAG (Retrieval-Augmented Generation)

let embedding_model = client.embedding_model(openai::TEXT_EMBEDDING_ADA_002);
let index = vector_store.index(embedding_model);

let agent = client.agent(openai::GPT_4O)
    .preamble("Answer using the provided context.")
    .dynamic_context(5, index)  // top-5 similar docs per query
    .build();

See references/rag.md for vector store setup and the Embed derive macro.
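Documents fed to the embedding pipeline can derive Embed. A sketch, assuming rig-core's derive macro where the #[embed] attribute marks which field gets vectorized:

```rust
use rig::Embed;
use serde::Serialize;

#[derive(Embed, Serialize, Clone)]
struct Document {
    id: String,
    #[embed]
    content: String, // only this field is embedded; `id` rides along as metadata
}
```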

4. Streaming

use futures::StreamExt;
use rig::streaming::StreamedAssistantContent;
use rig::agent::prompt_request::streaming::MultiTurnStreamItem;

let mut stream = agent.stream_prompt("Tell me a story").await?;

while let Some(chunk) = stream.next().await {
    match chunk? {
        MultiTurnStreamItem::StreamAssistantItem(
            StreamedAssistantContent::Text(text)
        ) => print!("{}", text.text),
        MultiTurnStreamItem::FinalResponse(resp) => {
            println!("\n{}", resp.response());
        }
        _ => {}
    }
}

5. Structured Extraction

use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

#[derive(Deserialize, Serialize, JsonSchema)]
struct Person {
    pub name: Option<String>,
    pub age: Option<u8>,
}

let extractor = client.extractor::<Person>(openai::GPT_4O).build();
let person = extractor.extract("John is 30 years old.").await?;

6. Chat with History

use rig::completion::{Chat, Message};

let history = vec![
    Message::from("Hi, I'm Alice."),
    // ...previous messages
];
let response = agent.chat("What's my name?", history).await?;

Agent Builder Methods

Method                          Description
.preamble(str)                  Set system prompt
.context(str)                   Add static context document
.dynamic_context(n, index)      Add RAG with top-n retrieval
.tool(impl Tool)                Attach a callable tool
.tools(Vec<Box<dyn ToolDyn>>)   Attach multiple tools
.temperature(f64)               Set temperature (0.0-1.0)
.max_tokens(u64)                Set max output tokens
.additional_params(json!{...})  Provider-specific params
.tool_choice(ToolChoice)        Control tool usage
.build()                        Build the agent
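Several of these compose on one builder. A sketch; the additional_params payload is provider-specific and the values here are illustrative:

```rust
use serde_json::json;

let agent = client.agent(openai::GPT_4O)
    .preamble("You are a concise research assistant.")
    .context("Static background document pinned to every request.")
    .temperature(0.3)
    .max_tokens(1024)
    .additional_params(json!({ "top_p": 0.9 })) // passed through to the provider
    .build();
```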

Available Providers

Create a client with the provider module's Client::from_env() or Client::new("key") — for example, openai::Client::from_env().

Provider      Module        Example Model Constant
OpenAI        openai        GPT_4O, GPT_4O_MINI
Anthropic     anthropic     CLAUDE_4_OPUS, CLAUDE_4_SONNET
Cohere        cohere        COMMAND_R_PLUS
Mistral       mistral       MISTRAL_LARGE
Gemini        gemini        model string
Groq          groq          model string
Ollama        ollama        model string
DeepSeek      deepseek      model string
xAI           xai           model string
Together      together      model string
Perplexity    perplexity    model string
OpenRouter    openrouter    model string
HuggingFace   huggingface   model string
Azure         azure         deployment string
Hyperbolic    hyperbolic    model string
Galadriel     galadriel     model string
Moonshot      moonshot      model string
Mira          mira          model string
Voyage AI     voyageai      embeddings only
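Because the completion API is provider-agnostic, swapping providers is mostly a matter of changing the client and model constant. A sketch assuming ANTHROPIC_API_KEY is set in the environment:

```rust
use rig::completion::Prompt;
use rig::providers::anthropic;

let client = anthropic::Client::from_env();

let agent = client
    .agent(anthropic::CLAUDE_4_SONNET)
    .preamble("You are a helpful assistant.")
    .build();

let response = agent.prompt("Hello!").await?;
```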

Vector Store Crates

Backend      Crate
In-memory    rig-core (built-in)
MongoDB      rig-mongodb
LanceDB      rig-lancedb
Qdrant       rig-qdrant
SQLite       rig-sqlite
Neo4j        rig-neo4j
Milvus       rig-milvus
SurrealDB    rig-surrealdb
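A typical Cargo.toml for a RAG project pulls in rig-core plus one backend crate. Version numbers below are illustrative only; check crates.io for current releases:

```toml
[dependencies]
rig-core = "0.13"
rig-qdrant = "0.1"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
serde = { version = "1", features = ["derive"] }
anyhow = "1"
```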

Key Rules

  • All async code runs on tokio.
  • Use WasmCompatSend / WasmCompatSync instead of raw Send / Sync for WASM compatibility.
  • Use proper error types with thiserror — never Result<(), String>.
  • Avoid .unwrap() — use ? operator.
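The error-handling rules in practice: define a structured error type and propagate with the ? operator. This std-only sketch hand-writes the Display and Error impls that thiserror's #[derive(Error)] would generate from #[error("...")] attributes; in real code, derive them instead. The type and function names are illustrative:

```rust
use std::fmt;

// A structured error type. With thiserror this would be
// #[derive(Debug, thiserror::Error)] plus #[error("...")] attributes.
#[derive(Debug)]
enum AgentError {
    ToolFailed(String),
    EmptyResponse,
}

impl fmt::Display for AgentError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AgentError::ToolFailed(msg) => write!(f, "tool failed: {}", msg),
            AgentError::EmptyResponse => write!(f, "model returned an empty response"),
        }
    }
}

impl std::error::Error for AgentError {}

// Validation step returning a typed error instead of Result<_, String>.
fn check_response(text: &str) -> Result<&str, AgentError> {
    if text.is_empty() {
        return Err(AgentError::EmptyResponse);
    }
    Ok(text)
}

// `?` propagates AgentError upward; no .unwrap() needed.
fn pipeline(text: &str) -> Result<usize, AgentError> {
    let ok = check_response(text)?;
    Ok(ok.len())
}
```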

Further Reference

Detailed API documentation (available when installed via Claude Code skills):

  • tools — Tool trait, ToolDefinition, ToolEmbedding, attachment patterns
  • rag — Vector stores, Embed derive, EmbeddingsBuilder, search requests
  • providers — Provider-specific initialization, model constants, env vars
  • patterns — Multi-agent, hooks, streaming details, chaining, extraction

For the full reference, see the Rig examples at rig-core/examples/ or https://docs.rig.rs
