tauri-dev

Tauri 2.0 Development Skill


Install skill "tauri-dev" with this command: npx skills add liquid4all/cookbook/liquid4all-cookbook-tauri-dev


Architecture Overview

LocalCowork uses Tauri 2.0 with a three-layer architecture. This skill covers the Rust backend (middle layer) and its integration with both the React frontend (top layer) and the MCP servers + inference backend (bottom layer).

```
React Frontend ←─ Tauri IPC ─→ Rust Backend ←─ JSON-RPC/stdio ─→ MCP Servers
                                    │
                                    └─ OpenAI API ─→ Local LLM (Ollama/llama.cpp)
```

Key References

  • docs/architecture-decisions/001-tauri-over-electron.md — why Tauri

  • docs/architecture-decisions/003-model-abstraction-layer.md — the OpenAI API contract

  • docs/patterns/human-in-the-loop.md — confirmation/undo flow

  • docs/patterns/context-window-management.md — 32k token budget

Rust Backend Modules

agent_core/conversation.rs — ConversationManager

Manages conversation state, history, and persistence.

```rust
pub struct ConversationManager {
    db: SqlitePool,                        // Conversation history stored in SQLite
    current_session: Session,
    context_manager: ContextWindowManager,
}

impl ConversationManager {
    /// Create a new conversation session
    pub async fn new_session(&mut self) -> Result<SessionId>;

    /// Add a user message and get the model's response
    pub async fn send_message(&mut self, message: &str) -> Result<MessageStream>;

    /// Get conversation history for the context window
    pub fn get_history(&self, max_tokens: usize) -> Vec<Message>;

    /// Persist a message to SQLite
    async fn persist_message(&self, message: &Message) -> Result<()>;
}
```
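The token-budget selection behind get_history can be sketched with the standard library alone. This is a hypothetical sketch, not the project's implementation: it assumes each message carries a precomputed token count and simply keeps the newest messages that still fit the budget.

```rust
/// Hypothetical message shape with a precomputed token count.
struct Msg {
    text: String,
    tokens: usize,
}

/// Keep the newest messages that fit within `max_tokens`,
/// returned in chronological order.
fn get_history(all: &[Msg], max_tokens: usize) -> Vec<&Msg> {
    let mut budget = max_tokens;
    let mut selected: Vec<&Msg> = Vec::new();
    for msg in all.iter().rev() {
        // Walk newest-first; stop at the first message that no longer fits.
        if msg.tokens > budget {
            break;
        }
        budget -= msg.tokens;
        selected.push(msg);
    }
    selected.reverse(); // restore chronological order
    selected
}

fn main() {
    let all = vec![
        Msg { text: "oldest".into(), tokens: 100 },
        Msg { text: "middle".into(), tokens: 50 },
        Msg { text: "newest".into(), tokens: 50 },
    ];
    let history = get_history(&all, 100);
    // Only the two newest messages fit the 100-token budget.
    assert_eq!(history.len(), 2);
    assert_eq!(history[0].text, "middle");
}
```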

agent_core/tool_router.rs — ToolRouter

Routes model tool calls to the appropriate MCP server.

```rust
pub struct ToolRouter {
    mcp_client: MCPClient,
    audit_logger: AuditLogger,
}

impl ToolRouter {
    /// Process a tool call from the model
    pub async fn dispatch(&self, tool_call: ToolCall) -> Result<ToolResult> {
        // 1. Look up tool in the MCP registry
        // 2. Check if confirmation is required
        // 3. If confirmed (or not required): send JSON-RPC call to server
        // 4. Log to audit trail
        // 5. If undo supported: push to undo stack
        // 6. Return result
    }

    /// Check if a tool requires user confirmation
    fn requires_confirmation(&self, tool_name: &str) -> bool;

    /// Push to undo stack for reversible actions
    async fn push_undo(&self, tool_call: &ToolCall, result: &ToolResult) -> Result<()>;
}
```
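The confirmation check in step 2 of dispatch can be sketched as a set lookup. This is a minimal sketch using only the standard library; the tool names and the hard-coded confirm set are assumptions for illustration, since the real requires_confirmation would consult per-tool metadata from the MCP registry.

```rust
use std::collections::HashSet;

/// Tools in the confirm set require explicit user approval before dispatch.
fn requires_confirmation(tool_name: &str, confirm_set: &HashSet<&str>) -> bool {
    confirm_set.contains(tool_name)
}

fn main() {
    // Illustrative tool names; the real set would come from MCP tool metadata.
    let confirm_set: HashSet<&str> =
        ["fs_delete", "fs_write", "shell_exec"].into_iter().collect();

    assert!(requires_confirmation("fs_delete", &confirm_set));
    assert!(!requires_confirmation("fs_read", &confirm_set));
}
```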

agent_core/context_window.rs — ContextWindowManager

Manages the 32k token budget. See docs/patterns/context-window-management.md.

```rust
pub struct ContextWindowManager {
    max_tokens: usize,        // 32,768
    tokenizer: Tokenizer,     // tiktoken-rs
    system_prompt: String,
    tool_definitions: String,
}

impl ContextWindowManager {
    /// Build the full prompt for the model
    pub fn build_prompt(
        &self,
        history: &[Message],
        active_context: Option<&str>,
    ) -> Result<Vec<ChatMessage>>;

    /// Count tokens for a string
    pub fn count_tokens(&self, text: &str) -> usize;

    /// Evict old messages when context is tight
    fn evict_oldest(&mut self, history: &mut Vec<Message>);
}
```
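The eviction step can be illustrated in isolation. A hedged sketch: it assumes per-message token counts and an oldest-first, whole-message policy, which may differ from the actual policy in docs/patterns/context-window-management.md.

```rust
/// Hypothetical message shape with a precomputed token count.
struct Msg {
    text: String,
    tokens: usize,
}

/// Drop the oldest messages until the total fits within `budget`.
fn evict_oldest(history: &mut Vec<Msg>, budget: usize) {
    let mut total: usize = history.iter().map(|m| m.tokens).sum();
    while total > budget && !history.is_empty() {
        let evicted = history.remove(0); // oldest message is at the front
        total -= evicted.tokens;
    }
}

fn main() {
    let mut history = vec![
        Msg { text: "old".into(), tokens: 500 },
        Msg { text: "mid".into(), tokens: 300 },
        Msg { text: "new".into(), tokens: 200 },
    ];
    evict_oldest(&mut history, 600);
    // The 500-token message is evicted; 300 + 200 = 500 fits the budget.
    assert_eq!(history.len(), 2);
    assert_eq!(history[0].text, "mid");
}
```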

mcp_client/ — MCP Client

Manages MCP server processes and JSON-RPC communication.

```rust
pub struct MCPClient {
    servers: HashMap<String, MCPServerProcess>,
    tool_registry: ToolRegistry,
}

impl MCPClient {
    /// Start all configured MCP servers
    pub async fn start_servers(&mut self, config: &MCPConfig) -> Result<()>;

    /// Get the aggregated tool definitions for the LLM
    pub fn get_tool_definitions(&self) -> Vec<ToolDefinition>;

    /// Send a tool call to the appropriate server
    pub async fn call_tool(&self, name: &str, args: Value) -> Result<Value>;

    /// Gracefully shut down all servers
    pub async fn shutdown(&mut self) -> Result<()>;
}

struct MCPServerProcess {
    child: Child,                      // tokio::process::Child
    stdin: ChildStdin,
    stdout: BufReader<ChildStdout>,
    tools: Vec<ToolDefinition>,
}
```
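On the wire, the stdio transport carries newline-delimited JSON-RPC 2.0 messages. Below is a simplified sketch of the request framing; a real client would serialize with serde_json to get proper escaping rather than interpolate strings, and the fs_read arguments here are illustrative only.

```rust
/// Frame one JSON-RPC 2.0 request as a newline-terminated line for stdin.
/// Sketch only: real code would build this with serde_json, not format!.
fn jsonrpc_request(id: u64, method: &str, params_json: &str) -> String {
    format!(
        "{{\"jsonrpc\":\"2.0\",\"id\":{},\"method\":\"{}\",\"params\":{}}}\n",
        id, method, params_json
    )
}

fn main() {
    let line = jsonrpc_request(
        1,
        "tools/call",
        "{\"name\":\"fs_read\",\"arguments\":{\"path\":\"notes.txt\"}}",
    );
    // One JSON object per line on the child process's stdin.
    assert!(line.ends_with('\n'));
    assert!(line.contains("\"method\":\"tools/call\""));
}
```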

inference/ — Inference Client

OpenAI-compatible API client for the local LLM.

```rust
pub struct InferenceClient {
    base_url: String,             // e.g., "http://localhost:11434/v1"
    model: String,                // e.g., "qwen2.5:32b-instruct"
    http_client: reqwest::Client,
}

impl InferenceClient {
    /// Send a chat completion request (streaming)
    pub async fn chat_completion(
        &self,
        messages: Vec<ChatMessage>,
        tools: Vec<ToolDefinition>,
    ) -> Result<impl Stream<Item = StreamChunk>>;

    /// Parse tool calls from model response
    fn parse_tool_calls(response: &str) -> Result<Vec<ToolCall>>;
}
```

Tauri IPC Commands

The frontend communicates with the Rust backend via Tauri commands.

```rust
// src-tauri/src/commands/chat.rs
#[tauri::command]
async fn send_message(
    state: tauri::State<'_, AppState>,
    message: String,
) -> Result<String, String> {
    let mut conv = state.conversation_manager.lock().await;
    let response = conv.send_message(&message).await
        .map_err(|e| e.to_string())?;
    Ok(response)
}

#[tauri::command]
async fn confirm_action(
    state: tauri::State<'_, AppState>,
    action_id: String,
    confirmed: bool,
) -> Result<(), String> {
    let router = state.tool_router.lock().await;
    if confirmed {
        router.execute_confirmed(&action_id).await.map_err(|e| e.to_string())?;
    } else {
        router.reject_action(&action_id).await.map_err(|e| e.to_string())?;
    }
    Ok(())
}

#[tauri::command]
async fn undo_last_action(
    state: tauri::State<'_, AppState>,
) -> Result<String, String> {
    let router = state.tool_router.lock().await;
    router.undo_last().await.map_err(|e| e.to_string())
}
```

Frontend invocation:

```typescript
import { invoke } from '@tauri-apps/api/core';

const response = await invoke<string>('send_message', { message: userInput });
await invoke('confirm_action', { actionId: 'act-001', confirmed: true });
await invoke('undo_last_action');
```
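The undo flow behind undo_last_action can be sketched as a plain LIFO stack. This is a hypothetical sketch: the shape of UndoEntry is assumed, and the real stack presumably records whatever inverse operation push_undo captured (see docs/patterns/human-in-the-loop.md).

```rust
/// Hypothetical record of how to reverse a completed action.
struct UndoEntry {
    action_id: String,
    inverse_description: String, // e.g. "restore notes.txt from backup"
}

/// Last-in, first-out undo stack: undo pops the most recent entry.
struct UndoStack {
    entries: Vec<UndoEntry>,
}

impl UndoStack {
    fn push(&mut self, entry: UndoEntry) {
        self.entries.push(entry);
    }

    /// Pop and return the most recent reversible action, if any.
    fn undo_last(&mut self) -> Option<UndoEntry> {
        self.entries.pop()
    }
}

fn main() {
    let mut stack = UndoStack { entries: Vec::new() };
    stack.push(UndoEntry {
        action_id: "act-001".into(),
        inverse_description: "restore file".into(),
    });
    stack.push(UndoEntry {
        action_id: "act-002".into(),
        inverse_description: "delete created file".into(),
    });
    let last = stack.undo_last().unwrap();
    // The most recent action is undone first.
    assert_eq!(last.action_id, "act-002");
    assert_eq!(last.inverse_description, "delete created file");
}
```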

Tauri Permissions (tauri.conf.json)

Each capability is granted explicitly:

```json
{
  "app": {
    "security": {
      "capabilities": [
        {
          "identifier": "filesystem-access",
          "description": "Access user-granted directories",
          "permissions": [
            "fs:allow-read",
            "fs:allow-write",
            "fs:scope-$DOCUMENTS",
            "fs:scope-$DOWNLOADS"
          ]
        },
        {
          "identifier": "process-management",
          "description": "Manage MCP server child processes",
          "permissions": ["shell:allow-spawn", "shell:allow-kill"]
        },
        {
          "identifier": "clipboard-access",
          "permissions": [
            "clipboard-manager:allow-read",
            "clipboard-manager:allow-write"
          ]
        }
      ]
    }
  }
}
```

Coding Standards (Rust)

  • Edition 2021

  • cargo clippy -- -D warnings must pass (zero warnings)

  • All public functions have doc comments (///)

  • Error handling: thiserror for custom errors, anyhow for application errors

  • Async: tokio runtime (multi-threaded)

  • Max 300 lines per file — extract to submodules when approaching

  • Use tracing crate for structured logging (integrates with shared Logger)

  • No unwrap() in production code — use ? operator or explicit error handling
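The thiserror/anyhow split can be made concrete. The sketch below hand-writes what #[derive(thiserror::Error)] would generate, using only the standard library so it stands alone; AgentError and lookup_tool are illustrative names, not types from the codebase.

```rust
use std::fmt;

/// Hand-rolled equivalent of a thiserror-derived custom error type.
#[derive(Debug)]
enum AgentError {
    ToolNotFound(String),
    ContextOverflow { used: usize, max: usize },
}

impl fmt::Display for AgentError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            AgentError::ToolNotFound(name) => write!(f, "tool not found: {name}"),
            AgentError::ContextOverflow { used, max } => {
                write!(f, "context overflow: {used} tokens exceeds budget of {max}")
            }
        }
    }
}

impl std::error::Error for AgentError {}

/// The `?` operator propagates the typed error instead of panicking via unwrap().
fn lookup_tool(name: &str) -> Result<&'static str, AgentError> {
    match name {
        "fs_read" => Ok("filesystem server"),
        _ => Err(AgentError::ToolNotFound(name.to_string())),
    }
}

fn main() {
    assert_eq!(lookup_tool("fs_read").unwrap(), "filesystem server");
    // unwrap_err is test-harness only; real code would propagate with `?`.
    let err = lookup_tool("web_search").unwrap_err();
    assert_eq!(err.to_string(), "tool not found: web_search");

    let overflow = AgentError::ContextOverflow { used: 40000, max: 32768 };
    assert!(overflow.to_string().contains("40000"));
}
```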

Dependencies (Cargo.toml)

Key crates:

[dependencies] tauri = { version = "2", features = ["shell-open"] } tokio = { version = "1", features = ["full"] } serde = { version = "1", features = ["derive"] } serde_json = "1" sqlx = { version = "0.7", features = ["runtime-tokio", "sqlite"] } reqwest = { version = "0.12", features = ["json", "stream"] } tiktoken-rs = "0.5" thiserror = "1" anyhow = "1" tracing = "0.1" tracing-subscriber = "0.3"
