
Honcho Integration Guide


Install the skill "honcho-integration" with this command: npx skills add plastic-labs/honcho/plastic-labs-honcho-honcho-integration


What is Honcho

Honcho is an open source memory library for building stateful agents. It works with any model, framework, or architecture. You send Honcho the messages from your conversations, and custom reasoning models process them in the background — extracting premises, drawing conclusions, and building rich representations of each participant over time. Your agent can then query those representations on-demand ("What does this user care about?", "How technical is this person?") and get grounded, reasoned answers.

The key mental model: Peers are any participant — human or AI. Both are represented the same way. Observation settings (observe_me, observe_others) control which peers Honcho reasons about. Typically you want Honcho to model your users (observe_me=True) but not your AI assistant (observe_me=False). Sessions scope conversations between peers. Messages are the raw data you feed in — Honcho reasons about them asynchronously and stores the results as the peer's representation. No messages means no reasoning means no memory.

Your agent accesses this memory through peer.chat(query) (ask a natural language question, get a reasoned answer), session.context() (get formatted conversation history + representations), or both.

Integration Workflow

Follow these phases in order:

Phase 1: Codebase Exploration

Before asking the user anything, explore the codebase to understand:

  • Language & Framework: Is this Python or TypeScript? What frameworks are used (FastAPI, Express, Next.js, etc.)?

  • Existing AI/LLM code: Search for existing LLM integrations (OpenAI, Anthropic, LangChain, etc.)

  • Entity structure: Identify users, agents, bots, or other entities that interact

  • Session/conversation handling: How does the app currently manage conversations?

  • Message flow: Where are messages sent/received? What's the request/response cycle?

Use Glob and Grep to find:

  • **/*.py or **/*.ts files containing "openai", "anthropic", "llm", "chat", "message"

  • User/session models or types

  • API routes handling chat or conversation endpoints

Bot framework detected? If the codebase is built around an agent loop, tool registry, session manager, and message bus (e.g., nanobot, openclaw, picoclaw), read {baseDir}/references/bot-frameworks.md for framework-specific integration guidance and check {baseDir}/references/bot-frameworks/<framework>/ for concrete reference implementations.

Phase 2: Interview (REQUIRED)

After exploring the codebase, use the AskUserQuestion tool to clarify integration requirements. Ask these questions (adapt based on what you learned in Phase 1):

Question Set 1 - Entities & Peers

Ask about which entities should be Honcho peers:

  • header: "Peers"

  • question: "Which entities should Honcho track and build representations for?"

  • options based on what you found (e.g., "End users only", "Users + AI assistant", "Users + multiple AI agents", "All participants including third-party services")

  • Include a follow-up if they have multiple AI agents: should any AI peers be observed?

Question Set 2 - Integration Pattern

Ask how they want to use Honcho context:

  • header: "Pattern"

  • question: "How should your AI access Honcho's user context?"

  • options:

  • "Tool call (Recommended)" - "Agent queries Honcho on-demand via function calling"

  • "Pre-fetch" - "Fetch user context before each LLM call with predefined queries"

  • "context()" - "Include conversation history and representations in prompt"

  • "Multiple patterns" - "Combine approaches for different use cases"

Question Set 3 - Session Structure

Ask about conversation structure:

  • header: "Sessions"

  • question: "How should conversations map to Honcho sessions?"

  • options based on their app (e.g., "One session per chat thread", "One session per user", "Multiple users per session (group chat)", "Custom session logic")

Question Set 4 - Specific Queries (if using pre-fetch pattern)

If they chose pre-fetch, ask what context matters:

  • header: "Context"

  • question: "What user context should be fetched for the AI?"

  • multiSelect: true

  • options: "Communication style", "Expertise level", "Goals/priorities", "Preferences", "Recent activity summary", "Custom queries"
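The four question sets above all use the same fields (header, question, options, multiSelect). As a rough sketch of how they might be assembled into payloads — the real AskUserQuestion schema is tool-defined and may differ, and both `build_question` and the option shape here are hypothetical:

```python
def build_question(header, question, options, multi_select=False):
    """Assemble one interview question from the fields listed above.

    The dict shape is an assumption for illustration, not the actual
    AskUserQuestion schema.
    """
    return {
        "header": header,
        "question": question,
        "multiSelect": multi_select,
        "options": [{"label": label} for label in options],
    }

# Question Set 2 from above, expressed with this helper
pattern_question = build_question(
    header="Pattern",
    question="How should your AI access Honcho's user context?",
    options=["Tool call (Recommended)", "Pre-fetch", "context()", "Multiple patterns"],
)
```

Adapting the options per codebase (Phase 1 findings) is just a matter of passing a different list.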

Phase 3: Implementation

Based on interview responses, implement the integration:

  • Install the SDK

  • Create Honcho client initialization

  • Set up peer creation for identified entities

  • Implement the chosen integration pattern(s)

  • Add message storage after exchanges

  • Update any existing conversation handlers

Phase 4: Verification

  • If the Honcho CLI is available, run honcho doctor to confirm connectivity before testing the integration code

  • Use honcho peer list and honcho peer chat to verify peers exist and the dialectic endpoint works independently of the integration

  • Ensure all message exchanges are stored to Honcho

  • Verify AI peers have observe_me=False (unless user specifically wants AI observation)

  • Check that the workspace ID is consistent across the codebase

  • Confirm environment variable for API key is documented

Before You Start

Check the latest SDK versions at https://docs.honcho.dev/changelog/introduction

  • Python SDK: honcho-ai

  • TypeScript SDK: @honcho-ai/sdk

Get an API key: ask the user to get a Honcho API key from https://app.honcho.dev and add it to the environment.

Verify with the CLI (optional but recommended). If the user has the Honcho CLI installed (pip install honcho-cli), they can validate their setup before writing any integration code:

honcho init       # persist API key + URL to ~/.honcho/config.json
honcho doctor     # verify connectivity, config, workspace health
honcho peer chat  # test the dialectic endpoint interactively

This is the fastest way to confirm the API key and URL are correct before debugging SDK code.

Installation

Python (use uv)

uv add honcho-ai

TypeScript (use bun)

bun add @honcho-ai/sdk

Sync vs Async

TypeScript — The SDK is async by default. All methods return promises. No separate sync API.

Python — The SDK provides both sync and async interfaces:

  • Sync (default): from honcho import Honcho — use in sync frameworks (Flask, Django, CLI scripts)

  • Async: from honcho import Honcho with .aio namespace — use in async frameworks (FastAPI, Starlette, async workers)

Sync usage (Flask, Django, scripts)

import os

from honcho import Honcho

honcho = Honcho(workspace_id="my-app", api_key=os.environ["HONCHO_API_KEY"])
peer = honcho.peer("user-123")
response = peer.chat("What does this user prefer?")

Async usage (FastAPI, Starlette)

import os

from honcho import Honcho

honcho = Honcho(workspace_id="my-app", api_key=os.environ["HONCHO_API_KEY"])
peer = await honcho.aio.peer("user-123")
response = await peer.aio.chat("What does this user prefer?")

Match the client to the framework — check whether the codebase uses async def handlers or sync def handlers and choose accordingly. The rest of this skill shows sync Python examples; swap to .aio equivalents for async codebases.

Core Integration Patterns

  1. Initialize with a Single Workspace

Use ONE workspace for your entire application. The workspace name should reflect your app/product.

Python:

import os

from honcho import Honcho

# Sync client (Flask, Django, scripts)
honcho = Honcho(
    workspace_id="your-app-name",
    api_key=os.environ["HONCHO_API_KEY"],
    environment="production"
)

# Async client (FastAPI, Starlette) — use honcho.aio for all operations:
# honcho.aio.peer(), honcho.aio.session(), etc.

TypeScript:

import { Honcho } from '@honcho-ai/sdk';

// All methods are async by default
const honcho = new Honcho({
  workspaceId: "your-app-name",
  apiKey: process.env.HONCHO_API_KEY,
  environment: "production"
});

  2. Create Peers for ALL Entities

Create peers for every entity in your business logic — users AND AI assistants.

Python:

from honcho.api_types import PeerConfig

# Human users
user = honcho.peer("user-123")

# AI assistants - set observe_me=False so Honcho doesn't model the AI
assistant = honcho.peer("assistant", configuration=PeerConfig(observe_me=False))
support_bot = honcho.peer("support-bot", configuration=PeerConfig(observe_me=False))

TypeScript:

// Human users
const user = await honcho.peer("user-123");

// AI assistants - set observeMe=false so Honcho doesn't model the AI
const assistant = await honcho.peer("assistant", {
  configuration: { observeMe: false }
});
const supportBot = await honcho.peer("support-bot", {
  configuration: { observeMe: false }
});

  3. Multi-Peer Sessions

Sessions can have multiple participants. Configure observation settings per-peer.

Python:

from honcho.api_types import SessionPeerConfig

session = honcho.session("conversation-123")

# User is observed (Honcho builds a model of them)
user_config = SessionPeerConfig(observe_me=True, observe_others=True)

# AI is NOT observed (no model built of the AI)
ai_config = SessionPeerConfig(observe_me=False, observe_others=True)

session.add_peers([
    (user, user_config),
    (assistant, ai_config)
])

TypeScript:

const session = await honcho.session("conversation-123");

await session.addPeers([
  [user, { observeMe: true, observeOthers: true }],
  [assistant, { observeMe: false, observeOthers: true }]
]);

  4. Add Messages to Sessions

Python:

session.add_messages([
    user.message("I'm having trouble with my account"),
    assistant.message("I'd be happy to help. What seems to be the issue?"),
    user.message("I can't reset my password")
])

TypeScript:

await session.addMessages([
  user.message("I'm having trouble with my account"),
  assistant.message("I'd be happy to help. What seems to be the issue?"),
  user.message("I can't reset my password")
]);

Using Honcho for AI Agents

Pattern A: Dialectic Chat as a Tool Call (Recommended for Agents)

Make Honcho's chat endpoint available as a tool for your AI agent. This lets the agent query user context on-demand.

Python (OpenAI function calling):

import json
import os

import openai
from honcho import Honcho

honcho = Honcho(workspace_id="my-app", api_key=os.environ["HONCHO_API_KEY"])

# Define the tool for your agent
honcho_tool = {
    "type": "function",
    "function": {
        "name": "query_user_context",
        "description": (
            "Query Honcho to retrieve relevant context about the user based on "
            "their history and preferences. Use this when you need to understand "
            "the user's background, preferences, past interactions, or goals."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "A natural language question about the user, e.g. 'What are this user's main goals?' or 'What communication style does this user prefer?'"
                }
            },
            "required": ["query"]
        }
    }
}

def handle_honcho_tool_call(user_id: str, query: str) -> str:
    """Execute the Honcho chat tool call."""
    peer = honcho.peer(user_id)
    return peer.chat(query)

# Use in your agent loop
def run_agent(user_id: str, user_message: str):
    messages = [{"role": "user", "content": user_message}]

    response = openai.chat.completions.create(
        model="gpt-4",
        messages=messages,
        tools=[honcho_tool]
    )

    # Handle tool calls
    if response.choices[0].message.tool_calls:
        for tool_call in response.choices[0].message.tool_calls:
            if tool_call.function.name == "query_user_context":
                args = json.loads(tool_call.function.arguments)
                result = handle_honcho_tool_call(user_id, args["query"])
                # Continue conversation with tool result...
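The example stops at "Continue conversation with tool result...". A minimal sketch of that continuation, assuming OpenAI's standard tool-calling message format (`append_tool_result` is a hypothetical helper name, not part of either SDK):

```python
def append_tool_result(messages, assistant_message, tool_call_id, result):
    """Feed a tool result back to the model in OpenAI's expected format.

    assistant_message is the dict form of the assistant turn that requested
    the tool call; result is the string returned by handle_honcho_tool_call.
    """
    messages.append(assistant_message)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call_id,
        "content": result,
    })
    return messages
```

After appending, call openai.chat.completions.create(model=..., messages=messages) again to get the final, context-aware reply.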

TypeScript (OpenAI function calling):

import OpenAI from 'openai';
import { Honcho } from '@honcho-ai/sdk';

const honcho = new Honcho({ workspaceId: "my-app", apiKey: process.env.HONCHO_API_KEY });

const honchoTool: OpenAI.ChatCompletionTool = {
  type: "function",
  function: {
    name: "query_user_context",
    description: "Query Honcho to retrieve relevant context about the user based on their history and preferences.",
    parameters: {
      type: "object",
      properties: {
        query: {
          type: "string",
          description: "A natural language question about the user"
        }
      },
      required: ["query"]
    }
  }
};

async function handleHonchoToolCall(userId: string, query: string): Promise<string> {
  const peer = await honcho.peer(userId);
  return await peer.chat(query);
}

Pattern B: Pre-fetch Context with Targeted Queries

For simpler integrations, fetch user context before the LLM call using pre-defined queries.

Python:

def get_user_context_for_prompt(user_id: str) -> dict:
    """Fetch key user attributes via targeted Honcho queries."""
    peer = honcho.peer(user_id)

    return {
        "communication_style": peer.chat("What communication style does this user prefer? Be concise."),
        "expertise_level": peer.chat("What is this user's technical expertise level? Be concise."),
        "current_goals": peer.chat("What are this user's current goals or priorities? Be concise."),
        "preferences": peer.chat("What key preferences should I know about this user? Be concise.")
    }

def build_system_prompt(user_context: dict) -> str:
    return f"""You are a helpful assistant. Here's what you know about this user:

Communication style: {user_context['communication_style']}
Expertise level: {user_context['expertise_level']}
Current goals: {user_context['current_goals']}
Key preferences: {user_context['preferences']}

Tailor your responses accordingly."""

TypeScript:

async function getUserContextForPrompt(userId: string): Promise<Record<string, string>> {
  const peer = await honcho.peer(userId);

  const [style, expertise, goals, preferences] = await Promise.all([
    peer.chat("What communication style does this user prefer? Be concise."),
    peer.chat("What is this user's technical expertise level? Be concise."),
    peer.chat("What are this user's current goals or priorities? Be concise."),
    peer.chat("What key preferences should I know about this user? Be concise.")
  ]);

  return {
    communicationStyle: style,
    expertiseLevel: expertise,
    currentGoals: goals,
    preferences: preferences
  };
}
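Pre-fetching runs four dialectic queries on every request, which adds latency and cost. One mitigation is a short-lived per-user cache around get_user_context_for_prompt. A plain-Python sketch (the helper, the module-level dict, and the 300-second TTL are illustrative choices, not part of the SDK):

```python
import time

_context_cache: dict = {}

def get_cached_context(user_id, fetch_fn, ttl_seconds=300, now=time.time):
    """Reuse a recently fetched context dict instead of re-querying Honcho.

    fetch_fn is e.g. get_user_context_for_prompt. Representations evolve
    as Honcho processes new messages, so keep the TTL short.
    """
    entry = _context_cache.get(user_id)
    if entry and now() - entry[0] < ttl_seconds:
        return entry[1]
    context = fetch_fn(user_id)
    _context_cache[user_id] = (now(), context)
    return context
```

The tool-call pattern (Pattern A) avoids this trade-off entirely by querying only when the agent decides it needs context.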

Pattern C: Get Context for LLM Integration

Use context() for conversation history with built-in LLM formatting.

Python:

import openai
from honcho.api_types import PeerConfig

session = honcho.session("conversation-123")
user = honcho.peer("user-123")
assistant = honcho.peer("assistant", configuration=PeerConfig(observe_me=False))

# Get context formatted for your LLM
context = session.context(
    tokens=2000,
    peer_target=user.id,  # Include representation of this user
    summary=True          # Include conversation summaries
)

# Convert to OpenAI format
messages = context.to_openai(assistant=assistant)

# Or Anthropic format
# messages = context.to_anthropic(assistant=assistant)

# Add the new user message
messages.append({"role": "user", "content": "What should I focus on today?"})

response = openai.chat.completions.create(
    model="gpt-4",
    messages=messages
)

# Store the exchange
session.add_messages([
    user.message("What should I focus on today?"),
    assistant.message(response.choices[0].message.content)
])

TypeScript:

import OpenAI from 'openai';

const session = await honcho.session("conversation-123");
const user = await honcho.peer("user-123");
const assistant = await honcho.peer("assistant", { configuration: { observeMe: false } });

// Get context formatted for your LLM
const context = await session.context({
  tokens: 2000,
  peerTarget: user.id,  // Include representation of this user
  summary: true         // Include conversation summaries
});

// Convert to OpenAI format
const messages = context.toOpenAI(assistant);

// Or Anthropic format
// const messages = context.toAnthropic(assistant);

// Add the new user message
messages.push({ role: "user", content: "What should I focus on today?" });

const openai = new OpenAI();
const response = await openai.chat.completions.create({ model: "gpt-4", messages });

// Store the exchange
await session.addMessages([
  user.message("What should I focus on today?"),
  assistant.message(response.choices[0].message.content!)
]);

Streaming Responses

Python:

stream = peer.chat_stream("What do we know about this user?")

for chunk in stream:
    print(chunk, end="", flush=True)

TypeScript:

const stream = await peer.chatStream("What do we know about this user?");

for await (const chunk of stream) {
  process.stdout.write(chunk);
}

Integration Checklist

When integrating Honcho into an existing codebase:

  • Install SDK with uv add honcho-ai (Python) or bun add @honcho-ai/sdk (TypeScript)

  • Set up HONCHO_API_KEY environment variable

  • Initialize Honcho client with a single workspace ID

  • Create peers for all entities (users AND AI assistants)

  • Set observe_me=False for AI peers

  • Configure sessions with appropriate peer observation settings

  • Choose integration pattern:

  • Tool call pattern for agentic systems

  • Pre-fetch pattern for simpler integrations

  • context() for conversation history

  • Store messages after each exchange to build user models

  • (Optional) Run honcho doctor to verify connectivity before testing integration code

  • (Optional) Use honcho peer chat to test dialectic queries independently

Common Mistakes to Avoid

  • Multiple workspaces: Use ONE workspace per application

  • Forgetting AI peers: Create peers for AI assistants, not just users

  • Observing AI peers: Set observe_me=False for AI peers unless you specifically want Honcho to model your AI's behavior

  • Not storing messages: Always call add_messages() to feed Honcho's reasoning engine

  • Blocking on processing: Messages are processed asynchronously — don't poll or wait for reasoning to complete before continuing
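The last point has a flip side: while Honcho's reasoning is asynchronous on its side, the add_messages call itself is still a network round trip in your request path. If that latency matters, storage can be handed to a background worker. A sketch assuming any session-like object exposing add_messages (the pool size and helper name are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

_storage_pool = ThreadPoolExecutor(max_workers=2)

def store_exchange_async(session, messages):
    """Queue message storage so the user-facing response isn't blocked on it.

    session is anything exposing add_messages (e.g. a Honcho session).
    A real integration should also inspect the returned future for failures
    rather than fire-and-forget.
    """
    return _storage_pool.submit(session.add_messages, messages)
```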
