Threadline — Persistent Memory and Context Layer for AI Agents

Your AI agents start from zero every session. Users repeat themselves: their stack, their preferences, their ongoing projects. Threadline fixes this in two lines: inject() before your LLM call, update() after. Your agent now remembers every user, permanently. Structured extraction across 7 scopes, an OAuth-style grant system so agents only see permitted context, user-owned data with a deletion dashboard, and relevance-scored injection: 200 tokens of signal, not 2,000 tokens of noise. Works with OpenAI, Anthropic, the Vercel AI SDK, and LangChain. Retrieval in under 50ms. GDPR-ready by design. Free tier: 2,500 calls/month.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy the command below and send it to your AI assistant to install this skill

Install skill "Threadline — Persistent Memory and Context Layer for AI Agents" with this command: npx skills add vidursharma202-del/threadline

Threadline — Persistent Memory and Context Layer for AI Agents

Threadline fixes agent amnesia. Two lines of code give your AI agent persistent memory across every user session — with full developer control over what's stored, who sees it, and how it's used.

When to use this skill

Use Threadline whenever the user:

  • Is building an AI agent, chatbot, or assistant
  • Mentions agents forgetting context between sessions
  • Wants to personalise agent responses per user
  • Is using OpenAI, Anthropic, Vercel AI SDK, or LangChain
  • Mentions "context rot", "session reset", "agent memory", or "persistent context"

What Threadline does

Threadline gives your agent a persistent context layer — preferences, tasks, communication style, domain expertise, and key relationships — that travels across every conversation. Agents stop starting from zero.

Relevance-scored injection means your agent gets 200 tokens of signal, not 2,000 tokens of noise. No prompt bloat at scale.
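The listing does not publish Threadline's actual scoring algorithm, but the idea behind a token-budgeted, relevance-scored injection can be sketched as a greedy selection: score each stored fact, then pack the highest-scoring facts until the budget is spent. The `ContextEntry` shape and scoring below are hypothetical, for illustration only.

```typescript
// Hypothetical shape for a stored context entry (not the SDK's real type).
interface ContextEntry {
  text: string;
  relevance: number; // 0..1, higher means more relevant to the current request
  tokens: number;    // rough token count of `text`
}

// Greedy budgeted selection: take the most relevant entries that still fit
// within the token budget, so the prompt carries signal instead of bulk.
function selectContext(entries: ContextEntry[], budget: number): ContextEntry[] {
  const picked: ContextEntry[] = [];
  let used = 0;
  for (const e of [...entries].sort((a, b) => b.relevance - a.relevance)) {
    if (used + e.tokens <= budget) {
      picked.push(e);
      used += e.tokens;
    }
  }
  return picked;
}
```

With a 250-token budget, a 150-token high-relevance fact and an 80-token medium-relevance fact are included, while a lower-relevance 100-token fact is dropped rather than overflowing the budget.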

Integration (2 lines)

import { ThreadlineClient } from 'threadline-sdk';

const tl = new ThreadlineClient({ apiKey: process.env.THREADLINE_API_KEY });

// Before your LLM call — inject user context into system prompt
const systemPrompt = await tl.inject(userId, "You are a helpful assistant.");

// After your LLM response — extract and store new facts
await tl.update({ userId, userMessage, agentResponse });

OpenAI example

import OpenAI from 'openai';
import { ThreadlineClient } from 'threadline-sdk';

const openai = new OpenAI();
const tl = new ThreadlineClient({ apiKey: process.env.THREADLINE_API_KEY });

const systemPrompt = await tl.inject(userId, "You are a helpful assistant.");

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: systemPrompt },
    { role: "user", content: userMessage }
  ]
});

await tl.update({ userId, userMessage, agentResponse: response.choices[0].message.content });

Anthropic example

import Anthropic from '@anthropic-ai/sdk';
import { ThreadlineClient } from 'threadline-sdk';

const anthropic = new Anthropic();
const tl = new ThreadlineClient({ apiKey: process.env.THREADLINE_API_KEY });

const systemPrompt = await tl.inject(userId, "You are a helpful assistant.");

const response = await anthropic.messages.create({
  model: "claude-opus-4-6",
  max_tokens: 1024,
  system: systemPrompt,
  messages: [{ role: "user", content: userMessage }]
});

await tl.update({ userId, userMessage, agentResponse: response.content[0].text });

Vercel AI SDK example

import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { ThreadlineClient } from 'threadline-sdk';

const tl = new ThreadlineClient({ apiKey: process.env.THREADLINE_API_KEY });

const systemPrompt = await tl.inject(userId, "You are a helpful assistant.");

const result = await streamText({
  model: openai('gpt-4o'),
  system: systemPrompt,
  messages,
  onFinish: async ({ text }) => {
    await tl.update({ userId, userMessage, agentResponse: text });
  }
});

LangChain example

import { ChatOpenAI } from '@langchain/openai';
import { SystemMessage, HumanMessage } from '@langchain/core/messages';
import { ThreadlineClient } from 'threadline-sdk';

const tl = new ThreadlineClient({ apiKey: process.env.THREADLINE_API_KEY });
const llm = new ChatOpenAI({ model: "gpt-4o" });

const systemPrompt = await tl.inject(userId, "You are a helpful assistant.");

const response = await llm.invoke([
  new SystemMessage(systemPrompt),
  new HumanMessage(userMessage)
]);

await tl.update({ userId, userMessage, agentResponse: response.content });

7 context scopes

Threadline extracts and stores context across 7 scopes:

Scope                 What it captures
communication_style   Tone, verbosity, format preferences
ongoing_tasks         Active projects, deadlines, blockers
key_relationships     Team members, clients, collaborators
domain_expertise      Tech stack, industry knowledge, skills
preferences           Tools, workflows, working style
emotional_state       Stress signals, motivation, sentiment
general               Everything else worth remembering
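For TypeScript callers it can help to pin the scope names down as a union type so typos are caught at compile time. The type below simply mirrors the list above; it is our own convention, not something exported by the SDK.

```typescript
// The seven Threadline scope names, as a string-literal union.
type Scope =
  | "communication_style"
  | "ongoing_tasks"
  | "key_relationships"
  | "domain_expertise"
  | "preferences"
  | "emotional_state"
  | "general";

// Handy when iterating or validating grant requests.
const ALL_SCOPES: Scope[] = [
  "communication_style", "ongoing_tasks", "key_relationships",
  "domain_expertise", "preferences", "emotional_state", "general",
];
```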

Grant system

Agents only see the scopes they're explicitly granted. A coding assistant sees domain_expertise and ongoing_tasks. A writing assistant sees communication_style and preferences. No agent sees everything by default.

await tl.grant({
  agentId: "coding-assistant",
  userId: userId,
  scopes: ["domain_expertise", "ongoing_tasks"]
});

Rules

  • Always call inject() before the LLM call, never after
  • Always call update() after receiving the agent response
  • Use a stable, consistent userId — this is how context is scoped per user
  • Do not log or expose the enriched system prompt — it contains user context
  • Context is user-owned — users can view and delete via threadline.to/dashboard
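Since inject() sits on the critical path of every request, it is worth deciding up front what happens when the memory layer is unreachable. A reasonable default is to fall back to the base prompt. The wrapper below is a minimal sketch assuming the client's inject() rejects on network errors; the `MemoryClient` interface just mirrors the two calls shown earlier.

```typescript
// Minimal client surface this helper relies on (mirrors the calls shown above).
interface MemoryClient {
  inject(userId: string, basePrompt: string): Promise<string>;
  update(args: { userId: string; userMessage: string; agentResponse: string }): Promise<void>;
}

// Fall back to the base prompt if injection fails, so the chat keeps working
// (without memory) when the memory layer is down, rather than erroring out.
async function safeInject(tl: MemoryClient, userId: string, basePrompt: string): Promise<string> {
  try {
    return await tl.inject(userId, basePrompt);
  } catch {
    return basePrompt; // degrade gracefully: no memory, but no outage
  }
}
```

The same pattern applies to update(): a fire-and-forget call with a caught rejection keeps extraction failures from surfacing to the end user.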

REST API (any language)

# Inject
POST https://api.threadline.to/api/inject
Authorization: Bearer YOUR_API_KEY
{ "userId": "user_123", "basePrompt": "You are a helpful assistant." }

# Update
POST https://api.threadline.to/api/update
Authorization: Bearer YOUR_API_KEY
{ "userId": "user_123", "userMessage": "...", "agentResponse": "..." }
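The two endpoints above can be called from any runtime with an HTTP client. The sketch below uses the standard fetch API; the URLs, header, and request bodies are copied from the listing, but the response shape is not documented here, so the result is returned as parsed JSON without further assumptions. `fetchFn` is injectable purely so the helper can be exercised without a network.

```typescript
// Generic caller for Threadline's two REST endpoints via the standard fetch API.
async function threadlineCall(
  endpoint: "inject" | "update",
  apiKey: string,
  body: Record<string, unknown>,
  fetchFn: typeof fetch = fetch,
): Promise<unknown> {
  const res = await fetchFn(`https://api.threadline.to/api/${endpoint}`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`Threadline ${endpoint} failed: ${res.status}`);
  return res.json(); // response shape is not specified in this listing
}
```

Usage matches the snippets above, e.g. `threadlineCall("inject", apiKey, { userId: "user_123", basePrompt: "You are a helpful assistant." })`.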

Troubleshooting

Issue                                    Fix
inject() returns base prompt unchanged   Check that the API key is set correctly
Context not persisting                   Confirm update() is called after every response
Slow injection                           Responses are Redis-cached: first call ~200ms, subsequent calls <50ms
Wrong user context                       Ensure userId is stable and unique per user
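The first row in the table is usually a configuration problem, so a small preflight check at startup can save a debugging session. The helper below is our own suggestion, not part of the SDK; `env` is passed in rather than read from `process.env` directly so it is easy to test.

```typescript
// Preflight check for the most common misconfiguration: a missing or
// whitespace-padded THREADLINE_API_KEY. Returns a list of problems found.
function checkThreadlineConfig(env: Record<string, string | undefined>): string[] {
  const problems: string[] = [];
  const key = env.THREADLINE_API_KEY;
  if (!key) {
    problems.push("THREADLINE_API_KEY is not set");
  } else if (key.trim() !== key) {
    problems.push("THREADLINE_API_KEY has leading/trailing whitespace");
  }
  return problems;
}
```

Call it once at boot, e.g. `checkThreadlineConfig(process.env)`, and log any problems before serving traffic.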
