sharpagent-memory-system

SharpAgent Layered Memory System — 6-layer memory hierarchy from ephemeral to long-term archive. Features dream processing (consolidation/archiving/forgetting/merging), TF-IDF semantic search, Chinese tokenization support. Solves the three memory problems: forgetting what matters, never forgetting, and poor retrieval.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install skill "sharpagent-memory-system" with this command: npx skills add yezhaowang888-stack/sharpagent-memory-system

SharpAgent Layered Memory System v1.0.0

Memory like a human: remembers what matters, forgets the rest, and searches accurately. Based on the 6-layer architecture of Layered Memory Sys (ClawHub, +324% in 24 days), plus dream processing and Mem0 persistence.

Core Problem

Current agent memory problems:

| Problem | Symptom | Consequence |
| --- | --- | --- |
| Forgets | Every session starts fresh | Repeats mistakes, no accumulation |
| Never forgets | All history flat | High retrieval noise, key info drowned |
| Poor search | Keyword only, no semantics | "five-factor" doesn't find "trust verification" |

SharpAgent memory solves all three with 6 layers + dream processing.

Memory Layers

                        [User interaction]
                            │
                   ┌────────┴────────┐
                   │   L1: Ephemeral │  ← Current session context
                   └────────┬────────┘
                            │ Session ends
                   ┌────────┴────────┐
                   │   L2: Short-Term│  ← Recent sessions (rolling window)
                   └────────┬────────┘
                            │ Periodic migration
                   ┌────────┴────────┐
                   │   L3: Working   │  ← Active task state
                   └────────┬────────┘
                            │ Task complete
                   ┌────────┴────────┐
                   │   L4: Contextual│  ← Task-level context
                   └────────┬────────┘
                            │ Dream processing
                   ┌────────┴────────┐
                   │   L5: Long-Term │  ← Preferences & lessons
                   └────────┬────────┘
                            │ Archive expired
                   ┌────────┴────────┐
                   │   L6: Archive   │  ← Historical archive
                   └─────────────────┘

L1: Ephemeral

Storage: All messages in the current session
Capacity: Model context window
Lifespan: Gone when the session ends
Index: None (linear scan)
Use: In-context understanding

L2: Short-Term

Storage: Last N sessions (default N=5)
Medium: In-memory (Redis optional)
Capacity: 5 session summaries, ≤2KB each
Lifespan: 7 days, then auto-migrate to L4
Index: Session ID + timestamp + tags
Use: Quick cross-session reference
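The L2 rolling window can be sketched with a bounded deque. This is a minimal illustration, not the skill's actual API; the class name and the crude truncation strategy are assumptions.

```python
from collections import deque

class ShortTermMemory:
    """L2 sketch: keeps summaries of the last N sessions (default N=5)."""

    def __init__(self, max_sessions=5, max_bytes=2048):
        # deque(maxlen=...) drops the oldest summary automatically
        self.window = deque(maxlen=max_sessions)
        self.max_bytes = max_bytes

    def add_session(self, session_id, summary, tags=()):
        if len(summary.encode("utf-8")) > self.max_bytes:
            # crude truncation to respect the 2KB cap (illustrative only)
            summary = summary[: self.max_bytes // 4]
        self.window.append({"id": session_id, "summary": summary, "tags": list(tags)})

    def recent(self):
        return list(self.window)

m = ShortTermMemory()
for i in range(7):
    m.add_session(f"s{i}", f"summary {i}")
print([s["id"] for s in m.recent()])  # ['s2', 's3', 's4', 's5', 's6']
```

Sessions beyond the window would be migrated to L4 rather than silently dropped in the real system; the deque only models the in-memory window itself.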

L3: Working

Storage: Active task state
Medium: JSON files (memory/working/)
Capacity: ≤10KB per task
Lifespan: Task complete → L4; task interrupted → kept
Index: Task ID + status + last update
Use: Resume interrupted tasks, multitasking
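Checkpoint/resume on the L3 working layer amounts to writing and re-reading a small JSON file per task. A minimal sketch; the field names (`step`, `notes`, `status`) are illustrative, not the skill's actual schema.

```python
import json
import tempfile
from pathlib import Path

def checkpoint(task_dir: Path, task_id: str, state: dict) -> Path:
    """L3 sketch: persist active task state so an interrupted task can resume."""
    path = task_dir / f"{task_id}.json"
    path.write_text(json.dumps({"task_id": task_id, "status": "in_progress", **state}))
    return path

def resume(task_dir: Path, task_id: str) -> dict:
    """Read the saved state back; the caller continues from the recorded step."""
    return json.loads((task_dir / f"{task_id}.json").read_text())

workdir = Path(tempfile.mkdtemp())
checkpoint(workdir, "t1", {"step": 3, "notes": "halfway through analysis"})
state = resume(workdir, "t1")
print(state["step"])  # 3
```

On task completion the same JSON would be moved into memory/contextual/ (L4) rather than deleted.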

L4: Contextual

Storage: Full context of completed tasks
Medium: JSON files (memory/contextual/) + optional SQLite
Capacity: Unlimited, but retrieval returns only the top 5
Lifespan: Until dream processing (30 days with no references → L6 archive)
Index: TF-IDF full-text
Use: Look back at past tasks, reuse solutions

L5: Long-Term

Storage: Persistent cross-session knowledge
Medium: SQLite (memory/long_term.db) + Chinese tokenizer index
Capacity: Unlimited
Lifespan: Permanent unless explicitly forgotten
Index: TF-IDF + jieba Chinese tokenization
Use: User preferences, lessons, best practices, key decisions

L6: Archive

Storage: Expired or low-referenced L4/L5 entries
Medium: SQLite (memory/archive.db), read-only
Capacity: Theoretically infinite
Lifespan: Permanent, read-only
Index: None (time + category only)
Use: Legal compliance retention, audit trail

Dream Processing

Dreams aren't just for humans. Agents need low-load memory maintenance too.

Trigger: Heartbeat (low load, every 30 min), user says "clean up", or scheduled 04:00 daily.

Four Dream Operations:

1. Consolidation

Combine scattered memory fragments into coherent knowledge.

Input: Multiple fragments
→ "Use 150-char abstracts" (verified multiple times)
→ "User prefers shorter versions"
→ "Briefing read rate improved 30%"
→ Consolidate to:
  "Best practice: 150-char abstract in briefings (3x verified, +30% read rate)"

Trigger: Same pattern appears ≥3 times
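The consolidation step above can be sketched as grouping fragments by a pattern key and folding any group that reaches the threshold. This is a minimal sketch: the `pattern_key` is assumed to come from an upstream clustering step, and the merged wording is a naive join rather than an LLM rewrite.

```python
from collections import defaultdict

def consolidate(fragments, threshold=3):
    """Dream op 1 sketch: fold fragments that share a pattern key (>= threshold)
    into one consolidated best-practice entry."""
    groups = defaultdict(list)
    for pattern_key, text in fragments:
        groups[pattern_key].append(text)
    consolidated = []
    for key, texts in groups.items():
        if len(texts) >= threshold:
            consolidated.append(
                f"Best practice [{key}]: " + "; ".join(texts) + f" ({len(texts)}x verified)"
            )
    return consolidated

frags = [
    ("abstract-length", "Use 150-char abstracts"),
    ("abstract-length", "User prefers shorter versions"),
    ("abstract-length", "Briefing read rate improved 30%"),
    ("tables", "Dislikes tables"),  # only 1 occurrence: left untouched
]
print(consolidate(frags))  # one consolidated entry for 'abstract-length'
```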

2. Archiving

Move low-reference items out of working cache.

→ L4 entries with 0 references in 30 days
→ Move to L6 archive
→ Remove from L4 tag index

Trigger: 30-day reference count = 0
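In SQL terms, the archiving pass is an insert-select followed by a delete. A minimal sketch using an in-memory database; the table and column names (`ref_count`, `last_ref_days_ago`) are assumptions, not the skill's actual schema.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE contextual (id INTEGER PRIMARY KEY, text TEXT,
                         ref_count INTEGER, last_ref_days_ago INTEGER);
CREATE TABLE archive (id INTEGER PRIMARY KEY, text TEXT);
""")
db.executemany("INSERT INTO contextual VALUES (?,?,?,?)",
               [(1, "stale entry", 0, 45), (2, "live entry", 3, 2)])

# Move L4 entries with 0 references in the last 30 days into the L6 archive.
db.execute("""INSERT INTO archive (id, text)
              SELECT id, text FROM contextual
              WHERE ref_count = 0 AND last_ref_days_ago >= 30""")
db.execute("DELETE FROM contextual WHERE ref_count = 0 AND last_ref_days_ago >= 30")

print(db.execute("SELECT text FROM archive").fetchall())     # [('stale entry',)]
print(db.execute("SELECT text FROM contextual").fetchall())  # [('live entry',)]
```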

3. Forgetting

Actively delete low-value, duplicate, or outdated content.

→ "User once preferred Python 3.9" (3 months ago, now 3.13)
→ Outdated, delete
→ Keep space for valuable info

Trigger:

  • Newer version available
  • Explicitly contradicted
  • 90 days with <2 references
  • User says "forget this"
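Because the quality gates below require forgetting to stay traceable, "delete" is best modeled as a soft delete: the row is excluded from search but kept for audit. A minimal sketch; the `deleted` flag column is an assumption about long_term.db's layout.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE long_term (id INTEGER PRIMARY KEY, text TEXT, "
           "deleted INTEGER DEFAULT 0)")
db.execute("INSERT INTO long_term (text) VALUES ('User once preferred Python 3.9')")

def forget(entry_id):
    """Dream op 3 sketch: mark the entry deleted instead of removing the row."""
    db.execute("UPDATE long_term SET deleted = 1 WHERE id = ?", (entry_id,))

def searchable():
    """Search only sees rows that are not soft-deleted."""
    return db.execute("SELECT text FROM long_term WHERE deleted = 0").fetchall()

forget(1)
print(searchable())                                              # [] -- gone from search
print(db.execute("SELECT COUNT(*) FROM long_term").fetchone()[0])  # 1 -- still auditable
```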

4. Merging

Combine multiple related L5 entries into higher-level patterns.

→ "Prefers 150-char abstract" (confidence=8)
→ "Prefers bullet points" (confidence=7)
→ "Dislikes tables" (confidence=6)
→ Merge:
  "User prefers briefings in bullet points + 150-char abstract, avoid tables"

Trigger: High confidence (≥7) + same category
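A sketch of the merge trigger: filter entries to the stated confidence threshold, group by category, and fold groups with more than one survivor. The naive string join stands in for whatever rewriting the real skill performs; entry shape is an assumption.

```python
from collections import defaultdict

def merge_entries(entries, min_confidence=7):
    """Dream op 4 sketch: fold same-category, high-confidence L5 entries
    into one higher-level pattern per category."""
    by_cat = defaultdict(list)
    for category, text, confidence in entries:
        if confidence >= min_confidence:  # trigger: confidence >= 7
            by_cat[category].append(text)
    # only categories with multiple qualifying entries get merged
    return {cat: "; ".join(texts) for cat, texts in by_cat.items() if len(texts) > 1}

entries = [
    ("briefing-style", "Prefers 150-char abstract", 8),
    ("briefing-style", "Prefers bullet points", 7),
    ("briefing-style", "Dislikes tables", 6),  # below threshold, left as-is
]
print(merge_entries(entries))
```

Note that under the stated ≥7 trigger, the confidence-6 entry ("Dislikes tables") would survive unmerged until its confidence rises.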

Search

Standard Search

def search(query, layers=("L4", "L5")):
    tokens = " ".join(jieba.cut(query))         # Chinese tokenization (jieba.cut yields a generator)
    vec = tfidf_vectorizer.transform([tokens])  # transform expects a list of documents
    scores = cosine_similarity(vec, layer_index)  # layer_index: TF-IDF matrix of the selected layers
    return top_k(scores, k=5)

Chinese Tokenization

jieba.load_userdict("memory/custom_dict.txt")
# "五元组审查" → ["五元组", "审查"] not mis-split
# "惠迈校准框架" → ["惠迈", "校准", "框架"]
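For readers without jieba or scikit-learn installed, the TF-IDF + cosine pipeline can be reproduced with the standard library alone. A dependency-free sketch; whitespace tokenization stands in for jieba here, and the smoothed IDF formula is one common variant, not necessarily the one the skill uses.

```python
import math
from collections import Counter

def tfidf_search(query, docs, k=5):
    """Rank docs by cosine similarity of TF-IDF vectors (whitespace tokens)."""
    def tokens(text):
        return text.lower().split()

    n = len(docs)
    # document frequency per term
    df = Counter(t for d in docs for t in set(tokens(d)))

    def vec(text):
        tf = Counter(tokens(text))
        # smoothed IDF: log((n + 1) / (df + 1))
        return {t: c * math.log((n + 1) / (df[t] + 1)) for t, c in tf.items()}

    def cosine(a, b):
        dot = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    q = vec(query)
    ranked = sorted(range(n), key=lambda i: cosine(q, vec(docs[i])), reverse=True)
    return ranked[:k]

docs = ["trust verification under the five-factor review",
        "task checkpoint and resume",
        "dream processing merges memories"]
print(tfidf_search("five-factor trust", docs, k=1))  # [0]
```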

Search Priority

| Scenario | Search layers | K |
| --- | --- | --- |
| Real-time reply | L5 → L4 → L2 | 3 |
| Deep analysis | L5 → L4 → L6 | 5 |
| User asks "earlier..." | L4 → L5 | 3 |
| User asks "I remember..." | L5 → L4 | 3 |
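The priority table above reads naturally as a layer-walk: query each layer in order and stop once K hits are collected. A minimal sketch; `search_layer` is an assumed per-layer search callable and the scenario keys are invented labels.

```python
def prioritized_search(query, scenario, search_layer):
    """Walk the scenario's layer order, collecting up to K hits total."""
    priorities = {
        "realtime": (["L5", "L4", "L2"], 3),
        "deep":     (["L5", "L4", "L6"], 5),
        "earlier":  (["L4", "L5"], 3),
        "remember": (["L5", "L4"], 3),
    }
    layers, k = priorities[scenario]
    hits = []
    for layer in layers:
        hits.extend(search_layer(query, layer))
        if len(hits) >= k:  # stop early once K results are gathered
            break
    return hits[:k]

# Toy per-layer index standing in for real TF-IDF search.
index = {"L5": ["lesson A", "lesson B"], "L4": ["context C"], "L2": ["recent D"]}
fake_search = lambda q, layer: index.get(layer, [])
print(prioritized_search("anything", "realtime", fake_search))  # L5 hits first, capped at 3
```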

Storage Architecture

memory/
├── working/            # L3: Task working area (JSON)
├── contextual/         # L4: Contextual memory (JSON)
├── long_term.db        # L5: SQLite + FTS5 index
├── archive.db          # L6: Archive storage
├── custom_dict.txt     # Domain Chinese word dictionary
└── dream_log.json      # Dream processing log
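The "SQLite + FTS5 index" noted for long_term.db can be sketched as a content table paired with an FTS5 virtual table. Column names here are assumptions, and the snippet requires an SQLite build with FTS5 enabled (standard in most modern Python distributions).

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE long_term (id INTEGER PRIMARY KEY, category TEXT, text TEXT);
-- external-content FTS5 table indexing long_term.text
CREATE VIRTUAL TABLE long_term_fts USING fts5(
    text, content='long_term', content_rowid='id'
);
""")
db.execute("INSERT INTO long_term (category, text) VALUES "
           "('lesson', 'verify trust with five-factor review')")
# keep the FTS index in sync with the content table
db.execute("INSERT INTO long_term_fts (rowid, text) SELECT id, text FROM long_term")

rows = db.execute(
    "SELECT rowid FROM long_term_fts WHERE long_term_fts MATCH 'trust'"
).fetchall()
print(rows)  # [(1,)]
```

In production the sync insert would be replaced by triggers so the FTS index tracks writes automatically.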

Edge Cases

| Situation | Action |
| --- | --- |
| Chinese + English mixed input | jieba auto-detects Chinese; English is space-tokenized |
| No search results | Fall back to simple keyword matching |
| L5 exceeds 1000 entries | Trigger dream processing (archive + forget + merge) |
| Dream active, new interaction | Dream pauses immediately; interaction wins |
| User says "forget xxx" | Soft delete in L5; L6 retains a copy for audit |
| Bulk history import | Write directly to L6, skipping layer traversal |
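The "no search results" fallback can be as simple as case-insensitive substring matching on the query's words. A minimal sketch of that edge case, not the skill's actual implementation.

```python
def keyword_fallback(query, docs):
    """Edge-case sketch: when TF-IDF returns nothing, fall back to
    plain keyword (substring) matching."""
    words = [w for w in query.lower().split() if w]
    return [d for d in docs if any(w in d.lower() for w in words)]

docs = ["dream processing log", "task checkpoint"]
print(keyword_fallback("Dream maintenance", docs))  # ['dream processing log']
```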

Quality Gates

| Check | What | Fail action |
| --- | --- | --- |
| Layers separated | Ephemeral/Short/Working/Contextual/Long/Archive distinct | Restructure |
| Search recall | "five-factor" retrieves "trust verification" | Check tokenization |
| Dream non-blocking | Normal interaction continues during dreams | Async flag |
| Forget traceable | Forgotten entries remain auditable | Soft delete |
| Chinese tokenization | Domain words preserved | Update custom_dict.txt |

Integration Points

Five-Factor Review

  • Important memory entries verified before writing
  • L5 entries carry FiveFactorResult

Self-Evolving Loop

  • Reflections → L4 contextual, verified lessons → L5 long-term
  • Dream consolidation output = new "best-practice" L5 entries

Engineering Lifecycle

  • Working L3 supports task checkpoint/resume
  • L4 stores engineering decision traces

Version History

  • v1.0.0 — Initial release. 6-layer memory architecture with dream processing, TF-IDF search, Chinese tokenization.

SharpAgent · MIT-0 · 2026-05-11

