neural-memory

Associative memory with spreading activation for persistent, intelligent recall. Use PROACTIVELY when: (1) you need to remember facts, decisions, errors, or context across sessions; (2) the user asks "do you remember..." or references past conversations; (3) you start a new task and want relevant context injected from memory; (4) a decision is made or an error occurs and should be stored for future reference; (5) the user asks "why did X happen?" and you need to trace causal chains through memory. Zero LLM dependency: a neural graph with Hebbian learning, memory decay, contradiction detection, and temporal reasoning.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy the command below and send it to your AI assistant to install the skill:

Install skill "neural-memory" with this command: npx skills add nhadaututtheky/neural-memory

NeuralMemory — Associative Memory for AI Agents

A biologically inspired memory system that uses spreading activation instead of keyword/vector search. Memories form a neural graph in which neurons connect via synapses of 20 types. Frequently co-accessed memories strengthen their connections (Hebbian learning). Stale memories decay naturally. Contradictions are auto-detected.

Why not just vector search? Vector search finds documents similar to your query. NeuralMemory finds conceptually related memories through graph traversal — even when there's no keyword or embedding overlap. "What decision did we make about auth?" activates time + entity + concept neurons simultaneously and finds the intersection.
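The traversal idea can be sketched in a few lines. This is an illustrative toy, assuming a simple weighted adjacency map with made-up node names; it is not NeuralMemory's actual algorithm:

```python
from collections import defaultdict

# Toy spreading activation: activation starts at query-seeded neurons
# and propagates along weighted edges, attenuating at each hop.
def spread(graph, seeds, decay=0.5, hops=2):
    activation = defaultdict(float)
    for seed in seeds:
        activation[seed] = 1.0
    frontier = dict(activation)
    for _ in range(hops):
        nxt = defaultdict(float)
        for node, act in frontier.items():
            for neighbor, weight in graph.get(node, []):
                nxt[neighbor] += act * weight * decay
        for node, act in nxt.items():
            activation[node] += act
        frontier = nxt
    return dict(activation)

# Hypothetical memory graph; edge weights stand in for synapse strengths.
graph = {
    "auth": [("decision:use-jwt", 0.9), ("sprint-12", 0.4)],
    "sprint-12": [("decision:use-jwt", 0.6)],
}
scores = spread(graph, ["auth", "sprint-12"])
# "decision:use-jwt" accumulates activation from both seeds even though
# it shares no keyword with the query terms.
```

The point of the sketch: the answer neuron is found at the intersection of activation spreading from several seed neurons, not by text similarity.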

Setup

1. Install NeuralMemory

pip install neural-memory

The brain and config at ~/.neuralmemory/ are auto-created on first use.

2. Install the OpenClaw Plugin (Recommended)

The plugin occupies the exclusive memory slot — auto-injects context before each agent run and auto-captures memories after.

# Install from npm
npm install -g neuralmemory

Add to ~/.openclaw/openclaw.json:

{
  "plugins": {
    "load": {
      "paths": ["<path-to-installed-plugin>"]
    },
    "entries": {
      "neuralmemory": {
        "enabled": true,
        "config": {
          "pythonPath": "python",
          "brain": "default",
          "autoContext": true,
          "autoCapture": true
        }
      }
    },
    "slots": {
      "memory": "neuralmemory"
    }
  }
}

Plugin features:

  • 6 tools registered automatically (nmem_remember, nmem_recall, nmem_context, nmem_todo, nmem_stats, nmem_health)
  • before_agent_start hook: injects tool instructions + relevant memories as context (persists across /new)
  • agent_end hook: auto-extracts facts, decisions, and TODOs from the conversation
  • Configurable: contextDepth (0-3), maxContextTokens (100-10000)

After installing, build the plugin:

cd <path-to-installed-plugin>
npm run build

This compiles TypeScript to JavaScript in dist/. The plugin entry point is dist/index.js.

Windows Installation

On Windows, use forward slashes or escaped backslashes in openclaw.json paths:

{
  "plugins": {
    "load": {
      "paths": ["C:/Users/<you>/AppData/Roaming/npm/node_modules/neuralmemory"]
    }
  }
}

To find the installed path:

npm list -g neuralmemory --parseable

If openclaw plugins list doesn't show the plugin:

  1. Verify the path in openclaw.json points to the package root (where package.json is)
  2. Ensure npm run build was run (the dist/ folder must exist with compiled .js files)
  3. Use python instead of python3 in the plugin config (Windows default)

Alternative: MCP Configuration (Manual)

If you prefer MCP over the plugin, add to ~/.openclaw/mcp.json:

{
  "mcpServers": {
    "neural-memory": {
      "command": "python",
      "args": ["-m", "neural_memory.mcp"],
      "env": {
        "NEURALMEMORY_BRAIN": "default"
      }
    }
  }
}

On Windows, use "python" (not "python3"). This gives you all 60 MCP tools but without the auto-context/auto-capture hooks.
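If the MCP server fails to start, a quick sanity check is whether the interpreter named in mcp.json can import the package at all. A minimal check, run with the same python command the config uses:

```python
import importlib.util

# If this prints "missing", the MCP server cannot start: install the
# package with pip, or point the config at the right interpreter.
status = "installed" if importlib.util.find_spec("neural_memory") else "missing"
print(status)
```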

3. Verify

nmem stats

You should see brain statistics (neurons, synapses, fibers).

Troubleshooting

| Symptom | Cause | Fix |
| --- | --- | --- |
| openclaw plugins list doesn't show plugin | Plugin path wrong or not built | Run npm run build, verify path in openclaw.json |
| Agent runs nmem remember in terminal | Agent confused CLI vs tool | Plugin now auto-injects tool instructions via systemPrompt |
| Agent forgets tools after /new | No tool instructions in new session | Plugin now injects systemPrompt on every before_agent_start |
| python3 not found (Windows) | Windows uses python, not python3 | Set pythonPath: "python" in plugin config |
| Timeout errors | Slow machine or large brain | Increase timeout in plugin config (max 120000 ms) |

Tools Reference

Core Memory Tools

| Tool | Purpose | When to Use |
| --- | --- | --- |
| nmem_remember | Store a memory | After decisions, errors, facts, insights, user preferences |
| nmem_recall | Query memories | Before tasks, when user references past context, "do you remember..." |
| nmem_context | Get recent memories | At session start, inject fresh context |
| nmem_todo | Quick TODO with 30-day expiry | Task tracking |

Intelligence Tools

| Tool | Purpose | When to Use |
| --- | --- | --- |
| nmem_auto | Auto-extract memories from text | After important conversations; captures decisions, errors, TODOs automatically |
| nmem_recall (depth=3) | Deep associative recall | Complex questions requiring cross-domain connections |
| nmem_habits | Workflow pattern suggestions | When user repeats similar action sequences |

Management Tools

| Tool | Purpose | When to Use |
| --- | --- | --- |
| nmem_health | Brain health diagnostics | Periodic checkup, before sharing brain |
| nmem_stats | Brain statistics | Quick overview of memory counts |
| nmem_version | Brain snapshots and rollback | Before risky operations, version checkpoints |
| nmem_transplant | Transfer memories between brains | Cross-project knowledge sharing |

Workflow

At Session Start

  1. Call nmem_context to inject recent memories into your awareness
  2. If user mentions a specific topic, call nmem_recall with that topic

During Conversation

  1. When a decision is made: nmem_remember with type="decision"
  2. When an error occurs: nmem_remember with type="error"
  3. When user states a preference: nmem_remember with type="preference"
  4. When asked about past events: nmem_recall with appropriate depth

At Session End

  1. Call nmem_auto with action="process" on important conversation segments
  2. This auto-extracts facts, decisions, errors, and TODOs

Examples

Remember a decision

nmem_remember(
  content="Use PostgreSQL for production, SQLite for development",
  type="decision",
  tags=["database", "infrastructure"],
  priority=8
)

Recall with spreading activation

nmem_recall(
  query="database configuration for production",
  depth=1,
  max_tokens=500
)

Returns memories found via graph traversal, not keyword matching. Related memories (e.g., "deploy uses Docker with pg_dump backups") surface even without shared keywords.

Trace causal chains

nmem_recall(
  query="why did the deployment fail last week?",
  depth=2
)

Follows CAUSED_BY and LEADS_TO synapses to trace cause-and-effect chains.
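The chain-following behavior can be pictured with a toy sketch. Here a plain dict (assumed, acyclic) stands in for the CAUSED_BY synapses of the real graph:

```python
# Hypothetical CAUSED_BY links: each event maps to its direct cause.
caused_by = {
    "deploy failed": "migration script errored",
    "migration script errored": "schema drift in staging",
}

def causal_chain(event, caused_by):
    # Walk backwards from an event to its root cause (assumes no cycles).
    chain = [event]
    while chain[-1] in caused_by:
        chain.append(caused_by[chain[-1]])
    return chain

chain = causal_chain("deploy failed", caused_by)
print(" <- ".join(chain))
# deploy failed <- migration script errored <- schema drift in staging
```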

Auto-capture from conversation

nmem_auto(
  action="process",
  text="We decided to switch from REST to GraphQL because the frontend needs flexible queries. The migration will take 2 sprints. TODO: update API docs."
)

Automatically extracts: 1 decision, 1 fact, 1 TODO.

Key Features

  • Zero LLM dependency — Pure algorithmic: regex, graph traversal, Hebbian learning
  • Spreading activation — Associative recall through neural graph, not keyword/vector search
  • 20 synapse types — Temporal (BEFORE/AFTER), causal (CAUSED_BY/LEADS_TO), semantic (IS_A/HAS_PROPERTY), emotional (FELT/EVOKES), conflict (CONTRADICTS)
  • Memory lifecycle — Short-term → Working → Episodic → Semantic with Ebbinghaus decay
  • Contradiction detection — Auto-detects conflicting memories, deprioritizes outdated ones
  • Hebbian learning — "Neurons that fire together wire together" — memory improves with use
  • Temporal reasoning — Causal chain traversal, event sequences, temporal range queries
  • Brain versioning — Snapshot, rollback, diff brain state
  • Brain transplant — Transfer filtered knowledge between brains
  • Vietnamese + English — Full bilingual support for extraction and sentiment
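Two of the mechanisms above have well-known textbook forms; the sketch below uses assumed formulas (a bounded Hebbian update and the Ebbinghaus curve R = exp(-t/S)), not NeuralMemory's actual internals:

```python
import math

def hebbian_update(weight, lr=0.1):
    # Move the synapse weight a fraction of the way toward 1.0
    # each time the two connected memories fire together.
    return weight + lr * (1.0 - weight)

def retention(days_since_access, stability=5.0):
    # Ebbinghaus forgetting curve: R = exp(-t / S).
    return math.exp(-days_since_access / stability)

w = 0.3
for _ in range(3):  # three co-activations strengthen the link
    w = hebbian_update(w)
print(round(w, 4))             # 0.4897
print(round(retention(5), 3))  # 0.368
```

Together these give the behavior described above: links get stronger with use, while untouched memories fade toward zero relevance.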

Depth Levels

| Depth | Name | Speed | Use Case |
| --- | --- | --- | --- |
| 0 | Instant | <10 ms | Quick facts, recent context |
| 1 | Context | ~50 ms | Standard recall (default) |
| 2 | Habit | ~200 ms | Pattern matching, workflow suggestions |
| 3 | Deep | ~500 ms | Cross-domain associations, causal chains |

Notes

  • Memories are stored locally in SQLite at ~/.neuralmemory/brains/<brain>.db
  • No data is sent to external services (unless optional embedding provider is configured)
  • Brain isolation: each brain is independent, no cross-contamination
  • nmem_remember returns fiber_id for reference tracking
  • Priority scale: 0 (trivial) to 10 (critical), default 5
  • Memory types: fact, decision, preference, todo, insight, context, instruction, error, workflow, reference
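Since brains are just local SQLite files, you can enumerate them directly. A small sketch assuming the default layout noted above (~/.neuralmemory/brains/<brain>.db):

```python
from pathlib import Path

def list_brains(root=None):
    # Returns brain names from the default storage directory;
    # empty list if no brain has been created yet.
    root = root or Path.home() / ".neuralmemory" / "brains"
    return sorted(p.stem for p in root.glob("*.db")) if root.exists() else []

print(list_brains())
```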

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
