local-memory-search

Local, offline semantic search over OpenClaw memory files (MEMORY.md and memory/*.md) using Python embeddings + FAISS; no online LLM or API calls. Use when Dave asks for "memory search" without cloud billing, wants semantic recall across notes, or wants a reusable, shareable local-memory semantic search skill.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

To install, copy this command and send it to your AI assistant (or run it directly):

npx skills add blackbasilisk/local-memory-search/blackbasilisk-local-memory-search-local-memory-search

Local Memory Search (offline)

This skill adds a local semantic search workflow for OpenClaw memory files:

  • MEMORY.md
  • memory/*.md

It builds a local vector index (FAISS) using your choice of backend: a lightweight LSA model (TF-IDF + SVD) or a small sentence-transformers embedding model.
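Conceptually, indexing splits each memory file into chunks, embeds each chunk, and stores the vectors alongside provenance (file, starting line). A minimal sketch of that pipeline, with a stand-in `embed()` in place of a real model and hypothetical helper names (`chunk_lines`, `build_index`); the actual internals of the skill's scripts may differ:

```python
import numpy as np

def embed(text, dim=64):
    """Stand-in embedder: deterministic pseudo-random unit vector per text.
    A real backend would use sentence-transformers or TF-IDF + SVD."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def chunk_lines(lines, size=5):
    """Split a file into fixed-size line chunks, keeping 1-based line provenance."""
    for start in range(0, len(lines), size):
        yield start + 1, "\n".join(lines[start:start + size])

def build_index(files):
    """files: {path: text}. Returns (vector matrix, per-chunk metadata)."""
    vectors, meta = [], []
    for path, text in files.items():
        for first_line, chunk in chunk_lines(text.splitlines()):
            vectors.append(embed(chunk))
            meta.append((path, first_line, chunk))
    # FAISS IndexFlatIP over normalized vectors is cosine similarity;
    # a plain matrix product does the same thing at this scale.
    return np.vstack(vectors), meta

def search(index, meta, query, top=3):
    sims = index @ embed(query)
    return [meta[i] for i in np.argsort(-sims)[:top]]
```

The provenance tuples are what make the "jump + quote" workflow below possible: a hit carries the file and line it came from.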

Requirements

  • Python 3.10+ in PATH (python --version)

Choose your backend (user choice)

A) Light (recommended): lsa (TF‑IDF + SVD)

  • No neural model
  • No HuggingFace downloads
  • Good “semantic-ish” matching (better than plain keyword search)

cd $env:USERPROFILE\.openclaw\workspace\skills\local-memory-search
python -m venv .venv
.\.venv\Scripts\python.exe -m pip install -U pip
.\.venv\Scripts\python.exe -m pip install -r .\scripts\requirements-lsa.txt
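The lsa backend's core idea (TF-IDF vectors reduced with SVD, then cosine similarity in the latent space) can be sketched with plain numpy. This is an illustration only, with toy documents and a hypothetical `search()` helper; the real `scripts/index_memory.py` internals may differ:

```python
import numpy as np

docs = [
    "configure o365 timezone outlook settings",
    "grocery list apples bananas milk",
    "mailbox timezone exchange settings",
]

def tokenize(text):
    return text.lower().split()

vocab = sorted({t for d in docs for t in tokenize(d)})
col = {t: i for i, t in enumerate(vocab)}

def tfidf(texts, idf=None):
    """Row-normalized TF-IDF matrix over the shared vocabulary."""
    tf = np.zeros((len(texts), len(vocab)))
    for r, d in enumerate(texts):
        for t in tokenize(d):
            if t in col:
                tf[r, col[t]] += 1
    if idf is None:  # fit IDF on the corpus, reuse it for queries
        df = (tf > 0).sum(axis=0)
        idf = np.log((1 + len(texts)) / (1 + df)) + 1.0
    m = tf * idf
    norms = np.maximum(np.linalg.norm(m, axis=1, keepdims=True), 1e-12)
    return m / norms, idf

X, idf = tfidf(docs)

# The "semantic-ish" part: project into a low-rank latent space via SVD.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
latent = lambda m: m @ Vt[:k].T

def search(query, top=1):
    q, _ = tfidf([query], idf=idf)
    qz, dz = latent(q)[0], latent(X)
    sims = (dz @ qz) / np.maximum(
        np.linalg.norm(dz, axis=1) * np.linalg.norm(qz), 1e-12)
    return [docs[i] for i in np.argsort(-sims)[:top]]
```

Because related terms load onto the same latent dimensions, this matches better than plain keyword search while needing no model download.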

B) Heavy (best semantic): sentence-transformers (Torch)

  • True neural embeddings
  • Bigger install (torch) + model download

cd $env:USERPROFILE\.openclaw\workspace\skills\local-memory-search
python -m venv .venv
.\.venv\Scripts\python.exe -m pip install -U pip
.\.venv\Scripts\python.exe -m pip install -r .\scripts\requirements-sentence-transformers.txt

Build / refresh the index

Default (light LSA backend):

.\.venv\Scripts\python.exe .\scripts\index_memory.py --workspace "$env:USERPROFILE\.openclaw\workspace" --backend lsa

Heavy backend example:

.\.venv\Scripts\python.exe .\scripts\index_memory.py --workspace "$env:USERPROFILE\.openclaw\workspace" --backend sentence-transformers --model "sentence-transformers/all-MiniLM-L6-v2"

Default workflow: jump + quote (recommended)

This matches the recommended operational pattern:

  1. semantic jump to the best chunk
  2. open the source file
  3. print exact lines (with a little context)

# Minimal output (default): prints just the best-matching lines
.\.venv\Scripts\python.exe .\scripts\jump_memory.py --query "o365 timezone config" --top 1

# If you want provenance:
.\.venv\Scripts\python.exe .\scripts\jump_memory.py --query "o365 timezone config" --top 1 --show-source --show-line-numbers --context 2

Tip: for cleaner quotes, keep chunk overlap at 0 (the default) when indexing.

Search (semantic only)

.\.venv\Scripts\python.exe .\scripts\search_memory.py --query "o365 timezone config" --top 5

If your index was built with a different backend/model, search_memory.py automatically picks them up from the index metadata. You can also override them explicitly:

.\.venv\Scripts\python.exe .\scripts\search_memory.py --backend fastembed --model "BAAI/bge-small-en-v1.5" --query "..."

Notes

  • Index is stored under: ~/.openclaw/credentials/local-memory-search/
  • Re-run indexing after you edit memory files.
  • This is a local alternative to OpenClaw's built-in memory_search tool.

