paperforge

Use this skill when the user wants a Zotero-bounded local research agent that stores AI summaries, tracks contributions and limitations, and assembles evidence for idea-centered Q&A in OpenClaw or Codex.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Installation

Install the skill with: npx skills add xxtars/paperforge/xxtars-paperforge-paperforge

PaperForge

This skill turns a Zotero library into a local research memory that OpenClaw or Codex can reason over.

PaperForge supports optional built-in acquisition tasks for OpenClaw or Codex, but still treats Zotero as the source boundary for the core memory loop.

This skill is self-contained. If installed by itself, initialize local config from assets/paperforge.config.example.json via scripts/init_config.py.
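A minimal sketch of a standalone config bootstrap. The field names below are illustrative assumptions, not the skill's actual schema; the authoritative template is assets/paperforge.config.example.json, consumed by scripts/init_config.py.

```shell
# Sketch: hand-write a local config for standalone use.
# Field names here are hypothetical -- copy assets/paperforge.config.example.json
# or run scripts/init_config.py for the real schema.
cat > paperforge.config.json <<'EOF'
{
  "zotero_export_dir": "/path/to/zotero/export",
  "workspace_dir": "workspace"
}
EOF

# Sanity-check that the file is valid JSON before passing it to any script.
python3 -m json.tool paperforge.config.json
```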

Use This Skill For

  • importing Zotero exports and attachment paths into a local workspace
  • watching Zotero for changes and triggering incremental sync
  • creating executor-facing acquisition tasks for finding and importing papers into Zotero
  • continuing the workflow after papers have been added to Zotero, either manually or through a PaperForge acquisition task
  • detecting papers that still need an AI summary
  • generating executor task files for paper summaries
  • storing structured paper memory after the executor reads a paper
  • assembling evidence bundles for novelty, limitation, and idea-refinement questions

Do Not Use This Skill For

  • replacing Zotero as the source of truth for raw papers
  • embedding model logic into local scripts
  • broad web search before checking local evidence

Core Principle

Split responsibilities cleanly:

  • acquisition is optional: human download or PaperForge acquisition task
  • Zotero stores papers and human corrections
  • local scripts prepare, persist, retrieve, and package evidence
  • OpenClaw or Codex performs the actual model reasoning

Minimal Workflow

  1. If paperforge.config.json does not exist yet, create it with python3 skills/paperforge/scripts/init_config.py
  2. Optional: create an acquisition task with python3 skills/paperforge/scripts/init_acquisition_task.py --config paperforge.config.json --title "paper title"
  3. If acquisition happened, save the result with python3 skills/paperforge/scripts/save_acquisition_result.py --config paperforge.config.json --input /path/to/result.json
  4. Run python3 skills/paperforge/scripts/watch_zotero.py --config paperforge.config.json --interval 30 for automatic change detection, or run sync_zotero.py manually.
  5. Inspect workspace/index/pending_summaries.json
  6. For each pending paper:
    • run python3 skills/paperforge/scripts/init_summary_task.py --config paperforge.config.json --paper-id <paper_id>
    • read the generated task file and references/summary_prompt.md
    • read the linked PDF
    • produce structured memory following references/schemas.md
    • persist it with python3 skills/paperforge/scripts/save_memory.py --config paperforge.config.json --paper-id <paper_id> --input /path/to/memory.json
  7. Before answering a research question, build a context bundle:
    • python3 skills/paperforge/scripts/build_context.py --config paperforge.config.json --question "..." --idea workspace/ideas/current_idea.md
  8. Read references/qa_prompt.md and use the generated context bundle as the grounding package for the executor's final answer.
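The per-paper summary loop above can be sketched as a small driver script. This assumes pending_summaries.json is a JSON array of paper ids, which is only a guess at the layout; check the file your sync run actually produces, and note that the loop body here only echoes the command it would run.

```shell
# Mock a pending list so the sketch is self-contained; a real run would
# use the file written by sync_zotero.py / watch_zotero.py.
mkdir -p workspace/index
echo '["paper-001", "paper-002"]' > workspace/index/pending_summaries.json

# Iterate pending paper ids (assumed format: a flat JSON array of strings).
for id in $(python3 -c 'import json; print("\n".join(json.load(open("workspace/index/pending_summaries.json"))))'); do
  echo "would run: init_summary_task.py --config paperforge.config.json --paper-id $id"
  # ... then read the task file and PDF, write memory.json, and persist it
  # with save_memory.py --paper-id "$id" --input memory.json ...
done
```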

When To Read References

  • Read references/schemas.md when creating or validating AI summary payloads.
  • Read references/acquisition.md when using PaperForge's built-in acquisition flow.
  • Read references/acquisition_prompt.md and references/acquisition_result_schema.md when completing an acquisition task.
  • Read references/summary_prompt.md when preparing a single-paper AI summary.
  • Read references/qa_prompt.md when answering a grounded idea question from a context bundle.
  • Read references/workflow.md when you need the full end-to-end process for sync, summarization, and idea-centered Q&A.

File Rules

  • source.json stores imported Zotero metadata and attachment paths.
  • memory.json stores structured AI understanding of a paper.
  • context/*.json stores evidence bundles for downstream reasoning.
  • assets/ contains templates and example inputs required to use the skill standalone.
  • Keep memory.json machine-readable and concise. Put the final long-form answer in executor output, not inside the memory file.
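To make the memory.json rule concrete, here is one possible shape for a concise, machine-readable memory file. Every field name below is hypothetical; the authoritative structure is defined in references/schemas.md.

```shell
# Illustrative memory.json payload -- field names are hypothetical,
# see references/schemas.md for the real schema.
cat > memory.json <<'EOF'
{
  "paper_id": "paper-001",
  "contributions": ["One-line claim per contribution"],
  "limitations": ["Stated or inferred limitation"],
  "evidence": [{"claim": "short supporting claim", "location": "Section 4.2"}]
}
EOF

# Keep it valid JSON and terse; long-form prose belongs in executor output.
python3 -m json.tool memory.json
```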

Retrieval Rules

  • Always search local memory.json files before using the web.
  • Use acquisition tasks only to get papers into Zotero.
  • Use the web only when the local workspace lacks enough evidence or the user explicitly asks for newer external work.
  • When web evidence is added, label it as external rather than Zotero-backed.
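A local-first retrieval check can be as simple as grepping stored memory files for the question's key terms before reaching for the web. The workspace layout below (one memory.json per paper directory) is an assumption for illustration, not the skill's documented layout.

```shell
# Mock one stored memory file so the sketch runs standalone;
# the workspace/papers/<id>/memory.json layout is assumed.
mkdir -p workspace/papers/paper-001
echo '{"paper_id": "paper-001", "summary": "contrastive retrieval baseline"}' \
  > workspace/papers/paper-001/memory.json

query="retrieval"
if grep -rl "$query" workspace/papers --include=memory.json; then
  echo "local evidence found; answer from the workspace"
else
  echo "no local evidence; consider an acquisition task or a labeled external search"
fi
```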
