# Session Wrap-Up

Review the current session for learnings worth persisting and self-improvement signals.

## REVIEW-ONLY MODE

This skill analyzes and suggests. It does NOT apply changes without explicit user confirmation.

- Do NOT write to any file until the user confirms specific items
- Do NOT create new files unless the user approves
- Present all findings for review, then wait for confirmation before each application step
## Step 0: Discover Project Root

Run the discovery script:

```shell
bash "$DISCOVER" root
```

Where `$DISCOVER` is the absolute path to `scripts/discover.sh` within this skill's directory.

Path safety — shell state does not persist between tool calls, so you must provide full script paths on each call:

- Use `$HOME` instead of the literal home directory (e.g., `bash "$HOME/code/…/discover.sh"`, not `bash "/Users/name/…/discover.sh"`). This prevents username hallucination.
- Copy values from tool output. When reusing a value returned by a previous command (like `$PROJECT_ROOT`), copy it verbatim from that command's output. Never retype a path from memory.
- Verify on first call: if a script call fails with "No such file", the path is wrong — STOP and re-derive from the skill-loading context.
- Never ignore a non-zero exit. If any script in this skill fails, stop and report the error before continuing.

Store the output as `$PROJECT_ROOT`. If the command fails, inform the user and stop.
## Analyze Session

### Step 1: Read Existing Documentation

Before analyzing, read these files to avoid surfacing items already documented:

- `$PROJECT_ROOT/CLAUDE.md` (or equivalent project docs: `AGENTS.md`, `GEMINI.md`) — whichever exists, store its name as `$PROJECT_DOCS` for use in routing targets later
- `$PROJECT_ROOT/.claude/rules/` (or equivalent scoped rules directory) — if it exists, read all files in the directory; store the path as `$SCOPED_RULES_DIR`
- `$PROJECT_ROOT/CLAUDE.local.md` (or equivalent personal project docs) — if it exists, store its path as `$PERSONAL_PROJECT_DOCS`
- `~/.claude/CLAUDE.md` (or equivalent user global docs) — if it exists, store its path as `$USER_GLOBAL_DOCS`
### Step 1b: Read Feedback History

Check if `$PROJECT_ROOT/.dev/wrapup-feedback.json` exists. If it does, read it and analyze the patterns before scanning the conversation.

The feedback file contains records from previous wrapup sessions. It can exist in two formats:

Simple format (early sessions) — a flat array of session records:

```json
[
  { "session": "...", "findings": [...] },
  ...
]
```

Compacted format (after 30+ sessions) — aggregate stats plus recent raw records:

```json
{
  "aggregate": {
    "sessions_summarized": 35,
    "total_findings": 142,
    "skip_rates": { "convention": 0.82, "friction": 0.65, "gotcha": 0.25 },
    "accept_rates": { "preference": 0.88, "gotcha": 0.70 },
    "reroute_map": {
      "gotcha→scoped_rules": { "actual": "user_global", "frequency": 0.85 },
      "convention→project_docs_add": { "actual": "personal_memory", "frequency": 0.60 }
    }
  },
  "recent": [ /* last 20 raw records */ ]
}
```
If the file has ≥30 raw records (simple format), compact it before proceeding:

- Compute aggregate stats from all records except the most recent 20:
  - `skip_rates`: for each finding type, count skips ÷ total proposals
  - `accept_rates`: for each finding type, count accepts ÷ total proposals
  - `reroute_map`: for each type→proposed_dest pair that was rerouted more than once, record the most common actual_dest and its frequency
  - `sessions_summarized`: number of sessions folded into the aggregate
  - `total_findings`: total findings across summarized sessions
- Keep the 20 most recent raw records in `recent`
- Write the compacted format back to the file
- Log: "Compacted wrapup feedback: [N] older sessions summarized, [20] recent sessions preserved."
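As a concrete sketch, the compaction above could look like this in Python. This is non-normative: the skill carries out these steps by reading and rewriting the file itself, and the record fields follow the feedback record schema defined in Step 6.

```python
import json
from collections import Counter, defaultdict

def compact(records, keep=20):
    """Fold all but the `keep` most recent session records into aggregate stats."""
    old, recent = records[:-keep], records[-keep:]
    proposals = Counter()            # total proposals per finding type
    skips = Counter()                # skips per finding type
    accepts = Counter()              # accepts per finding type
    reroutes = defaultdict(Counter)  # (type, proposed_dest) -> actual_dest counts
    total = 0
    for session in old:
        for f in session["findings"]:
            total += 1
            proposals[f["type"]] += 1
            if f["decision"] == "skip":
                skips[f["type"]] += 1
            elif f["decision"] == "accept":
                accepts[f["type"]] += 1
            elif f["decision"] == "reroute":
                reroutes[(f["type"], f["proposed_dest"])][f["actual_dest"]] += 1
    reroute_map = {}
    for (ftype, dest), actuals in reroutes.items():
        if sum(actuals.values()) > 1:  # rerouted more than once
            actual, n = actuals.most_common(1)[0]
            reroute_map[f"{ftype}→{dest}"] = {
                "actual": actual,
                "frequency": round(n / sum(actuals.values()), 2),
            }
    return {
        "aggregate": {
            "sessions_summarized": len(old),
            "total_findings": total,
            "skip_rates": {t: round(skips[t] / proposals[t], 2) for t in proposals},
            "accept_rates": {t: round(accepts[t] / proposals[t], 2) for t in proposals},
            "reroute_map": reroute_map,
        },
        "recent": recent,
    }
```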
If already in compacted format and `recent` has ≥30 records, re-compact:

- Take the oldest 10 records from `recent` (first 10 in the array)
- Merge their stats into the existing `aggregate`:
  - Recompute `skip_rates` and `accept_rates` as weighted averages: (old_rate × old_total + new_count) / (old_total + new_total) for each finding type, where new_count is the number of skips (or accepts) among the merged records and the totals are proposal counts
  - Update `reroute_map` — if a new reroute pattern emerges or an existing frequency changes, update it
  - Increment `sessions_summarized` and `total_findings`
- Remove those 10 records from `recent`, leaving the 20 most recent
- Write back to the file
- Log: "Re-compacted wrapup feedback: merged [10] sessions into aggregate ([N] total summarized), [20] recent sessions preserved."

If already in compacted format and `recent` has <30 records, no compaction needed — use as-is.
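The weighted-average update can be sketched as a small helper (non-normative; here `new_count` means the number of skips, or accepts, among the proposals being folded in):

```python
def merge_rates(old_rate, old_total, new_count, new_total):
    """Fold new proposals into an aggregate rate for one finding type:
    (old_rate * old_total + new_count) / (old_total + new_total).
    old_total and new_total are proposal counts; new_count is how many
    of the new proposals were skips (or accepts, for accept_rates)."""
    if old_total + new_total == 0:
        return 0.0
    return (old_rate * old_total + new_count) / (old_total + new_total)

# e.g., a historical skip rate of 0.82 over 100 proposals, with 3 skips
# among 10 newly folded proposals, updates to roughly 0.77
```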
Extract these patterns (from whichever format is present):

- Skip patterns — Which finding types does this user consistently skip? If a type has been skipped in ≥70% of past proposals (from aggregate `skip_rates` or computed from raw records), treat it as low-value and raise the bar significantly before proposing that type again. Only propose it if the finding is exceptionally specific and actionable.
- Reroute patterns — Which type→destination pairs get rerouted, and where do they actually end up? Use `reroute_map` from the aggregate or compute from raw records. If gotcha → scoped_rules gets rerouted to user_global most of the time, default to user_global for that type going forward.
- Accept patterns — Which type→destination pairs does the user consistently accept? These are your strongest signal — lean into more proposals that match these patterns.
- Trend detection — When both aggregate and recent records exist, compare them. If a type's skip rate in the recent records differs significantly from the aggregate (e.g., 80% skipped historically but only 30% recently), favor the recent pattern — the user's preferences have shifted.

Store the extracted patterns as `$FEEDBACK_PATTERNS` for use in Step 3.

If the feedback file does not exist, proceed normally — this is the first wrapup session, or feedback tracking hasn't started yet.
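Trend detection can be sketched as follows. The 0.3 divergence threshold is an illustrative assumption; the text above only requires that the rates differ significantly.

```python
def effective_skip_rate(aggregate_rate, recent_records, ftype, shift=0.3):
    """Prefer the recent skip rate for a finding type when it diverges
    from the historical aggregate by more than `shift` (an assumed
    threshold); otherwise keep the aggregate rate."""
    proposals = [f for s in recent_records
                 for f in s["findings"] if f["type"] == ftype]
    if not proposals:
        return aggregate_rate  # no recent evidence for this type
    recent_rate = sum(f["decision"] == "skip" for f in proposals) / len(proposals)
    return recent_rate if abs(recent_rate - aggregate_rate) > shift else aggregate_rate
```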
### Step 2: Scan Conversation

Verify the conversation contains substantive exchanges (at least one user message and one assistant response beyond the skill invocation itself). If the conversation history appears empty or contains only the skill invocation, state: "No conversation history available to review. Run this skill at the end of a working session, not at the start." and STOP.

Review the full conversation history for findings worth persisting or acting on. If the session was short or routine with nothing notable, state "Nothing to report from this session." and stop.

What to scan for:

- Corrections — Places where the user corrected the assistant's approach, naming, or assumptions
- Stated preferences — "Always do X", "Never do Y", "I prefer Z"
- Project conventions — Patterns discovered during implementation (naming, file structure, API style)
- Gotchas — Pitfalls or workarounds encountered
- Friction — Repeated manual steps, things the user had to ask for explicitly
- Mistakes — Errors the assistant made and corrected
- Skill gaps — Knowledge the assistant lacked or got wrong
- Automation opportunities — Repetitive patterns that could become scripts or skills
Quality filters — apply strictly:

- Be selective — Only surface items that would genuinely change behavior in future sessions. If the finding wouldn't alter how you approach a task, skip it.
- Be specific — "Use snake_case for database columns" beats "follow naming conventions"
- Skip duplicates — Do not surface items already in project docs, rules, user global, or auto memory (read in Step 1). PRD files (e.g., in `.dev/`) count as existing documentation only if they are git-tracked (not gitignored). If the PRD directory is gitignored, findings documented there are transient and should still be routed to persistent project docs.
- Skip session-specific context — Do not record task details, in-progress state, or temporary debugging notes
- Skip general knowledge — Standard language/library/framework behavior that any experienced developer knows is not worth persisting. Only persist if the behavior is non-obvious AND project-specific or likely to recur in this codebase.
- Prefer team-shared destinations — When in doubt about where something belongs, default to project docs over personal memory. Most valuable findings are things the team should know.
### Step 3: Classify and Route Findings

For each finding, assign a type and a destination.

Finding types:

| Type | Description |
|------|-------------|
| `convention` | Coding style, naming, architecture patterns |
| `preference` | User workflow choices, stated preferences |
| `fact` | Project-specific knowledge |
| `gotcha` | Pitfalls or workarounds |
| `friction` | Repeated manual steps or slowdowns |
| `mistake` | Errors made and corrected |
| `skill-gap` | Knowledge the assistant lacked |
| `automation` | Repetitive patterns that could become scripts |
Destinations:

Every AI coding tool offers similar tiers of persistent documentation. This skill uses general concepts mapped to tool-specific paths:

| Destination | What belongs here | Claude Code | Codex | Gemini CLI |
|-------------|-------------------|-------------|-------|------------|
| Project docs (update) | Corrections to existing team documentation | `CLAUDE.md` edit | `AGENTS.md` edit | `GEMINI.md` edit |
| Project docs (add) | New team knowledge: conventions, architecture, operations, gotchas | `CLAUDE.md` add | `AGENTS.md` add | `GEMINI.md` add |
| Scoped rules | Invariants tied to specific files; forgetting risks silent breakage | `.claude/rules/<topic>.md` | subdirectory `AGENTS.md` | subdirectory `GEMINI.md` |
| User global | Personal preferences that apply across ALL projects | `~/.claude/CLAUDE.md` | `~/.codex/AGENTS.md` | `~/.gemini/GEMINI.md` |
| Personal project | Private, ephemeral, or machine-specific project context | `CLAUDE.local.md` | — | — |
| Personal memory | AI self-notes: non-instructional observations about user or project | auto memory | — | `save_memory` |
Decision tree — evaluate in order, stop at first match:

1. Does this correct or extend something already documented? → Project docs (update)
2. Would the team benefit from knowing this? (conventions, architecture decisions, build/test/deploy commands, project-wide gotchas, common mistakes in this codebase) → Project docs (add)
3. Is this tied to specific files where forgetting causes silent breakage? → Scoped rules
4. Is this a personal preference that applies across ALL projects? ("always use X", "never do Y" regardless of which project) → User global
5. Is this private or machine-specific context for this project? (local environment, personal test data, temporary workarounds) → Personal project
6. Is this a non-instructional observation that provides useful context? (debugging history, how the user works, codebase quirks that aren't actionable instructions) → Personal memory
7. None of the above → skip it. Not every finding needs to be persisted.
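The decision tree can be sketched as a first-match routing function. This is a hypothetical illustration: the boolean keys stand in for judgment calls the assistant makes while classifying, not fields that actually exist on a finding.

```python
def route(finding):
    """Return the destination key for a finding, or None to skip it.
    Evaluates the decision tree in order, stopping at the first match."""
    if finding.get("corrects_documented"):   # 1. corrects/extends existing docs
        return "project_docs_update"
    if finding.get("team_benefit"):          # 2. the team should know this
        return "project_docs_add"
    if finding.get("file_tied_breakage"):    # 3. forgetting breaks specific files
        return "scoped_rules"
    if finding.get("cross_project_pref"):    # 4. personal, applies to ALL projects
        return "user_global"
    if finding.get("machine_specific"):      # 5. private/machine-specific context
        return "personal_project"
    if finding.get("context_only"):          # 6. non-instructional observation
        return "personal_memory"
    return None                              # 7. not worth persisting
```

Because evaluation stops at the first match, a finding that both benefits the team and is tied to specific files still routes to project docs.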
Routing guard rails:

- If you can phrase it as an instruction ("do X", "avoid Y", "use Z when W"), it is NOT personal memory — route to project docs, scoped rules, or config instead.
- If the finding would help a new team member onboard, it belongs in project docs.
- If more than half your findings route to personal memory, re-evaluate — you are likely under-using project docs.
Feedback-adjusted routing — If `$FEEDBACK_PATTERNS` exist from Step 1b, apply them now:

- For each finding, check if its type → destination pair matches a known reroute pattern. If so, use the user's historical preferred destination instead.
- For findings whose type has a ≥70% skip rate, remove them unless they are significantly more specific or actionable than the typical skipped examples in the feedback history.
- For findings matching high-accept patterns, keep them with confidence.
- If in doubt between two destinations and feedback history shows a clear preference, follow the history.
Self-check — After routing all findings, review once before presenting:

- Count destinations. Does >50% go to personal memory? If yes, re-route: for each personal memory item, re-apply the "phrasable as instruction" test and the decision tree from step 1.
- For each personal memory item, verify it truly fails all earlier decision tree steps (1–5). If it matches an earlier step, re-route it there.
- This is a single pass. If after re-evaluation the distribution still exceeds 50% personal memory, accept it — the findings are genuinely personal memory items. Do not re-check more than once.
Routing examples:

| Finding | Correct destination | Why |
|---------|---------------------|-----|
| "Tests must run with `--no-cache` flag" | Project docs (add) | Operational instruction the team needs |
| "User prefers small, incremental commits" | User global | Cross-project personal preference |
| "Payment module silently swallows errors in catch blocks" | Scoped rules | File-tied gotcha; forgetting causes bugs |
| "User corrected: use `pnpm` not `npm`" | Project docs (update or add) | Team should know the package manager |
| "Flaky test in `auth.spec.ts` caused by timezone mismatch" | Personal memory | Debugging context, not an instruction |
| "Always run migrations before seeding" | Project docs (add) | Operational instruction, not a personal note |
## Step 4: Present Findings

If no findings, state: "Nothing to report from this session." and skip to the summary.

Present findings in two parts:

Part A — Detailed Analysis: For each finding, write a short paragraph explaining what happened, why it matters, and the proposed action. Number each finding.

Session Findings:

1. **[Title]** (type → destination)
   [2-3 sentence explanation of what happened, why it matters, and what to persist or do.]
2. **[Title]** (type → destination)
   [2-3 sentence explanation...]

Part B — Recap Table: After the detailed analysis, present a summary table:

| # | Type | Finding | Destination | Target |
|---|------|---------|-------------|--------|
| 1 | gotcha | [Short description] | Project docs (update) | `$PROJECT_DOCS` |
| 2 | convention | [Short description] | Scoped rules | `$SCOPED_RULES_DIR/naming.md` |

Which items would you like to apply? Reply with the numbers (e.g., "1, 3"), "all", or "none" to skip.

STOP. Wait for the user to select items before proceeding.
## Step 5: Apply Confirmed Items

For each confirmed item, apply based on its destination:

Project docs (update) items:

- Read `$PROJECT_DOCS`
- Locate the existing section that needs correction
- Present the proposed diff: "I'll change [old] to [new] in [section]"
- Apply after confirmation

Project docs (add) items:

- Read `$PROJECT_DOCS`
- Find the most appropriate existing section for the new content
- Present the proposed addition: "I'll add this under [section]: [content]"
- Apply after confirmation

Scoped rules items:

- Check if `$SCOPED_RULES_DIR/<topic>.md` exists
- If it exists, read it and present the proposed append
- If it doesn't exist, present the new file content (include `paths:` frontmatter scoped to relevant files/directories)
- Apply after confirmation

Scoped rules (update) items:

- Read the existing rule file
- Present the proposed diff
- Apply after confirmation
User global items:

- Check if `$USER_GLOBAL_DOCS` exists
- If it does NOT exist, inform the user: "[path] does not exist. This item requires creating it. Proceed?" Wait for confirmation before creating.
- Read `$USER_GLOBAL_DOCS`
- Find or create an appropriate section
- Present the proposed addition — note that this affects ALL projects
- Apply after confirmation

Personal project items:

- Check if `$PERSONAL_PROJECT_DOCS` exists
- If it does NOT exist, inform the user: "[path] does not exist. This item requires creating it. Proceed?" Wait for confirmation before creating.
- Read `$PERSONAL_PROJECT_DOCS`
- Present the proposed content
- Apply after confirmation

Personal memory items:

- Save the content to your auto memory
- Use concise, specific phrasing (e.g., "Project uses pnpm, not npm")
- For detailed items, specify a topic file name (e.g., "save to debugging topic")
- Index entries in MEMORY.md should be brief pointers; details go in topic files
- Confirm: "Saved to auto memory: [brief description]"

Automation items:

- Present the automation idea as a suggested next step (do not create scripts in this skill)
- Format: "Consider creating a script or skill for: [description]"

After applying, confirm: "Applied [N] items. [M] automation suggestions noted for future work."
## Step 6: Record Feedback

After the user has made all their decisions (accept, skip, reroute, or "none"), append a feedback record to `$PROJECT_ROOT/.dev/wrapup-feedback.json`.

If the file doesn't exist, create it with an empty array `[]` first.

Feedback record schema:

```json
{
  "session": "YYYY-MM-DD",
  "findings": [
    {
      "type": "convention",
      "summary": "Short description of the finding",
      "proposed_dest": "project_docs_add",
      "decision": "accept",
      "actual_dest": "project_docs_add"
    },
    {
      "type": "gotcha",
      "summary": "Short description of the finding",
      "proposed_dest": "scoped_rules",
      "decision": "reroute",
      "actual_dest": "user_global"
    },
    {
      "type": "friction",
      "summary": "Short description of the finding",
      "proposed_dest": "project_docs_add",
      "decision": "skip",
      "actual_dest": null
    }
  ]
}
```

Field values:

- `decision`: one of `accept`, `skip`, `reroute`
- `proposed_dest`: the destination the skill originally proposed, using these keys: `project_docs_update`, `project_docs_add`, `scoped_rules`, `user_global`, `personal_project`, `personal_memory`, `automation`
- `actual_dest`: the destination the user confirmed (same keys), or `null` if skipped
- `summary`: a brief description of the finding (no sensitive data, no absolute paths)

Append the new record to the existing array (or to `recent` if the file is in compacted format). Do not overwrite previous records — the history is the learning signal.

If the user replied "none" (skipped all findings), still record the feedback with all findings marked as `skip`. This is valuable negative signal.
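The append step can be sketched like this (non-normative; it handles both file formats from Step 1b, and the path argument stands in for `$PROJECT_ROOT/.dev/wrapup-feedback.json`):

```python
import json
from pathlib import Path

def record_feedback(path, record):
    """Append one session's feedback record, creating the file if needed.
    Handles both formats: a flat array of sessions (simple) or a dict
    with an "aggregate" and a "recent" array (compacted)."""
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    data = json.loads(p.read_text()) if p.exists() else []
    if isinstance(data, dict):
        data["recent"].append(record)   # compacted format
    else:
        data.append(record)             # simple format
    p.write_text(json.dumps(data, indent=2))
```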
## Summary

Report what was accomplished:

Session wrap-up complete.

- Items applied: [N] items to [list destinations touched]
- Automation ideas: [M] noted for future work (or "none")
- Files modified: [list each file that was changed, or "none"]
## PRIVACY RULES

NEVER include in any destination file:

- Absolute paths with usernames — use relative paths from project root
- Secrets, API keys, tokens, credentials — use placeholders (`<API_KEY>`, `$ENV_VAR`)
- Personal information (names, emails) — use generic references