Skills published by xsyetopz, with real star/download counts and source-aware metadata.

Total Skills: 23
Total Stars: 69
Total Downloads: 0
Comparison chart based on real star and download signals from the source data. Downloads are zero across the board, so the values below are stars per skill:

caveman-commit: 3
caveman-compress: 3
caveman-help: 3
caveman-review: 3
caveman: 3
debug: 3
decide: 3
design-polish: 3
Draft terse commit messages in Caveman style without changing git state.
Compress prose-first memory/docs files into Caveman style, preserving code, links, and structure while saving a .original.md backup.
Explain Caveman modes, boundaries, and escape hatches without changing code or repo state.
Produce one-line or ultra-terse code review verdicts when the user explicitly asks for Caveman review output.
Ultra-terse response mode that compresses assistant prose while preserving technical substance. Supports off, lite, full, ultra, wenyan-lite, wenyan, and wenyan-ultra. Use when the user wants fewer tokens, shorter answers, or Caveman mode.
Structured debugging workflow for reproducing failures, narrowing root causes, and collecting evidence before changing code. Triggers: debug, debugging, root cause, reproduce, failing behavior, broken flow, investigate bug, triage failure, isolate issue, regression analysis.
Shared collaboration protocol for presenting options, tradeoffs, and rationale in technical decisions. Triggers: present options, tradeoffs, collaboration, decision making, "which approach", "how should we", alternatives, options analysis, trade-off.
Refine frontend/UI/UX work so it stops looking like generic AI output. Use for layout, typography, spacing, interaction polish, design system cleanup, and frontend visual audits across Claude, Codex, OpenCode, and Copilot.
Detect and remove AI-generated linguistic slop from code, comments, documentation, READMEs, changelogs, commit messages, and any text artifacts. Use whenever the user mentions "AI slop", "deslop", "remove AI-isms", "sounds like AI", "too AI", "clean up AI writing", "make it sound human", or asks to review text for AI patterns. Also trigger when reviewing any AI-generated documentation, comments, or prose, even if the user doesn't explicitly mention AI slop, if the content exhibits hallmark AI writing patterns such as filler adjectives, hedge phrases, or obvious code comments. Trigger for any request to "clean up", "tighten", or "edit" AI-generated text, and when auditing codebases for comment quality.
Standards for writing and maintaining documentation: READMEs, changelogs, ADRs, API docs, and inline docs. Triggers: documentation, docs, README, changelog, ADR, architecture decision record, API documentation, doc standards, write docs, update docs.
Use when evaluating or improving code organization, naming, public API shape, subsystem boundaries, registration patterns, or runtime-state ownership. Triggers: elegance, clean codebase, ownership boundaries, naming discipline, god file split, public API cleanup, table-driven registration, runtime state organization, subsystem boundaries, API shape, module organization.
Error handling patterns, Result types, exception strategies, and error recovery. Triggers: error handling, Result type, exception, error recovery, error propagation, unwrap, panic, try-catch, error boundary, error types, anyhow, thiserror.
Evidence-first codebase exploration for repo mapping, architecture reading, component inventory, and research-heavy understanding work. Triggers: explore, research, investigate codebase, understand project, map repo, architecture walkthrough, component inventory, where is, how does this work.
Git workflow conventions for commits, branches, and PRs. Use when the user mentions commit flow, PR flow, branching, rebasing, release hygiene, or Git workflow rules.
Export session context as a structured handoff file for explicit transfer or archival. Use when the user wants a handoff file, session export, cross-tool transfer artifact, or explicit saved summary before ending. Do not treat this as the default Claude continuity path.
Interactive onboarding guide for new openagentsbtw users. Explains the agent system, nano workflow, platform usage, shared surfaces, and safety hooks. Triggers: onboard, onboarding, getting started, how does this work, what is openagentsbtw, help me get started, new user, setup guide, walkthrough.
Use the openagentsbtw role system and nano workflow (Research -> Plan -> Execute -> Review -> Ship) when work benefits from explicit planning, implementation, review, testing, documentation, or multi-agent orchestration.
Performance optimization patterns, profiling guidance, and common bottleneck identification. Triggers: performance, optimize, slow, bottleneck, profiling, latency, throughput, memory usage, CPU usage, N+1, caching, benchmark.
Enforces coding standards, design principles, naming conventions, and anti-patterns when writing, editing, or reviewing code. The single source of truth for code quality rules. Triggers: implement, write code, add function, create module, refactor, edit code, fix bug, review code, check quality, review PR, validate implementation, code review, restructure, extract, rename, move, simplify, decompose, clean up code, reduce complexity, split module.
Security audit checklist covering OWASP Top 10, secrets management, dependency vulnerabilities, and hardening. Triggers: security, audit, OWASP, vulnerability, hardening, CVE, secrets, injection, XSS, CSRF, authentication, authorization, penetration test.
Detects project code style conventions and injects them as context. Preloaded by @hephaestus.
Test design patterns, strategies, and coverage guidance for writing effective tests. Triggers: write tests, test patterns, testing strategy, test coverage, unit test, integration test, mock, fixture, test architecture, TDD, test-driven.
Structured tracing workflow for dependencies, call paths, data flow, and change impact analysis. Triggers: trace, dependency trace, call graph, data flow, impact analysis, who uses this, what depends on this, downstream effects, upstream callers, symbol trace.