Author Profile: Dev-Jahn

Skills published by Dev-Jahn with real stars/downloads and source-aware metadata.

Total Skills: 2
Total Stars: 0
Total Downloads: 0

Skills Performance

Comparison based on stars and downloads signals from the source data:

autoresearch-run: 0 stars, 0 downloads
autoresearch-setup: 0 stars, 0 downloads

Published Skills

Web3

autoresearch-run

This skill should be used when the user asks to "start the autoresearch loop", "kick off overnight iteration", "begin autonomous experiment runs", "run /autoresearch:run", "run the autoresearch expr <slug>", "continue the autoresearch loop", "resume autoresearch", "chain through follow-up experiments", or otherwise hand off an ML experiment to the autonomous runner. Drives the self-propelling train.py iteration loop on a configured `.autoresearch/{expr}/` experiment — one-line edit, `ar run`, read `result.json`, decide next edit, repeat — for hours or days until a termination condition fires. Context-minimized so thousands of iterations fit in a single session. Invoke immediately without asking clarifying questions beyond the structured interview; the skill itself is self-driving and must never stop mid-loop to ask the user "continue?" — Ctrl+C is the only authorized interrupt.

Repository Source | Needs Review
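The iteration protocol described above (one-line edit, `ar run`, read `result.json`, decide, repeat, stop only on a termination condition) can be sketched as a minimal loop. This is an illustrative assumption, not the skill's published implementation: `run_once` and `should_stop` are hypothetical stand-ins for the edit/run/read step and the termination check.

```python
from typing import Callable

def autoresearch_loop(run_once: Callable[[int], dict],
                      should_stop: Callable[[dict], bool],
                      max_iters: int = 1000) -> list[dict]:
    """Sketch of the edit -> run -> read -> decide loop.

    run_once(i) stands in for one full iteration: apply a one-line
    edit to train.py, invoke the (assumed) `ar run` CLI, and parse
    result.json back into a dict. Matching the skill's contract, the
    loop never pauses to ask the user anything; it exits only when
    should_stop fires or max_iters is reached.
    """
    history: list[dict] = []
    for i in range(max_iters):
        result = run_once(i)      # edit + `ar run` + read result.json
        history.append(result)
        if should_stop(result):   # a termination condition fired
            break
    return history
```

In a real session, `run_once` would shell out with `subprocess.run(["ar", "run"], ...)` and read `result.json` from the experiment directory; the abstract version above just makes the decide-and-repeat structure explicit.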
Coding

autoresearch-setup

Scaffolds a new autonomous-research experiment directory (`.autoresearch/{YYMMDD}-{slug}/`) inside a deep-learning project so Claude can run a long train.py-mutation loop without blowing context. This skill should be used when the user asks to "start an autoresearch experiment", "set up autonomous research loop on this project", "create a new .autoresearch run", "scaffold autoresearch", "initialize autoresearch for this repo", "kick off an autonomous training loop", "set up Karpathy-style autoresearch here", or otherwise indicates they want Claude to begin autonomous iteration on their ML research code. The skill performs a venv preflight, analyzes the project's editable-install Python packages, surfaces primary-metric candidates from whichever tracker the host uses (wandb / tensorboard / plain stdout logs), introspects the host's training entrypoint (argparse-CLI script vs importable main() function vs hydra app), and infers the distributed framework (accelerate / torchrun / FSDP / DDP / pytorch-lightning / none).

Repository Source | Needs Review
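The setup skill's detection steps (tracker discovery, distributed-framework inference) could look roughly like the sketch below. The exact detection rules of autoresearch-setup are not published, so these heuristics, the function names, and the marker strings are all illustrative assumptions.

```python
from pathlib import Path

def detect_tracker(project: Path) -> str:
    """Guess which experiment tracker the host project uses.

    Hypothetical heuristic: a wandb/ run directory implies wandb,
    tfevents files imply tensorboard, otherwise fall back to
    plain stdout logs.
    """
    if (project / "wandb").exists():
        return "wandb"
    if any(project.rglob("events.out.tfevents*")):
        return "tensorboard"
    return "stdout"

def detect_launcher(train_py_text: str) -> str:
    """Infer the distributed launch framework from the entrypoint source.

    Checks simple substring markers in train.py, most specific first;
    returns "none" when no marker matches.
    """
    markers = [
        ("accelerate", "accelerate"),
        ("torchrun", "torchrun"),
        ("lightning", "pytorch-lightning"),
        ("FSDP", "FSDP"),
        ("DistributedDataParallel", "DDP"),
    ]
    for needle, name in markers:
        if needle in train_py_text:
            return name
    return "none"
```

Substring matching like this is deliberately crude; a production version would presumably parse imports or inspect installed packages, but the sketch shows the shape of the source-aware preflight the description outlines.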
Author Dev-Jahn | V50.AI