project-onboarding

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install skill "project-onboarding" with this command: npx skills add oimiragieo/agent-studio/oimiragieo-agent-studio-project-onboarding

When to Use

Invoke this skill when:

  • Starting work on an unfamiliar codebase

  • After context is lost (new session)

  • When check_onboarding_performed indicates no memories exist

  • When user asks to "learn about this project" or "understand this codebase"

Onboarding Workflow

Step 1: Check Existing Knowledge

First, check if onboarding was already performed:

List files in: .claude/context/memory/
Look for: project-structure.md, build-commands.md, test-commands.md

If memories exist, read them and skip to Step 6 (Validation).
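This check can be sketched in POSIX shell. The memory path and file names come from this skill's conventions; `memories_exist` is a hypothetical helper name:

```shell
# Return 0 if all three onboarding memories exist under $1, 1 otherwise.
memories_exist() {
  for f in project-structure.md build-commands.md test-commands.md; do
    [ -f "$1/$f" ] || return 1
  done
  return 0
}

if memories_exist ".claude/context/memory"; then
  echo "memories found: read them and skip to Step 6"
else
  echo "no memories: run full discovery"
fi
```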

Step 2: Project Discovery

First, classify the project:

Greenfield vs Brownfield Detection

| Indicator | Present? | Classification |
|---|---|---|
| `.git` directory with history | Yes | Brownfield |
| Package manifest (`package.json`, `requirements.txt`, etc.) | Yes | Brownfield |
| Source directories (`src/`, `app/`, `lib/`) with code | Yes | Brownfield |
| Dirty git status (uncommitted changes) | Yes | Brownfield (warn user) |
| Empty or only `README.md` | None of the above | Greenfield |
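The detection table can be approximated with a small shell helper. This is a rough sketch that checks only a few of the indicators above, and `classify` is a hypothetical name:

```shell
# Classify a directory as greenfield or brownfield from simple indicators.
classify() {
  dir="$1"
  if [ -d "$dir/.git" ] || [ -f "$dir/package.json" ] \
     || [ -f "$dir/requirements.txt" ] || [ -d "$dir/src" ]; then
    echo "brownfield"
  else
    echo "greenfield"   # empty, or no recognizable project indicators
  fi
}
```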

For Brownfield Projects:

  • Respect Ignore Files: Check .gitignore and .claudeignore BEFORE scanning

  • Efficient File Triage:
      • Use git ls-files to list tracked files (respects .gitignore)
      • For large files (>1MB): read only the head and tail (first and last 20 lines)
      • Skip binary files, node_modules, and build artifacts

  • Infer Tech Stack: Analyze manifests before asking questions

  • Context-Aware Questions: Base questions on discovered patterns

```shell
# Efficient file listing (respects .gitignore)
git ls-files --exclude-standard -co | head -100

# For non-git projects with manual ignores
find . -type f \
  -not -path '*/node_modules/*' \
  -not -path '*/.git/*' \
  -not -path '*/dist/*' \
  -not -path '*/build/*' \
  | head -100
```
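The head/tail triage for large files might look like this. The 1 MB threshold and 20-line window follow the guidance above; `triage` is a hypothetical helper:

```shell
# Print a file whole if small, or only its head and tail if over 1 MB.
triage() {
  f="$1"
  size=$(wc -c < "$f" | tr -d ' \t')
  if [ "$size" -gt 1048576 ]; then
    head -n 20 "$f"
    echo "... [$size bytes, middle skipped] ..."
    tail -n 20 "$f"
  else
    cat "$f"
  fi
}
```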

For Greenfield Projects:

  • Create fresh context artifacts

  • Use interactive-requirements-gathering skill for setup

Analyze the project root to identify:

Package Manager & Language:

| Manifest | Language |
|---|---|
| `package.json` | Node.js/JavaScript/TypeScript |
| `pyproject.toml`, `requirements.txt` | Python |
| `Cargo.toml` | Rust |
| `go.mod` | Go |
| `pom.xml`, `build.gradle` | Java |
| `composer.json` | PHP |

Project Type:

  • Frontend, Backend, Fullstack, Library, CLI, Mobile, Monorepo

Framework Detection:

  • Parse dependencies for frameworks (React, Next.js, FastAPI, etc.)
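Manifest-based language inference can be sketched as follows. It covers only the manifests listed above, and `detect_language` is a hypothetical name:

```shell
# Map a project's manifest file to its primary language.
detect_language() {
  dir="$1"
  if   [ -f "$dir/package.json" ];   then echo "JavaScript/TypeScript"
  elif [ -f "$dir/pyproject.toml" ] || [ -f "$dir/requirements.txt" ]; then echo "Python"
  elif [ -f "$dir/Cargo.toml" ];     then echo "Rust"
  elif [ -f "$dir/go.mod" ];         then echo "Go"
  elif [ -f "$dir/pom.xml" ] || [ -f "$dir/build.gradle" ]; then echo "Java"
  elif [ -f "$dir/composer.json" ];  then echo "PHP"
  else echo "unknown"
  fi
}
```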

Step 3: Build System Analysis

Identify how to build/run the project:

Check package.json scripts (Node.js):

```json
{
  "scripts": {
    "dev": "...",
    "build": "...",
    "start": "...",
    "test": "..."
  }
}
```

Check Makefiles (Python, Go, Rust):

```makefile
build:
test:
lint:
```

Check pyproject.toml (Python):

```toml
[tool.poetry.scripts]
[tool.poe.tasks]
```

Document discovered commands:

  • Development: npm run dev, uv run dev

  • Build: npm run build, cargo build

  • Test: npm test, pytest

  • Lint: npm run lint, ruff check
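One way to enumerate package.json scripts without running any of them (assumes python3 is on PATH; `list_scripts` is a hypothetical helper):

```shell
# Print "name: command" for every script in a package.json.
list_scripts() {
  python3 - "$1" <<'EOF'
import json, sys
with open(sys.argv[1]) as f:
    pkg = json.load(f)
for name, cmd in pkg.get("scripts", {}).items():
    print(f"{name}: {cmd}")
EOF
}
```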

Step 4: Directory Structure Mapping

Map key directories:

| Directory | Purpose |
|---|---|
| `src/` | Source code |
| `lib/` | Library code |
| `test/`, `tests/`, `__tests__/` | Test files |
| `docs/` | Documentation |
| `scripts/` | Utility scripts |
| `config/` | Configuration files |

Identify:

  • Entry points (index.ts, main.py, app.py)

  • Component directories

  • API routes

  • Database models
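The directory mapping can be sketched as a case statement over top-level directories. `map_dirs` is a hypothetical helper, and the classifications mirror the table above:

```shell
# Classify top-level directories of $1 by naming convention.
map_dirs() {
  ( cd "$1" || return 1
    for d in */; do
      [ -d "$d" ] || continue   # no subdirectories at all
      case "$d" in
        src/|lib/|app/)          echo "$d source code" ;;
        test/|tests/|__tests__/) echo "$d test files" ;;
        docs/)                   echo "$d documentation" ;;
        scripts/)                echo "$d utility scripts" ;;
        config/)                 echo "$d configuration" ;;
        *)                       echo "$d unclassified" ;;
      esac
    done )
}
```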

Step 5: Create Onboarding Memories

Save discovered information to persistent memories:

Memory: project-structure.md

```markdown
# Project Structure

## Overview
- Project Type: [fullstack/backend/frontend/library]
- Primary Language: [TypeScript/Python/Go/Rust]
- Framework: [Next.js/FastAPI/Express/etc.]

## Key Directories
- Source: src/
- Tests: test/
- Config: .claude/

## Entry Points
- Main: src/index.ts
- API: src/api/

## Important Files
- Configuration: package.json, tsconfig.json
- Environment: .env.example
```

Memory: build-commands.md

```markdown
# Build Commands

## Development
- Start dev server: npm run dev
- Watch mode: npm run watch

## Build
- Production build: npm run build
- Type check: npm run typecheck

## Clean
- Clean build: npm run clean
```

Memory: test-commands.md

```markdown
# Test Commands

## Unit Tests
- Run all: npm test
- Watch mode: npm test -- --watch
- Coverage: npm test -- --coverage

## E2E Tests
- Run: npm run test:e2e

## Linting
- Lint: npm run lint
- Fix: npm run lint:fix
```
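Writing a memory file is a plain file write; here is a minimal sketch using the build-commands template above (the path follows this skill's conventions):

```shell
# Persist discovered build commands to a named memory file.
MEMORY_DIR="${MEMORY_DIR:-.claude/context/memory}"
mkdir -p "$MEMORY_DIR"
cat > "$MEMORY_DIR/build-commands.md" <<'EOF'
# Build Commands

## Development
- Start dev server: npm run dev

## Build
- Production build: npm run build
EOF
echo "wrote $MEMORY_DIR/build-commands.md"
```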

Step 6: Validation

Validate discovered information:

Test Commands (if safe):

  • Run npm --version or equivalent to verify package manager

  • Run npm run --silent to list available scripts

  • Do NOT run build or test without user permission

Verify Paths:

  • Confirm key directories exist

  • Verify entry points are correct
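A safe existence check for discovered tools runs nothing but `command -v`; `verify_tool` is a hypothetical helper:

```shell
# Report whether a tool is on PATH without executing it.
verify_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1 available"
  else
    echo "$1 MISSING"
  fi
}
```

Follow up an "available" result with `npm --version` (or the equivalent) before documenting the command.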

Step 7: Report Summary

Output a concise summary:

Onboarding Complete

Project: [name]
Type: [fullstack/backend/etc.]
Framework: [Next.js/FastAPI/etc.]

Quick Commands:

  • Dev: npm run dev
  • Test: npm test
  • Build: npm run build

Key Locations:

  • Source: src/
  • Tests: test/
  • API: src/api/

Memories Created:

  • .claude/context/memory/project-structure.md
  • .claude/context/memory/build-commands.md
  • .claude/context/memory/test-commands.md

Agent Actions:

  • Check for existing memories in .claude/context/memory/

  • If no memories, run project discovery

  • Analyze package.json, directory structure

  • Create memory files

  • Report summary

Output:

Onboarding Complete

Project: agent-studio
Type: Multi-agent orchestration framework
Framework: Claude Code + Custom agents

Quick Commands:

  • Validate: node .claude/tools/cli/validate-agents.mjs
  • Test hooks: node .claude/hooks/routing/router-enforcer.cjs (uses .claude/lib/routing/routing-table.cjs)

Key Locations:

  • Agents: .claude/agents/
  • Skills: .claude/skills/
  • Memory: .claude/context/memory/

Memories Created: 3 files


Iron Laws

  • ALWAYS check for existing onboarding memories before running discovery — duplicate onboarding creates contradictory knowledge entries that mislead future agents.

  • NEVER assume standard conventions without verification — run commands to confirm what actually works in this specific project environment.

  • ALWAYS write discovered commands and paths to persistent memory files — session context resets, but named memory files survive indefinitely.

  • ALWAYS verify discovered commands with a safe test step before documenting — incorrect commands in onboarding memories mislead every agent that reads them.

  • NEVER complete onboarding without revisiting memories when the project evolves — stale onboarding artifacts are more dangerous than no onboarding.

Anti-Patterns

| Anti-Pattern | Why It Fails | Correct Approach |
|---|---|---|
| Assuming standard conventions without checking | Every project has unique build/test/lint commands; wrong assumptions cause silent failures | Read package.json, Makefile, or pyproject.toml and run `--version` checks to confirm |
| Skipping verification of discovered commands | Documented-but-wrong commands mislead every future agent session | Run each command with a safe no-op or `--help` flag to confirm it works |
| Storing onboarding only in session context | Context resets on every new conversation; discoveries are permanently lost | Write all findings to named memory files in .claude/context/memory/ |
| Treating onboarding as a one-time event | Projects evolve; stale commands fail silently and waste agent time | Update onboarding memories after any significant project structure change |
| Over-documenting without prioritizing key commands | Long files with low-priority info bury the critical build/test commands | Structure memories with Quick Start commands at the top, details below |

Memory Protocol (MANDATORY)

Before starting: Read .claude/context/memory/learnings.md

After completing:

  • New pattern discovered -> .claude/context/memory/learnings.md

  • Issue encountered -> .claude/context/memory/issues.md

  • Decision made -> .claude/context/memory/decisions.md

ASSUME INTERRUPTION: If it's not in memory, it didn't happen.
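Appending to a memory file can be as simple as the sketch below; `log_memory` is a hypothetical helper, and the date format is a choice:

```shell
# Append a dated entry to a memory file, creating parents as needed.
log_memory() {
  file="$1"; shift
  mkdir -p "$(dirname "$file")"
  printf -- "- %s: %s\n" "$(date +%Y-%m-%d)" "$*" >> "$file"
}
```

Usage: `log_memory .claude/context/memory/learnings.md "tests use vitest, not jest"`.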

