llms-txt-generator

Generates llms.txt documentation optimized for LLMs. Use when the user says "crear llms.txt", "documentar para AI", "crear documentación para LLMs", "generar docs para modelos", or wants to make the repo readable to Claude/AI.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

To install the "llms-txt-generator" skill, run:

npx skills add testacode/llm-toolkit/testacode-llm-toolkit-llms-txt-generator

LLM Documentation Generator

Generate structured, AI-readable documentation following the llms.txt standard with granular files organized by domain.

Output Structure

llm-docs/
├── llm.txt                    # Main index (~1-2 KB)
├── llm.version.txt            # Metadata and sync info (~0.3 KB)
└── llm.{domain}.txt           # Domain-specific files (~3-50 KB each)

Workflow

Phase 1: Language Selection

Ask the user for their preferred language before starting:

¿En qué idioma prefieres la documentación? / What language do you prefer?
- Español
- English
- Bilingual (technical terms in English, explanations in Spanish)

Phase 2: Project Analysis

Identify project type and data sources:

| Indicator | Project Type |
|-----------|--------------|
| `components/`, design tokens, SCSS | Frontend/UI Library |
| `cmd/`, CLI flags, subcommands | CLI Tool |
| `/api/`, OpenAPI, routes | REST/GraphQL API |
| `src/`, exports, `package.json` | Generic Library |

Detect structured data sources:

  • JSON metadata files (component docs, OpenAPI specs)
  • JSDoc/GoDoc comments
  • TypeScript definitions
  • Configuration files (package.json, go.mod)
  • Existing documentation (README, docs/)

Phase 3: Domain Planning

Based on project type, plan which llm.{domain}.txt files to create:

Frontend/UI: See references/frontend-example.md

  • tokens, utilities, styles, brands
  • components-atoms, components-molecules, components-organisms

CLI Tools: See references/cli-example.md

  • commands, core, gateway, deployment, resources, testing, usage

APIs: See references/api-example.md

  • endpoints, models, auth, errors, examples

Libraries: See references/library-example.md

  • api, internals, patterns, examples

Phase 4: Implementation Decision

Choose approach based on data availability:

| Condition | Approach |
|-----------|----------|
| Structured data exists (JSON, JSDoc, OpenAPI) | Create generator script |
| Manual documentation needed | Write static markdown files |
| Mixed sources | Hybrid: script for structured data, manual for the rest |

Generator script benefits:

  • Auto-updates when code changes
  • DRY principle: single source of truth
  • Consistent formatting
  • Add npm/make script: generate:llms

Phase 5: File Generation

llm.version.txt (always first)

# {Project} LLM Documentation

- **Version**: {semantic version}
- **Last Updated**: {YYYY-MM-DD}
- **Documentation Version**: 1.0.0
- **Files**: {count} domain files
- **Total Size**: ~{X} KB
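The template above can be filled by a small rendering function. A minimal sketch, assuming a `meta` object whose fields map one-to-one onto the template placeholders (the object shape is an assumption, not defined by the skill):

```javascript
// Sketch: render llm.version.txt from a metadata object.
// Field names (project, version, updated, ...) are assumptions
// matching the template placeholders above.
function renderVersionFile(meta) {
  return [
    `# ${meta.project} LLM Documentation`,
    '',
    `- **Version**: ${meta.version}`,
    `- **Last Updated**: ${meta.updated}`,
    `- **Documentation Version**: ${meta.docsVersion}`,
    `- **Files**: ${meta.fileCount} domain files`,
    `- **Total Size**: ~${meta.totalKb} KB`,
  ].join('\n');
}
```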

llm.txt (main index)

# {Project} - LLM Documentation

## Project Metadata
- **Name**: {project name}
- **Type**: {frontend|cli|api|library}
- **Language**: {primary language}
- **Purpose**: {one-line description}

## Quick Reference
- **Key Modules**: {list main areas}
- **Patterns**: {architectural patterns used}
- **Dependencies**: {key dependencies}

## Documentation Structure

### {Domain 1}
#### llm.{domain1}.txt
- **Focus**: {what this file covers}
- **Use when**: {scenarios to read this file}

### {Domain 2}
...

## Reading Guide

1. Start with `llm.version.txt` for metadata
2. Read `llm.{primary-domain}.txt` for core concepts
3. Reference other files as needed

llm.{domain}.txt (domain files)

Each domain file follows this structure:

# {Domain} - {Project}

## Overview
{2-3 sentences explaining this domain}

## {Section 1}

| Name | Type | Description |
|------|------|-------------|
| ... | ... | ... |

## {Section 2}

### {Subsection}
{Content with code examples}

## Related Files
- `llm.{related}.txt` - {why related}
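The Name/Type/Description table in the domain-file structure above lends itself to generation from extracted data. A hedged sketch, assuming row objects with `name`, `type`, and `description` fields (the row shape is hypothetical; a real extractor would supply it):

```javascript
// Sketch: render the Name/Type/Description markdown table
// used in domain files from an array of row objects.
function renderTable(rows) {
  const lines = [
    '| Name | Type | Description |',
    '|------|------|-------------|',
  ];
  for (const r of rows) {
    lines.push(`| ${r.name} | ${r.type} | ${r.description} |`);
  }
  return lines.join('\n');
}
```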

Best Practices

  1. File size: Keep each file under 50 KB for optimal LLM context usage
  2. Cross-references: Link between files with clear "when to read" guidance
  3. Tables: Use markdown tables for properties, tokens, parameters
  4. Code examples: Include practical, copy-pasteable examples
  5. Hierarchy: Use consistent heading levels (H1 for title, H2 for sections, H3 for subsections)

Generator Script Pattern

When creating a generator script:

// Structure
const config = { COMPONENTS_DIR, OUTPUT_DIR, ... };

// Utilities
function readFile(path) { ... }
function writeOutput(filename, content) { ... }

// Extractors (one per data source)
function extractComponents() { ... }
function extractTokens() { ... }

// Generators (one per output file)
function generateIndex() { ... }
function generateVersion() { ... }
function generateDomain() { ... }

// Main
function main() {
  // Extract all data
  // Generate all files
  // Log summary
}

// Export for testing (list the actual functions; `extractors` and
// `generators` are not defined as objects in this skeleton)
module.exports = {
  extractComponents, extractTokens,
  generateIndex, generateVersion, generateDomain,
};

// Run if main
if (require.main === module) main();

Add to package.json:

{
  "scripts": {
    "generate:llms": "node build-scripts/create-llms-docs.js"
  }
}

Complete Output Example

Project: Deployment CLI

After analyzing a CLI tool, the skill generates:

llm-docs/llm.version.txt

# DeployCLI LLM Documentation

- **Version**: 2.1.0
- **Last Updated**: 2025-12-15
- **Documentation Version**: 1.0.0
- **Files**: 4 domain files
- **Total Size**: ~35 KB

llm-docs/llm.txt

# DeployCLI - LLM Documentation

## Project Metadata
- **Name**: deploy-cli
- **Type**: CLI Tool
- **Language**: TypeScript
- **Purpose**: Deploy applications to multiple cloud providers

## Quick Reference
- **Key Modules**: commands, providers, config
- **Patterns**: Command pattern, Provider abstraction
- **Dependencies**: commander, chalk, ora

## Documentation Structure

### Commands
#### llm.commands.txt
- **Focus**: All CLI commands and subcommands
- **Use when**: Need to understand available commands and flags

### Providers
#### llm.providers.txt
- **Focus**: Cloud provider integrations (AWS, GCP, Vercel)
- **Use when**: Adding or modifying provider support

### Configuration
#### llm.config.txt
- **Focus**: Config file format and options
- **Use when**: Understanding how users configure the CLI

llm-docs/llm.commands.txt

# Commands - DeployCLI

## Overview
DeployCLI exposes 5 main commands for deployment management.

## Commands

| Command | Description | Flags |
|---------|-------------|-------|
| `deploy` | Deploy to target provider | `--provider`, `--env`, `--dry-run` |
| `rollback` | Revert to previous deployment | `--version`, `--force` |
| `status` | Check deployment status | `--watch`, `--json` |
| `config` | Manage configuration | `--init`, `--validate` |
| `logs` | Stream deployment logs | `--follow`, `--since` |

## deploy

Main deployment command.

### Usage
deploy-cli deploy --provider aws --env production

### Flags
- `--provider, -p`: Target provider (aws, gcp, vercel)
- `--env, -e`: Environment (development, staging, production)
- `--dry-run`: Simulate without deploying
- `--config, -c`: Path to config file

## Related Files
- `llm.providers.txt` - Provider-specific deployment details
- `llm.config.txt` - Configuration options for deployments

Using the Output

Once generated, the files can be:

  1. Included in AI prompts:

    @llm-docs/llm.commands.txt How do I deploy to staging?
    
  2. Referenced in CLAUDE.md:

    ## LLM Documentation
    See `llm-docs/` for AI-optimized documentation.
    
  3. Maintained automatically:

    npm run generate:llms  # Regenerate after changes
    

