llm-tldr
AI Agent's Code Intelligence Tool - Transform massive codebases into actionable insights for AI assistants
Description
llm-tldr is your AI agent's secret weapon for understanding large codebases. Instead of drowning in 100K+ lines of raw code, get structured analysis that fits in your context window with 95% token savings and 300x faster queries.
Repository: parcadei/llm-tldr Language: Python Stars: 285 License: Apache License 2.0
When to Use This Skill
Use this skill when your AI agent needs to:
- Analyze large codebases that exceed token limits
- Debug complex bugs across multiple files
- Understand code dependencies and call graphs
- Find functions by behavior, not just text search
- Generate accurate code changes with full context
- Refactor code safely with impact analysis
- Provide codebase overviews for new team members
- Optimize LLM context usage for coding tasks
What It Does
llm-tldr provides 5 layers of code intelligence:
- AST Analysis - Function/class structure extraction
- Call Graphs - Who calls what, reverse dependencies
- Control Flow - Code execution paths and complexity
- Data Flow - Variable tracing and transformations
- Program Slicing - Minimal code affecting specific lines
Plus semantic search - Find code by what it does, not what it's called
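To make the program-slicing layer concrete, here is a toy backward slicer over straight-line assignments. This is a concept illustration only, not llm-tldr's implementation (which also handles control and data flow): starting from a target line, keep only the statements whose defined variables the target transitively depends on.

```python
# Toy backward slicer (concept demo only, not llm-tldr's algorithm).
# Each statement is (line_no, defined_var, used_vars).
def backward_slice(stmts, target_line):
    needed = None  # variables still awaiting a defining statement
    keep = []
    for line_no, defined, used in reversed(stmts):
        if needed is None:
            if line_no == target_line:
                needed = set(used)
                keep.append(line_no)
            continue
        if defined in needed:
            keep.append(line_no)
            needed |= set(used)
            needed.discard(defined)  # this definition satisfies the dependency
    return sorted(keep)

program = [
    (1, "a", []),     # a = input()
    (2, "b", []),     # b = 5
    (3, "c", ["a"]),  # c = a + 1
    (4, "d", ["b"]),  # d = b * 2
    (5, "e", ["c"]),  # e = c - 3   <- slice target
]
print(backward_slice(program, 5))  # → [1, 3, 5]; lines 2 and 4 never affect line 5
```

The same idea, applied to real control and data flow, is why `tldr slice` can reduce a long function to a handful of relevant lines.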
How It Works
Quick setup for any codebase
pip install llm-tldr
tldr warm /path/to/project             # Index once (~30-60s)
tldr context main --project .          # Get function summary (99% token savings)
tldr semantic "validate JWT tokens" .  # Natural language search
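A minimal Python sketch (not part of llm-tldr) of driving these commands from an agent harness via subprocess. It assumes the `tldr` executable installed by llm-tldr is on PATH, and returns None when it is not so the caller can fall back to reading files directly.

```python
# Hypothetical agent-side wrapper; only the subcommands shown in this
# document (warm, context, semantic) are used.
import shutil
import subprocess

def run_tldr(*args):
    if shutil.which("tldr") is None:
        return None  # llm-tldr not installed; caller should handle this
    result = subprocess.run(["tldr", *args], capture_output=True, text=True)
    return result.stdout

# Index once, then query, mirroring the quick setup above.
run_tldr("warm", "/path/to/project")
summary = run_tldr("context", "main", "--project", ".")
matches = run_tldr("semantic", "validate JWT tokens", ".")
```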
For AI agents:
- Before reading code: tldr tree src/ or tldr structure src/ --lang python
- Before editing: tldr context function_name --project .
- Before refactoring: tldr impact function_name .
- During debugging: tldr slice file.py function 42
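The phase-to-command mapping above can be encoded as a small dispatcher. This helper is illustrative and not part of llm-tldr; the command names come from this document, while the function and its `phase` keys are hypothetical.

```python
# Hypothetical helper: choose the tldr command for each agent phase.
def tldr_command(phase, **kw):
    if phase == "read":       # before reading code
        return ["tldr", "tree", kw["path"]]
    if phase == "edit":       # before editing
        return ["tldr", "context", kw["function"], "--project", kw.get("project", ".")]
    if phase == "refactor":   # before refactoring
        return ["tldr", "impact", kw["function"], kw.get("project", ".")]
    if phase == "debug":      # during debugging
        return ["tldr", "slice", kw["file"], kw["function"], str(kw["line"])]
    raise ValueError(f"unknown phase: {phase}")

print(tldr_command("debug", file="file.py", function="function", line=42))
# → ['tldr', 'slice', 'file.py', 'function', '42']
```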
Why It Matters for AI Agents
Problem: AI agents struggle with large codebases
- Token limits force incomplete context
- Raw code dumps overwhelm reasoning
- Missing dependencies cause incorrect changes
- No understanding of code behavior vs. naming
Solution: llm-tldr gives agents surgical precision
- 95% token reduction for function contexts
- 300x faster queries (100ms vs 30s)
- Behavior-based search finds code by purpose
- Dependency tracing prevents breaking changes
- Multi-language support (16 languages)
Examples
Example 1: Debug null pointer issue
Without llm-tldr: Read 150-line function manually, trace variables, miss control flow bug
With llm-tldr:
tldr slice src/auth.py login 42
Output: Only 6 relevant lines showing the bug:
3:  user = db.get_user(username)
7:  if user is None:
12:     raise NotFound
28: token = create_token(user)  # ← BUG: skipped null check
35: session.token = token
42: return session
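The slice points at `create_token(user)` being reachable without the null check. A hedged sketch of what the corrected path might look like; `Db`, `NotFound`, `create_token`, and `Session` are hypothetical stand-ins for the names in the slice output, not llm-tldr's or the real project's code.

```python
# Hypothetical reconstruction of the fixed login() path.
class NotFound(Exception):
    pass

class Db:
    def __init__(self, users):
        self.users = users
    def get_user(self, username):
        return self.users.get(username)

def create_token(user):
    return f"token-for-{user}"

class Session:
    def __init__(self):
        self.token = None

def login(db, username):
    user = db.get_user(username)   # slice line 3
    if user is None:               # slice line 7: the check must guard
        raise NotFound(username)   # every path that reaches create_token
    token = create_token(user)     # slice line 28: now safe, user is non-null
    session = Session()
    session.token = token          # slice line 35
    return session                 # slice line 42

print(login(Db({"alice": "alice"}), "alice").token)  # → token-for-alice
```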
Example 2: Find authentication code
Without llm-tldr: Text search for "auth" finds comments but misses actual validation logic
With llm-tldr:
tldr semantic "validate JWT tokens and check expiration" .
Finds: verify_access_token(), even though "JWT" never appears in the name, because the call graph reveals its purpose
Example 3: Safe refactoring
Without llm-tldr: Guess which tests to run, risk breaking dependencies
With llm-tldr:
tldr impact login .
tldr change-impact src/auth.py
Output: Exact list of callers and affected test files
Quick Reference
Repository Info
- Homepage:
- Topics:
- Open Issues: 3
- Last Updated: 2026-01-13
Languages
- Python: 100.0%
Recent Releases
No releases available
Available References
- references/README.md - Complete README documentation
- references/CHANGELOG.md - Version history and changes
- references/issues.md - Recent GitHub issues
- references/releases.md - Release notes
- references/file_structure.md - Repository structure
Usage
See README.md for complete usage instructions and examples.
Generated by Skill Seeker | GitHub Repository Scraper
Technique Map
- Role definition - Clarifies operating scope and prevents ambiguous execution.
- Context enrichment - Captures required inputs before actions.
- Output structuring - Standardizes deliverables for consistent reuse.
- Step-by-step workflow - Reduces errors by making execution order explicit.
- Edge-case handling - Documents safe fallbacks when assumptions fail.
Technique Notes
These techniques improve reliability by making intent, inputs, outputs, and fallback paths explicit. Keep this section concise and additive so existing domain guidance remains primary.
Prompt Architect Overlay
Role Definition
You are the prompt-architect-enhanced specialist for lev-find-llm-tldr, responsible for deterministic execution of this skill's guidance while preserving existing workflow and constraints.
Input Contract
- Required: clear user intent and relevant context for this skill.
- Preferred: repository/project constraints, existing artifacts, and success criteria.
- If context is missing, ask focused questions before proceeding.
Output Contract
- Provide structured, actionable outputs aligned to this skill's existing format.
- Include assumptions and next steps when appropriate.
- Preserve compatibility with existing sections and related skills.
Edge Cases & Fallbacks
- If prerequisites are missing, provide a minimal safe path and request missing inputs.
- If scope is ambiguous, narrow to the highest-confidence sub-task.
- If a requested action conflicts with existing constraints, explain and offer compliant alternatives.