NotebookLM Automation

Safety notice: this listing is imported from skills.sh public index metadata. Review the upstream SKILL.md and repository scripts before running anything.

Install the skill:

npx skills add oldwinter/skills/oldwinter-skills-notebooklm

Automate Google NotebookLM: create notebooks, add sources, chat with content, generate artifacts (podcasts, videos, quizzes), and download results.

Installation

From PyPI (Recommended):

pip install notebooklm-py

From GitHub (use latest release tag, NOT main branch):

```
# Get the latest release tag
LATEST_TAG=$(curl -s https://api.github.com/repos/teng-lin/notebooklm-py/releases/latest | grep '"tag_name"' | cut -d'"' -f4)
pip install "git+https://github.com/teng-lin/notebooklm-py@${LATEST_TAG}"
```

⚠️ DO NOT install from the main branch (pip install git+https://github.com/teng-lin/notebooklm-py). The main branch may contain unreleased, unstable changes. Always use PyPI or a specific release tag unless you are deliberately testing unreleased features.

After installation, install the Claude Code skill:

notebooklm skill install

Prerequisites

IMPORTANT: Before using any command, you MUST authenticate:

```
notebooklm login   # Opens browser for Google OAuth
notebooklm list    # Verify authentication works
```

If commands fail with authentication errors, re-run notebooklm login.

CI/CD, Multiple Accounts, and Parallel Agents

For automated environments, multiple accounts, or parallel agent workflows:

| Variable | Purpose |
| --- | --- |
| NOTEBOOKLM_HOME | Custom config directory (default: ~/.notebooklm) |
| NOTEBOOKLM_AUTH_JSON | Inline auth JSON; no file writes needed |

CI/CD setup: Set NOTEBOOKLM_AUTH_JSON from a secret containing your storage_state.json contents.

Multiple accounts: Use different NOTEBOOKLM_HOME directories per account.

Parallel agents: The CLI stores notebook context in a shared file (~/.notebooklm/context.json). Multiple concurrent agents using notebooklm use can overwrite each other's context.

Solutions for parallel workflows:

  • Always use explicit notebook ID (recommended): Pass -n <notebook_id> (for wait /download commands) or --notebook <notebook_id> (for others) instead of relying on use

  • Per-agent isolation: Set unique NOTEBOOKLM_HOME per agent: export NOTEBOOKLM_HOME=/tmp/agent-$ID

  • Use full UUIDs: Avoid partial IDs in automation (they can become ambiguous)
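When spawning agents programmatically, per-agent isolation can be sketched in Python. The helper name and temp-dir layout below are illustrative, not part of the CLI; the only real contract is the NOTEBOOKLM_HOME environment variable:

```python
import os
import tempfile

def isolated_env(agent_id: str) -> dict:
    """Build a per-agent environment so concurrent agents each get their own
    NOTEBOOKLM_HOME and cannot clobber each other's context.json."""
    env = os.environ.copy()
    home = os.path.join(tempfile.gettempdir(), f"notebooklm-agent-{agent_id}")
    os.makedirs(home, exist_ok=True)
    env["NOTEBOOKLM_HOME"] = home
    return env

# Two agents, two independent config directories
env_a = isolated_env("a")
env_b = isolated_env("b")
```

Pass the returned mapping as the env argument of subprocess.run when invoking notebooklm for that agent.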

Agent Setup Verification

Before starting workflows, verify the CLI is ready:

  • notebooklm status → Should show "Authenticated as: email@..."

  • notebooklm list --json → Should return valid JSON (even if empty notebooks list)

  • If either fails → Run notebooklm login

When This Skill Activates

Explicit: User says "/notebooklm", "use notebooklm", or mentions the tool by name

Intent detection: Recognize requests like:

  • "Create a podcast about [topic]"

  • "Summarize these URLs/documents"

  • "Generate a quiz from my research"

  • "Turn this into an audio overview"

  • "Add these sources to NotebookLM"

Autonomy Rules

Run automatically (no confirmation):

| Command | Purpose |
| --- | --- |
| notebooklm status | check context |
| notebooklm auth check | diagnose auth issues |
| notebooklm list | list notebooks |
| notebooklm source list | list sources |
| notebooklm artifact list | list artifacts |
| notebooklm artifact wait | wait for artifact completion (in subagent context) |
| notebooklm source wait | wait for source processing (in subagent context) |
| notebooklm research status | check research status |
| notebooklm research wait | wait for research (in subagent context) |
| notebooklm use <id> | set context (⚠️ SINGLE-AGENT ONLY; use the -n flag in parallel workflows) |
| notebooklm create | create notebook |
| notebooklm ask "..." | chat queries |
| notebooklm source add | add sources |

Ask before running:

| Command | Reason |
| --- | --- |
| notebooklm delete | destructive |
| notebooklm generate * | long-running, may fail |
| notebooklm download * | writes to filesystem |
| notebooklm artifact wait | long-running (when in main conversation) |
| notebooklm source wait | long-running (when in main conversation) |
| notebooklm research wait | long-running (when in main conversation) |

Quick Reference

| Task | Command |
| --- | --- |
| Authenticate | notebooklm login |
| Diagnose auth issues | notebooklm auth check |
| Diagnose auth (full) | notebooklm auth check --test |
| List notebooks | notebooklm list |
| Create notebook | notebooklm create "Title" |
| Set context | notebooklm use <notebook_id> |
| Show context | notebooklm status |
| Add URL source | notebooklm source add "https://..." |
| Add file | notebooklm source add ./file.pdf |
| Add YouTube | notebooklm source add "https://youtube.com/..." |
| List sources | notebooklm source list |
| Wait for source processing | notebooklm source wait <source_id> |
| Web research (fast) | notebooklm source add-research "query" |
| Web research (deep) | notebooklm source add-research "query" --mode deep --no-wait |
| Check research status | notebooklm research status |
| Wait for research | notebooklm research wait --import-all |
| Chat | notebooklm ask "question" |
| Chat (new conversation) | notebooklm ask "question" --new |
| Chat (specific sources) | notebooklm ask "question" -s src_id1 -s src_id2 |
| Chat (with references) | notebooklm ask "question" --json |
| Get source fulltext | notebooklm source fulltext <source_id> |
| Get source guide | notebooklm source guide <source_id> |
| Generate podcast | notebooklm generate audio "instructions" |
| Generate podcast (JSON) | notebooklm generate audio --json |
| Generate podcast (specific sources) | notebooklm generate audio -s src_id1 -s src_id2 |
| Generate video | notebooklm generate video "instructions" |
| Generate quiz | notebooklm generate quiz |
| Check artifact status | notebooklm artifact list |
| Wait for completion | notebooklm artifact wait <artifact_id> |
| Download audio | notebooklm download audio ./output.mp3 |
| Download video | notebooklm download video ./output.mp4 |
| Download report | notebooklm download report ./report.md |
| Download mind map | notebooklm download mind-map ./map.json |
| Download data table | notebooklm download data-table ./data.csv |
| Download quiz | notebooklm download quiz quiz.json |
| Download quiz (markdown) | notebooklm download quiz --format markdown quiz.md |
| Download flashcards | notebooklm download flashcards cards.json |
| Download flashcards (markdown) | notebooklm download flashcards --format markdown cards.md |
| Delete notebook | notebooklm notebook delete <id> |

Parallel safety: Use explicit notebook IDs in parallel workflows. Commands supporting the -n shorthand: artifact wait, source wait, research wait/status, and download *. Download commands also support -a/--artifact. Other commands use --notebook. For chat, use --new to start fresh conversations and avoid conversation ID conflicts.

Partial IDs: Use the first 6+ characters of a UUID. The prefix must be unique (the command fails if ambiguous). Works for: use, delete, and the wait commands. For automation, prefer full UUIDs to avoid ambiguity.
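When scripting around partial IDs, the uniqueness rule can be mimicked locally against the output of notebooklm list --json. This resolver is a hypothetical helper, not part of the CLI, but it shows why ambiguous prefixes must fail hard:

```python
def resolve_prefix(prefix: str, full_ids: list[str]) -> str:
    """Resolve a partial UUID against known IDs, mirroring the CLI's rule:
    exactly one match succeeds; zero or multiple matches raise an error."""
    matches = [i for i in full_ids if i.startswith(prefix)]
    if len(matches) != 1:
        raise ValueError(f"prefix {prefix!r} matched {len(matches)} IDs")
    return matches[0]

# Sample IDs (placeholders, not real notebook UUIDs)
ids = ["abc123de-1111", "abc999ff-2222", "def456aa-3333"]
print(resolve_prefix("abc123", ids))  # unique prefix: resolves to abc123de-1111
```

Calling resolve_prefix("abc", ids) would raise, because two IDs share that prefix; this is the ambiguity failure the docs warn about.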

Command Output Formats

Commands with --json return structured data for parsing:

Create notebook:

```
$ notebooklm create "Research" --json
{"id": "abc123de-...", "title": "Research"}
```

Add source:

```
$ notebooklm source add "https://example.com" --json
{"source_id": "def456...", "title": "Example", "status": "processing"}
```

Generate artifact:

```
$ notebooklm generate audio "Focus on key points" --json
{"task_id": "xyz789...", "status": "pending"}
```

Chat with references:

```
$ notebooklm ask "What is X?" --json
{
  "answer": "X is... [1] [2]",
  "conversation_id": "...",
  "turn_number": 1,
  "is_follow_up": false,
  "references": [
    {"source_id": "abc123...", "citation_number": 1, "cited_text": "Relevant passage from source..."},
    {"source_id": "def456...", "citation_number": 2, "cited_text": "Another passage..."}
  ]
}
```

Source fulltext (get indexed content):

```
$ notebooklm source fulltext <source_id> --json
{"source_id": "...", "title": "...", "char_count": 12345, "content": "Full indexed text..."}
```

Understanding citations: The cited_text in references is often a snippet or section header, not the full quoted passage. The start_char/end_char positions reference NotebookLM's internal chunked index, not the raw fulltext. Use SourceFulltext.find_citation_context() to locate citations in the fulltext:

```python
fulltext = await client.sources.get_fulltext(notebook_id, ref.source_id)
matches = fulltext.find_citation_context(ref.cited_text)  # returns list[(context, position)]
if matches:
    context, pos = matches[0]  # first match; check len(matches) > 1 for duplicates
```

Extract IDs: parse the id, source_id, or task_id field from the JSON output.
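For example, a small wrapper can pull whichever ID field a command returned. The extract_id helper is illustrative; the field names are the ones documented in the schemas above:

```python
import json

def extract_id(raw: str) -> str:
    """Return the first ID-like field from a --json payload."""
    data = json.loads(raw)
    for key in ("id", "source_id", "task_id"):
        if key in data:
            return data[key]
    raise KeyError("no ID field in payload")

print(extract_id('{"task_id": "xyz789", "status": "pending"}'))  # → xyz789
```

In a shell pipeline the same extraction is typically done with jq, but parsing in Python keeps the whole workflow in one script.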

Generation Types

All generate commands support:

  • -s, --source to use specific source(s) instead of all sources

  • --json for machine-readable output (returns task_id and status )

| Type | Command | Downloadable |
| --- | --- | --- |
| Podcast | generate audio | Yes (.mp3) |
| Video | generate video | Yes (.mp4) |
| Slides | generate slide-deck | Yes (.pdf) |
| Infographic | generate infographic | Yes (.png) |
| Report | generate report | Yes (.md) |
| Mind Map | generate mind-map | Yes (.json) |
| Data Table | generate data-table | Yes (.csv) |
| Quiz | generate quiz | Yes (.json/.md/.html) |
| Flashcards | generate flashcards | Yes (.json/.md/.html) |

Common Workflows

Research to Podcast (Interactive)

Time: 5-10 minutes total

  • notebooklm create "Research: [topic]" — if it fails: check auth with notebooklm login

  • notebooklm source add for each URL/document — if one fails: log warning, continue with others

  • Wait for sources: poll notebooklm source list --json until every source shows status "ready" — required before generation

  • notebooklm generate audio "Focus on [specific angle]" (confirm when asked) — if rate limited: wait 5 min, retry once

  • Note the artifact ID returned

  • Check notebooklm artifact list later for status

  • notebooklm download audio ./podcast.mp3 when complete (confirm when asked)

Research to Podcast (Automated with Subagent)

Time: 5-10 minutes, but continues in background

When user wants full automation (generate and download when ready):

  • Create notebook and add sources as usual

  • Wait for sources to be ready (use source wait or check source list --json )

  • Run notebooklm generate audio "..." --json → parse artifact_id from output

  • Spawn a background agent using the Task tool:

```
Task(
    prompt="Wait for artifact {artifact_id} in notebook {notebook_id} to complete, then download. "
           "Use: notebooklm artifact wait {artifact_id} -n {notebook_id} --timeout 600 "
           "Then: notebooklm download audio ./podcast.mp3 -a {artifact_id} -n {notebook_id}",
    subagent_type="general-purpose",
)
```

  • Main conversation continues while agent waits

Error handling in subagent:

  • If artifact wait returns exit code 2 (timeout): Report timeout, suggest checking artifact list

  • If download fails: Check if artifact status is COMPLETED first

Benefits: Non-blocking, user can do other work, automatic download on completion

Document Analysis

Time: 1-2 minutes

  • notebooklm create "Analysis: [project]"

  • notebooklm source add ./doc.pdf (or URLs)

  • notebooklm ask "Summarize the key points"

  • notebooklm ask "What are the main arguments?"

  • Continue chatting as needed

Bulk Import

Time: Varies by source count

  • notebooklm create "Collection: [name]"

  • Add multiple sources:

```
notebooklm source add "https://url1.com"
notebooklm source add "https://url2.com"
notebooklm source add ./local-file.pdf
```

  • notebooklm source list to verify

Source limits: max 50 sources per notebook. Supported types: PDFs, YouTube URLs, web URLs, Google Docs, text files.

Bulk Import with Source Waiting (Subagent Pattern)

Time: Varies by source count

When adding multiple sources and needing to wait for processing before chat/generation:

  • Add sources with --json to capture IDs:

```
notebooklm source add "https://url1.com" --json   # → {"source_id": "abc..."}
notebooklm source add "https://url2.com" --json   # → {"source_id": "def..."}
```

  • Spawn a background agent to wait for all sources:

```
Task(
    prompt="Wait for sources {source_ids} in notebook {notebook_id} to be ready. "
           "For each: notebooklm source wait {id} -n {notebook_id} --timeout 120 "
           "Report when all ready or if any fail.",
    subagent_type="general-purpose",
)
```

  • Main conversation continues while agent waits

  • Once sources are ready, proceed with chat or generation

Why wait for sources? Sources must be indexed before chat or generation. Takes 10-60 seconds per source.

Deep Web Research (Subagent Pattern)

Time: 2-5 minutes, runs in background

Deep research finds and analyzes web sources on a topic:

  • Create notebook: notebooklm create "Research: [topic]"

  • Start deep research (non-blocking): notebooklm source add-research "topic query" --mode deep --no-wait

  • Spawn a background agent to wait and import:

```
Task(
    prompt="Wait for research in notebook {notebook_id} to complete and import sources. "
           "Use: notebooklm research wait -n {notebook_id} --import-all --timeout 300 "
           "Report how many sources were imported.",
    subagent_type="general-purpose",
)
```

  • Main conversation continues while agent waits

  • When agent completes, sources are imported automatically

Alternative (blocking): For simple cases, omit --no-wait :

```
notebooklm source add-research "topic" --mode deep --import-all
```

This blocks for up to 5 minutes.

When to use each mode:

  • --mode fast : Specific topic, quick overview needed (5-10 sources, seconds)

  • --mode deep : Broad topic, comprehensive analysis needed (20+ sources, 2-5 min)

Research sources:

  • --from web : Search the web (default)

  • --from drive : Search Google Drive

Output Style

Progress updates: Brief status for each step

  • "Creating notebook 'Research: AI'..."

  • "Adding source: https://example.com..."

  • "Starting audio generation... (task ID: abc123)"

Fire-and-forget for long operations:

  • Start generation, return artifact ID immediately

  • Do NOT poll or wait in main conversation - generation takes 5-45 minutes (see timing table)

  • User checks status manually, OR use subagent with artifact wait

JSON output: Use --json flag for machine-readable output:

```
notebooklm list --json
notebooklm auth check --json
notebooklm source list --json
notebooklm artifact list --json
```

JSON schemas (key fields):

notebooklm list --json :

{"notebooks": [{"id": "...", "title": "...", "created_at": "..."}]}

notebooklm auth check --json :

{"checks": {"storage_exists": true, "json_valid": true, "cookies_present": true, "sid_cookie": true, "token_fetch": true}, "details": {"storage_path": "...", "auth_source": "file", "cookies_found": ["SID", "HSID", "..."], "cookie_domains": [".google.com"]}}

notebooklm source list --json :

{"sources": [{"id": "...", "title": "...", "status": "ready|processing|error"}]}

notebooklm artifact list --json :

{"artifacts": [{"id": "...", "title": "...", "type": "Audio Overview", "status": "in_progress|pending|completed|unknown"}]}

Status values:

  • Sources: processing → ready (or error)

  • Artifacts: pending or in_progress → completed (or unknown)
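When polling, it helps to treat these as small state machines with explicit terminal sets. A sketch, using the status strings from the schemas above (the helper names are illustrative):

```python
# Terminal states: once reached, stop polling.
SOURCE_TERMINAL = {"ready", "error"}
ARTIFACT_TERMINAL = {"completed", "unknown"}

def source_done(status: str) -> bool:
    """True once a source has finished processing (successfully or not)."""
    return status.lower() in SOURCE_TERMINAL

def artifact_done(status: str) -> bool:
    """True once an artifact generation has finished (successfully or not)."""
    return status.lower() in ARTIFACT_TERMINAL
```

A polling loop then keeps checking source list --json or artifact list --json until these predicates return True, and inspects the final status to distinguish success from failure.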

Error Handling

On failure, offer the user a choice:

  • Retry the operation

  • Skip and continue with something else

  • Investigate the error

Error decision tree:

| Error | Cause | Action |
| --- | --- | --- |
| Auth/cookie error | Session expired | Run notebooklm auth check, then notebooklm login |
| "No notebook context" | Context not set | Use the -n <id> or --notebook <id> flag (parallel), or notebooklm use <id> (single-agent) |
| "No result found for RPC ID" | Rate limiting | Wait 5-10 min, retry |
| GENERATION_FAILED | Google rate limit | Wait and retry later |
| Download fails | Generation incomplete | Check artifact list for status |
| Invalid notebook/source ID | Wrong ID | Run notebooklm list to verify |
| RPC protocol error | Google changed APIs | May need a CLI update |

Exit Codes

All commands use consistent exit codes:

| Code | Meaning | Action |
| --- | --- | --- |
| 0 | Success | Continue |
| 1 | Error (not found, processing failed) | Check stderr; see Error Handling |
| 2 | Timeout (wait commands only) | Extend timeout or check status manually |

Examples:

  • source wait returns 1 if source not found or processing failed

  • artifact wait returns 2 if timeout reached before completion

  • generate returns 1 if rate limited (check stderr for details)
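A wrapper script can branch on these codes directly. The next_action helper below is a hypothetical sketch; the mapping comes from the exit-code table above:

```python
def next_action(exit_code: int) -> str:
    """Map the CLI's documented exit codes to a follow-up action."""
    return {
        0: "continue",
        1: "check stderr; see Error Handling",
        2: "extend timeout or check status manually",
    }.get(exit_code, "unexpected code; investigate")
```

In practice you would feed it subprocess.run(["notebooklm", ...]).returncode and decide whether to retry, extend the timeout, or surface the stderr to the user.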

Known Limitations

Rate limiting: Audio, video, quiz, flashcards, infographic, and slides generation may fail due to Google's rate limits. This is an API limitation, not a bug.

Reliable operations: These always work:

  • Notebooks (list, create, delete, rename)

  • Sources (add, list, delete)

  • Chat/queries

  • Mind-map, study-guide, FAQ, data-table generation

Unreliable operations: These may fail with rate limiting:

  • Audio (podcast) generation

  • Video generation

  • Quiz and flashcard generation

  • Infographic and slides generation

Workaround: If generation fails:

  • Check status: notebooklm artifact list

  • Retry after 5-10 minutes

  • Use the NotebookLM web UI as fallback

Processing times vary significantly. Use the subagent pattern for long operations:

| Operation | Typical time | Suggested timeout |
| --- | --- | --- |
| Source processing | 30 s - 10 min | 600 s |
| Research (fast) | 30 s - 2 min | 180 s |
| Research (deep) | 15 - 30+ min | 1800 s |
| Notes | instant | n/a |
| Mind-map | instant (sync) | n/a |
| Quiz, flashcards | 5 - 15 min | 900 s |
| Report, data-table | 5 - 15 min | 900 s |
| Audio generation | 10 - 20 min | 1200 s |
| Video generation | 15 - 45 min | 2700 s |

Polling intervals: When checking status manually, poll every 15-30 seconds to avoid excessive API calls.
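The polling advice above can be expressed as one generic loop; check is any callable returning True on completion, for example one that shells out to notebooklm artifact list --json and applies a terminal-status test. The function itself is an illustrative sketch, not part of the CLI:

```python
import time

def poll(check, interval: float = 20.0, timeout: float = 600.0) -> bool:
    """Call check() every `interval` seconds until it returns True or
    `timeout` seconds elapse. Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```

Pick interval in the recommended 15-30 s range and timeout from the table above (e.g. 1200 for audio, 2700 for video).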

Troubleshooting

```
notebooklm --help              # Main commands
notebooklm auth check          # Diagnose auth issues
notebooklm auth check --test   # Full auth validation with network test
notebooklm notebook --help     # Notebook management
notebooklm source --help       # Source management
notebooklm research --help     # Research status/wait
notebooklm generate --help     # Content generation
notebooklm artifact --help     # Artifact management
notebooklm download --help     # Download content
```

  • Diagnose auth: notebooklm auth check (shows cookie domains, storage path, validation status)

  • Re-authenticate: notebooklm login

  • Check version: notebooklm --version

  • Update skill: notebooklm skill install
