
Pipes: Troubleshooting Diagnostic

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "pipes-troubleshooting" with this command: npx skills add subsquid-labs/agent-skills/subsquid-labs-agent-skills-pipes-troubleshooting


Specialized agent for diagnosing and fixing runtime errors in blockchain indexers built with Subsquid Pipes SDK.

When to Use This Skill

Activate when:

  • User reports an error message

  • Indexer crashes or stops unexpectedly

  • Data is missing or incorrect in database

  • TypeScript compilation fails

  • Database connection issues

  • Portal API errors or timeouts

  • User mentions "error", "not working", "broken", "failed", or "bug"

Important Note

Before diagnosing errors, check whether the user followed the mandatory workflow in the pipes-workflow skill. Many errors are caused by skipping the documentation and the proper setup procedures.

Diagnostic Checklist

  1. Identify Error Type

Compilation Errors:

  • TypeScript type mismatches

  • Missing imports or dependencies

  • ABI version conflicts (@subsquid/evm-abi 0.3.1 vs 1.x.x)

Runtime Errors:

  • Portal API connection failures

  • Database connection issues

  • Event decoding errors

  • Memory issues or OOM

  • Cursor corruption

Data Quality Issues:

  • Missing events

  • Incorrect event parameters

  • Duplicate records

  • Wrong block ranges

  2. Check Running Processes

If the indexer is currently running:

Check if the process is running:

ps aux | grep -E "bun run dev|tsx src/index.ts|node"

If it runs in the background, inspect its output with the BashOutput tool and the matching bash_id.

  3. Read Error Context

Always read the relevant files:

  • src/index.ts - main pipeline code

  • package.json - dependency versions

  • .env - connection strings

  • Error stack traces from BashOutput

Common Error Patterns

Error Pattern 1: ABI Version Mismatch

Symptoms:

Type 'LogParams' is not assignable to type 'EvmLogParams'
Property 'topics' is missing in type 'LogParams'

Diagnosis: Wrong @subsquid/evm-abi version

Root Cause: Using 1.x.x instead of 0.3.1

Fix:

// package.json
{
  "dependencies": {
    "@subsquid/evm-abi": "^0.3.1"  // NOT ^1.0.0
  }
}

Steps:

  • Read package.json

  • Edit to correct version

  • Run npm install or bun install

  • Verify types resolve

Error Pattern 2: Portal API Connection Failed

Symptoms:

Error: connect ECONNREFUSED
Error: Portal request failed with status 429
Error: Portal timeout after 30s

Diagnosis: Network or rate limit issue

Fix Options:

  • Rate Limiting (429): Add delay between requests or reduce block range

  • Connection Refused: Check internet connection, verify Portal URL

  • Timeout: Increase timeout or reduce batch size

Code Changes:

// Reduce block range to avoid rate limits
range: {
  from: 21_000_000,
  to: 21_100_000  // Smaller range
}

// Or adjust the from block to be more recent
range: { from: 21_000_000 }  // Last few million blocks only
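If 429s persist, wrapping Portal calls in a retry with exponential backoff is a common workaround. The helper below is an illustrative sketch, not a Pipes SDK API; `fetchWithRetry` and its parameters are hypothetical names:

```typescript
// Illustrative retry helper (not a Pipes SDK API): retries a failing async
// call with exponential backoff, e.g. on HTTP 429 rate-limit errors.
async function fetchWithRetry<T>(
  call: () => Promise<T>,
  isRetryable: (err: unknown) => boolean,
  maxAttempts = 5,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await call();
    } catch (err) {
      // Give up on the last attempt or on non-retryable errors
      if (attempt >= maxAttempts || !isRetryable(err)) throw err;
      // Backoff: 500ms, 1s, 2s, 4s, ...
      const delayMs = baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

A real setup would detect 429 from the error shape your HTTP client actually throws; reducing the block range, as shown above, is usually the simpler fix.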

Error Pattern 3: Database Connection Failed

Symptoms:

Error: connect ECONNREFUSED localhost:5432
Error: ClickHouse authentication failed
Error: Database 'pipes' does not exist

Diagnosis: Database not running or misconfigured

Fix Steps:

Check if the database is running:

PostgreSQL:

docker ps | grep postgres

ClickHouse:

docker ps | grep clickhouse

Check connection string in .env:

cat .env

Start the database if needed:

ClickHouse:

docker start clickhouse

Or start with docker-compose:

docker-compose up -d

Create the database if it is missing:

ClickHouse:

docker exec -it clickhouse clickhouse-client --query "CREATE DATABASE IF NOT EXISTS pipes"

Error Pattern 4: Event Decoding Failed

Symptoms:

Error: Cannot decode event with signature '0x...'
TypeError: Cannot read property 'from' of undefined

Diagnosis: Wrong ABI or contract address

Fix Steps:

Read src/index.ts to check ABI import

Verify contract address is correct

Check if using correct event ABI:

// Wrong: using a mismatched common ABI
events: {
  swap: commonAbis.erc20.events.Transfer  // Wrong event
}

// Correct: use the proper ABI
events: {
  swap: uniswapV3.events.Swap  // Correct
}

If custom contract, regenerate ABI:

npx @subsquid/evm-typegen@latest src/contracts \
  0xYourContractAddress \
  --chain-id 1

Error Pattern 5: Missing Data

Symptoms:

  • Indexer runs successfully but database is empty

  • Only partial data is indexed

  • Specific events are missing

Diagnosis: Filtering issue or wrong start block

Fix Steps:

Check that the start block is before the events occurred:

// Verify on Etherscan when the contract was deployed
range: { from: 'deployment_block' }

Check if contract is a proxy:

  • Delegatecall proxies emit events from the proxy address, but with the implementation's event signatures

  • Track the proxy address, and decode with the implementation's ABI

Verify event names match ABI exactly:

// Case-sensitive, must match exactly
events: {
  transfer: erc20Abi.Transfer  // Correct case
}

Check for overly restrictive filters:

// May be filtering out too many events
.filter((e) => /* check filter logic */)

Error Pattern 6: Memory Issues

Symptoms:

Error: JavaScript heap out of memory
Process killed (signal 9)

Diagnosis: Indexer processing too much data at once

Fix Options:

  • Reduce block range

  • Reduce number of contracts tracked

  • Process data in smaller batches

  • Increase Node.js memory limit: NODE_OPTIONS="--max-old-space-size=4096" bun run dev
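The batching advice above can be sketched as a small helper that splits a large block range into fixed-size chunks; `blockChunks` is an illustrative name, not a Pipes API:

```typescript
// Illustrative helper (not a Pipes API): split a large block range into
// fixed-size chunks so each batch stays within memory limits.
function* blockChunks(
  from: number,
  to: number,
  size: number,
): Generator<{ from: number; to: number }> {
  for (let start = from; start <= to; start += size) {
    // Clamp the last chunk to the end of the requested range
    yield { from: start, to: Math.min(start + size - 1, to) };
  }
}
```

Each chunk can then be indexed as its own `range: { from, to }` run.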

Error Pattern 7: ClickHouse Schema Issues

Symptoms:

Error: Table already exists
Error: Column type mismatch
Error: Cannot insert NULL into NOT NULL column

Fix Steps:

Drop and recreate table:

docker exec clickhouse clickhouse-client --password=default \
  --query "DROP TABLE IF EXISTS pipes.table_name"

Verify schema matches data types:

  • Addresses: String

  • Amounts: Float64 (after dividing by decimals)

  • Block numbers: UInt64

  • Timestamps: DateTime(3)
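For illustration, a row shape matching the column types above might look like this in the transformation code. All field and event names here are hypothetical, and the division assumes an 18-decimal token:

```typescript
// Illustrative row shape for the ClickHouse column types listed above.
interface TransferRow {
  from_address: string; // String
  to_address: string;   // String
  amount: number;       // Float64, after dividing by token decimals
  block_number: number; // UInt64
  timestamp: Date;      // DateTime(3), millisecond precision
}

// Hypothetical transformation from a decoded event to a row.
function toRow(e: {
  from: string;
  to: string;
  value: bigint;
  block: number;
  tsMs: number;
}): TransferRow {
  return {
    from_address: e.from,
    to_address: e.to,
    amount: Number(e.value) / 1e18, // assumes 18 decimals; Number() loses precision above 2^53
    block_number: e.block,
    timestamp: new Date(e.tsMs),
  };
}
```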

Ensure sync table is cleared for fresh starts:

docker exec clickhouse clickhouse-client --password=default \
  --query "DROP TABLE IF EXISTS pipes.sync"

Data Validation & Quality Checks

After an indexer completes successfully, validate the data quality to ensure production readiness.

Validation Levels

Level 1: Schema Validation (CRITICAL)

Verify table structure is correct:

-- Check table exists
SELECT count()
FROM system.tables
WHERE database = '<database>' AND name = '<table_name>'

-- Check column types
DESCRIBE <database>.<table_name>

Checks:

  • Table exists

  • All expected columns present

  • Column data types match design

  • Indexes created

  • Table engine correct

Level 2: Data Quality (HIGH PRIORITY)

Validate individual data values:

-- Address format validation
SELECT
  countIf(length(pool_address) != 42) as invalid_length,
  countIf(pool_address NOT LIKE '0x%') as missing_prefix,
  countIf(NOT match(pool_address, '^0x[0-9a-fA-F]{40}$')) as invalid_format
FROM <table_name>

-- Transaction hash format
SELECT
  countIf(length(transaction_hash) != 66) as invalid_length,
  countIf(transaction_hash NOT LIKE '0x%') as missing_prefix
FROM <table_name>

-- BigInt values validation
SELECT
  countIf(amount = '') as empty_amounts,
  countIf(NOT match(amount, '^-?[0-9]+$')) as invalid_numbers
FROM <table_name>

-- NULL checks
SELECT
  countIf(from_address IS NULL) as null_from,
  countIf(to_address IS NULL) as null_to,
  countIf(value IS NULL) as null_value
FROM <table_name>

Checks:

  • Addresses are 42 characters (0x + 40 hex)

  • Transaction hashes are 66 characters (0x + 64 hex)

  • BigInt values are valid numbers

  • No unexpected NULL values

  • Block numbers in expected range
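The same format rules can also be enforced in application code before rows are inserted; these validators mirror the SQL checks above (names are illustrative):

```typescript
// Application-side mirrors of the SQL format checks (illustrative names).
const isAddress = (s: string): boolean => /^0x[0-9a-fA-F]{40}$/.test(s); // 42 chars
const isTxHash = (s: string): boolean => /^0x[0-9a-fA-F]{64}$/.test(s);  // 66 chars
const isBigIntString = (s: string): boolean => /^-?[0-9]+$/.test(s);     // decimal integer
```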

Level 3: Completeness (MEDIUM PRIORITY)

Ensure no missing data:

-- Block range coverage
SELECT
  MIN(block_number) as min_block,
  MAX(block_number) as max_block,
  COUNT(DISTINCT block_number) as unique_blocks
FROM <table_name>

-- Check for block gaps
SELECT
  block_number,
  block_number - lag(block_number) OVER (ORDER BY block_number) as gap
FROM (
  SELECT DISTINCT block_number FROM <table_name> ORDER BY block_number
)
WHERE gap > 1

-- Event count per block
SELECT block_number, COUNT(*) as event_count
FROM <table_name>
GROUP BY block_number
HAVING event_count > 1000
ORDER BY event_count DESC
LIMIT 10

Checks:

  • Block range matches expected

  • No gaps in block sequence

  • Event counts are reasonable

  • No duplicate events (same tx_hash + log_index)
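The gap query above can be expressed in application code as well. Note that event tables legitimately skip blocks with no matching events, so gap detection is only conclusive for block-level tables:

```typescript
// Illustrative equivalent of the lag() gap query: given the recorded block
// numbers, report each missing range as the blocks that surround it.
function findBlockGaps(
  blocks: number[],
): Array<{ after: number; before: number }> {
  const sorted = [...new Set(blocks)].sort((a, b) => a - b);
  const gaps: Array<{ after: number; before: number }> = [];
  for (let i = 1; i < sorted.length; i++) {
    if (sorted[i] - sorted[i - 1] > 1) {
      gaps.push({ after: sorted[i - 1], before: sorted[i] });
    }
  }
  return gaps;
}
```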

Level 4: Consistency (MEDIUM PRIORITY)

Verify logical relationships:

-- Block timestamps are monotonic
SELECT
  block_number,
  block_timestamp,
  lag(block_timestamp) OVER (ORDER BY block_number) as prev_timestamp
FROM (
  SELECT DISTINCT block_number, block_timestamp FROM <table_name> ORDER BY block_number
)
WHERE block_timestamp < prev_timestamp

Checks:

  • Block timestamps increase with block numbers

  • Log indexes sequential within transactions

Common Data Quality Issues

Issue 1: NULL Values in Required Fields

Symptom: Critical fields contain NULL

Cause: Missing .toString() on BigInt values

Fix:

// Wrong
amount: transfer.event.value,

// Correct
amount: transfer.event.value.toString(),
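The reason the .toString() matters: BigInt values cannot be JSON-serialized and many database drivers reject them, while their decimal string representation round-trips exactly:

```typescript
// BigInt values must be stringified before JSON serialization or DB insert.
const value: bigint = 1_000_000_000_000_000_000n; // e.g. 1e18 wei

// JSON.stringify(value) would throw a TypeError at runtime
const asString = value.toString();     // decimal string, safe to store
const roundTripped = BigInt(asString); // recovers the exact value
```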

Issue 2: Invalid Address Formats

Symptom: Addresses not 42 characters or missing 0x

Cause: Incorrect data extraction or transformation

Fix: Validate address format in transformation pipeline

Issue 3: Block Gaps

Symptom: Missing blocks in sequence

Cause: Indexer crashed and didn't resume properly

Fix: Clear sync table and restart from affected block

Validation Checklist

Before declaring success:

  • Table structure matches design

  • No NULL values in required fields

  • All addresses are valid (42 chars, 0x prefix, hex)

  • All transaction hashes valid (66 chars)

  • Block range complete (no gaps)

  • Data count increasing over time

  • Sample transactions match block explorer

Diagnostic Workflow

  1. Read error message - Get exact error text

  2. Identify pattern - Match to known patterns above

  3. Read relevant files - Check src/index.ts, package.json, .env

  4. Verify environment - Check database, network, dependencies

  5. Apply fix - Edit files or run commands

  6. Test fix - Restart indexer and verify

  7. Validate data - Run quality checks above

  8. Monitor - Watch logs to confirm resolution

Prevention Tips

  • Always use Pipes CLI - Never manually create files

  • Follow workflow - See pipes-orchestrator for 7-step workflow

  • Start with recent blocks - Test faster, iterate quicker

  • Verify setup - See ENVIRONMENT_SETUP.md before starting

  • Check patterns - See PATTERNS.md for common solutions

Related Skills

  • pipes-new-indexer - Create new indexers

  • pipes-performance - Optimize slow indexers

  • pipes-orchestrator - Routes to this skill

  • pipes-abi - Fetch contract ABIs

  • pipes-schema-design - Design schemas

Related Documentation

This skill includes comprehensive reference documentation in the references/ directory:

  • PATTERNS.md - Common indexing patterns, performance optimization, error patterns, and best practices

How to Access

Read patterns and best practices

cat pipes-sdk/pipes-troubleshooting/references/PATTERNS.md

Or use Claude Code's Read tool:

Read: pipes-sdk/pipes-troubleshooting/references/PATTERNS.md

Additional Resources

  • ENVIRONMENT_SETUP.md - Setup prerequisites

  • DEPLOYMENT_OPTIONS.md - Production deployment

  • RESEARCH_CHECKLIST.md - Protocol research workflow

Official Subsquid Documentation

  • llms-full.txt - Complete troubleshooting and error references

  • skill.md - Comprehensive Pipes SDK guide

  • EVM OpenAPI Schema - Portal API specification for debugging EVM issues

  • Available Datasets - Verify network names and endpoints

