senior-data-engineer

Data engineering skill for building scalable data pipelines, ETL/ELT systems, and data infrastructure. Expertise in Python, SQL, Spark, Airflow, dbt, Kafka, and modern data stack. Includes data modeling, pipeline orchestration, data quality, and DataOps. Use when designing data architectures, building data pipelines, optimizing data workflows, implementing data governance, or troubleshooting data issues.


Install skill "senior-data-engineer" with this command: npx skills add alirezarezvani/senior-data-engineer

Senior Data Engineer

Production-grade data engineering skill for building scalable, reliable data systems.

Table of Contents

  1. Trigger Phrases
  2. Quick Start
  3. Workflows
  4. Architecture Decision Framework
  5. Tech Stack
  6. Reference Documentation
  7. Troubleshooting

Trigger Phrases

Activate this skill when you see:

Pipeline Design:

  • "Design a data pipeline for..."
  • "Build an ETL/ELT process..."
  • "How should I ingest data from..."
  • "Set up data extraction from..."

Architecture:

  • "Should I use batch or streaming?"
  • "Lambda vs Kappa architecture"
  • "How to handle late-arriving data"
  • "Design a data lakehouse"

Data Modeling:

  • "Create a dimensional model..."
  • "Star schema vs snowflake"
  • "Implement slowly changing dimensions"
  • "Design a data vault"

Data Quality:

  • "Add data validation to..."
  • "Set up data quality checks"
  • "Monitor data freshness"
  • "Implement data contracts"

Performance:

  • "Optimize this Spark job"
  • "Query is running slow"
  • "Reduce pipeline execution time"
  • "Tune Airflow DAG"

Quick Start

Core Tools

```bash
# Generate pipeline orchestration config
python scripts/pipeline_orchestrator.py generate \
  --type airflow \
  --source postgres \
  --destination snowflake \
  --schedule "0 5 * * *"

# Validate data quality
python scripts/data_quality_validator.py validate \
  --input data/sales.parquet \
  --schema schemas/sales.json \
  --checks freshness,completeness,uniqueness

# Optimize ETL performance
python scripts/etl_performance_optimizer.py analyze \
  --query queries/daily_aggregation.sql \
  --engine spark \
  --recommend
```

Workflows

→ See references/workflows.md for details

Architecture Decision Framework

Use this framework to choose the right approach for your data pipeline.

Batch vs Streaming

| Criteria | Batch | Streaming |
|---|---|---|
| Latency requirement | Hours to days | Seconds to minutes |
| Data volume | Large historical datasets | Continuous event streams |
| Processing complexity | Complex transformations, ML | Simple aggregations, filtering |
| Cost sensitivity | More cost-effective | Higher infrastructure cost |
| Error handling | Easier to reprocess | Requires careful design |

Decision Tree:

```
Is real-time insight required?
├── Yes → Use streaming
│   └── Is exactly-once semantics needed?
│       ├── Yes → Kafka + Flink/Spark Structured Streaming
│       └── No → Kafka + consumer groups
└── No → Use batch
    └── Is data volume > 1TB daily?
        ├── Yes → Spark/Databricks
        └── No → dbt + warehouse compute
```
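The streaming branch of this tree almost always has to deal with late-arriving data (one of the trigger phrases above). The core idea, an event-time watermark that separates on-time events from late ones, can be sketched in plain Python; the event shape and the 10-minute lateness threshold are illustrative assumptions, and in production this is handled by Flink or Spark Structured Streaming watermarks:

```python
from datetime import datetime, timedelta

ALLOWED_LATENESS = timedelta(minutes=10)  # illustrative threshold

def partition_events(events, watermark):
    """Split events into on-time and late using an event-time watermark.

    `events` is a list of (event_time, payload) tuples. The watermark is
    typically the max event time seen so far minus ALLOWED_LATENESS.
    """
    on_time, late = [], []
    for event_time, payload in events:
        if event_time >= watermark:
            on_time.append((event_time, payload))
        else:
            # Late events go to a correction path or dead letter queue
            late.append((event_time, payload))
    return on_time, late

# Watermark at 12:00 means events time-stamped before 12:00 count as late
wm = datetime(2024, 1, 1, 12, 0)
events = [
    (datetime(2024, 1, 1, 12, 5), "a"),   # on time
    (datetime(2024, 1, 1, 11, 40), "b"),  # arrived after the watermark passed
]
on_time, late = partition_events(events, wm)
```

Deciding what to do with the `late` bucket (drop, upsert a correction, or reprocess the affected partition) is a design choice that the batch branch of the tree makes much simpler.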

Lambda vs Kappa Architecture

| Aspect | Lambda | Kappa |
|---|---|---|
| Complexity | Two codebases (batch + stream) | Single codebase |
| Maintenance | Higher (sync batch/stream logic) | Lower |
| Reprocessing | Native batch layer | Replay from source |
| Use case | ML training + real-time serving | Pure event-driven |

When to choose Lambda:

  • Need to train ML models on historical data
  • Complex batch transformations not feasible in streaming
  • Existing batch infrastructure

When to choose Kappa:

  • Event-sourced architecture
  • All processing can be expressed as stream operations
  • Starting fresh without legacy systems
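Kappa's answer to reprocessing, replaying the event log through the same stream operator after a code fix, can be sketched in plain Python; the event shape and fold function are illustrative assumptions standing in for a Kafka topic and a stream processor:

```python
def apply_event(state, event):
    """Single stream operator: fold one event into materialized state."""
    key, delta = event
    state[key] = state.get(key, 0) + delta
    return state

def replay(event_log, from_offset=0):
    """Rebuild state by replaying the log from an offset.

    In Kappa there is no separate batch layer: reprocessing after a bug fix
    means resetting the consumer offset and running the same operator again.
    """
    state = {}
    for event in event_log[from_offset:]:
        state = apply_event(state, event)
    return state

log = [("clicks", 1), ("clicks", 2), ("views", 5)]
live_state = replay(log)
```

The practical constraint this sketch hides is retention: replay only works if the source log keeps enough history (or is compacted), which is why Kappa pairs naturally with event-sourced systems.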

Data Warehouse vs Data Lakehouse

| Feature | Warehouse (Snowflake/BigQuery) | Lakehouse (Delta/Iceberg) |
|---|---|---|
| Best for | BI, SQL analytics | ML, unstructured data |
| Storage cost | Higher (proprietary format) | Lower (open formats) |
| Flexibility | Schema-on-write | Schema-on-read |
| Performance | Excellent for SQL | Good, improving |
| Ecosystem | Mature BI tools | Growing ML tooling |

Tech Stack

| Category | Technologies |
|---|---|
| Languages | Python, SQL, Scala |
| Orchestration | Airflow, Prefect, Dagster |
| Transformation | dbt, Spark, Flink |
| Streaming | Kafka, Kinesis, Pub/Sub |
| Storage | S3, GCS, Delta Lake, Iceberg |
| Warehouses | Snowflake, BigQuery, Redshift, Databricks |
| Quality | Great Expectations, dbt tests, Monte Carlo |
| Monitoring | Prometheus, Grafana, Datadog |

Reference Documentation

1. Data Pipeline Architecture

See references/data_pipeline_architecture.md for:

  • Lambda vs Kappa architecture patterns
  • Batch processing with Spark and Airflow
  • Stream processing with Kafka and Flink
  • Exactly-once semantics implementation
  • Error handling and dead letter queues

2. Data Modeling Patterns

See references/data_modeling_patterns.md for:

  • Dimensional modeling (Star/Snowflake)
  • Slowly Changing Dimensions (SCD Types 1-6)
  • Data Vault modeling
  • dbt best practices
  • Partitioning and clustering
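The SCD patterns in that reference can be illustrated with a minimal Type 2 upsert in plain Python: expire the current version of a dimension row and append a new one. The column names and the sentinel end date are illustrative assumptions; in practice this is a `MERGE` statement or a dbt snapshot:

```python
from datetime import date

OPEN_END = date(9999, 12, 31)  # sentinel end date marking the current row

def scd2_upsert(dim_rows, key, new_attrs, effective):
    """Apply an SCD Type 2 change: close the open row, append a new version."""
    for row in dim_rows:
        if row["key"] == key and row["end_date"] == OPEN_END:
            if row["attrs"] == new_attrs:
                return dim_rows  # no change detected: keep current version
            row["end_date"] = effective  # expire the old version
    dim_rows.append({"key": key, "attrs": new_attrs,
                     "start_date": effective, "end_date": OPEN_END})
    return dim_rows

dim = [{"key": 42, "attrs": {"city": "Berlin"},
        "start_date": date(2023, 1, 1), "end_date": OPEN_END}]
scd2_upsert(dim, 42, {"city": "Munich"}, date(2024, 6, 1))
```

Type 2 preserves full history at the cost of a growing dimension table, which is where the partitioning and clustering guidance in the same reference comes in.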

3. DataOps Best Practices

See references/dataops_best_practices.md for:

  • Data testing frameworks
  • Data contracts and schema validation
  • CI/CD for data pipelines
  • Observability and lineage
  • Incident response
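The data-contract idea from that reference, validating records against an agreed schema before they enter the pipeline, can be sketched in plain Python. The contract format below is an illustrative assumption; production systems would express this with Great Expectations suites, dbt tests, or a schema registry:

```python
# Illustrative contract: field name -> required Python type
CONTRACT = {
    "order_id": int,
    "amount": float,
    "currency": str,
}

def validate_record(record, contract=CONTRACT):
    """Return a list of contract violations for one record (empty = valid)."""
    errors = []
    for field, expected in contract.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            errors.append(f"{field}: expected {expected.__name__}, "
                          f"got {type(record[field]).__name__}")
    return errors

good = {"order_id": 1, "amount": 9.99, "currency": "EUR"}
bad = {"order_id": "1", "amount": 9.99}  # wrong type + missing field
```

Rejected records are typically routed to a dead letter queue rather than dropped, so the producing team can be alerted and the contract violation traced.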

Troubleshooting

→ See references/troubleshooting.md for details
