databricks-pipelines

Develop Lakeflow Spark Declarative Pipelines (formerly Delta Live Tables) on Databricks. Use when building batch or streaming data pipelines with Python or SQL. Invoke BEFORE starting implementation.

Install the skill: `npx skills add databricks/databricks-agent-skills/databricks-databricks-agent-skills-databricks-pipelines`

Lakeflow Spark Declarative Pipelines Development

FIRST: Use the parent databricks skill for CLI basics, authentication, profile selection, and data discovery commands.

Decision Tree

Use this tree to determine which dataset type and features to use. Multiple features can apply to the same dataset — e.g., a Streaming Table can use Auto Loader for ingestion, Append Flows for fan-in, and Expectations for data quality. Choose the dataset type first, then layer on applicable features.

User request → What kind of output?
├── Intermediate/reusable logic (not persisted) → Temporary View
│   ├── Preprocessing/filtering before Auto CDC → Temporary View feeding CDC flow
│   ├── Shared intermediate streaming logic reused by multiple downstream tables
│   ├── Pipeline-private helper logic (not published to catalog)
│   └── Published to UC for external queries → Persistent View (SQL only)
├── Persisted dataset
│   ├── Source is streaming/incremental/continuously growing → Streaming Table
│   │   ├── File ingestion (cloud storage, Volumes) → Auto Loader
│   │   ├── Message bus (Kafka, Kinesis, Pub/Sub, Pulsar, Event Hubs) → streaming source read
│   │   ├── Existing streaming/Delta table → streaming read from table
│   │   ├── CDC / upserts / track changes / keep latest per key / SCD Type 1 or 2 → Auto CDC
│   │   ├── Multiple sources into one table → Append Flows (NOT union)
│   │   ├── Historical backfill + live stream → one-time Append Flow + regular flow
│   │   └── Windowed aggregation with watermark → stateful streaming
│   └── Source is batch/historical/full scan → Materialized View
│       ├── Aggregation/join across full dataset (GROUP BY, SUM, COUNT, etc.)
│       ├── Gold layer aggregation from streaming table → MV with batch read (spark.read / no STREAM)
│       ├── JDBC/Federation/external batch sources
│       └── Small static file load (reference data, no streaming read)
├── Output to external system (Python only) → Sink
│   ├── Existing external table not managed by this pipeline → Sink with format="delta"
│   │   (prefer fully-qualified dataset names if the pipeline should own the table — see Publishing Modes)
│   ├── Kafka / Event Hubs → Sink with format="kafka" + @dp.append_flow(target="sink_name")
│   ├── Custom destination not natively supported → Sink with custom format
│   ├── Custom merge/upsert logic per batch → ForEachBatch Sink (Public Preview)
│   └── Multiple destinations per batch → ForEachBatch Sink (Public Preview)
└── Data quality constraints → Expectations (on any dataset type)
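
As a minimal sketch of the two persisted dataset types from the tree above (the volume path and dataset names are hypothetical; `spark` is the session Lakeflow provides inside pipeline source files):

```python
from pyspark import pipelines as dp

@dp.table()  # Streaming Table: source grows continuously
def raw_events():
    return (
        spark.readStream.format("cloudFiles")  # Auto Loader file ingestion
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/default/landing/events/")  # hypothetical path
    )

@dp.materialized_view()  # Materialized View: batch read, full aggregation
def daily_event_counts():
    return (
        spark.read.table("raw_events")  # batch read — no STREAM
        .groupBy("event_date")
        .count()
    )
```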

Common Traps

  • "Create a table" without specifying type → ask whether the source is streaming or batch
  • Materialized View from streaming source is an error → use a Streaming Table instead, or switch to a batch read
  • Streaming Table from batch source is an error → use a Materialized View instead, or switch to a streaming read
  • Aggregation over streaming table → use a Materialized View with batch read (spark.read.table / SELECT FROM without STREAM), NOT a Streaming Table. This is the correct pattern for Gold layer aggregation.
  • Aggregation over batch/historical data → use a Materialized View, not a Streaming Table. MVs recompute or incrementally refresh aggregates to stay correct; STs are append-only and don't recompute when source data changes.
  • Preprocessing before Auto CDC → use a Temporary View to filter/transform the source before feeding into the CDC flow. SQL: the CDC flow reads from the view via STREAM(view_name). Python: use spark.readStream.table("view_name").
  • Intermediate logic → default to Temporary View. Use one for intermediate/preprocessing logic, even when it is reused by multiple downstream tables. Only consider a Private MV/ST (private=True / CREATE PRIVATE ...) when the computation is expensive and materializing it once would save significant reprocessing.
  • View vs Temporary View → Persistent Views publish to Unity Catalog (SQL only), Temporary Views are pipeline-private
  • Union of streams → use multiple Append Flows. Do NOT present UNION as an alternative — it is an anti-pattern for streaming sources.
  • Changing dataset type → cannot change ST→MV or MV→ST without manually dropping the existing table first. Full refresh does NOT help. Rename the new dataset instead.
  • SQL OR REFRESH → Prefer CREATE OR REFRESH over bare CREATE for SQL dataset definitions. Both work identically, but OR REFRESH is the idiomatic convention. For PRIVATE datasets: CREATE OR REFRESH PRIVATE STREAMING TABLE / CREATE OR REFRESH PRIVATE MATERIALIZED VIEW.
  • Kafka/Event Hubs sink serialization → The value column is mandatory. Use to_json(struct(*)) AS value to serialize the entire row as JSON. Read the sink skill for details.
  • Multi-column sequencing in Auto CDC → SQL: SEQUENCE BY STRUCT(col1, col2). Python: sequence_by=struct("col1", "col2"). Read the auto-cdc skill for details.
  • Auto CDC supports TRUNCATE (SCD Type 1 only) → SQL: APPLY AS TRUNCATE WHEN condition. Python: apply_as_truncates=expr("condition"). Do NOT say truncate is unsupported.
  • Python-only features → Sinks, ForEachBatch Sinks, CDC from snapshots, and custom data sources are Python-only. When the user is working in SQL, explicitly clarify this and suggest switching to Python.
  • MV incremental refresh → Materialized Views on serverless pipelines support automatic incremental refresh for aggregations. Mention the serverless requirement when discussing incremental refresh.
  • Recommend ONE clear approach → Present a single recommended approach. Do NOT present anti-patterns or significantly inferior alternatives — it confuses users. Only mention alternatives if they are genuinely viable for different trade-offs.
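
The "union of streams" trap above can be sketched with Append Flows: two sources append into one streaming table via separate flows instead of a UNION (table and source names are hypothetical):

```python
from pyspark import pipelines as dp

# Declare the target streaming table once, with no query body.
dp.create_streaming_table("all_orders")

@dp.append_flow(target="all_orders")
def orders_us():
    # First source streams into the shared target
    return spark.readStream.table("orders_us_raw")

@dp.append_flow(target="all_orders")
def orders_eu():
    # Second source appends into the same target independently
    return spark.readStream.table("orders_eu_raw")
```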

Publishing Modes

Pipelines use a default catalog and schema configured in the pipeline settings. All datasets are published there unless overridden.

  • Fully-qualified names: Use catalog.schema.table in the dataset name to write to a different catalog/schema than the pipeline default. The pipeline creates the dataset there directly — no Sink needed.
  • USE CATALOG / USE SCHEMA: SQL commands that change the current catalog/schema for all subsequent definitions in the same file.
  • LIVE prefix: Deprecated. Ignored in the default publishing mode.
  • When reading or defining datasets within the pipeline, use the dataset name only — do NOT use fully-qualified names unless the pipeline already does so or the user explicitly requests a different target catalog/schema.
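
A sketch of publishing to a non-default catalog/schema by fully qualifying the dataset name (the catalog, schema, and table names here are hypothetical, and passing a three-level name via `name=` assumes the default publishing mode):

```python
from pyspark import pipelines as dp

@dp.materialized_view(name="analytics.reporting.daily_summary")
def daily_summary():
    # Reads a pipeline dataset by short name; writes to analytics.reporting
    return spark.read.table("events").groupBy("event_date").count()
```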

Comprehensive API Reference

MANDATORY: Before implementing, editing, or suggesting any code for a feature, you MUST read the linked reference file for that feature. NO exceptions — always look up the reference before writing code.

Some features require reading multiple skills together:

  • Auto Loader → also read the streaming-table skill (Auto Loader produces a streaming DataFrame, so the target is a streaming table) and look up format-specific options for the file format being loaded
  • Auto CDC → also read the streaming-table skill (Auto CDC always targets a streaming table)
  • Sinks → also read the streaming-table skill (sinks use streaming append flows)
  • Expectations → also read the corresponding dataset definition skill to ensure constraints are correctly placed

Dataset Definition APIs

| Feature | Python (current) | Python (deprecated) | SQL (current) | SQL (deprecated) | Skill (Py) | Skill (SQL) |
|---|---|---|---|---|---|---|
| Streaming Table | `@dp.table()` returning streaming DF | `@dlt.table()` returning streaming DF | `CREATE OR REFRESH STREAMING TABLE` | `CREATE STREAMING LIVE TABLE` | streaming-table-python | streaming-table-sql |
| Materialized View | `@dp.materialized_view()` | `@dlt.table()` returning batch DF | `CREATE OR REFRESH MATERIALIZED VIEW` | `CREATE LIVE TABLE` (batch) | materialized-view-python | materialized-view-sql |
| Temporary View | `@dp.temporary_view()` | `@dlt.view()`, `@dp.view()` | `CREATE TEMPORARY VIEW` | `CREATE TEMPORARY LIVE VIEW` | temporary-view-python | temporary-view-sql |
| Persistent View (UC) | N/A — SQL only | — | `CREATE VIEW` | — | — | view-sql |
| Streaming Table (explicit) | `dp.create_streaming_table()` | `dlt.create_streaming_table()` | `CREATE OR REFRESH STREAMING TABLE` (no `AS`) | — | streaming-table-python | streaming-table-sql |
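
A sketch combining a Temporary View for pipeline-private logic with an explicitly declared streaming table (dataset names are hypothetical; the explicit table is populated separately by flows or Auto CDC):

```python
from pyspark import pipelines as dp

@dp.temporary_view()  # pipeline-private; never published to Unity Catalog
def cleaned_events():
    return spark.readStream.table("raw_events").where("payload IS NOT NULL")

# Declaration only — no query body; append flows or an Auto CDC flow
# targeting "events_silver" write into it.
dp.create_streaming_table("events_silver")
```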

Flow and Sink APIs

| Feature | Python (current) | Python (deprecated) | SQL (current) | SQL (deprecated) | Skill (Py) | Skill (SQL) |
|---|---|---|---|---|---|---|
| Append Flow | `@dp.append_flow()` | `@dlt.append_flow()` | `CREATE FLOW ... INSERT INTO` | — | streaming-table-python | streaming-table-sql |
| Backfill Flow | `@dp.append_flow(once=True)` | `@dlt.append_flow(once=True)` | `CREATE FLOW ... INSERT INTO ... ONCE` | — | streaming-table-python | streaming-table-sql |
| Sink (Delta/Kafka/EH/custom) | `dp.create_sink()` | `dlt.create_sink()` | N/A — Python only | — | sink-python | — |
| ForEachBatch Sink | `@dp.foreach_batch_sink()` | — | N/A — Python only | — | foreach-batch-sink-python | — |
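
A Kafka sink sketch: create the sink, then feed it with an append flow. Broker, topic, and table names are hypothetical; the mandatory `value` column is serialized as JSON (read the sink skill before relying on exact option names):

```python
from pyspark import pipelines as dp
from pyspark.sql.functions import to_json, struct

dp.create_sink(
    name="alerts_sink",
    format="kafka",
    options={
        "kafka.bootstrap.servers": "broker:9092",  # hypothetical broker
        "topic": "alerts",
    },
)

@dp.append_flow(target="alerts_sink")
def alerts_to_kafka():
    return (
        spark.readStream.table("alerts")
        .select(to_json(struct("*")).alias("value"))  # serialize whole row
    )
```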

CDC APIs

| Feature | Python (current) | Python (deprecated) | SQL (current) | SQL (deprecated) | Skill (Py) | Skill (SQL) |
|---|---|---|---|---|---|---|
| Auto CDC (streaming source) | `dp.create_auto_cdc_flow()` | `dlt.apply_changes()`, `dp.apply_changes()` | `AUTO CDC INTO ... FROM STREAM` | `APPLY CHANGES INTO ... FROM STREAM` | auto-cdc-python | auto-cdc-sql |
| Auto CDC (periodic snapshot) | `dp.create_auto_cdc_from_snapshot_flow()` | `dlt.apply_changes_from_snapshot()` | N/A — Python only | — | auto-cdc-python | — |

Data Quality APIs

| Feature | Python (current) | Python (deprecated) | SQL (current) | Skill (Py) | Skill (SQL) |
|---|---|---|---|---|---|
| Expect (warn) | `@dp.expect()` | `@dlt.expect()` | `CONSTRAINT ... EXPECT (...)` | expectations-python | expectations-sql |
| Expect or drop | `@dp.expect_or_drop()` | `@dlt.expect_or_drop()` | `CONSTRAINT ... EXPECT (...) ON VIOLATION DROP ROW` | expectations-python | expectations-sql |
| Expect or fail | `@dp.expect_or_fail()` | `@dlt.expect_or_fail()` | `CONSTRAINT ... EXPECT (...) ON VIOLATION FAIL UPDATE` | expectations-python | expectations-sql |
| Expect all (warn) | `@dp.expect_all({})` | `@dlt.expect_all({})` | Multiple `CONSTRAINT` clauses | expectations-python | expectations-sql |
| Expect all or drop | `@dp.expect_all_or_drop({})` | `@dlt.expect_all_or_drop({})` | Multiple constraints with `DROP ROW` | expectations-python | expectations-sql |
| Expect all or fail | `@dp.expect_all_or_fail({})` | `@dlt.expect_all_or_fail({})` | Multiple constraints with `FAIL UPDATE` | expectations-python | expectations-sql |
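
A sketch stacking the three violation policies on one streaming table (constraint names, columns, and the source table are hypothetical):

```python
from pyspark import pipelines as dp

@dp.table()
@dp.expect("valid_timestamp", "event_ts IS NOT NULL")       # warn: row kept, metric recorded
@dp.expect_or_drop("valid_id", "user_id IS NOT NULL")       # drop: bad rows removed
@dp.expect_all_or_fail({"positive_amount": "amount > 0"})   # fail: update aborts on violation
def clean_payments():
    return spark.readStream.table("raw_payments")
```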

Reading Data APIs

| Feature | Python (current) | Python (deprecated) | SQL (current) | SQL (deprecated) | Skill (Py) | Skill (SQL) |
|---|---|---|---|---|---|---|
| Batch read (pipeline dataset) | `spark.read.table("name")` | `dp.read("name")`, `dlt.read("name")` | `SELECT ... FROM name` | `SELECT ... FROM LIVE.name` | — | — |
| Streaming read (pipeline dataset) | `spark.readStream.table("name")` | `dp.read_stream("name")`, `dlt.read_stream("name")` | `SELECT ... FROM STREAM name` | `SELECT ... FROM STREAM LIVE.name` | — | — |
| Auto Loader (cloud files) | `spark.readStream.format("cloudFiles")` | — | `STREAM read_files(...)` | — | auto-loader-python | auto-loader-sql |
| Kafka source | `spark.readStream.format("kafka")` | — | `STREAM read_kafka(...)` | — | — | — |
| Kinesis source | `spark.readStream.format("kinesis")` | — | `STREAM read_kinesis(...)` | — | — | — |
| Pub/Sub source | `spark.readStream.format("pubsub")` | — | `STREAM read_pubsub(...)` | — | — | — |
| Pulsar source | `spark.readStream.format("pulsar")` | — | `STREAM read_pulsar(...)` | — | — | — |
| Event Hubs source | `spark.readStream.format("kafka")` + EH config | — | `STREAM read_kafka(...)` + EH config | — | — | — |
| JDBC / Lakehouse Federation | `spark.read.format("postgresql")` etc. | — | Direct table ref via federation catalog | — | — | — |
| Custom data source | `spark.read[Stream].format("custom")` | — | N/A — Python only | — | — | — |
| Static file read (batch) | `spark.read.format(...).load()` (json, csv, ...) | — | `read_files(...)` (no `STREAM`) | — | — | — |
| Skip upstream change commits | `.option("skipChangeCommits", "true")` | — | `read_stream("name", skipChangeCommits => true)` | — | streaming-table-python | streaming-table-sql |
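
A sketch of a streaming read that tolerates upstream updates and deletes via `skipChangeCommits` (the upstream table name is hypothetical):

```python
from pyspark import pipelines as dp

@dp.table()
def orders_mirror():
    # Without this option, a streaming read fails when the upstream Delta
    # table receives UPDATE/DELETE commits; with it, those commits are skipped.
    return (
        spark.readStream
        .option("skipChangeCommits", "true")
        .table("fct_orders")
    )
```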

Table/Schema Feature APIs

| Feature | Python (current) | SQL (current) | Skill (Py) | Skill (SQL) |
|---|---|---|---|---|
| Liquid clustering | `cluster_by=[...]` | `CLUSTER BY (col1, col2)` | materialized-view-python | materialized-view-sql |
| Auto liquid clustering | `cluster_by_auto=True` | `CLUSTER BY AUTO` | materialized-view-python | materialized-view-sql |
| Partition columns | `partition_cols=[...]` | `PARTITIONED BY (col1, col2)` | materialized-view-python | materialized-view-sql |
| Table properties | `table_properties={...}` | `TBLPROPERTIES (...)` | materialized-view-python | materialized-view-sql |
| Explicit schema | `schema="col1 TYPE, ..."` | `(col1 TYPE, ...) AS` | materialized-view-python | materialized-view-sql |
| Generated columns | `schema="..., col TYPE GENERATED ALWAYS AS (expr)"` | `col TYPE GENERATED ALWAYS AS (expr)` | materialized-view-python | materialized-view-sql |
| Row filter (Public Preview) | `row_filter="ROW FILTER fn ON (col)"` | `WITH ROW FILTER fn ON (col)` | materialized-view-python | materialized-view-sql |
| Column mask (Public Preview) | `schema="..., col TYPE MASK fn USING COLUMNS (col2)"` | `col TYPE MASK fn USING COLUMNS (col2)` | materialized-view-python | materialized-view-sql |
| Private dataset | `private=True` | `CREATE PRIVATE ...` | materialized-view-python | materialized-view-sql |
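
A sketch combining several of the table features above on one materialized view (dataset, column, and property choices are hypothetical):

```python
from pyspark import pipelines as dp
from pyspark.sql import functions as F

@dp.materialized_view(
    cluster_by=["region", "event_date"],               # liquid clustering keys
    table_properties={"pipelines.reset.allowed": "false"},
    schema="region STRING, event_date DATE, total BIGINT",  # explicit schema
)
def sales_by_region():
    return (
        spark.read.table("sales")
        .groupBy("region", "event_date")
        .agg(F.sum("amount").alias("total"))  # matches declared schema
    )
```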

Import / Module APIs

| Current | Deprecated | Notes |
|---|---|---|
| `from pyspark import pipelines as dp` | `import dlt` | Both work. Prefer `dp`. Do NOT change existing `dlt` imports. |
| `spark.read.table()` / `spark.readStream.table()` | `dp.read()` / `dp.read_stream()` / `dlt.read()` / `dlt.read_stream()` | Deprecated reads still work. Prefer `spark.*`. |
| — | `LIVE.` prefix | Fully deprecated. NEVER use. Causes errors in newer pipelines. |
| — | `CREATE LIVE TABLE` / `CREATE LIVE VIEW` | Fully deprecated. Use `CREATE STREAMING TABLE` / `CREATE MATERIALIZED VIEW` / `CREATE TEMPORARY VIEW`. |
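
The import convention in a new pipeline source file looks like this (a sketch of style only, not a complete pipeline):

```python
# Current style: every decorator and function in this document hangs off `dp`.
from pyspark import pipelines as dp

# Legacy files may instead use:
#   import dlt
# Leave existing `dlt` imports in place — do not mechanically rewrite them.
```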

Language-specific guides

Lakeflow Spark Declarative Pipelines (formerly Delta Live Tables / DLT) is a framework for building batch and streaming data pipelines.

Scaffolding a New Pipeline Project

Use `databricks bundle init` with a config file to scaffold non-interactively. This creates a project in the `<project_name>/` directory:

```shell
databricks bundle init lakeflow-pipelines --config-file <(echo '{"project_name": "my_pipeline", "language": "python", "serverless": "yes"}') --profile <PROFILE> < /dev/null
```
  • project_name: letters, numbers, underscores only
  • language: python or sql. Ask the user which they prefer:
    • SQL: Recommended for straightforward transformations (filters, joins, aggregations)
    • Python: Recommended for complex logic (custom UDFs, ML, advanced processing)

After scaffolding, create CLAUDE.md and AGENTS.md in the project directory. These files give coding agents guidance on how to work with the project. Use this content for both:

```markdown
# Databricks Asset Bundles Project

This project uses Databricks Asset Bundles for deployment.

## Prerequisites

Install the Databricks CLI (>= v0.288.0) if not already installed:
- macOS: `brew tap databricks/tap && brew install databricks`
- Linux: `curl -fsSL https://raw.githubusercontent.com/databricks/setup-cli/main/install.sh | sh`
- Windows: `winget install Databricks.DatabricksCLI`

Verify: `databricks -v`

## For AI Agents

Read the `databricks` skill for CLI basics, authentication, and deployment workflow.
Read the `databricks-pipelines` skill for pipeline-specific guidance.

If skills are not available, install them: `databricks experimental aitools skills install`
```

Pipeline Structure

  • Follow the medallion architecture pattern (Bronze → Silver → Gold) unless the user specifies otherwise
  • Use the convention of 1 dataset per file, named after the dataset
  • Place transformation files in a src/ or transformations/ folder
```
my-pipeline-project/
├── databricks.yml                        # Bundle configuration
├── resources/
│   ├── my_pipeline.pipeline.yml          # Pipeline definition
│   └── my_pipeline_job.job.yml           # Scheduling job (optional)
└── src/
    ├── my_table.py (or .sql)             # One dataset per file
    ├── another_table.py (or .sql)
    └── ...
```

Scheduling Pipelines

To schedule a pipeline, add a job that triggers it in resources/<name>.job.yml:

```yaml
resources:
  jobs:
    my_pipeline_job:
      trigger:
        periodic:
          interval: 1
          unit: DAYS
      tasks:
        - task_key: refresh_pipeline
          pipeline_task:
            pipeline_id: ${resources.pipelines.my_pipeline.id}
```

Running Pipelines

You must deploy before running. In local development, code changes only take effect after `databricks bundle deploy`. Always deploy before any run, dry run, or selective refresh.

  • Selective refresh is preferred when you only need to run one table. It requires that upstream dependencies are already materialized.
  • Full refresh is the most expensive and riskiest option and can lead to data loss, so use it only when truly necessary. Offer it only as an explicit follow-up that the user must opt into.

Development Workflow

  1. Validate: `databricks bundle validate --profile <profile>`
  2. Deploy: `databricks bundle deploy -t dev --profile <profile>`
  3. Run pipeline: `databricks bundle run <pipeline_name> -t dev --profile <profile>`
  4. Check status: `databricks pipelines get --pipeline-id <id> --profile <profile>`

Pipeline API Reference

Detailed reference guides for each pipeline API. Read the relevant guide before writing pipeline code.
