Databricks Apps Development
FIRST: Use the parent databricks skill for CLI basics, authentication, and profile selection.
Build apps that deploy to Databricks Apps platform.
Required Reading by Phase
| Phase | READ BEFORE proceeding |
|---|---|
| Scaffolding | Parent databricks skill (auth, warehouse discovery); run `databricks apps manifest` and use its plugins/resources to build `databricks apps init` with `--features` and `--set` (see AppKit section below) |
| Writing SQL queries | SQL Queries Guide |
| Writing UI components | Frontend Guide |
| Using useAnalyticsQuery | AppKit SDK |
| Adding API endpoints | tRPC Guide |
| Using Lakebase (OLTP database) | Lakebase Guide |
| Platform rules (permissions, deployment, limits) | Platform Guide — READ for ALL apps including AppKit |
| Non-AppKit app (Streamlit, FastAPI, Flask, Gradio, Next.js, etc.) | Other Frameworks |
Generic Guidelines
- App name: ≤26 characters, lowercase letters/numbers/hyphens only (no underscores). A `dev-` prefix adds 4 chars, max 30 total.
- Validation: run `databricks apps validate --profile <PROFILE>` before deploying.
- Smoke tests (AppKit only): ALWAYS update `tests/smoke.spec.ts` selectors BEFORE running validation. The default template checks for a "Minimal Databricks App" heading and "hello world" text — these WILL fail in your custom app. See the testing guide.
- Authentication: covered by the parent `databricks` skill.
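The naming rule above can be encoded as a small check. This is an illustrative sketch; `validateAppName` is a hypothetical helper, not part of any Databricks CLI or SDK:

```typescript
// Sketch of the app-name rules: ≤26 chars for the base name, lowercase
// letters/numbers/hyphens only; a "dev-" prefix adds 4 chars (max 30 total).
// `validateAppName` is a hypothetical helper, not a Databricks API.
function validateAppName(name: string): string[] {
  const errors: string[] = [];
  const base = name.startsWith("dev-") ? name.slice(4) : name;
  if (base.length > 26) {
    errors.push(`base name is ${base.length} chars (max 26)`);
  }
  if (name.length > 30) {
    errors.push(`full name is ${name.length} chars (max 30)`);
  }
  if (!/^[a-z0-9-]+$/.test(name)) {
    errors.push("only lowercase letters, numbers, and hyphens allowed");
  }
  return errors; // empty array = valid
}
```

Running such a check before `databricks apps init` avoids a rejected deploy over a name like `My_App` (uppercase and underscore both violate the rule).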
Project Structure (after `databricks apps init --features analytics`)
- `client/src/App.tsx` — main React component (start here)
- `config/queries/*.sql` — SQL query files (queryKey = filename without .sql)
- `server/server.ts` — backend entry (tRPC routers)
- `tests/smoke.spec.ts` — smoke test (⚠️ MUST UPDATE selectors for your app)
- `client/src/appKitTypes.d.ts` — auto-generated types (`npm run typegen`)
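The queryKey convention above (filename without `.sql`) can be sketched as follows; `queryKeyFor` is a hypothetical helper for illustration, not an AppKit export:

```typescript
// queryKey = SQL filename without its .sql extension, per the
// config/queries/ convention. Hypothetical helper, not an AppKit API.
function queryKeyFor(filename: string): string {
  // strip any directory prefix, then the .sql extension
  const base = filename.split("/").pop() ?? filename;
  return base.replace(/\.sql$/, "");
}
```

For example, `queryKeyFor("config/queries/daily_revenue.sql")` yields `"daily_revenue"`, the key you would pass as a `queryKey` prop and see in the generated types.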
Project Structure (after `databricks apps init --features lakebase`)
- `server/server.ts` — backend with Lakebase pool + tRPC routes
- `client/src/App.tsx` — React frontend
- `app.yaml` — manifest with `database` resource declaration
- `package.json` — includes `@databricks/lakebase` dependency
- Note: No `config/queries/` — Lakebase apps use `pool.query()` in tRPC, not SQL files
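The Lakebase pattern (schema init at startup, `pool.query()` inside tRPC procedures) has roughly this shape. The real pool comes from `@databricks/lakebase`; here it is stubbed behind a minimal interface so the sketch is self-contained, and that interface is an assumption, not the library's actual API:

```typescript
// Shape of the Lakebase pattern: schema init at startup, pool.query() in
// procedure handlers. QueryablePool is an assumed stand-in interface for
// the real @databricks/lakebase pool, not its documented API.
interface QueryablePool {
  query(sql: string, params?: unknown[]): Promise<{ rows: unknown[] }>;
}

async function initSchema(pool: QueryablePool): Promise<void> {
  // Run once at server startup, before registering tRPC routes.
  await pool.query(
    "CREATE TABLE IF NOT EXISTS todos (id SERIAL PRIMARY KEY, title TEXT)"
  );
}

// A tRPC-procedure-style handler: all reads/writes go through the pool,
// never through useAnalyticsQuery (that hook targets the SQL warehouse).
async function addTodo(pool: QueryablePool, title: string) {
  return pool.query("INSERT INTO todos (title) VALUES ($1)", [title]);
}
```

Note the parameterized `$1` placeholder: values go in the params array, never interpolated into the SQL string.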
Data Discovery
Before writing any SQL, use the parent databricks skill for data exploration — search `information_schema` by keyword, then batch discover-schema for the tables you need. Do NOT skip this step.
Development Workflow (FOLLOW THIS ORDER)
Analytics apps (`--features analytics`):

1. Create SQL files in `config/queries/`
2. Run `npm run typegen` — verify all queries show ✓
3. Read `client/src/appKitTypes.d.ts` to see the generated types
4. THEN write `App.tsx` using the generated types
5. Update `tests/smoke.spec.ts` selectors
6. Run `databricks apps validate --profile <PROFILE>`

DO NOT write UI code before running typegen — types won't exist and you'll waste time on compilation errors.

Lakebase apps (`--features lakebase`): No SQL files or typegen. See Lakebase Guide for the tRPC pattern: initialize schema at startup, write procedures in `server/server.ts`, then build the React frontend.
When to Use What
- Read analytics data → display in chart/table: use visualization components with the `queryKey` prop
- Read analytics data → custom display (KPIs, cards): use the `useAnalyticsQuery` hook
- Read analytics data → need computation before display: still use `useAnalyticsQuery`, transform client-side
- Read/write persistent data (users, orders, CRUD state): use the Lakebase pool via tRPC — see Lakebase Guide
- Call ML model endpoint: use tRPC
- ⚠️ NEVER use tRPC to run SELECT queries against the warehouse — always use SQL files in `config/queries/`
- ⚠️ NEVER use `useAnalyticsQuery` for Lakebase data — it queries the SQL warehouse only
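The routing rules above can be condensed into a small decision helper. This is purely illustrative; `chooseAccess` and the `Need` labels are made up for this sketch and are not part of AppKit:

```typescript
// The "When to Use What" rules as a lookup: analytics reads use
// queryKey-driven components or useAnalyticsQuery; persistent CRUD and
// model calls go through tRPC. Illustrative only, not an AppKit API.
type Need =
  | "analytics-chart"
  | "analytics-custom"
  | "persistent-crud"
  | "model-endpoint";

function chooseAccess(need: Need): string {
  switch (need) {
    case "analytics-chart":
      return "visualization component + queryKey prop";
    case "analytics-custom":
      return "useAnalyticsQuery hook (transform client-side)";
    case "persistent-crud":
      return "Lakebase pool via tRPC";
    case "model-endpoint":
      return "tRPC procedure";
  }
}
```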
Frameworks
AppKit (Recommended)
TypeScript/React framework with type-safe SQL queries and built-in components.
Official Documentation — the source of truth for all API details:
```
npx @databricks/appkit docs                              # ← ALWAYS start here to see available pages
npx @databricks/appkit docs <query>                      # view a section by name or doc path
npx @databricks/appkit docs --full                       # full index with all API entries
npx @databricks/appkit docs "appkit-ui API reference"    # example: section by name
npx @databricks/appkit docs ./docs/plugins/analytics.md  # example: specific doc file
```
DO NOT guess doc paths. Run without args first, then pick from the index. The `<query>` argument accepts both section names (from the index) and file paths. Docs are the authority on component props, hook signatures, and server APIs — skill files only cover anti-patterns and gotchas.
App Manifest and Scaffolding
Agent workflow for scaffolding: get the manifest first, then build the init command.
1. Get the manifest (JSON schema describing plugins and their resources):

   ```
   databricks apps manifest --profile <PROFILE>
   # Custom template:
   databricks apps manifest --template <GIT_URL> --profile <PROFILE>
   ```

   The output defines:
   - Plugins: each has a key (the plugin ID for `--features`), plus `requiredByTemplate` and `resources`.
   - requiredByTemplate: if true, the plugin is mandatory for this template — do not add it to `--features` (it is included automatically); you must still supply all of its required resources via `--set`. If false or absent, the plugin is optional — add it to `--features` only when the user's prompt indicates they want that capability (e.g. analytics/SQL), and then supply its required resources via `--set`.
   - Resources: each plugin has `resources.required` and `resources.optional` (arrays). Each item has `resourceKey` and `fields` (object: field name → description/env). Use `--set <plugin>.<resourceKey>.<field>=<value>` for each required resource field of every plugin you include.

2. Scaffold (DO NOT use `npx`; use the CLI only):

   ```
   databricks apps init --name <NAME> --features <plugin1>,<plugin2> \
     --set <plugin1>.<resourceKey>.<field>=<value> \
     --set <plugin2>.<resourceKey>.<field>=<value> \
     --description "<DESC>" --run none --profile <PROFILE>
   # --run none: skip auto-run after scaffolding (review code first)
   # With custom template:
   databricks apps init --template <GIT_URL> --name <NAME> --features ... --set ... --profile <PROFILE>
   ```

   - Required: `--name`, `--profile`. Name: ≤26 chars, lowercase letters/numbers/hyphens only. Use `--features` only for optional plugins the user wants (plugins with `requiredByTemplate: false` or absent); mandatory plugins must not be listed in `--features`.
   - Resources: pass `--set` for every required resource (each field in `resources.required`) for (1) all plugins with `requiredByTemplate: true`, and (2) any optional plugins you added to `--features`. Add `--set` for `resources.optional` only when the user requests them.
   - Discovery: use the parent `databricks` skill to resolve IDs (e.g. warehouse: `databricks warehouses list --profile <PROFILE>` or `databricks experimental aitools tools get-default-warehouse --profile <PROFILE>`).

DO NOT guess plugin names, resource keys, or property names — always derive them from `databricks apps manifest` output. Example: if the manifest shows plugin `analytics` with a required resource `resourceKey: "sql-warehouse"` and `fields: { "id": ... }`, include `--set analytics.sql-warehouse.id=<ID>`.
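The manifest-to-flags rules can be sketched as a small function. The `ManifestPlugin` shape below is an assumption pieced together from the fields this guide describes (`key`, `requiredByTemplate`, `resources.required`), not a documented schema, so always confirm against real `databricks apps manifest` output:

```typescript
// Derive --features and --set flags from a parsed manifest, per the rules
// above. The manifest shape is an ASSUMPTION based on this guide's
// description, not a documented Databricks schema.
interface ManifestPlugin {
  key: string;
  requiredByTemplate?: boolean;
  resources: {
    required: { resourceKey: string; fields: Record<string, string> }[];
  };
}

function buildInitFlags(
  plugins: ManifestPlugin[],
  wanted: string[],              // optional plugins the user asked for
  values: Record<string, string> // "<plugin>.<resourceKey>.<field>" -> value
): string[] {
  const flags: string[] = [];
  // --features lists only optional plugins; mandatory ones are implicit.
  const features = wanted.filter((k) =>
    plugins.some((p) => p.key === k && !p.requiredByTemplate)
  );
  if (features.length) flags.push(`--features ${features.join(",")}`);
  for (const p of plugins) {
    // --set is needed for mandatory plugins AND any optional ones we enabled.
    if (!p.requiredByTemplate && !features.includes(p.key)) continue;
    for (const r of p.resources.required) {
      for (const field of Object.keys(r.fields)) {
        const path = `${p.key}.${r.resourceKey}.${field}`;
        flags.push(`--set ${path}=${values[path] ?? "<VALUE>"}`);
      }
    }
  }
  return flags;
}
```

The key design point the sketch encodes: a `requiredByTemplate` plugin never appears in `--features`, yet still contributes `--set` flags for each of its required resource fields.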
READ AppKit Overview for project structure, workflow, and pre-implementation checklist.
Common Scaffolding Mistakes
```
# ❌ WRONG: name is NOT a positional argument
databricks apps init --features analytics my-app-name
# → "unknown command" error

# ✅ CORRECT: use --name flag
databricks apps init --name my-app-name --features analytics --set "..." --profile <PROFILE>
```
Directory Naming
`databricks apps init` creates directories in kebab-case matching the app name.
App names must be lowercase with hyphens only (≤26 chars).
Other Frameworks (Streamlit, FastAPI, Flask, Gradio, Dash, Next.js, etc.)
Databricks Apps supports any framework that runs as an HTTP server. LLMs already know these frameworks — the challenge is Databricks platform integration.
READ Other Frameworks Guide BEFORE building any non-AppKit app. It covers port/host configuration, app.yaml and databricks.yml setup, dependency management, networking, and framework-specific gotchas.