databricks-local-dev-loop

Configure Databricks local development with Databricks Connect, Asset Bundles, and IDE. Use when setting up a local dev environment, configuring test workflows, or establishing a fast iteration cycle with Databricks. Trigger with phrases like "databricks dev setup", "databricks local", "databricks IDE", "develop with databricks", "databricks connect".

25 stars

Best use case

databricks-local-dev-loop is best used when you need a repeatable AI agent workflow instead of a one-off prompt.


Teams using databricks-local-dev-loop should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/databricks-local-dev-loop/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/jeremylongshore/claude-code-plugins-plus-skills/databricks-local-dev-loop/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/databricks-local-dev-loop/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How databricks-local-dev-loop Compares

Feature / Agent            databricks-local-dev-loop   Standard Approach
Platform Support           Not specified               Limited / Varies
Context Awareness          High                        Baseline
Installation Complexity    Unknown                     N/A

Frequently Asked Questions

What does this skill do?

It sets up a local Databricks development environment built on Databricks Connect, Asset Bundles, and your IDE: environment setup, test workflows, and a fast iteration cycle against a remote cluster.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Databricks Local Dev Loop

## Overview
Set up a fast local development workflow using Databricks Connect v2, Asset Bundles, and VS Code. Databricks Connect lets you run PySpark code locally while executing on a remote Databricks cluster, giving you IDE debugging, fast iteration, and proper test isolation.

## Prerequisites
- Completed `databricks-install-auth` setup
- Python 3.10+ (must match cluster's Python version)
- A running Databricks cluster (DBR 13.3 LTS+)
- VS Code or PyCharm

## Instructions

### Step 1: Project Structure
```
my-databricks-project/
├── src/
│   ├── __init__.py
│   ├── pipelines/
│   │   ├── __init__.py
│   │   ├── bronze.py          # Raw ingestion
│   │   ├── silver.py          # Cleansing transforms
│   │   └── gold.py            # Business aggregations
│   └── utils/
│       ├── __init__.py
│       └── helpers.py
├── tests/
│   ├── conftest.py            # Spark fixtures
│   ├── unit/
│   │   └── test_transforms.py # Local Spark tests
│   └── integration/
│       └── test_pipeline.py   # Databricks Connect tests
├── notebooks/
│   └── exploration.py
├── resources/
│   └── daily_etl.yml          # Job resource definitions
├── databricks.yml             # Asset Bundle root config
├── pyproject.toml
└── requirements.txt
```

### Step 2: Install Development Tools
```bash
set -euo pipefail

# Create virtual environment
python -m venv .venv && source .venv/bin/activate

# Databricks Connect v2 — version MUST match cluster DBR
pip install "databricks-connect==14.3.*"

# Databricks SDK (the CLI itself is installed in the databricks-install-auth prerequisite)
pip install databricks-sdk

# Testing
pip install pytest pytest-cov

# Verify the install. Connect v2 removed the old `databricks-connect test` CLI,
# so run a quick remote query instead (needs the Step 3 variables set)
python -c "from databricks.connect import DatabricksSession; DatabricksSession.builder.getOrCreate().sql('SELECT 1').show()"
```

### Step 3: Configure Databricks Connect
Databricks Connect v2 picks up credentials through the standard SDK auth chain (env vars or `~/.databrickscfg`); the target cluster comes from `DATABRICKS_CLUSTER_ID` or a `cluster_id` entry in your config profile.

```bash
# Set cluster for Connect to use
export DATABRICKS_HOST="https://adb-1234567890123456.7.azuredatabricks.net"
export DATABRICKS_TOKEN="dapi..."
export DATABRICKS_CLUSTER_ID="0123-456789-abcde123"
```
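
If you prefer not to rely on ambient environment variables, the Connect builder also accepts the connection details directly; a minimal sketch with placeholder values:

```python
from databricks.connect import DatabricksSession

# Explicit configuration, equivalent to the env vars above (placeholder values)
spark = DatabricksSession.builder.remote(
    host="https://adb-1234567890123456.7.azuredatabricks.net",
    token="dapi...",
    cluster_id="0123-456789-abcde123",
).getOrCreate()
```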

```python
# src/utils/spark_session.py
from databricks.connect import DatabricksSession

def get_spark():
    """Get a DatabricksSession — runs Spark on the remote cluster."""
    return DatabricksSession.builder.getOrCreate()

# Usage: df operations execute on the remote cluster
spark = get_spark()
df = spark.sql("SELECT current_timestamp() AS now")
df.show()  # Results streamed back locally
```

### Step 4: Asset Bundle Configuration
```yaml
# databricks.yml
bundle:
  name: my-databricks-project

workspace:
  host: ${DATABRICKS_HOST}

include:
  - resources/*.yml

variables:
  catalog:
    description: Unity Catalog name
    default: dev_catalog

targets:
  dev:
    default: true
    mode: development
    workspace:
      root_path: /Users/${workspace.current_user.userName}/.bundle/${bundle.name}/dev

  staging:
    workspace:
      root_path: /Shared/.bundle/${bundle.name}/staging
    variables:
      catalog: staging_catalog

  prod:
    mode: production
    workspace:
      root_path: /Shared/.bundle/${bundle.name}/prod
    variables:
      catalog: prod_catalog
```

```yaml
# resources/daily_etl.yml
resources:
  jobs:
    daily_etl:
      name: "daily-etl-${bundle.target}"
      tasks:
        - task_key: bronze
          spark_python_task:
            # bronze.py is a plain Python script, so run it as a spark_python_task;
            # paths in resource files are relative to the file that declares them
            python_file: ../src/pipelines/bronze.py
          new_cluster:
            spark_version: "14.3.x-scala2.12"
            node_type_id: "i3.xlarge"
            num_workers: 2
```

### Step 5: Test Setup
Note: `databricks-connect` bundles its own copy of `pyspark`, and Databricks documents that the two packages must not be installed side by side. Run the local-Spark unit tests from a separate environment (with `pip install pyspark delta-spark`), or isolate the two suites with a tool like tox or nox.
```python
# tests/conftest.py
import pytest
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip  # needs `pip install delta-spark`

@pytest.fixture(scope="session")
def local_spark():
    """Local SparkSession for fast unit tests (no cluster needed)."""
    builder = (
        SparkSession.builder
        .master("local[*]")
        .appName("unit-tests")
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    )
    # configure_spark_with_delta_pip injects the Delta jars the configs above require
    return configure_spark_with_delta_pip(builder).getOrCreate()

@pytest.fixture(scope="session")
def remote_spark():
    """DatabricksSession for integration tests (requires running cluster)."""
    from databricks.connect import DatabricksSession
    return DatabricksSession.builder.getOrCreate()
```

```python
# tests/unit/test_transforms.py
def test_dedup_by_primary_key(local_spark):
    from src.pipelines.silver import dedup_by_key

    data = [("a", 1), ("a", 2), ("b", 3)]
    df = local_spark.createDataFrame(data, ["id", "value"])

    result = dedup_by_key(df, key_col="id", order_col="value")
    assert result.count() == 2
    # Keeps latest value per key
    assert result.filter("id = 'a'").first()["value"] == 2
```
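
The test imports `dedup_by_key`, which the skill references but never defines. A minimal window-based sketch that matches the keep-latest-per-key semantics the test asserts (the implementation is an assumption, not part of the original skill):

```python
# src/pipelines/silver.py: hypothetical implementation inferred from the test above
from pyspark.sql import DataFrame, Window
from pyspark.sql import functions as F

def dedup_by_key(df: DataFrame, key_col: str, order_col: str) -> DataFrame:
    """Keep the single row with the highest order_col value per key."""
    w = Window.partitionBy(key_col).orderBy(F.col(order_col).desc())
    return (
        df.withColumn("_rn", F.row_number().over(w))
          .filter("_rn = 1")
          .drop("_rn")
    )
```

Likewise, `tests/integration/test_pipeline.py` appears in the project layout but is never shown; a minimal smoke test against the `remote_spark` fixture might look like this (the query is illustrative only):

```python
# tests/integration/test_pipeline.py: hypothetical; executes on the remote cluster
def test_cluster_roundtrip(remote_spark):
    row = remote_spark.sql("SELECT current_catalog() AS catalog, 1 AS ok").first()
    assert row["ok"] == 1
```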

### Step 6: Dev Workflow Commands
```bash
# Validate bundle configuration
databricks bundle validate

# Deploy dev resources to workspace
databricks bundle deploy -t dev

# Run a job
databricks bundle run daily_etl -t dev

# Sync local files to workspace (live reload)
databricks bundle sync -t dev --watch

# Run local unit tests (fast, no cluster)
pytest tests/unit/ -v

# Run integration tests (needs cluster)
pytest tests/integration/ -v --tb=short

# Full test with coverage
pytest tests/ --cov=src --cov-report=html
```

### Step 7: VS Code Configuration
```json
// .vscode/settings.json
{
  "python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python",
  "python.testing.pytestEnabled": true,
  "python.testing.pytestArgs": ["tests"],
  "python.envFile": "${workspaceFolder}/.env",
  "[python]": {
    "editor.defaultFormatter": "ms-python.black-formatter"
  }
}
```
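
The Output section below promises breakpoint debugging; with Connect, any script run under the VS Code debugger pauses locally while the DataFrame work happens on the cluster. A minimal sketch (the file name is hypothetical):

```python
# debug_entry.py: hypothetical scratch script for stepping through Connect code
from databricks.connect import DatabricksSession

if __name__ == "__main__":
    spark = DatabricksSession.builder.getOrCreate()
    df = spark.range(10).withColumnRenamed("id", "n")  # plan is built lazily
    breakpoint()  # debugger pauses here; try df.show() in the debug console
    print(df.count())  # count() executes on the remote cluster
```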

## Output
- Local Python environment with Databricks Connect
- Unit tests running with local Spark (no cluster required)
- Integration tests running against remote cluster
- Asset Bundle configured for dev/staging/prod deployment
- VS Code debugging with breakpoints in PySpark code

## Error Handling
| Error | Cause | Solution |
|-------|-------|----------|
| `Cluster not running` | Auto-terminated | Set `DATABRICKS_CLUSTER_ID` and start it: `databricks clusters start <cluster-id>` |
| `Version mismatch` | `databricks-connect` version differs from cluster DBR | Install matching version: `pip install "databricks-connect==14.3.*"` for DBR 14.3 |
| `SPARK_CONNECT_GRPC` error | gRPC connection blocked | Check firewall allows outbound to workspace on port 443 |
| `ModuleNotFoundError` | Missing local package install | Run `pip install -e .` for editable install |
| `Multiple SparkSessions` | Conflicting Spark instances | Always use `getOrCreate()` pattern |

## Examples

### Interactive Development Script
```python
# src/pipelines/bronze.py
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql.functions import current_timestamp, input_file_name

def ingest_raw(spark: SparkSession, source_path: str, target_table: str) -> DataFrame:
    """Bronze ingestion with metadata columns, appended to a Delta table."""
    df = (
        spark.read.format("json").load(source_path)
        .withColumn("_ingested_at", current_timestamp())
        .withColumn("_source_file", input_file_name())
    )
    df.write.mode("append").saveAsTable(target_table)
    return df

if __name__ == "__main__":
    # Works locally via Databricks Connect
    from databricks.connect import DatabricksSession
    spark = DatabricksSession.builder.getOrCreate()
    df = ingest_raw(spark, "/mnt/raw/events/", "dev_catalog.bronze.events")
    df.show(5)
```

## Resources
- [Databricks Connect v2](https://docs.databricks.com/aws/en/dev-tools/databricks-connect/python/)
- [Databricks Asset Bundles](https://docs.databricks.com/aws/en/dev-tools/bundles/)
- [VS Code Extension](https://docs.databricks.com/aws/en/dev-tools/vscode-ext/)

## Next Steps
See `databricks-sdk-patterns` for production-ready code patterns.

Related Skills

exa-local-dev-loop

25 stars · from ComeOnOliver/skillshub

Configure Exa local development with hot reload, testing, and mock responses. Use when setting up a development environment, writing tests against Exa, or establishing a fast iteration cycle. Trigger with phrases like "exa dev setup", "exa local development", "exa test setup", "develop with exa", "mock exa".

evernote-local-dev-loop

25 stars · from ComeOnOliver/skillshub

Set up efficient local development workflow for Evernote integrations. Use when configuring dev environment, setting up sandbox testing, or optimizing development iteration speed. Trigger with phrases like "evernote dev setup", "evernote local development", "evernote sandbox", "test evernote locally".

elevenlabs-local-dev-loop

25 stars · from ComeOnOliver/skillshub

Configure local ElevenLabs development with mocking, hot reload, and audio testing. Use when setting up a dev environment for TTS/voice projects, configuring test workflows, or building a fast iteration cycle with ElevenLabs audio. Trigger: "elevenlabs dev setup", "elevenlabs local development", "elevenlabs dev environment", "develop with elevenlabs", "test elevenlabs locally".

documenso-local-dev-loop

25 stars · from ComeOnOliver/skillshub

Set up local development environment and testing workflow for Documenso. Use when configuring dev environment, setting up test workflows, or establishing rapid iteration patterns with Documenso. Trigger with phrases like "documenso local dev", "documenso development", "test documenso locally", "documenso dev environment".

deepgram-local-dev-loop

25 stars · from ComeOnOliver/skillshub

Configure Deepgram local development workflow with testing and mocks. Use when setting up development environment, configuring test fixtures, or establishing rapid iteration patterns for Deepgram integration. Trigger: "deepgram local dev", "deepgram development setup", "deepgram test environment", "deepgram dev workflow", "deepgram mock".

databricks-webhooks-events

25 stars · from ComeOnOliver/skillshub

Configure Databricks job notifications, webhooks, and event handling. Use when setting up Slack/Teams notifications, configuring alerts, or integrating Databricks events with external systems. Trigger with phrases like "databricks webhook", "databricks notifications", "databricks alerts", "job failure notification", "databricks slack".

databricks-upgrade-migration

25 stars · from ComeOnOliver/skillshub

Upgrade Databricks runtime versions and migrate between features. Use when upgrading DBR versions, migrating to Unity Catalog, or updating deprecated APIs and features. Trigger with phrases like "databricks upgrade", "DBR upgrade", "databricks migration", "unity catalog migration", "hive to unity".

databricks-security-basics

25 stars · from ComeOnOliver/skillshub

Apply Databricks security best practices for secrets and access control. Use when securing API tokens, implementing least privilege access, or auditing Databricks security configuration. Trigger with phrases like "databricks security", "databricks secrets", "secure databricks", "databricks token security", "databricks scopes".

databricks-sdk-patterns

25 stars · from ComeOnOliver/skillshub

Apply production-ready Databricks SDK patterns for Python and REST API. Use when implementing Databricks integrations, refactoring SDK usage, or establishing team coding standards for Databricks. Trigger with phrases like "databricks SDK patterns", "databricks best practices", "databricks code patterns", "idiomatic databricks".

databricks-reference-architecture

25 stars · from ComeOnOliver/skillshub

Implement Databricks reference architecture with best-practice project layout. Use when designing new Databricks projects, reviewing architecture, or establishing standards for Databricks applications. Trigger with phrases like "databricks architecture", "databricks best practices", "databricks project structure", "how to organize databricks", "databricks layout".

databricks-rate-limits

25 stars · from ComeOnOliver/skillshub

Implement Databricks API rate limiting, backoff, and idempotency patterns. Use when handling rate limit errors, implementing retry logic, or optimizing API request throughput for Databricks. Trigger with phrases like "databricks rate limit", "databricks throttling", "databricks 429", "databricks retry", "databricks backoff".

databricks-prod-checklist

25 stars · from ComeOnOliver/skillshub

Execute Databricks production deployment checklist and rollback procedures. Use when deploying Databricks jobs to production, preparing for launch, or implementing go-live procedures. Trigger with phrases like "databricks production", "deploy databricks", "databricks go-live", "databricks launch checklist".