databricks-performance-tuning
Optimize Databricks cluster and query performance. Use when jobs are running slowly, optimizing Spark configurations, or improving Delta Lake query performance. Trigger with phrases like "databricks performance", "spark tuning", "databricks slow", "optimize databricks", "cluster performance".
Best use case
databricks-performance-tuning is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using databricks-performance-tuning should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/databricks-performance-tuning/SKILL.md` inside your project
- Restart your AI agent so it auto-discovers the skill
How databricks-performance-tuning Compares
| Feature / Agent | databricks-performance-tuning | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Optimize Databricks cluster and query performance. Use when jobs are running slowly, optimizing Spark configurations, or improving Delta Lake query performance. Trigger with phrases like "databricks performance", "spark tuning", "databricks slow", "optimize databricks", "cluster performance".
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Databricks Performance Tuning
## Overview
Optimize Databricks cluster sizing, Spark configuration, and Delta Lake query performance. Covers workload-specific Spark configs, Adaptive Query Execution (AQE), Liquid Clustering, Z-ordering, OPTIMIZE/VACUUM maintenance, query plan analysis, and caching strategies.
## Prerequisites
- Access to cluster configuration (admin or cluster owner)
- Understanding of workload type (ETL batch, ML training, streaming, interactive)
- Query history access for identifying slow queries
## Instructions
### Step 1: Cluster Sizing by Workload
| Workload | Instance Family | Why | Workers |
|----------|----------------|-----|---------|
| ETL Batch | Compute-optimized (c5/c6) | CPU-heavy transforms | 2-8, autoscale |
| ML Training | Memory-optimized (r5/r6) | Large model fits | 4-16, fixed |
| Streaming | Compute-optimized (c5) | Sustained throughput | 2-4, fixed |
| Interactive / Ad-hoc | General-purpose (m5) | Balanced | Single node or 1-4 |
| Heavy shuffle / spill | Storage-optimized (i3) | Fast local NVMe | 4-8 |
```python
def recommend_cluster(data_size_gb: float, workload: str) -> dict:
    """Recommend cluster config based on data size and workload type."""
    configs = {
        "etl_batch": {"node": "c5.2xlarge", "memory_gb": 16, "multiplier": 1.5},
        "ml_training": {"node": "r5.2xlarge", "memory_gb": 64, "multiplier": 2.0},
        "streaming": {"node": "c5.xlarge", "memory_gb": 8, "multiplier": 1.0},
        "interactive": {"node": "m5.xlarge", "memory_gb": 16, "multiplier": 1.0},
    }
    cfg = configs.get(workload, configs["etl_batch"])
    workers = max(1, int(data_size_gb / cfg["memory_gb"] * cfg["multiplier"]))
    return {
        "node_type_id": cfg["node"],
        "num_workers": workers,
        "autoscale": {"min_workers": max(1, workers // 2), "max_workers": workers * 2},
    }
```
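For example, calling the helper for a hypothetical 500 GB ETL batch job yields the following shape. The numbers follow directly from the heuristics above; treat them as a starting point, not a sizing guarantee.
```python
# Illustrative call: ~500 GB of input, ETL batch workload.
config = recommend_cluster(data_size_gb=500, workload="etl_batch")
print(config)
# {'node_type_id': 'c5.2xlarge', 'num_workers': 46,
#  'autoscale': {'min_workers': 23, 'max_workers': 92}}
```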
### Step 2: Spark Configuration by Workload
```python
spark_configs = {
"etl_batch": {
"spark.sql.shuffle.partitions": "auto", # AQE handles this in DBR 14+
"spark.sql.adaptive.enabled": "true",
"spark.sql.adaptive.coalescePartitions.enabled": "true",
"spark.sql.adaptive.skewJoin.enabled": "true",
"spark.databricks.delta.optimizeWrite.enabled": "true",
"spark.databricks.delta.autoCompact.enabled": "true",
"spark.sql.files.maxPartitionBytes": "134217728", # 128MB
},
"ml_training": {
"spark.driver.memory": "16g",
"spark.executor.memory": "16g",
"spark.memory.fraction": "0.8",
"spark.memory.storageFraction": "0.3",
"spark.serializer": "org.apache.spark.serializer.KryoSerializer",
"spark.kryoserializer.buffer.max": "1024m",
},
"streaming": {
"spark.sql.streaming.schemaInference": "true",
"spark.databricks.delta.autoCompact.minNumFiles": "10",
"spark.sql.shuffle.partitions": "auto",
},
"interactive": {
"spark.sql.inMemoryColumnarStorage.compressed": "true",
"spark.databricks.cluster.profile": "singleNode",
"spark.master": "local[*]",
},
}
```
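Settings under `spark.sql.*` and most Delta settings can be applied per session from a notebook; driver memory, executor memory, the serializer, and the cluster profile must be set in the cluster's Spark config at creation time. A minimal sketch, assuming the `spark_configs` dict above is in scope:
```python
# Minimal sketch: push session-scoped settings from spark_configs.
# Cluster-scoped keys are skipped because they must be set when the
# cluster is created, not at runtime.
CLUSTER_SCOPED = {
    "spark.driver.memory",
    "spark.executor.memory",
    "spark.memory.fraction",
    "spark.memory.storageFraction",
    "spark.serializer",
    "spark.kryoserializer.buffer.max",
    "spark.databricks.cluster.profile",
    "spark.master",
}

def apply_session_configs(workload: str) -> None:
    for key, value in spark_configs[workload].items():
        if key in CLUSTER_SCOPED:
            continue  # set these in the cluster UI / API instead
        spark.conf.set(key, value)

apply_session_configs("etl_batch")
```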
### Step 3: Delta Lake Optimization
#### OPTIMIZE with Z-Ordering
```sql
-- Compact small files and co-locate data by frequently filtered columns
OPTIMIZE prod_catalog.silver.orders ZORDER BY (order_date, customer_id);
-- Check file stats before and after
DESCRIBE DETAIL prod_catalog.silver.orders;
-- Look at: numFiles (should decrease), sizeInBytes
```
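A small sketch for capturing those stats programmatically; `DESCRIBE DETAIL` exposes `numFiles` and `sizeInBytes` for Delta tables:
```python
# Sketch: record file count and size before and after OPTIMIZE.
def delta_stats(table: str) -> dict:
    row = spark.sql(f"DESCRIBE DETAIL {table}").collect()[0]
    return {"numFiles": row["numFiles"], "sizeInBytes": row["sizeInBytes"]}

table = "prod_catalog.silver.orders"
before = delta_stats(table)
spark.sql(f"OPTIMIZE {table} ZORDER BY (order_date, customer_id)")
after = delta_stats(table)
print(f"files {before['numFiles']} -> {after['numFiles']}, "
      f"bytes {before['sizeInBytes']} -> {after['sizeInBytes']}")
```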
#### Liquid Clustering (DBR 13.3+ — Replaces Partitioning + Z-Order)
```sql
-- Enable Liquid Clustering — Databricks auto-optimizes data layout
ALTER TABLE prod_catalog.silver.orders CLUSTER BY (order_date, region);
-- Trigger incremental clustering
OPTIMIZE prod_catalog.silver.orders;
-- Advantages over Z-order:
-- * Incremental (only re-clusters new data)
-- * No need to choose between partitioning and Z-ordering
-- * Works with Deletion Vectors for faster DELETE/UPDATE
```
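For new tables, the clustering keys can be declared at creation time instead of altering the table afterwards. A sketch, with an illustrative table and columns:
```python
# Sketch: declare Liquid Clustering keys when the table is created (DBR 13.3+).
# Table name and columns are illustrative.
spark.sql("""
    CREATE TABLE IF NOT EXISTS prod_catalog.silver.events (
        event_id   STRING,
        event_date DATE,
        region     STRING,
        payload    STRING
    )
    CLUSTER BY (event_date, region)
""")
```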
#### Predictive Optimization
```sql
-- Deletion Vectors speed up DELETE/UPDATE/MERGE and pair well with clustering
ALTER TABLE prod_catalog.silver.orders
SET TBLPROPERTIES ('delta.enableDeletionVectors' = 'true');
-- Let Databricks auto-schedule OPTIMIZE and VACUUM for every table in the schema
ALTER SCHEMA prod_catalog.silver
ENABLE PREDICTIVE OPTIMIZATION;
```
#### Compute Statistics
```sql
ANALYZE TABLE prod_catalog.silver.orders COMPUTE STATISTICS;
ANALYZE TABLE prod_catalog.silver.orders COMPUTE STATISTICS FOR COLUMNS order_date, amount, region;
```
### Step 4: Query Performance Analysis
```sql
-- Find slow queries (SQL warehouse query history)
SELECT statement_id, executed_by,
total_duration_ms / 1000 AS duration_sec,
rows_produced, bytes_scanned / 1024 / 1024 AS scanned_mb,
statement_text
FROM system.query.history
WHERE total_duration_ms > 30000 -- > 30 seconds
AND start_time > current_timestamp() - INTERVAL 24 HOURS
ORDER BY total_duration_ms DESC
LIMIT 20;
```
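The same check can run from a notebook and feed a scheduled alert; a rough sketch reusing the query above (the 30-second threshold and 24-hour lookback are arbitrary):
```python
# Sketch: print the slowest warehouse statements from the last 24 hours.
slow = spark.sql("""
    SELECT executed_by,
           total_duration_ms / 1000 AS duration_sec,
           statement_text
    FROM system.query.history
    WHERE total_duration_ms > 30000
      AND start_time > current_timestamp() - INTERVAL 24 HOURS
    ORDER BY total_duration_ms DESC
    LIMIT 20
""")
for row in slow.collect():
    print(f"{row['duration_sec']:.1f}s  {row['executed_by']}  {row['statement_text'][:80]}")
```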
```python
# Analyze a query plan for bottlenecks
df = spark.table("prod_catalog.silver.orders").filter("region = 'US'")
df.explain(mode="formatted")
# Look for: BroadcastHashJoin (good), SortMergeJoin (may be slow on skewed data)
# Look for: ColumnarToRow conversion (indicates non-Photon path)
```
### Step 5: Join Optimization
```python
from pyspark.sql.functions import broadcast
# Rule of thumb: broadcast tables < 100MB
# BAD: Sort-merge join on small lookup table
result = orders.join(products, "product_id") # triggers expensive shuffle
# GOOD: Broadcast the small table
result = orders.join(broadcast(products), "product_id") # no shuffle
# For skewed keys: use AQE skew join handling
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")
spark.conf.set("spark.sql.adaptive.skewJoin.skewedPartitionThresholdInBytes", "256m")
```
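If one key still dominates after AQE skew handling, salting the hot key spreads it across partitions. A minimal sketch, assuming the `orders` and `products` DataFrames from above and an arbitrary salt factor:
```python
from pyspark.sql import functions as F

# Sketch: salt a skewed join key. SALT_BUCKETS is an arbitrary choice;
# larger values spread the hot key over more partitions.
SALT_BUCKETS = 16

# Large, skewed side: assign each row a random salt bucket.
orders_salted = orders.withColumn(
    "salt", (F.rand() * SALT_BUCKETS).cast("int")
)
# Smaller side: replicate each row once per salt bucket.
products_salted = products.withColumn(
    "salt", F.explode(F.array([F.lit(i) for i in range(SALT_BUCKETS)]))
)
result = (
    orders_salted.join(products_salted, ["product_id", "salt"]).drop("salt")
)
```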
### Step 6: Caching Strategy
```python
# Cache a frequently-accessed table
spark.table("prod_catalog.gold.daily_metrics").cache()
# Or use Delta Cache (automatic for i3/r5 instances with local SSD)
# Enable in cluster config:
# spark.databricks.io.cache.enabled = true
# spark.databricks.io.cache.maxDiskUsage = 50g
# NEVER cache Bronze tables — they're too large and change frequently
# ALWAYS cache small lookup/dimension tables used in multiple queries
```
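A short usage sketch of the lookup-table pattern; the dimension table name and columns are illustrative:
```python
# Sketch: cache a small dimension table reused by several queries, then release it.
dim_products = spark.table("prod_catalog.gold.dim_products").cache()
dim_products.count()  # materialize the cache eagerly

revenue_by_category = (
    orders.join(dim_products, "product_id").groupBy("category").sum("amount")
)
orders_by_brand = (
    orders.join(dim_products, "product_id").groupBy("brand").count()
)

dim_products.unpersist()  # free executor memory once the queries are done
```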
### Step 7: VACUUM and Table Maintenance Schedule
```sql
-- Clean up old file versions (default retention: 7 days)
VACUUM prod_catalog.silver.orders RETAIN 168 HOURS;
-- Schedule via Databricks job or DLT maintenance task
-- Recommended: weekly OPTIMIZE, daily VACUUM for active tables
```
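A sketch of what that scheduled maintenance job could look like as a daily notebook; the table list, retention, and the "OPTIMIZE on Sundays" rule are placeholders:
```python
from datetime import datetime, timezone

# Sketch: daily maintenance notebook run as a Databricks job.
# Tables, retention, and the weekly-OPTIMIZE rule are placeholders.
TABLES = [
    "prod_catalog.silver.orders",
    "prod_catalog.silver.customers",
]
RETENTION_HOURS = 168

for table in TABLES:
    if datetime.now(timezone.utc).weekday() == 6:  # Sunday: weekly OPTIMIZE
        spark.sql(f"OPTIMIZE {table}")
    spark.sql(f"VACUUM {table} RETAIN {RETENTION_HOURS} HOURS")
```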
## Output
- Cluster sized appropriately for workload type
- Spark configs tuned per workload (ETL, ML, streaming, interactive)
- Delta tables optimized with Z-ordering or Liquid Clustering
- Slow queries identified via query history analysis
- Join and caching strategies applied
## Error Handling
| Issue | Cause | Solution |
|-------|-------|----------|
| OOM during shuffle | Skewed partition | Enable AQE skew join or salt the join key |
| Slow joins | Large shuffle | `broadcast()` tables < 100MB |
| Too many small files | Frequent small writes | Run `OPTIMIZE` or enable `autoCompact` |
| VACUUM fails below retention | Requested retention < 7-day default | Keep `RETAIN 168 HOURS` or more, or deliberately lower `delta.deletedFileRetentionDuration` (at the cost of time travel) |
| Query plan shows `ColumnarToRow` | Non-Photon code path | Use Photon-enabled runtime (suffix `-photon-scala2.12`) |
## Examples
### Quick Table Tune-Up
```sql
OPTIMIZE prod_catalog.silver.orders ZORDER BY (order_date, customer_id);
ANALYZE TABLE prod_catalog.silver.orders COMPUTE STATISTICS;
VACUUM prod_catalog.silver.orders RETAIN 168 HOURS;
```
### Before/After Comparison
```python
import time
table = "prod_catalog.silver.orders"
query = f"SELECT region, SUM(amount) FROM {table} WHERE order_date > '2024-01-01' GROUP BY region"
# Before optimization
start = time.time()
spark.sql(query).collect()
before = time.time() - start
spark.sql(f"OPTIMIZE {table} ZORDER BY (order_date, region)")
# After optimization
start = time.time()
spark.sql(query).collect()
after = time.time() - start
print(f"Before: {before:.1f}s, After: {after:.1f}s, Speedup: {before/after:.1f}x")
```
## Resources
- [Performance Guide](https://docs.databricks.com/aws/en/delta/best-practices)
- [Liquid Clustering](https://docs.databricks.com/aws/en/delta/clustering)
- [OPTIMIZE](https://docs.databricks.com/aws/en/sql/language-manual/delta-optimize)
- [AQE](https://docs.databricks.com/aws/en/sql/language-manual/sql-ref-syntax-qry-select-adaptive)
## Next Steps
For cost optimization, see `databricks-cost-tuning`.
Related Skills
validating-performance-budgets
Validate application performance against defined budgets to identify regressions early. Use when checking page load times, bundle sizes, or API response times against thresholds. Trigger with phrases like "validate performance budget", "check performance metrics", or "detect performance regression".
tuning-hyperparameters
Optimize machine learning model hyperparameters using grid search, random search, or Bayesian optimization. Finds best parameter configurations to maximize performance. Use when asked to "tune hyperparameters" or "optimize model". Trigger with relevant phrases based on skill purpose.
analyzing-query-performance
This skill enables Claude to analyze and optimize database query performance. It activates when the user discusses query performance issues, provides an EXPLAIN plan, or asks for optimization recommendations. The skill leverages the query-performance-analyzer plugin to interpret EXPLAIN plans, identify performance bottlenecks (e.g., slow queries, missing indexes), and suggest specific optimization strategies. It is useful for improving database query execution speed and resource utilization.
providing-performance-optimization-advice
Provide comprehensive prioritized performance optimization recommendations for frontend, backend, and infrastructure. Use when analyzing bottlenecks or seeking improvement strategies. Trigger with phrases like "optimize performance", "improve speed", or "performance recommendations".
profiling-application-performance
Enables the AI assistant to profile application performance, analyzing CPU usage, memory consumption, and execution time. Triggered when the user requests performance analysis, bottleneck identification, or optimization recommendations. Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.
performance-testing
This skill enables Claude to design, execute, and analyze performance tests using the performance-test-suite plugin. It is activated when the user requests load testing, stress testing, spike testing, or endurance testing, and when discussing performance metrics such as response time, throughput, and error rates. It identifies performance bottlenecks related to CPU, memory, database, or network issues. The plugin provides comprehensive reporting, including percentiles, graphs, and recommendations.
detecting-performance-regressions
This skill enables Claude to automatically detect performance regressions in a CI/CD pipeline. It analyzes performance metrics, such as response time and throughput, and compares them against baselines or thresholds. Use this skill when the user requests to "detect performance regressions", "analyze performance metrics for regressions", or "find performance degradation" in a CI/CD environment. The skill is also triggered when the user mentions "baseline comparison", "statistical significance analysis", or "performance budget violations". It helps identify and report performance issues early in the development cycle.
performance-lighthouse-runner
Performance Lighthouse Runner - Auto-activating skill for Frontend Development. Triggers on: performance lighthouse runner. Part of the Frontend Development skill category.
performance-baseline-creator
Performance Baseline Creator - Auto-activating skill for Performance Testing. Triggers on: performance baseline creator. Part of the Performance Testing skill category.
optimizing-cache-performance
Enables the AI assistant to analyze and improve application caching strategies: cache hit rates, TTL configurations, cache key design, and invalidation strategies. Use this skill when the user requests to "optimize cache performance". Trigger with phrases like 'optimize', 'performance', or 'speed up'.
aggregating-performance-metrics
This skill enables Claude to aggregate and centralize performance metrics from various sources. It is used when the user needs to consolidate metrics from applications, systems, databases, caches, queues, and external services into a central location for monitoring and analysis. The skill is triggered by requests to "aggregate metrics", "centralize performance metrics", or similar phrases related to metrics aggregation and monitoring. It facilitates designing a metrics taxonomy, choosing appropriate aggregation tools, and setting up dashboards and alerts.
fathom-cost-tuning
Optimize Fathom API usage and plan selection. Trigger with phrases like "fathom cost", "fathom pricing", "fathom plan".