clickhouse-cost-tuning

Optimize ClickHouse Cloud costs — compute scaling, storage tiering, compression, and query efficiency for lower bills. Use when analyzing ClickHouse Cloud bills, reducing storage costs, or optimizing compute utilization. Trigger: "clickhouse cost", "clickhouse billing", "reduce clickhouse spend", "clickhouse pricing", "clickhouse expensive", "clickhouse storage cost".

25 stars

Best use case

clickhouse-cost-tuning is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using clickhouse-cost-tuning should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/clickhouse-cost-tuning/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/jeremylongshore/claude-code-plugins-plus-skills/clickhouse-cost-tuning/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/clickhouse-cost-tuning/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How clickhouse-cost-tuning Compares

Feature / Agent           clickhouse-cost-tuning   Standard Approach
Platform Support          Not specified            Limited / Varies
Context Awareness         High                     Baseline
Installation Complexity   Unknown                  N/A

Frequently Asked Questions

What does this skill do?

Optimize ClickHouse Cloud costs — compute scaling, storage tiering, compression, and query efficiency for lower bills. Use when analyzing ClickHouse Cloud bills, reducing storage costs, or optimizing compute utilization. Trigger: "clickhouse cost", "clickhouse billing", "reduce clickhouse spend", "clickhouse pricing", "clickhouse expensive", "clickhouse storage cost".

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# ClickHouse Cost Tuning

## Overview

Reduce ClickHouse Cloud costs through storage optimization, compression tuning,
TTL policies, compute scaling, and query efficiency improvements.

## Prerequisites

- ClickHouse Cloud account with billing access
- Understanding of current data volumes and query patterns

## Instructions

### Step 1: Understand ClickHouse Cloud Pricing

| Component | Pricing Model | Key Driver |
|-----------|---------------|------------|
| Compute | Per-hour per replica | vCPU + memory tier |
| Storage | Per GB-month | Compressed data on disk |
| Network | Per GB egress | Query result sizes |
| Backups | Per GB stored | Backup retention |

**Key insight:** ClickHouse bills on **compressed** storage, and ClickHouse
compresses extremely well (often 10-20x). Your cost driver is usually compute,
not storage.
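
To turn this table into a rough number for your own service, multiply compressed bytes from `system.parts` by your storage rate. A minimal sketch, assuming a placeholder price of $0.10 per compressed GB-month (substitute the rate from your actual bill):

```sql
-- Rough monthly storage estimate per table
-- 0.10 is a placeholder USD price per compressed GB-month; use the rate on your bill
SELECT
    database,
    table,
    formatReadableSize(sum(bytes_on_disk)) AS compressed_size,
    round(sum(bytes_on_disk) / 1e9 * 0.10, 2) AS est_monthly_storage_usd
FROM system.parts
WHERE active
GROUP BY database, table
ORDER BY sum(bytes_on_disk) DESC
LIMIT 20;
```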

### Step 2: Analyze Storage Usage

```sql
-- Storage cost breakdown by table
SELECT
    database,
    table,
    formatReadableSize(sum(bytes_on_disk)) AS compressed_size,
    formatReadableSize(sum(data_uncompressed_bytes)) AS raw_size,
    round(sum(data_uncompressed_bytes) / sum(bytes_on_disk), 1) AS compression_ratio,
    sum(rows) AS total_rows,
    count() AS parts
FROM system.parts
WHERE active
GROUP BY database, table
ORDER BY sum(bytes_on_disk) DESC;

-- Storage by column (find bloated columns)
SELECT
    table,
    column,
    type,
    formatReadableSize(sum(column_data_compressed_bytes)) AS compressed,
    formatReadableSize(sum(column_data_uncompressed_bytes)) AS raw,
    round(sum(column_data_uncompressed_bytes) / sum(column_data_compressed_bytes), 1) AS ratio
FROM system.parts_columns
WHERE active AND database = 'analytics'
GROUP BY table, column, type
ORDER BY sum(column_data_compressed_bytes) DESC
LIMIT 30;
```

### Step 3: Improve Compression

```sql
-- Check current codec per column
SELECT name, type, compression_codec
FROM system.columns
WHERE database = 'analytics' AND table = 'events';

-- Apply better codecs to large columns
ALTER TABLE analytics.events
    MODIFY COLUMN properties String CODEC(ZSTD(3));  -- JSON blobs

ALTER TABLE analytics.events
    MODIFY COLUMN created_at DateTime CODEC(DoubleDelta, ZSTD);  -- Timestamps

ALTER TABLE analytics.events
    MODIFY COLUMN user_id UInt64 CODEC(Delta, ZSTD);  -- Sequential IDs

-- Verify improvement after next merge
OPTIMIZE TABLE analytics.events FINAL;

-- Check new compression ratio
SELECT
    column,
    formatReadableSize(sum(column_data_compressed_bytes)) AS compressed,
    round(sum(column_data_uncompressed_bytes) / sum(column_data_compressed_bytes), 1) AS ratio
FROM system.parts_columns
WHERE active AND database = 'analytics' AND table = 'events'
GROUP BY column ORDER BY sum(column_data_compressed_bytes) DESC;
```

### Step 4: TTL for Data Lifecycle

```sql
-- Expire old data automatically (reduces storage)
ALTER TABLE analytics.events
    MODIFY TTL created_at + INTERVAL 90 DAY;

-- Move old data to cheaper storage tier (ClickHouse Cloud)
ALTER TABLE analytics.events
    MODIFY TTL
        created_at + INTERVAL 30 DAY TO VOLUME 'hot',
        created_at + INTERVAL 90 DAY TO VOLUME 'cold',
        created_at + INTERVAL 365 DAY DELETE;

-- Drop entire partitions manually (fastest way to delete bulk data)
ALTER TABLE analytics.events
    DROP PARTITION '202401';   -- Drops January 2024

-- Check TTL status (the TTL clause appears in the table definition)
SELECT database, name, create_table_query
FROM system.tables
WHERE database = 'analytics' AND create_table_query LIKE '%TTL%';
```

### Step 5: Compute Cost Reduction

```sql
-- ClickHouse Cloud: Scale compute dynamically
-- Configure in Cloud Console:
-- - Auto-scaling: min 2 / max 8 replicas
-- - Idle timeout: 5 minutes (auto-suspend when no queries)
-- - Use "Development" tier for staging environments

-- Reduce per-query compute consumption
SET max_threads = 4;                  -- Use fewer cores per query
SET max_memory_usage = 5000000000;    -- 5GB cap per query

-- Server-side async inserts (reduces insert compute)
SET async_insert = 1;
SET async_insert_max_data_size = 10000000;  -- Flush at 10MB
SET async_insert_busy_timeout_ms = 5000;    -- or every 5 seconds
```
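
To make the `max_threads` / `max_memory_usage` caps stick beyond a single session, they can be pinned to a settings profile and assigned to the users or roles that run non-critical queries. A minimal sketch, assuming a `dashboard_users` role exists in your deployment:

```sql
-- Enforce compute caps for non-critical workloads at the profile level
-- 'dashboard_users' is a placeholder role; substitute your own role or user
CREATE SETTINGS PROFILE IF NOT EXISTS low_cost_queries
    SETTINGS
        max_threads = 4,
        max_memory_usage = 5000000000
    TO dashboard_users;
```

Critical pipelines keep their own settings; only the targeted role is capped.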

### Step 6: Query Efficiency = Lower Costs

```sql
-- Find the most expensive queries (by data scanned)
SELECT
    normalized_query_hash,
    count() AS executions,
    formatReadableSize(sum(read_bytes)) AS total_read,
    round(avg(query_duration_ms)) AS avg_ms,
    any(substring(query, 1, 200)) AS sample
FROM system.query_log
WHERE type = 'QueryFinish'
  AND event_time >= now() - INTERVAL 7 DAY
GROUP BY normalized_query_hash
ORDER BY sum(read_bytes) DESC
LIMIT 20;

-- Use materialized views to avoid repeated full scans
-- Instead of: SELECT count() FROM events WHERE date = today()
-- Pre-compute:
-- CREATE MATERIALIZED VIEW daily_counts_mv TO daily_counts AS
--   SELECT toDate(created_at) AS date, count() AS cnt FROM events GROUP BY date;
-- Then: SELECT cnt FROM daily_counts WHERE date = today()

-- Use PREWHERE to read less data
SELECT user_id, properties FROM analytics.events
PREWHERE event_type = 'purchase'    -- Filter first, read fewer columns
WHERE created_at >= today() - 7;
```
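
Written out in full, the commented-out materialized-view pattern above might look like the following sketch, reusing the `analytics.events` schema from earlier steps (adjust names and the aggregation to your own tables). `SummingMergeTree` keeps the per-day counts compact, and `sum(cnt)` covers parts that have not merged yet:

```sql
-- Target table holding the pre-aggregated counts
CREATE TABLE IF NOT EXISTS analytics.daily_counts
(
    date Date,
    cnt  UInt64
)
ENGINE = SummingMergeTree
ORDER BY date;

-- Populated automatically on every insert into events
CREATE MATERIALIZED VIEW IF NOT EXISTS analytics.daily_counts_mv
TO analytics.daily_counts
AS SELECT toDate(created_at) AS date, count() AS cnt
FROM analytics.events
GROUP BY date;

-- Dashboard query now reads a few rows instead of scanning events
SELECT sum(cnt) FROM analytics.daily_counts WHERE date = today();
```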

### Step 7: Monitor Costs

```typescript
// Track query costs in your application
import type { ClickHouseClient } from '@clickhouse/client';

interface QueryCost {
  readRows: number;
  readBytes: number;
  durationMs: number;
}

async function queryWithCostTracking<T>(
  client: ClickHouseClient,
  sql: string,
): Promise<{ rows: T[]; cost: QueryCost }> {
  const start = Date.now();
  const rs = await client.query({ query: sql, format: 'JSONEachRow' });
  const rows = await rs.json<T>();
  const durationMs = Date.now() - start;

  // The X-ClickHouse-Summary response header carries read_rows / read_bytes
  // for the query; values arrive as strings, so convert before returning
  const rawSummary = rs.response_headers['x-clickhouse-summary'];
  const summary = rawSummary ? JSON.parse(String(rawSummary)) : {};
  const cost: QueryCost = {
    readRows: Number(summary.read_rows ?? 0),
    readBytes: Number(summary.read_bytes ?? 0),
    durationMs,
  };

  // Log for cost analysis
  console.log({ query: sql.slice(0, 100), ...cost });

  return { rows, cost };
}
```

## Cost Optimization Checklist

- [ ] Compression codecs applied to large columns (ZSTD, Delta, DoubleDelta)
- [ ] TTL configured for data expiration
- [ ] Auto-scaling and idle suspension enabled (Cloud)
- [ ] Development/staging on smaller tiers
- [ ] Materialized views for dashboard queries
- [ ] `max_threads` limited for non-critical queries
- [ ] `async_insert` enabled for high-frequency small inserts
- [ ] Monthly cost review with `system.query_log` analysis

## Error Handling

| Issue | Cause | Solution |
|-------|-------|----------|
| Storage growing fast | No TTL, no drops | Add TTL or schedule partition drops |
| High compute bill | Full-scan queries | Add materialized views, fix ORDER BY |
| Egress charges | Large result sets | Add LIMIT, use aggregations |
| Idle compute cost | No auto-suspend | Enable idle timeout in Cloud console |
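
For the full-scan row, `EXPLAIN indexes = 1` is a quick way to check whether a query actually prunes by the table's sorting key before reaching for a materialized view. A sketch against the example table used in earlier steps:

```sql
-- Shows how many granules the primary key prunes; a full scan keeps nearly all of them
EXPLAIN indexes = 1
SELECT count()
FROM analytics.events
WHERE created_at >= today() - 7;

-- The sorting key the table was created with, for comparison with the query's filters
SELECT database, name, sorting_key
FROM system.tables
WHERE database = 'analytics' AND name = 'events';
```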

## Resources

- [ClickHouse Cloud Pricing](https://clickhouse.com/pricing)
- [Data Compression](https://clickhouse.com/docs/sql-reference/statements/create/table#column_compression_codec)
- [TTL for Data Management](https://clickhouse.com/docs/engines/table-engines/mergetree-family/mergetree#table_engine-mergetree-ttl)

## Next Steps

For architecture patterns, see `clickhouse-reference-architecture`.

Related Skills

tuning-hyperparameters

25
from ComeOnOliver/skillshub

Optimize machine learning model hyperparameters using grid search, random search, or Bayesian optimization. Finds best parameter configurations to maximize performance. Use when asked to "tune hyperparameters" or "optimize model". Trigger with relevant phrases based on skill purpose.

optimizing-cloud-costs

25
from ComeOnOliver/skillshub

Execute use when you need to work with cloud cost optimization. This skill provides cost analysis and optimization with comprehensive guidance and automation. Trigger with phrases like "optimize costs", "analyze spending", or "reduce costs".

fathom-cost-tuning

25
from ComeOnOliver/skillshub

Optimize Fathom API usage and plan selection. Trigger with phrases like "fathom cost", "fathom pricing", "fathom plan".

exa-performance-tuning

25
from ComeOnOliver/skillshub

Optimize Exa API performance with search type selection, caching, and parallelization. Use when experiencing slow responses, implementing caching strategies, or optimizing request throughput for Exa integrations. Trigger with phrases like "exa performance", "optimize exa", "exa latency", "exa caching", "exa slow", "exa fast".

evernote-performance-tuning

25
from ComeOnOliver/skillshub

Optimize Evernote integration performance. Use when improving response times, reducing API calls, or scaling Evernote integrations. Trigger with phrases like "evernote performance", "optimize evernote", "evernote speed", "evernote caching".

evernote-cost-tuning

25
from ComeOnOliver/skillshub

Optimize Evernote integration costs and resource usage. Use when managing API quotas, reducing storage usage, or optimizing upload limits. Trigger with phrases like "evernote cost", "evernote quota", "evernote limits", "evernote upload".

elevenlabs-performance-tuning

25
from ComeOnOliver/skillshub

Optimize ElevenLabs TTS latency with model selection, streaming, caching, and audio format tuning. Use when experiencing slow TTS responses, implementing real-time voice features, or optimizing audio generation throughput. Trigger: "elevenlabs performance", "optimize elevenlabs", "elevenlabs latency", "elevenlabs slow", "fast TTS", "reduce elevenlabs latency", "TTS streaming".

documenso-performance-tuning

25
from ComeOnOliver/skillshub

Optimize Documenso integration performance with caching, batching, and efficient patterns. Use when improving response times, reducing API calls, or optimizing bulk document operations. Trigger with phrases like "documenso performance", "optimize documenso", "documenso caching", "documenso batch operations".

documenso-cost-tuning

25
from ComeOnOliver/skillshub

Optimize Documenso usage costs and manage subscription efficiency. Use when analyzing costs, optimizing document usage, or managing Documenso subscription tiers. Trigger with phrases like "documenso costs", "documenso pricing", "optimize documenso spending", "documenso usage".

deepgram-performance-tuning

25
from ComeOnOliver/skillshub

Optimize Deepgram API performance for faster transcription and lower latency. Use when improving transcription speed, reducing latency, or optimizing audio processing pipelines. Trigger: "deepgram performance", "speed up deepgram", "optimize transcription", "deepgram latency", "deepgram faster", "deepgram throughput".

deepgram-cost-tuning

25
from ComeOnOliver/skillshub

Optimize Deepgram costs and usage for budget-conscious deployments. Use when reducing transcription costs, implementing usage controls, or optimizing pricing tier utilization. Trigger: "deepgram cost", "reduce deepgram spending", "deepgram pricing", "deepgram budget", "optimize deepgram usage", "deepgram billing".

databricks-performance-tuning

25
from ComeOnOliver/skillshub

Optimize Databricks cluster and query performance. Use when jobs are running slowly, optimizing Spark configurations, or improving Delta Lake query performance. Trigger with phrases like "databricks performance", "spark tuning", "databricks slow", "optimize databricks", "cluster performance".