clickhouse-core-workflow-a
Design ClickHouse schemas with MergeTree engines, ORDER BY keys, and partitioning. Use when creating new tables, choosing engines, designing sort keys, or modeling data for analytical workloads. Trigger: "clickhouse schema design", "clickhouse table design", "clickhouse ORDER BY", "clickhouse partitioning", "MergeTree table".
Best use case
clickhouse-core-workflow-a is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using clickhouse-core-workflow-a should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/clickhouse-core-workflow-a/SKILL.md` inside your project
- Restart your AI agent so it auto-discovers the skill
How clickhouse-core-workflow-a Compares
| Feature / Agent | clickhouse-core-workflow-a | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Design ClickHouse schemas with MergeTree engines, ORDER BY keys, and partitioning. Use when creating new tables, choosing engines, designing sort keys, or modeling data for analytical workloads. Trigger: "clickhouse schema design", "clickhouse table design", "clickhouse ORDER BY", "clickhouse partitioning", "MergeTree table".
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# ClickHouse Schema Design (Core Workflow A)
## Overview
Design ClickHouse tables with correct engine selection, ORDER BY keys,
partitioning, and codec choices for analytical workloads.
## Prerequisites
- `@clickhouse/client` connected (see `clickhouse-install-auth`)
- Understanding of your query patterns (what you filter and group on)
## Instructions
### Step 1: Choose the Right Engine
| Engine | Best For | Merge Behavior | Example |
|--------|----------|----------------|---------|
| `MergeTree` | General analytics, append-only logs | Keeps all rows | Clickstream, IoT |
| `ReplacingMergeTree` | Mutable rows (upserts) | Deduplicates on merge | User profiles, state |
| `SummingMergeTree` | Pre-aggregated counters | Sums numeric columns | Page view counts |
| `AggregatingMergeTree` | Materialized view targets | Merges aggregate states | Dashboards |
| `CollapsingMergeTree` | Stateful row updates | Collapses +1/-1 sign pairs | Shopping carts |
**ClickHouse Cloud uses `SharedMergeTree`** — it is a drop-in replacement for
`MergeTree` on Cloud. You do not need to change your DDL.
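To make the engine choice concrete, here is a minimal sketch of a pre-aggregated counter table. The `analytics.page_views` name and columns are illustrative, not part of this skill's schema:
```sql
-- Hypothetical counter table: SummingMergeTree sums numeric columns
-- for rows that share the same ORDER BY key when parts merge.
CREATE TABLE analytics.page_views (
    date Date,
    tenant_id UInt32,
    page LowCardinality(String),
    views UInt64
)
ENGINE = SummingMergeTree()
ORDER BY (tenant_id, page, date);

-- Merges are asynchronous, so still aggregate at query time:
SELECT tenant_id, page, sum(views) AS views
FROM analytics.page_views
GROUP BY tenant_id, page;
```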
### Step 2: Design the ORDER BY (Sort Key)
The `ORDER BY` clause is the single most important schema decision. It defines:
- **Primary index** — sparse index over sort-key granules (8192 rows default)
- **Data layout on disk** — rows sorted physically by these columns
- **Query speed** — queries filtering on ORDER BY prefix columns hit fewer granules
**Rules of thumb:**
1. Put low-cardinality filter columns first (`event_type`, `status`)
2. Then high-cardinality columns you filter on (`user_id`, `tenant_id`)
3. End with a time column if you use range filters (`created_at`)
4. Do NOT put high-cardinality columns you never filter on in ORDER BY
```sql
-- Good: filter by tenant, then by time ranges
ORDER BY (tenant_id, event_type, created_at)
-- Bad: UUID first means every query scans the full index
ORDER BY (event_id, created_at) -- event_id is random UUID
```
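One way to sanity-check a sort key is `EXPLAIN` with index details. This sketch assumes the `analytics.events` table defined in Step 3; a query filtering on a prefix of the ORDER BY should prune most granules:
```sql
-- Show which primary-index granules the query would read.
EXPLAIN indexes = 1
SELECT count()
FROM analytics.events
WHERE tenant_id = 42
  AND event_type = 'page_view'
  AND created_at >= now() - INTERVAL 7 DAY;
```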
### Step 3: Schema Examples
#### Event Analytics Table
```sql
CREATE TABLE analytics.events (
event_id UUID DEFAULT generateUUIDv4(),
tenant_id UInt32,
event_type LowCardinality(String),
user_id UInt64,
session_id String,
properties String CODEC(ZSTD(3)), -- JSON blob, compress well
url String CODEC(ZSTD(1)),
ip_address IPv4,
country LowCardinality(FixedString(2)),
created_at DateTime64(3) DEFAULT now64(3)
)
ENGINE = MergeTree()
ORDER BY (tenant_id, event_type, toDate(created_at), user_id)
PARTITION BY toYYYYMM(created_at)
TTL created_at + INTERVAL 1 YEAR
SETTINGS index_granularity = 8192;
```
#### User Profile Table (Upserts)
```sql
CREATE TABLE analytics.users (
user_id UInt64,
email String,
plan LowCardinality(String),
mrr_cents UInt32,
properties String CODEC(ZSTD(3)),
updated_at DateTime DEFAULT now()
)
ENGINE = ReplacingMergeTree(updated_at) -- keeps latest row per ORDER BY key
ORDER BY user_id;
-- Query with FINAL to get deduplicated results
SELECT * FROM analytics.users FINAL WHERE user_id = 42;
```
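`FINAL` can be expensive on large tables. A common alternative, sketched here against the same schema, is to pick the latest row per key with `argMax`:
```sql
-- Latest value per user without FINAL: take each column from the row
-- with the greatest updated_at.
SELECT
    user_id,
    argMax(plan, updated_at)      AS plan,
    argMax(mrr_cents, updated_at) AS mrr_cents,
    max(updated_at)               AS updated_at
FROM analytics.users
WHERE user_id = 42
GROUP BY user_id;
```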
#### Daily Aggregation Table
```sql
CREATE TABLE analytics.daily_stats (
date Date,
tenant_id UInt32,
event_type LowCardinality(String),
event_count UInt64,
unique_users AggregateFunction(uniq, UInt64)
)
ENGINE = AggregatingMergeTree()
ORDER BY (tenant_id, event_type, date);
```
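An `AggregatingMergeTree` table is usually fed by a materialized view that writes `-State` values and is read with the matching `-Merge` functions. A sketch, assuming the `analytics.events` table above and a hypothetical view name:
```sql
-- Hypothetical materialized view feeding the rollup from raw events.
CREATE MATERIALIZED VIEW analytics.daily_stats_mv TO analytics.daily_stats AS
SELECT
    toDate(created_at) AS date,
    tenant_id,
    event_type,
    count()            AS event_count,
    uniqState(user_id) AS unique_users
FROM analytics.events
GROUP BY date, tenant_id, event_type;

-- Read side: finalize the partial aggregate states with uniqMerge.
SELECT date, tenant_id, event_type, uniqMerge(unique_users) AS unique_users
FROM analytics.daily_stats
GROUP BY date, tenant_id, event_type;
```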
### Step 4: Partitioning Guidelines
| Partition Expression | Typical Use | Parts Per Partition |
|---------------------|-------------|---------------------|
| `toYYYYMM(date)` | Most common — monthly | Target 10-1000 |
| `toMonday(date)` | Weekly rollups | More parts, finer drops |
| `toYYYYMMDD(date)` | Daily TTL drops | Many parts — use carefully |
| None | Small tables (<1M rows) | Fine |
**Warning:** Each partition creates separate parts on disk. Over-partitioning
(e.g., by `user_id`) creates millions of tiny parts and kills performance.
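To watch for over-partitioning, count active parts per partition in `system.parts`. The database and table names below assume the events example from Step 3:
```sql
SELECT partition, count() AS parts, sum(rows) AS rows
FROM system.parts
WHERE database = 'analytics' AND table = 'events' AND active
GROUP BY partition
ORDER BY partition;
```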
### Step 5: Codecs and Compression
```sql
-- Column-level compression codecs
column1 UInt64 CODEC(Delta, ZSTD(3)), -- Time series / sequential IDs
column2 Float64 CODEC(Gorilla, ZSTD(1)), -- Floating point (similar values)
column3 String CODEC(ZSTD(3)), -- General text / JSON
column4 DateTime CODEC(DoubleDelta, ZSTD), -- Timestamps (near-sequential)
```
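To check whether a codec choice pays off, compare compressed and uncompressed sizes per column in `system.columns` (again assuming the events table from Step 3):
```sql
SELECT
    name,
    compression_codec,
    formatReadableSize(data_compressed_bytes)   AS compressed,
    formatReadableSize(data_uncompressed_bytes) AS uncompressed,
    round(data_uncompressed_bytes / data_compressed_bytes, 1) AS ratio
FROM system.columns
WHERE database = 'analytics' AND table = 'events'
ORDER BY data_compressed_bytes DESC;
```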
## Applying Schema via Node.js
```typescript
import { createClient } from '@clickhouse/client';

// Connection details come from the environment (see `clickhouse-install-auth`).
const client = createClient({ url: process.env.CLICKHOUSE_HOST! });

async function applySchema() {
  // Idempotent DDL: safe to run on every deploy.
  await client.command({ query: 'CREATE DATABASE IF NOT EXISTS analytics' });
  await client.command({
    query: `
      CREATE TABLE IF NOT EXISTS analytics.events (
        event_id UUID DEFAULT generateUUIDv4(),
        tenant_id UInt32,
        event_type LowCardinality(String),
        user_id UInt64,
        payload String CODEC(ZSTD(3)),
        created_at DateTime DEFAULT now()
      )
      ENGINE = MergeTree()
      ORDER BY (tenant_id, event_type, created_at)
      PARTITION BY toYYYYMM(created_at)
    `,
  });
  console.log('Schema applied.');
}

applySchema()
  .catch((err) => {
    console.error(err);
    process.exitCode = 1;
  })
  .finally(() => client.close());
```
## Error Handling
| Error | Cause | Solution |
|-------|-------|----------|
| `ORDER BY expression not in primary key` | PRIMARY KEY is not a prefix of ORDER BY | Omit the explicit PRIMARY KEY or make it a prefix of ORDER BY |
| `Too many parts (300+)` | Too many small inserts or over-partitioning | Batch inserts and use a coarser partition expression |
| `Cannot convert String to UInt64` | Wrong data type | Match insert types to schema |
| `TTL expression type mismatch` | TTL on non-date column | TTL must reference a Date or DateTime column |
## Resources
- [MergeTree Engine](https://clickhouse.com/docs/engines/table-engines/mergetree-family/mergetree)
- [ReplacingMergeTree](https://clickhouse.com/docs/engines/table-engines/mergetree-family/replacingmergetree)
- [Codecs & Compression](https://clickhouse.com/docs/sql-reference/statements/create/table#column_compression_codec)
## Next Steps
For inserting and querying data, see `clickhouse-core-workflow-b`.
Related Skills
step-functions-workflow
Step Functions Workflow - Auto-activating skill for AWS Skills. Triggers on: step functions workflow. Part of the AWS Skills skill category.
sprint-workflow
This skill should be used when the user asks about "how sprints work", "sprint phases", "iteration workflow", "convergent development", "sprint lifecycle", "when to use sprints", or wants to understand the sprint execution model and its convergent diffusion approach. Trigger with relevant phrases based on the skill's purpose.
scorecard-marketing
Build quiz and assessment funnels that generate qualified leads at 30-50% conversion. Use when the user mentions "lead magnet", "quiz funnel", "assessment tool", "lead generation", or "score-based segmentation". Covers question design, dynamic results by tier, and automated follow-up sequences. For landing page conversion, see cro-methodology. For full marketing plans, see one-page-marketing. Trigger with 'scorecard', 'marketing'.
n8n-workflow-generator
N8N Workflow Generator - Auto-activating skill for Business Automation. Triggers on: n8n workflow generator. Part of the Business Automation skill category.
jira-workflow-creator
Jira Workflow Creator - Auto-activating skill for Enterprise Workflows. Triggers on: jira workflow creator. Part of the Enterprise Workflows skill category.
building-gitops-workflows
This skill enables Claude to construct GitOps workflows using ArgoCD and Flux. It is designed to generate production-ready configurations, implement best practices, and ensure a security-first approach for Kubernetes deployments. Use this skill when the user explicitly requests "GitOps workflow", "ArgoCD", "Flux", or asks for help with setting up a continuous delivery pipeline using GitOps principles. The skill will generate the necessary configuration files and setup code based on the user's specific requirements and infrastructure.
git-workflow-manager
Git Workflow Manager - Auto-activating skill for DevOps Basics. Triggers on: git workflow manager. Part of the DevOps Basics skill category.
fathom-core-workflow-b
Sync Fathom meeting data to CRM and build automated follow-up workflows. Use when integrating Fathom with Salesforce, HubSpot, or custom CRMs, or creating automated post-meeting email summaries. Trigger with phrases like "fathom crm sync", "fathom salesforce", "fathom follow-up", "fathom post-meeting workflow".
fathom-core-workflow-a
Build a meeting analytics pipeline with Fathom transcripts and summaries. Use when extracting insights from meetings, building CRM sync, or creating automated meeting follow-up workflows. Trigger with phrases like "fathom analytics", "fathom meeting pipeline", "fathom transcript analysis", "fathom action items sync".
exa-core-workflow-b
Execute Exa findSimilar, getContents, answer, and streaming answer workflows. Use when finding pages similar to a URL, retrieving content for known URLs, or getting AI-generated answers with citations. Trigger with phrases like "exa find similar", "exa get contents", "exa answer", "exa similarity search", "findSimilarAndContents".
exa-core-workflow-a
Execute Exa neural search with contents, date filters, and domain scoping. Use when building search features, implementing RAG context retrieval, or querying the web with semantic understanding. Trigger with phrases like "exa search", "exa neural search", "search with exa", "exa searchAndContents", "exa query".
evernote-core-workflow-b
Execute Evernote secondary workflow: Search and Retrieval. Use when implementing search features, finding notes, filtering content, or building search interfaces. Trigger with phrases like "search evernote", "find evernote notes", "evernote search", "query evernote".