clickhouse-core-workflow-b

Insert, query, and aggregate data in ClickHouse with real SQL patterns. Use when writing analytical queries, inserting data at scale, building dashboards, or implementing materialized views. Trigger: "clickhouse query", "clickhouse insert", "clickhouse aggregate", "clickhouse materialized view", "clickhouse SQL".

Best use case

clickhouse-core-workflow-b is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using clickhouse-core-workflow-b should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

$ curl --create-dirs -o ~/.claude/skills/clickhouse-core-workflow-b/SKILL.md "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/jeremylongshore/claude-code-plugins-plus-skills/clickhouse-core-workflow-b/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/clickhouse-core-workflow-b/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How clickhouse-core-workflow-b Compares

Feature / Agent         | clickhouse-core-workflow-b | Standard Approach
Platform Support        | Not specified              | Limited / Varies
Context Awareness       | High                       | Baseline
Installation Complexity | Unknown                    | N/A

Frequently Asked Questions

What does this skill do?

It packages SQL and @clickhouse/client patterns for inserting, querying, and aggregating data in ClickHouse: bulk inserts, analytical queries, parameterized queries from Node.js, materialized views, and window functions.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# ClickHouse Insert & Query (Core Workflow B)

## Overview

Insert data efficiently and write analytical queries with aggregations,
window functions, and materialized views.

## Prerequisites

- Tables created (see `clickhouse-core-workflow-a`)
- `@clickhouse/client` connected

## Instructions

### Step 1: Bulk Insert Patterns

```typescript
import { createClient } from '@clickhouse/client';

const client = createClient({
  url: process.env.CLICKHOUSE_HOST!,
  username: process.env.CLICKHOUSE_USER ?? 'default',
  password: process.env.CLICKHOUSE_PASSWORD ?? '',
});

// Insert many rows in one request (the client does not batch across separate insert() calls)
await client.insert({
  table: 'analytics.events',
  values: events,   // Array of objects matching table columns
  format: 'JSONEachRow',
});

// Insert from file (CSV, Parquet, etc.)
import { createReadStream } from 'fs';

await client.insert({
  table: 'analytics.events',
  values: createReadStream('./data/events.csv'),
  format: 'CSVWithNames',
});
```

**Insert best practices:**
- Batch rows: aim for 10K-100K rows per INSERT (not one at a time)
- ClickHouse creates a new "part" per INSERT — too many small inserts cause "too many parts"
- For real-time streams, buffer 1-5 seconds then flush (see the sketch below)
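
For the real-time case, a small time- and size-based buffer keeps the part count down. A minimal sketch, reusing the `client` from Step 1 (the batch size and flush interval are illustrative, not tuned values):

```typescript
// Minimal insert buffer: flush on size or on a timer, whichever fires first
type EventRow = Record<string, unknown>;

const buffer: EventRow[] = [];
const MAX_BATCH = 50_000; // illustrative size threshold
const FLUSH_MS = 2_000;   // illustrative 2-second flush interval

async function flush(): Promise<void> {
  if (buffer.length === 0) return;
  const batch = buffer.splice(0, buffer.length); // drain the buffer
  await client.insert({
    table: 'analytics.events',
    values: batch,
    format: 'JSONEachRow',
  });
}

function enqueue(event: EventRow): void {
  buffer.push(event);
  if (buffer.length >= MAX_BATCH) void flush();
}

setInterval(() => void flush(), FLUSH_MS);
```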

### Step 2: Analytical Queries

```sql
-- Top events by tenant in the last 7 days
SELECT
    tenant_id,
    event_type,
    count()                  AS event_count,
    uniqExact(user_id)       AS unique_users,
    min(created_at)          AS first_seen,
    max(created_at)          AS last_seen
FROM analytics.events
WHERE created_at >= now() - INTERVAL 7 DAY
GROUP BY tenant_id, event_type
ORDER BY event_count DESC
LIMIT 100;
```

```sql
-- Funnel step counts for signup → activation → purchase (event order not enforced; see the windowFunnel sketch below)
SELECT
    level,
    count() AS users
FROM (
    SELECT
        user_id,
        groupArray(event_type) AS journey
    FROM analytics.events
    WHERE event_type IN ('signup', 'activation', 'purchase')
      AND created_at >= today() - 30
    GROUP BY user_id
)
ARRAY JOIN arrayEnumerate(journey) AS level
GROUP BY level
ORDER BY level;
```
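
The query above counts how many of the three events each user performed, but it does not check their order. For an ordered funnel, ClickHouse provides the `windowFunnel` aggregate; a sketch against the same table (the 30-day window is given in seconds and is illustrative):

```sql
-- Ordered funnel via windowFunnel: level = deepest step reached in order
SELECT
    level,
    count() AS users
FROM (
    SELECT
        user_id,
        windowFunnel(2592000)(created_at,
            event_type = 'signup',
            event_type = 'activation',
            event_type = 'purchase') AS level
    FROM analytics.events
    WHERE created_at >= today() - 30
    GROUP BY user_id
)
WHERE level > 0
GROUP BY level
ORDER BY level;
```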

```sql
-- Retention: users active this week who were also active last week
SELECT
    count(DISTINCT curr.user_id) AS retained_users
FROM analytics.events AS curr
INNER JOIN analytics.events AS prev
    ON curr.user_id = prev.user_id
WHERE curr.created_at >= toMonday(today())
  AND prev.created_at >= toMonday(today()) - 7
  AND prev.created_at < toMonday(today());
```
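
The self-join works but scans the table twice. ClickHouse also has a dedicated `retention` aggregate that computes the same figure in one pass; a sketch using the same table and week boundaries as above:

```sql
-- One-pass alternative: retention() returns 1/0 flags per condition,
-- where flag N is set only if condition 1 also held for that user
SELECT
    sum(r[1]) AS last_week_users,
    sum(r[2]) AS retained_users
FROM (
    SELECT
        user_id,
        retention(
            created_at >= toMonday(today()) - 7 AND created_at < toMonday(today()),
            created_at >= toMonday(today())
        ) AS r
    FROM analytics.events
    GROUP BY user_id
);
```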

### Step 3: Parameterized Queries in Node.js

```typescript
// Use {param:Type} syntax for safe parameterized queries
const rs = await client.query({
  query: `
    SELECT event_type, count() AS cnt
    FROM analytics.events
    WHERE tenant_id = {tenant_id:UInt32}
      AND created_at >= {from_date:DateTime}
    GROUP BY event_type
    ORDER BY cnt DESC
  `,
  query_params: {
    tenant_id: 1,
    from_date: '2025-01-01 00:00:00',
  },
  format: 'JSONEachRow',
});
const rows = await rs.json();
```
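
For result sets too large to hold in memory, the client can stream rows instead of materializing everything with `rs.json()`. A sketch based on the @clickhouse/client streaming API for Node.js (the one-day filter is illustrative):

```typescript
// Stream rows instead of buffering the whole result set
const result = await client.query({
  query: `SELECT * FROM analytics.events WHERE created_at >= today() - 1`,
  format: 'JSONEachRow',
});

// result.stream() yields batches of rows; parse each row lazily
for await (const batch of result.stream()) {
  for (const row of batch) {
    const event = row.json(); // one row decoded at a time
    // ...process event
  }
}
```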

### Step 4: Materialized Views (Pre-Aggregation)

```sql
-- Source table receives raw events; the materialized view
-- aggregates them automatically on every INSERT.

-- Target table must exist first; it uses AggregatingMergeTree
CREATE TABLE analytics.hourly_stats (
    hour               DateTime,
    tenant_id          UInt32,
    event_type         LowCardinality(String),
    event_count        UInt64,
    unique_users_state AggregateFunction(uniq, UInt64)
)
ENGINE = AggregatingMergeTree()
ORDER BY (tenant_id, event_type, hour);

-- Materialized view writes pre-aggregated rows into the target table
CREATE MATERIALIZED VIEW analytics.hourly_stats_mv
TO analytics.hourly_stats
AS
SELECT
    toStartOfHour(created_at) AS hour,
    tenant_id,
    event_type,
    count()                   AS event_count,
    uniqState(user_id)        AS unique_users_state
FROM analytics.events
GROUP BY hour, tenant_id, event_type;

-- Query the rollup table (merging the aggregation states)
SELECT
    hour,
    sum(event_count)           AS events,
    uniqMerge(unique_users_state) AS unique_users
FROM analytics.hourly_stats
WHERE tenant_id = 1
GROUP BY hour
ORDER BY hour;
```
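
Note that a materialized view only sees rows inserted after it is created. If `analytics.events` already holds history, backfill the target table once (the cutoff below is illustrative; avoid overlapping with live inserts or rows will be double-counted):

```sql
-- One-time backfill of pre-existing rows into the rollup table
INSERT INTO analytics.hourly_stats
SELECT
    toStartOfHour(created_at) AS hour,
    tenant_id,
    event_type,
    count()                   AS event_count,
    uniqState(user_id)        AS unique_users_state
FROM analytics.events
WHERE created_at < '2025-06-01 00:00:00'  -- illustrative cutoff
GROUP BY hour, tenant_id, event_type;
```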

### Step 5: Window Functions

```sql
-- Running total and rank within each tenant
SELECT
    tenant_id,
    event_type,
    count()   AS cnt,
    sum(count()) OVER (PARTITION BY tenant_id ORDER BY count() DESC) AS running_total,
    row_number() OVER (PARTITION BY tenant_id ORDER BY count() DESC) AS rank
FROM analytics.events
WHERE created_at >= today() - 7
GROUP BY tenant_id, event_type
ORDER BY tenant_id, rank;
```

### Step 6: Common ClickHouse Functions

| Function | Description | Example |
|----------|-------------|---------|
| `count()` | Row count | `count()` |
| `uniq(col)` | Approximate distinct count | `uniq(user_id)` |
| `uniqExact(col)` | Exact distinct count | `uniqExact(user_id)` |
| `quantile(0.95)(col)` | Percentile | `quantile(0.95)(latency_ms)` |
| `arrayJoin(arr)` | Unnest array to rows | `arrayJoin(tags)` |
| `JSONExtractString(col, key)` | Extract from JSON string | `JSONExtractString(properties, 'plan')` |
| `toStartOfHour(dt)` | Truncate to hour | `toStartOfHour(created_at)` |
| `formatReadableSize(n)` | Human-readable bytes | `formatReadableSize(bytes)` |
| `if(cond, then, else)` | Conditional | `if(cnt > 0, cnt, NULL)` |
| `multiIf(...)` | Multi-branch conditional | `multiIf(x>10, 'high', x>5, 'med', 'low')` |
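
Several of these compose naturally in one query. An illustrative example (it assumes a `latency_ms` column, which is not part of the schema defined in this skill):

```sql
-- Hourly traffic with approximate users, p95 latency, and a traffic tier
SELECT
    toStartOfHour(created_at)  AS hour,
    uniq(user_id)              AS approx_users,
    quantile(0.95)(latency_ms) AS p95_latency,
    multiIf(count() > 10000, 'high',
            count() > 1000,  'medium',
            'low')             AS traffic_tier
FROM analytics.events
GROUP BY hour
ORDER BY hour;
```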

## Error Handling

| Error | Cause | Solution |
|-------|-------|----------|
| `Too many parts (300)` | Frequent small inserts | Batch inserts, increase `parts_to_throw_insert` |
| `Memory limit exceeded` | Large GROUP BY / JOIN | Add WHERE filters, increase `max_memory_usage` |
| `UNKNOWN_FUNCTION` | Wrong ClickHouse version | Check `SELECT version()` |
| `Cannot parse datetime` | Wrong format | Use `YYYY-MM-DD HH:MM:SS` format |
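
For the memory errors above, `max_memory_usage` can also be raised for a single heavy query instead of globally; @clickhouse/client accepts per-call settings (the 8 GB value is illustrative):

```typescript
// Raise the memory ceiling for one heavy aggregation only
const heavy = await client.query({
  query: `SELECT tenant_id, uniqExact(user_id) AS u
          FROM analytics.events GROUP BY tenant_id`,
  format: 'JSONEachRow',
  clickhouse_settings: {
    max_memory_usage: '8000000000', // ~8 GB for this query only
  },
});
```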

## Resources

- [SQL Reference](https://clickhouse.com/docs/sql-reference)
- [Aggregate Functions](https://clickhouse.com/docs/sql-reference/aggregate-functions)
- [Materialized Views Guide](https://clickhouse.com/blog/using-materialized-views-in-clickhouse)

## Next Steps

For error troubleshooting, see `clickhouse-common-errors`.

Related Skills

step-functions-workflow

Step Functions Workflow - Auto-activating skill for AWS Skills. Triggers on: "step functions workflow". Part of the AWS Skills skill category.

sprint-workflow

This skill should be used when the user asks about "how sprints work", "sprint phases", "iteration workflow", "convergent development", "sprint lifecycle", or "when to use sprints", or wants to understand the sprint execution model and its convergent diffusion approach. Trigger with relevant phrases based on skill purpose.

scorecard-marketing

Build quiz and assessment funnels that generate qualified leads at 30-50% conversion. Use when the user mentions "lead magnet", "quiz funnel", "assessment tool", "lead generation", or "score-based segmentation". Covers question design, dynamic results by tier, and automated follow-up sequences. For landing page conversion, see cro-methodology. For full marketing plans, see one-page-marketing. Trigger with 'scorecard', 'marketing'.

n8n-workflow-generator

N8N Workflow Generator - Auto-activating skill for Business Automation. Triggers on: "n8n workflow generator". Part of the Business Automation skill category.

jira-workflow-creator

Jira Workflow Creator - Auto-activating skill for Enterprise Workflows. Triggers on: "jira workflow creator". Part of the Enterprise Workflows skill category.

building-gitops-workflows

This skill enables Claude to construct GitOps workflows using ArgoCD and Flux. It is designed to generate production-ready configurations, implement best practices, and ensure a security-first approach for Kubernetes deployments. Use this skill when the user explicitly requests "GitOps workflow", "ArgoCD", "Flux", or asks for help with setting up a continuous delivery pipeline using GitOps principles. The skill will generate the necessary configuration files and setup code based on the user's specific requirements and infrastructure.

git-workflow-manager

Git Workflow Manager - Auto-activating skill for DevOps Basics. Triggers on: "git workflow manager". Part of the DevOps Basics skill category.

fathom-core-workflow-b

Sync Fathom meeting data to CRM and build automated follow-up workflows. Use when integrating Fathom with Salesforce, HubSpot, or custom CRMs, or creating automated post-meeting email summaries. Trigger with phrases like "fathom crm sync", "fathom salesforce", "fathom follow-up", "fathom post-meeting workflow".

fathom-core-workflow-a

Build a meeting analytics pipeline with Fathom transcripts and summaries. Use when extracting insights from meetings, building CRM sync, or creating automated meeting follow-up workflows. Trigger with phrases like "fathom analytics", "fathom meeting pipeline", "fathom transcript analysis", "fathom action items sync".

exa-core-workflow-b

Execute Exa findSimilar, getContents, answer, and streaming answer workflows. Use when finding pages similar to a URL, retrieving content for known URLs, or getting AI-generated answers with citations. Trigger with phrases like "exa find similar", "exa get contents", "exa answer", "exa similarity search", "findSimilarAndContents".

exa-core-workflow-a

Execute Exa neural search with contents, date filters, and domain scoping. Use when building search features, implementing RAG context retrieval, or querying the web with semantic understanding. Trigger with phrases like "exa search", "exa neural search", "search with exa", "exa searchAndContents", "exa query".

evernote-core-workflow-b

Execute Evernote secondary workflow: Search and Retrieval. Use when implementing search features, finding notes, filtering content, or building search interfaces. Trigger with phrases like "search evernote", "find evernote notes", "evernote search", "query evernote".