## Best use case
The Apache Flink skill is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
## Overview
Teams using the Apache Flink skill should expect more consistent output, faster repeated execution, and less prompt rewriting.
## When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
## When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
## Installation
### Manual installation (Claude Code / Cursor / Codex)
- Download SKILL.md from GitHub
- Place it in `.claude/skills/flink/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
## Frequently Asked Questions
**What does this skill do?**
It gives your AI agent a reusable Apache Flink workflow: installing PyFlink, building a Kafka-backed stream processing job, and running real-time aggregations with Flink SQL.
**Where can I find the source code?**
You can find the source code on GitHub using the link provided at the top of the page.
## SKILL.md Source
# Apache Flink
## Overview
Flink is a distributed stream processing engine for real-time analytics. Unlike batch-first systems such as Spark, Flink is stream-first: it processes events as they arrive with millisecond latency, and it supports exactly-once semantics, stateful processing, and event-time windowing.
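The exactly-once guarantee relies on periodic state checkpoints, which are disabled by default in a PyFlink job. A minimal sketch of turning them on (the 60-second interval is an arbitrary example):

```python
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
# Snapshot all operator state every 60 seconds; exactly-once is the default mode.
env.enable_checkpointing(60_000)
```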
## Instructions
### Step 1: PyFlink Setup
```bash
pip install apache-flink
```
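Note that `apache-flink` is a Python wrapper around the JVM runtime, so a local Java installation is also required. A quick smoke test to confirm the setup works (a sketch; nothing Kafka-specific assumed):

```python
# smoke_test.py — run a trivial local pipeline to verify the PyFlink install
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
env.from_collection([1, 2, 3]).map(lambda x: x * 2).print()
env.execute('smoke-test')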
### Step 2: Stream Processing
```python
# stream_job.py — Real-time event processing with PyFlink
import json

from pyflink.common import Duration
from pyflink.common.serialization import SimpleStringSchema
from pyflink.common.time import Time
from pyflink.common.watermark_strategy import TimestampAssigner, WatermarkStrategy
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors.kafka import FlinkKafkaConsumer, FlinkKafkaProducer
from pyflink.datastream.window import TumblingEventTimeWindows

env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(4)

# Read from Kafka
consumer = FlinkKafkaConsumer(
    topics='clickstream',
    deserialization_schema=SimpleStringSchema(),
    properties={
        'bootstrap.servers': 'kafka:9092',
        'group.id': 'flink-analytics',
    },
)
stream = env.add_source(consumer)


# Event-time windows never fire without timestamps and watermarks. Extract the
# event timestamp (this assumes an epoch-millis 'timestamp' field in each event;
# adjust to your schema) and tolerate up to 10 seconds of out-of-order arrival.
class ClickTimestampAssigner(TimestampAssigner):
    def extract_timestamp(self, value, record_timestamp):
        return json.loads(value)['timestamp']


watermarks = (
    WatermarkStrategy
    .for_bounded_out_of_orderness(Duration.of_seconds(10))
    .with_timestamp_assigner(ClickTimestampAssigner())
)

# Parse, filter, and aggregate page views per URL in 5-minute windows
(stream
    .assign_timestamps_and_watermarks(watermarks)
    .map(lambda x: json.loads(x))
    .filter(lambda e: e['event_type'] == 'page_view')
    .key_by(lambda e: e['page_url'])
    .window(TumblingEventTimeWindows.of(Time.minutes(5)))
    .reduce(lambda a, b: {
        'page_url': a['page_url'],
        'view_count': a.get('view_count', 1) + 1,
        'unique_users': list(set(a.get('unique_users', [a['user_id']]) + [b['user_id']])),
    })
    .map(lambda x: json.dumps(x))
    .add_sink(FlinkKafkaProducer(
        topic='page-analytics',
        serialization_schema=SimpleStringSchema(),
        producer_config={'bootstrap.servers': 'kafka:9092'},
    )))

env.execute('Clickstream Analytics')
```
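One common stumbling block: the Kafka connector classes are not bundled with the `apache-flink` wheel. Before building the source or sink, point the environment at a matching connector JAR; the path and version below are illustrative:

```python
# The JAR must match your Flink version; download it from Maven Central first.
env.add_jars("file:///opt/flink/lib/flink-sql-connector-kafka-3.1.0-1.18.jar")
```

For the Table API job in the next step, the equivalent is setting the `pipeline.jars` option on the table environment's config.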
### Step 3: Flink SQL
```python
# sql_job.py — Real-time analytics with Flink SQL
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Define a Kafka source table; the default 'group-offsets' startup mode
# requires an explicit consumer group id.
t_env.execute_sql("""
    CREATE TABLE orders (
        order_id STRING,
        user_id STRING,
        amount DECIMAL(10, 2),
        event_time TIMESTAMP(3),
        WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders',
        'properties.bootstrap.servers' = 'kafka:9092',
        'properties.group.id' = 'flink-sql-analytics',
        'format' = 'json'
    )
""")

# Real-time aggregation with tumbling windows
t_env.execute_sql("""
    SELECT
        TUMBLE_START(event_time, INTERVAL '1' MINUTE) AS window_start,
        COUNT(*) AS order_count,
        SUM(amount) AS total_revenue,
        COUNT(DISTINCT user_id) AS unique_buyers
    FROM orders
    GROUP BY TUMBLE(event_time, INTERVAL '1' MINUTE)
""").print()
```
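To persist the windowed results instead of printing them, insert into a sink table. A sketch using the built-in `print` connector as a stand-in sink (the table name and schema are illustrative):

```python
# Debug sink; swap 'print' for 'kafka', 'jdbc', etc. in production.
# SUM widens DECIMAL precision, hence DECIMAL(38, 2) for the revenue column.
t_env.execute_sql("""
    CREATE TABLE revenue_per_minute (
        window_start TIMESTAMP(3),
        order_count BIGINT,
        total_revenue DECIMAL(38, 2),
        unique_buyers BIGINT
    ) WITH ('connector' = 'print')
""")

# Continuously write the aggregation into the sink; wait() blocks on the job.
t_env.execute_sql("""
    INSERT INTO revenue_per_minute
    SELECT
        TUMBLE_START(event_time, INTERVAL '1' MINUTE),
        COUNT(*),
        SUM(amount),
        COUNT(DISTINCT user_id)
    FROM orders
    GROUP BY TUMBLE(event_time, INTERVAL '1' MINUTE)
""").wait()
```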
## Guidelines
- Flink is stream-first; Spark is batch-first with streaming added. Choose Flink for sub-second latency.
- Use event time (not processing time) for accurate windowed aggregations.
- Watermarks handle late-arriving events — configure based on your latency tolerance.
- Managed Flink: AWS Kinesis Data Analytics, Confluent Cloud, or Ververica Platform.