deploying-kafka-k8s
Deploys Apache Kafka on Kubernetes using the Strimzi operator with KRaft mode. Use when setting up Kafka for event-driven microservices, message queuing, or pub/sub patterns. Covers operator installation, cluster creation, topic management, and producer/consumer testing. NOT when using managed Kafka (Confluent Cloud, MSK) or local development without K8s.
Best use case
deploying-kafka-k8s is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It is especially useful for teams setting up Kafka for event-driven microservices, message queuing, or pub/sub patterns, because the operator installation, cluster creation, topic management, and producer/consumer testing steps are captured once and reused.
Users should expect a more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.
Practical example
Example input
Use the "deploying-kafka-k8s" skill to help with this workflow task. Context: Deploys Apache Kafka on Kubernetes using the Strimzi operator with KRaft mode. Use when setting up Kafka for event-driven microservices, message queuing, or pub/sub patterns. Covers operator installation, cluster creation, topic management, and producer/consumer testing. NOT when using managed Kafka (Confluent Cloud, MSK) or local development without K8s.
Example output
A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.
When to use this skill
- Use this skill when you want a reusable workflow rather than writing the same prompt again and again.
When not to use this skill
- Do not use this when you only need a one-off answer and do not need a reusable workflow.
- Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/deploying-kafka-k8s/SKILL.md` inside your project (see the sketch after this list)
- Restart your AI agent — it will auto-discover the skill
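A minimal sketch of the manual placement, assuming `SKILL.md` has already been downloaded to the current directory (the exact download URL depends on the repository):

```bash
# Create the skill directory inside your project and copy the downloaded file
mkdir -p .claude/skills/deploying-kafka-k8s
cp SKILL.md .claude/skills/deploying-kafka-k8s/SKILL.md
```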
How deploying-kafka-k8s Compares
| Feature / Agent | deploying-kafka-k8s | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Deploys Apache Kafka on Kubernetes using the Strimzi operator with KRaft mode. Use when setting up Kafka for event-driven microservices, message queuing, or pub/sub patterns. Covers operator installation, cluster creation, topic management, and producer/consumer testing. NOT when using managed Kafka (Confluent Cloud, MSK) or local development without K8s.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Deploying Kafka on Kubernetes
Deploy production-ready Apache Kafka clusters using the Strimzi operator (v0.49.1+) with KRaft mode.
## Quick Start
```bash
# 1. Create namespace
kubectl create namespace kafka
# 2. Install Strimzi operator
kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
# 3. Wait for operator
kubectl wait deployment/strimzi-cluster-operator --for=condition=Available -n kafka --timeout=300s
# 4. Deploy Kafka cluster
kubectl apply -f https://strimzi.io/examples/latest/kafka/kraft/kafka-single-node.yaml -n kafka
# 5. Wait for ready
kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka
```
## Strimzi Operator Installation
### Standard Install (Cluster-wide)
```bash
kubectl create namespace kafka
kubectl create -f 'https://strimzi.io/install/latest?namespace=kafka' -n kafka
kubectl get pods -n kafka -w
```
### Namespace-scoped Install
```bash
# Download and modify for single namespace
curl -L 'https://strimzi.io/install/latest?namespace=kafka' > strimzi-install.yaml
# Edit RoleBindings and ClusterRoles as needed
kubectl apply -f strimzi-install.yaml -n kafka
```
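Before applying the modified manifest, it can help to confirm which namespaces the RBAC objects reference (a sketch; the exact edits depend on which namespaces the operator should watch):

```bash
# Show every namespace reference in the downloaded install manifest
grep -n 'namespace:' strimzi-install.yaml

# Review RoleBindings and ClusterRoleBindings before applying
grep -n -A 2 'kind: .*RoleBinding' strimzi-install.yaml
```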
## Kafka Cluster Configurations
### Single Node (Development)
```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: kafka
spec:
  kafka:
    version: 3.9.0
    replicas: 1
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      default.replication.factor: 1
      min.insync.replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
```
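To apply the manifest (assuming it is saved as `kafka-single-node.yaml`) and wait for the cluster to become ready:

```bash
kubectl apply -f kafka-single-node.yaml -n kafka
kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka
```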
### Production Cluster (3 Nodes + KRaft)
```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: kafka-production
  namespace: kafka
spec:
  kafka:
    version: 3.9.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
      - name: external
        port: 9094
        type: nodeport
        tls: false
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: "3.9"
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
    resources:
      requests:
        memory: 2Gi
        cpu: "500m"
      limits:
        memory: 4Gi
        cpu: "2"
  entityOperator:
    topicOperator: {}
    userOperator: {}
```
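A sketch for rolling out the production cluster and confirming all three brokers start (assuming the manifest is saved as `kafka-production.yaml`):

```bash
kubectl apply -f kafka-production.yaml -n kafka
kubectl wait kafka/kafka-production --for=condition=Ready --timeout=600s -n kafka
kubectl get pods -n kafka -l strimzi.io/cluster=kafka-production
```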
## Topic Management
### Create Topic via CRD
```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: task-events
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 1
  config:
    retention.ms: 604800000   # 7 days
    segment.bytes: 1073741824 # 1GB
```
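To create the topic and check that the Topic Operator has reconciled it (assuming the manifest is saved as `task-events-topic.yaml`):

```bash
kubectl apply -f task-events-topic.yaml -n kafka
kubectl get kafkatopic task-events -n kafka
```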
### List and Describe Topics
```bash
# List topics
kubectl -n kafka run kafka-topics -ti --rm --restart=Never \
--image=quay.io/strimzi/kafka:0.49.1-kafka-3.9.0 -- \
bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --list
# Describe topic
kubectl -n kafka run kafka-topics -ti --rm --restart=Never \
--image=quay.io/strimzi/kafka:0.49.1-kafka-3.9.0 -- \
bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 \
--describe --topic task-events
```
## Producer/Consumer Testing
### Console Producer
```bash
kubectl -n kafka run kafka-producer -ti --rm --restart=Never \
--image=quay.io/strimzi/kafka:0.49.1-kafka-3.9.0 -- \
bin/kafka-console-producer.sh \
--bootstrap-server my-cluster-kafka-bootstrap:9092 \
--topic my-topic
```
### Console Consumer
```bash
kubectl -n kafka run kafka-consumer -ti --rm --restart=Never \
--image=quay.io/strimzi/kafka:0.49.1-kafka-3.9.0 -- \
bin/kafka-console-consumer.sh \
--bootstrap-server my-cluster-kafka-bootstrap:9092 \
--topic my-topic --from-beginning
```
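For a non-interactive smoke test, the producer can be driven from a shell inside the pod (a sketch; `my-topic` is only an example and relies on topic auto-creation unless the topic already exists):

```bash
# Produce a single test message
kubectl -n kafka run kafka-smoke-producer -i --rm --restart=Never \
  --image=quay.io/strimzi/kafka:0.49.1-kafka-3.9.0 -- \
  sh -c 'echo "hello kafka" | bin/kafka-console-producer.sh \
    --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic'

# Read it back and exit after one message
kubectl -n kafka run kafka-smoke-consumer -ti --rm --restart=Never \
  --image=quay.io/strimzi/kafka:0.49.1-kafka-3.9.0 -- \
  bin/kafka-console-consumer.sh \
  --bootstrap-server my-cluster-kafka-bootstrap:9092 \
  --topic my-topic --from-beginning --max-messages 1
```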
## Service Discovery
Kafka bootstrap services for client connections:
| Service | Listener | Use |
|---------|----------|-----|
| `my-cluster-kafka-bootstrap:9092` | Plain | Internal cluster apps |
| `my-cluster-kafka-bootstrap:9093` | TLS | Secure internal apps |
| `my-cluster-kafka-0.my-cluster-kafka-brokers:9092` | Plain | Direct broker access |
### Connect from Another Namespace
```yaml
# In your app deployment
env:
  - name: KAFKA_BOOTSTRAP_SERVERS
    value: "my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9092"
```
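To verify cross-namespace connectivity before wiring up an application, the same client image can be run from another namespace (a sketch using `default`; substitute your application's namespace):

```bash
kubectl -n default run kafka-connectivity-check -ti --rm --restart=Never \
  --image=quay.io/strimzi/kafka:0.49.1-kafka-3.9.0 -- \
  bin/kafka-topics.sh \
  --bootstrap-server my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9092 --list
```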
## Monitoring
### Enable Prometheus Metrics
```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    metricsConfig:
      type: jmxPrometheusExporter
      valueFrom:
        configMapKeyRef:
          name: kafka-metrics
          key: kafka-metrics-config.yml
```
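The referenced ConfigMap must exist before this change is applied. A minimal placeholder is sketched below; in practice you would use the full JMX Prometheus Exporter rule set from Strimzi's examples rather than this stub:

```yaml
# Hypothetical minimal ConfigMap; replace the stub config with real exporter rules
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-metrics
  namespace: kafka
data:
  kafka-metrics-config.yml: |
    lowercaseOutputName: true
```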
### Check Cluster Status
```bash
kubectl get kafka -n kafka
kubectl describe kafka my-cluster -n kafka
kubectl get pods -n kafka -l strimzi.io/cluster=my-cluster
```
## Troubleshooting
### Operator Not Starting
```bash
kubectl logs deployment/strimzi-cluster-operator -n kafka
kubectl describe pod -l name=strimzi-cluster-operator -n kafka
```
### Kafka Pods Not Ready
```bash
kubectl describe pod my-cluster-kafka-0 -n kafka
kubectl logs my-cluster-kafka-0 -n kafka
kubectl get events -n kafka --sort-by='.lastTimestamp'
```
### Common Issues
| Error | Cause | Fix |
|-------|-------|-----|
| PVC pending | No storage class | Set a storage class via the `class` field in the storage spec (see snippet below) or use ephemeral storage |
| Pods OOMKilled | Insufficient memory | Increase resource limits |
| Connection refused | Wrong bootstrap URL | Use `<cluster-name>-kafka-bootstrap:9092` (e.g. `my-cluster-kafka-bootstrap:9092`) |
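A sketch of the storage stanza with an explicit storage class, assuming Strimzi's `class` field for `persistent-claim` storage and a cluster that provides a `standard` StorageClass (check with `kubectl get storageclass`):

```yaml
storage:
  type: persistent-claim
  size: 100Gi
  class: standard      # assumed StorageClass name
  deleteClaim: false
```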
## Cleanup
```bash
# Delete cluster
kubectl -n kafka delete kafka my-cluster
# Delete PVCs (data)
kubectl delete pvc -l strimzi.io/name=my-cluster-kafka -n kafka
# Remove operator
kubectl -n kafka delete -f 'https://strimzi.io/install/latest?namespace=kafka'
# Delete namespace
kubectl delete namespace kafka
```
## Integration with Dapr
For Dapr pub/sub integration, see `configuring-dapr-pubsub` skill:
```yaml
# Dapr component pointing to Strimzi Kafka
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: kafka-pubsub
spec:
  type: pubsub.kafka
  metadata:
    - name: brokers
      value: "my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9092"
    - name: authType
      value: "none"
```
## Verification
Run: `python scripts/verify.py`
## Related Skills
- `operating-k8s-local` - Local Minikube cluster setup
- `configuring-dapr-pubsub` - Dapr Kafka pub/sub integration
- `scaffolding-fastapi-dapr` - FastAPI services with Kafka events
Related Skills
deploying-to-production
Automate creating a GitHub repository and deploying a web project to Vercel. Use when the user asks to deploy a website/app to production, publish a project, or set up GitHub + Vercel deployment.
when-deploying-cloud-swarm-use-flow-nexus-swarm
Deploy cloud-based AI agent swarms with event-driven workflow automation using Flow Nexus platform. Supports hierarchical, mesh, ring, and star topologies with E2B sandbox distribution.
deploying-postgres-k8s
Deploys PostgreSQL on Kubernetes using the CloudNativePG operator with automated failover. Use when setting up PostgreSQL for production workloads, high availability, or local K8s development. Covers operator installation, cluster creation, connection secrets, and backup configuration. NOT when using managed Postgres (Neon, RDS, Cloud SQL) or simple Docker containers.
deploying-cloud-k8s
Deploys applications to cloud Kubernetes (AKS/GKE/DOKS) with CI/CD pipelines. Use when deploying to production, setting up GitHub Actions, troubleshooting deployments. Covers build-time vs runtime vars, architecture matching, and battle-tested debugging.
azure-quotas
Check/manage Azure quotas and usage across providers. For deployment planning, capacity validation, region selection. WHEN: "check quotas", "service limits", "current usage", "request quota increase", "quota exceeded", "validate capacity", "regional availability", "provisioning limits", "vCPU limit", "how many vCPUs available in my subscription".
raindrop-io
Manage Raindrop.io bookmarks with AI assistance. Save and organize bookmarks, search your collection, manage reading lists, and organize research materials. Use when working with bookmarks, web research, reading lists, or when user mentions Raindrop.io.
zlibrary-to-notebooklm
Automatically downloads books from Z-Library and uploads them to Google NotebookLM. Supports PDF/EPUB formats, automatic conversion, and one-click knowledge base creation.
discover-skills
Use when none of the currently available skills fit well (or the user explicitly asks you to look for skills). Based on the task's goals and constraints, this skill produces a concise shortlist of candidate skills to help you pick the one best suited to the current task.
web-performance-seo
Fix PageSpeed Insights/Lighthouse accessibility "!" errors caused by contrast audit failures (CSS filters, OKLCH/OKLAB, low opacity, gradient text, image backgrounds). Use for accessibility-driven SEO/performance debugging and remediation.
project-to-obsidian
Converts a code project into an Obsidian knowledge base. Activates when the user mentions obsidian, project documentation, knowledge base, analyze project, or convert project. [Must do after activation]: 1. Read this SKILL.md file in full first 2. Understand the AI write rules (default to 00_Inbox/AI/, append-only, unified schema) 3. Run STEP 0: use AskUserQuestion to ask the user for confirmation 4. Start the STEP 1 project scan only after the user confirms 5. Strictly follow the order STEP 0 → 1 → 2 → 3 → 4. [Forbidden]: - Do not start analyzing the project without reading SKILL.md - Do not skip the STEP 0 user confirmation - Do not create files directly in 30_Resources (go to 00_Inbox/AI/ first) - Do not decide the output location on your own.
obsidian-helper
Smart note assistant for Obsidian. Activates when the user mentions obsidian, journal, notes, knowledge base, capture, or review. [Must do after activation]: 1. Read this SKILL.md file in full first 2. Understand the three hard rules for AI writes (00_Inbox/AI/, append-only, whitelisted fields) 3. Execute in order STEP 0 → STEP 1 → ... 4. Do not skip any step or act on your own initiative. [Forbidden]: - Do not start working without reading SKILL.md - Do not skip user confirmation steps - Do not create new notes outside 00_Inbox/AI/ (unless the user explicitly specifies a location).
internationalizing-websites
Adds multi-language support to Next.js websites with proper SEO configuration including hreflang tags, localized sitemaps, and language-specific content. Use when adding new languages, setting up i18n, optimizing for international SEO, or when user mentions localization, translation, multi-language, or specific languages like Japanese, Korean, Chinese.