setting-up-log-aggregation
Use this skill when setting up log aggregation solutions with the ELK Stack, Grafana Loki, or Splunk. Trigger it with phrases like "setup log aggregation", "deploy ELK stack", "configure Loki", or "install Splunk". It generates production-ready configurations for data ingestion, processing, storage, and visualization with proper security and scalability.
Best use case
setting-up-log-aggregation is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using setting-up-log-aggregation should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/setting-up-log-aggregation/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How setting-up-log-aggregation Compares
| Feature / Agent | setting-up-log-aggregation | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
This skill is used when setting up log aggregation solutions with the ELK Stack, Grafana Loki, or Splunk. Trigger it with phrases like "setup log aggregation", "deploy ELK stack", "configure Loki", or "install Splunk". It generates production-ready configurations for data ingestion, processing, storage, and visualization with proper security and scalability.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Setting Up Log Aggregation

## Overview

Deploy centralized log aggregation platforms (ELK Stack, Grafana Loki, Splunk) with ingestion pipelines, structured parsing, retention policies, visualization dashboards, and alerting. Configure log shippers (Filebeat, Promtail, Fluentd) to collect from applications, containers, and system logs with proper security and scalability.

## Prerequisites

- Target infrastructure identified: Kubernetes, Docker Compose, or VMs
- Storage requirements calculated: estimate daily log volume and multiply by retention period
- Network connectivity between log sources and the aggregation platform (typically ports 9200, 3100, 8088)
- Authentication mechanism defined (LDAP, OAuth, API tokens, or basic auth)
- Resource allocation planned: Elasticsearch needs significant heap memory (minimum 4 GB per node)

## Instructions

1. Select the log aggregation platform: ELK for full-text search and complex queries, Loki for lightweight Kubernetes-native logging, Splunk for enterprise environments with advanced analytics
2. Deploy the storage backend: Elasticsearch cluster, Loki with object storage (S3/GCS), or Splunk indexers
3. Configure log shippers on sources: Filebeat for ELK, Promtail for Loki, Fluentd/Fluent Bit for multi-destination shipping
4. Define parsing rules: Logstash grok patterns for unstructured logs, JSON parsing for structured logs, multiline handling for stack traces
5. Set retention policies: Index Lifecycle Management (ILM) for Elasticsearch, chunk retention for Loki, index rotation for Splunk
6. Deploy visualization: Kibana dashboards for ELK, Grafana dashboards for Loki, Splunk Search for Splunk
7. Configure alerting: define log-based alerts for error spikes, application exceptions, and security events
8. Implement RBAC: restrict dashboard access and log visibility by team and environment
9. Test the full pipeline: generate test logs, verify ingestion, confirm parsing, and validate dashboard display

## Output

- Docker Compose or Kubernetes manifests for the log aggregation stack
- Log shipper configuration files (Filebeat YAML, Promtail config, Fluentd conf)
- Parsing and field extraction rules (Logstash pipeline, grok patterns)
- Retention policy configuration (ILM, lifecycle rules)
- Dashboard JSON exports for Kibana or Grafana
- Alert rule definitions for error rate monitoring

## Error Handling

| Error | Cause | Solution |
|-------|-------|----------|
| `Elasticsearch heap space exhausted` | JVM heap too small for index volume | Increase `ES_JAVA_OPTS` heap size (set to 50% of available RAM, max 32 GB) or add nodes |
| `Cannot connect to Elasticsearch` | Network issue or Elasticsearch not started | Verify Elasticsearch is running and healthy; check firewall rules and bind address |
| `Failed to create index` | Disk space full or index template misconfigured | Check disk usage with `df -h`; review index template settings and shard allocation |
| `Failed to parse log line` | Grok pattern mismatch or unexpected log format | Test grok patterns with the Kibana Grok Debugger; add a fallback pattern for unmatched lines |
| `Promtail: too many open files` | System file descriptor limit too low for log tailing | Increase `ulimit -n` to 65536; reduce the number of watched paths |

## Examples

- "Deploy an ELK stack on Docker Compose with Filebeat collecting Nginx and application logs, Logstash parsing with grok, and a Kibana dashboard for 5xx error monitoring."
- "Set up Loki + Promtail on Kubernetes with 14-day retention, basic auth, and a Grafana dashboard showing logs per namespace."
- "Configure Fluentd to ship logs from 20 application servers to both Elasticsearch (hot storage, 7 days) and S3 (cold storage, 1 year)."
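As a sketch of the Loki path in step 3 above, a minimal Promtail configuration might look like the following. The Loki URL, log paths, and labels are illustrative assumptions, not values prescribed by this skill:

```yaml
# Minimal Promtail config (sketch; hostnames and paths are hypothetical)
server:
  http_listen_port: 9080          # Promtail's own HTTP port

positions:
  filename: /tmp/positions.yaml   # tracks read offsets across restarts

clients:
  - url: http://loki:3100/loki/api/v1/push   # Loki push endpoint (assumed service name)

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*.log            # glob of files to tail
```

In production you would typically add basic auth or a tenant ID on the client entry and narrow `__path__` to the specific application logs you need.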
## Resources

- Elastic Stack guide: https://www.elastic.co/guide/
- Grafana Loki: https://grafana.com/docs/loki/latest/
- Fluentd documentation: https://docs.fluentd.org/
- Promtail configuration: https://grafana.com/docs/loki/latest/send-data/promtail/
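For the ELK path, the shipper configuration from step 3 can be sketched as a minimal Filebeat file. The log path, Elasticsearch host, and credentials below are placeholder assumptions:

```yaml
# Minimal filebeat.yml (sketch; hosts, paths, and credentials are hypothetical)
filebeat.inputs:
  - type: filestream            # modern replacement for the deprecated `log` input
    id: nginx-access
    paths:
      - /var/log/nginx/access.log

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]   # assumed service name on the Docker network
  username: "elastic"
  password: "${ELASTIC_PASSWORD}"        # injected via environment, never hard-coded
```

To route through Logstash for grok parsing instead, you would replace `output.elasticsearch` with an `output.logstash` section pointing at the Logstash beats port.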
Related Skills
setting-up-synthetic-monitoring
This skill automates the setup of synthetic monitoring for applications. It allows Claude to proactively track performance and availability by configuring uptime, transaction, and API monitoring. Use this skill when the user requests to "set up synthetic monitoring", "configure uptime monitoring", "track application performance", or needs help with "proactive performance tracking". The skill helps to identify critical endpoints and user journeys, design monitoring scenarios, and configure alerts and dashboards.
setting-up-experiment-tracking
Implement machine learning experiment tracking using MLflow or Weights & Biases. Configures environment and provides code for logging parameters, metrics, and artifacts. Use when asked to "setup experiment tracking" or "initialize MLflow". Trigger with relevant phrases based on skill purpose.
setting-up-distributed-tracing
This skill automates the setup of distributed tracing for microservices. It helps developers implement end-to-end request visibility by configuring context propagation, span creation, trace collection, and analysis. Use this skill when the user re...
cursor-privacy-settings
Configure Cursor privacy mode, data handling, telemetry, and sensitive file exclusion. Triggers on "cursor privacy", "cursor data", "cursor security", "privacy mode", "cursor telemetry", "cursor data retention".
aggregation-helper
Aggregation Helper: an auto-activating skill for Data Analytics. Triggers on: "aggregation helper". Part of the Data Analytics skill category.
claude-settings-audit
Analyze a repository to generate recommended Claude Code settings.json permissions. Use when setting up a new project, auditing existing settings, or determining which read-only bash commands to allow. Detects tech stack, build tools, and monorepo structure.
when-setting-network-security-use-network-security-setup
Configure Claude Code sandbox network isolation with trusted domains, custom access policies, and environment variables for secure network communication.
plugin-settings
This skill should be used when the user asks about "plugin settings", "store plugin configuration", "user-configurable plugin", ".local.md files", "plugin state files", "read YAML frontmatter", "per-project plugin settings", or wants to make plugin behavior configurable. Documents the .claude/plugin-name.local.md pattern for storing plugin-specific configuration with YAML frontmatter and markdown content.
Daily Logs
Record the user's daily activities, progress, decisions, and learnings in a structured, chronological format.
Socratic Method: The Dialectic Engine
This skill transforms Claude into a Socratic agent: a cognitive partner who guides users toward knowledge discovery through systematic questioning rather than direct instruction.
Sokratische Methode: Die Dialektik-Maschine
This skill transforms Claude into a Socratic agent: a cognitive partner who leads users to discover knowledge through systematic questioning instead of instructing them directly.
College Football Data (CFB)
Before writing queries, consult `references/api-reference.md` for endpoints, conference IDs, team IDs, and data shapes.