monitoring-apis

Build real-time API monitoring dashboards with metrics, alerts, and health checks. Use when tracking API health and performance metrics. Trigger with phrases like "monitor the API", "add API metrics", or "setup API monitoring".

25 stars

Best use case

monitoring-apis is best used when you need a repeatable AI agent workflow instead of a one-off prompt.


Teams using monitoring-apis should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/monitoring-apis/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/jeremylongshore/claude-code-plugins-plus-skills/monitoring-apis/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/monitoring-apis/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How monitoring-apis Compares

| Feature | monitoring-apis | Standard Approach |
|---------|-----------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Build real-time API monitoring dashboards with metrics, alerts, and health checks. Use when tracking API health and performance metrics. Trigger with phrases like "monitor the API", "add API metrics", or "setup API monitoring".

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Monitoring APIs

## Overview

Build real-time API monitoring with metrics collection (request rate, latency percentiles, error rates), health check endpoints, and alerting rules. Instrument API middleware to emit Prometheus metrics or StatsD counters, configure Grafana dashboards with SLO tracking, and implement synthetic monitoring probes for uptime verification.

## Prerequisites

- Prometheus + Grafana stack, or Datadog/New Relic/CloudWatch for metrics and dashboards
- Metrics client library: `prom-client` (Node.js), `prometheus_client` (Python), or Micrometer (Java)
- Alerting channel configured: PagerDuty, Slack webhook, or email for alert routing
- Structured logging library: Winston, Pino (Node.js), structlog (Python), or Logback (Java)
- Synthetic monitoring tool: Checkly, Uptime Robot, or custom cron-based health probes

## Instructions

1. Examine existing middleware and logging setup using Grep and Read to identify current observability coverage and gaps.
2. Implement metrics middleware that records per-request data: `http_request_duration_seconds` histogram (with method, path, status labels), `http_requests_total` counter, and `http_requests_in_flight` gauge.
3. Create a `/health` endpoint returning structured health status including dependency checks (database connectivity, cache availability, external service reachability) with response time for each.
4. Add a `/ready` endpoint separate from health that returns 503 during startup initialization and graceful shutdown, for load balancer integration.
5. Configure histogram buckets aligned with SLO targets: [0.01, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10] seconds for comprehensive latency distribution.
6. Build Grafana dashboard panels: request rate (QPS), p50/p95/p99 latency, error rate percentage, active connections, and per-endpoint breakdown.
7. Define alerting rules: error rate > 5% for 5 minutes (critical), p99 latency > 2s for 10 minutes (warning), health check failure for 3 consecutive probes (critical).
8. Implement synthetic monitoring that sends periodic requests to critical endpoints from external locations, measuring availability and latency from the consumer perspective.
9. Add SLO tracking with error budget calculation: define SLO (99.9% availability, p95 < 500ms), compute burn rate, and alert when error budget consumption exceeds projected pace.
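In practice the metrics in step 2 would come from `prom-client`, but the mechanics can be shown in a dependency-free sketch: a counter, a gauge, and a histogram with the SLO-aligned buckets from step 5, rendered in the Prometheus text exposition format. The function names (`requestStarted`, `requestFinished`, `renderMetrics`) are hypothetical, not part of any library.

```javascript
// SLO-aligned histogram buckets (seconds), matching step 5.
const BUCKETS = [0.01, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10];

const metrics = {
  requestsTotal: new Map(), // "METHOD|path|status" -> count
  inFlight: 0,
  durations: new Map(),     // "METHOD|path" -> { buckets, sum, count }
};

function requestStarted() {
  metrics.inFlight += 1;
}

function requestFinished(method, path, status, durationSeconds) {
  metrics.inFlight -= 1;
  const key = `${method}|${path}|${status}`;
  metrics.requestsTotal.set(key, (metrics.requestsTotal.get(key) || 0) + 1);

  const hkey = `${method}|${path}`;
  let h = metrics.durations.get(hkey);
  if (!h) {
    h = { buckets: new Array(BUCKETS.length).fill(0), sum: 0, count: 0 };
    metrics.durations.set(hkey, h);
  }
  h.sum += durationSeconds;
  h.count += 1;
  // Prometheus histograms are cumulative: increment every bucket
  // whose upper bound is >= the observed duration.
  BUCKETS.forEach((le, i) => {
    if (durationSeconds <= le) h.buckets[i] += 1;
  });
}

// Render the store as Prometheus exposition text for a /metrics endpoint.
function renderMetrics() {
  const lines = ['# TYPE http_requests_total counter'];
  for (const [key, count] of metrics.requestsTotal) {
    const [method, path, status] = key.split('|');
    lines.push(`http_requests_total{method="${method}",path="${path}",status="${status}"} ${count}`);
  }
  lines.push('# TYPE http_requests_in_flight gauge');
  lines.push(`http_requests_in_flight ${metrics.inFlight}`);
  lines.push('# TYPE http_request_duration_seconds histogram');
  for (const [key, h] of metrics.durations) {
    const [method, path] = key.split('|');
    const labels = `method="${method}",path="${path}"`;
    BUCKETS.forEach((le, i) => {
      lines.push(`http_request_duration_seconds_bucket{${labels},le="${le}"} ${h.buckets[i]}`);
    });
    lines.push(`http_request_duration_seconds_bucket{${labels},le="+Inf"} ${h.count}`);
    lines.push(`http_request_duration_seconds_sum{${labels}} ${h.sum}`);
    lines.push(`http_request_duration_seconds_count{${labels}} ${h.count}`);
  }
  return lines.join('\n');
}
```

A middleware would call `requestStarted()` on entry and `requestFinished(...)` on the response `finish` event, measuring the elapsed time itself.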

See `${CLAUDE_SKILL_DIR}/references/implementation.md` for the full implementation guide.
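The health and readiness endpoints from steps 3 and 4 can be sketched as framework-agnostic handler functions; the probe callbacks and the `markStarted`/`beginShutdown` helpers below are hypothetical, and a real service would wire them to its own startup and signal handling.

```javascript
let started = false;
let shuttingDown = false;

function markStarted() { started = true; }
function beginShutdown() { shuttingDown = true; }

// Run one dependency probe, capturing status and per-check latency.
async function checkDependency(name, probe) {
  const t0 = Date.now();
  try {
    await probe();
    return { name, status: 'ok', latencyMs: Date.now() - t0 };
  } catch (err) {
    return { name, status: 'down', latencyMs: Date.now() - t0, error: String(err) };
  }
}

// GET /health: always 200 so probes can read the body, but the status
// field reports "degraded" when any dependency check fails.
async function healthHandler(probes) {
  const checks = await Promise.all(
    Object.entries(probes).map(([name, probe]) => checkDependency(name, probe))
  );
  const anyDown = checks.some((c) => c.status === 'down');
  return { statusCode: 200, body: { status: anyDown ? 'degraded' : 'ok', checks } };
}

// GET /ready: 503 during startup and graceful shutdown so the load
// balancer drains traffic before in-flight requests are cut off.
function readyHandler() {
  if (!started || shuttingDown) {
    return { statusCode: 503, body: { status: 'not_ready' } };
  }
  return { statusCode: 200, body: { status: 'ready' } };
}
```

Keeping `/ready` separate from `/health` lets the load balancer stop routing new traffic during deploys while `/health` keeps reporting dependency detail for dashboards.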

## Output

- `${CLAUDE_SKILL_DIR}/src/middleware/metrics.js` - Prometheus metrics collection middleware
- `${CLAUDE_SKILL_DIR}/src/routes/health.js` - Health check and readiness endpoints
- `${CLAUDE_SKILL_DIR}/monitoring/dashboards/` - Grafana dashboard JSON definitions
- `${CLAUDE_SKILL_DIR}/monitoring/alerts/` - Alerting rule definitions (Prometheus AlertManager or Grafana)
- `${CLAUDE_SKILL_DIR}/monitoring/synthetic/` - Synthetic monitoring probe scripts
- `${CLAUDE_SKILL_DIR}/monitoring/slo.yaml` - SLO definitions and error budget configuration

## Error Handling

| Error | Cause | Solution |
|-------|-------|----------|
| Metrics cardinality explosion | High-cardinality labels (user ID, request ID) on metrics | Use bounded label values only (method, status code, endpoint group); aggregate user-level data in logs |
| Health check false positive | Health endpoint returns 200 but dependent service is degraded | Include dependency checks with individual status; use structured response with `degraded` state |
| Alert fatigue | Too many low-severity alerts firing during normal operations | Tune alert thresholds using historical baselines; implement alert grouping and deduplication |
| Dashboard data gap | Metrics not collected during deployment rollout window | Configure Prometheus scrape interval < deployment duration; use push-based metrics during deploys |
| SLO miscalculation | Error budget calculation uses wrong time window or includes planned maintenance | Exclude maintenance windows from SLO calculation; align window with business reporting period |
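The cardinality fix in the first row usually means normalizing request paths before using them as a label value. A minimal sketch, with illustrative (not exhaustive) patterns for dynamic segments:

```javascript
// Collapse high-cardinality path segments (numeric IDs, UUIDs, long
// hex tokens) into a placeholder so the label set stays bounded.
const DYNAMIC_SEGMENT = [
  /^\d+$/,                                                            // numeric IDs
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i,  // UUIDs
  /^[0-9a-f]{16,}$/i,                                                 // long hex tokens
];

function endpointGroup(path) {
  return path
    .split('/')
    .map((seg) => (DYNAMIC_SEGMENT.some((re) => re.test(seg)) ? ':id' : seg))
    .join('/');
}
```

With this, `/users/12345` and `/users/67890` both count toward `/users/:id`, while per-user detail stays in structured logs where cardinality is cheap.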

Refer to `${CLAUDE_SKILL_DIR}/references/errors.md` for comprehensive error patterns.

## Examples

**RED method dashboard**: Request rate, Error rate, and Duration panels per endpoint, with drill-down from overview to individual endpoint detail, including top-10 slowest endpoints by p99.

**SLO-based alerting**: Define a 99.9% availability SLO with a 30-day rolling window, alert when the 1-hour burn rate exceeds 14.4x (consuming 2% of the monthly error budget in a single hour), with PagerDuty escalation.
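The burn-rate arithmetic behind that alert is simple: burn rate is the observed error rate divided by the error budget rate the SLO allows. A minimal sketch (the `shouldPage` helper and its default threshold are illustrative):

```javascript
// 99.9% availability SLO: at most 0.1% of requests may fail.
const SLO_TARGET = 0.999;
const ERROR_BUDGET = 1 - SLO_TARGET;

// Burn rate 1x = consuming the budget exactly at the sustainable pace;
// 14.4x over 1 hour burns 2% of a 30-day budget (14.4 / 720 hours).
function burnRate(errorCount, totalCount) {
  if (totalCount === 0) return 0;
  return (errorCount / totalCount) / ERROR_BUDGET;
}

function shouldPage(errorCount, totalCount, threshold = 14.4) {
  return burnRate(errorCount, totalCount) >= threshold;
}
```

Pairing a fast window (1 hour at 14.4x) with a slow window (6 hours at a lower threshold) is the usual way to page on real incidents without alerting on brief blips.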

**Dependency health matrix**: Dashboard showing real-time health status of all downstream dependencies (database, cache, external APIs) with latency sparklines and circuit breaker state indicators.

See `${CLAUDE_SKILL_DIR}/references/examples.md` for additional examples.

## Resources

- Google SRE Book: Monitoring Distributed Systems chapter
- Prometheus documentation: https://prometheus.io/docs/
- Grafana dashboards: https://grafana.com/docs/grafana/latest/
- USE Method and RED Method for metrics design

Related Skills

versioning-apis

from ComeOnOliver/skillshub

Implement API versioning with backward compatibility, deprecation notices, and migration paths. Use when managing API versions and backward compatibility. Trigger with phrases like "version the API", "manage API versions", or "handle API versioning".

throttling-apis

from ComeOnOliver/skillshub

Implement API throttling policies to protect backend services from overload. Use when controlling API request rates. Trigger with phrases like "throttle API", "control request rate", or "add throttling".

setting-up-synthetic-monitoring

from ComeOnOliver/skillshub

This skill automates the setup of synthetic monitoring for applications. It allows Claude to proactively track performance and availability by configuring uptime, transaction, and API monitoring. Use this skill when the user requests to "set up synthetic monitoring", "configure uptime monitoring", "track application performance", or needs help with "proactive performance tracking". The skill helps to identify critical endpoints and user journeys, design monitoring scenarios, and configure alerts and dashboards.

implementing-real-user-monitoring

from ComeOnOliver/skillshub

This skill assists in implementing Real User Monitoring (RUM) to capture and analyze actual user performance data. It helps set up tracking for key metrics like Core Web Vitals, page load times, and custom performance events. Use this skill when the user asks to "setup RUM", "implement real user monitoring", "track user experience", or needs assistance with "performance monitoring". It guides the user through choosing a RUM platform, designing an instrumentation strategy, and implementing the necessary tracking code.

rate-limiting-apis

from ComeOnOliver/skillshub

Implement sophisticated rate limiting with sliding windows, token buckets, and quotas. Use when protecting APIs from excessive requests. Trigger with phrases like "add rate limiting", "limit API requests", or "implement rate limits".

pipeline-monitoring-setup

from ComeOnOliver/skillshub

Pipeline Monitoring Setup - an auto-activating skill for data pipelines. Triggers on phrases like "pipeline monitoring setup". Part of the Data Pipelines skill category.

monitoring-whale-activity

from ComeOnOliver/skillshub

Track large cryptocurrency transactions and whale wallet movements in real-time. Use when tracking large holder movements, exchange flows, or wallet activity. Trigger with phrases like "track whales", "monitor large transfers", "check whale activity", "exchange inflows", or "watch wallet".

deploying-monitoring-stacks

from ComeOnOliver/skillshub

This skill deploys monitoring stacks, including Prometheus, Grafana, and Datadog. It is used when the user needs to set up or configure monitoring infrastructure for applications or systems. The skill generates production-ready configurations, implements best practices, and supports multi-platform deployments. Use this when the user explicitly requests to deploy a monitoring stack, or mentions Prometheus, Grafana, or Datadog in the context of infrastructure setup.

monitoring-error-rates

from ComeOnOliver/skillshub

Monitor and analyze application error rates to improve reliability. Use when tracking errors in applications including HTTP errors, exceptions, and database issues. Trigger with phrases like "monitor error rates", "track application errors", or "analyze error patterns".

monitoring-database-transactions

from ComeOnOliver/skillshub

Use when you need to work with monitoring and observability. This skill provides health monitoring and alerting with comprehensive guidance and automation. Trigger with phrases like "monitor system health", "set up alerts", or "track metrics".

monitoring-database-health

from ComeOnOliver/skillshub

Use when you need to work with monitoring and observability. This skill provides health monitoring and alerting with comprehensive guidance and automation. Trigger with phrases like "monitor system health", "set up alerts", or "track metrics".

monitoring-cross-chain-bridges

from ComeOnOliver/skillshub

Monitor cross-chain bridge TVL, volume, fees, and transaction status across networks. Use when researching bridges, comparing routes, or tracking bridge transactions. Trigger with phrases like "monitor bridges", "compare bridge fees", "track bridge tx", "bridge TVL", or "cross-chain transfer status".