logging-api-requests
Monitor and log API requests with correlation IDs, performance metrics, and security audit trails. Use when auditing API requests and responses. Trigger with phrases like "log API requests", "add API logging", or "track API calls".
Best use case
logging-api-requests is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using logging-api-requests should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex (manual installation)
- Download SKILL.md from GitHub
- Place it at `.claude/skills/logging-api-requests/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
SKILL.md Source
# Logging API Requests
## Overview
Implement structured API request logging with correlation IDs, performance timing, security audit trails, and PII redaction. Capture request/response metadata in JSON format suitable for aggregation in ELK Stack, Loki, or CloudWatch Logs, enabling debugging, performance analysis, and compliance auditing across distributed services.
## Prerequisites
- Structured logging library: Pino or Winston (Node.js), structlog (Python), Logback with JSON encoder (Java)
- Log aggregation system: ELK Stack (Elasticsearch, Logstash, Kibana), Grafana Loki, or CloudWatch Logs
- Correlation ID propagation mechanism (middleware-injected or from incoming `X-Request-ID` header)
- PII data classification for the API domain (which fields contain personal data requiring redaction)
- Log retention and rotation policy defined per compliance requirements
## Instructions
1. Examine existing logging configuration using Grep and Read to identify current log format, output destinations, and any structured logging already in place.
2. Implement request logging middleware that captures: timestamp (ISO 8601), correlation ID, HTTP method, URL path (without query string PII), status code, response time (ms), request size, response size, and client IP.
3. Generate a unique correlation ID (`X-Request-ID`) for each request if not provided by the caller, and propagate it to all downstream service calls and log entries within the request scope.
4. Add PII redaction rules that mask sensitive fields (passwords, tokens, SSNs, email addresses) in logged request/response bodies using configurable field-path patterns.
5. Implement log levels per context: `info` for successful requests, `warn` for 4xx client errors, `error` for 5xx server errors with stack traces, and `debug` for request/response bodies (development only).
6. Configure response body logging for error responses only (4xx/5xx), capturing the error payload for debugging while skipping successful response bodies to reduce log volume.
7. Add security audit logging for sensitive operations: authentication attempts, permission changes, data exports, and admin actions, tagged with `audit: true` for separate indexing.
8. Set up log rotation and retention policies: 30 days for application logs, 90 days for audit logs, with automatic compression of logs older than 7 days.
9. Write tests verifying that PII redaction works correctly, correlation IDs propagate through nested calls, and log output matches expected JSON structure.
See `${CLAUDE_SKILL_DIR}/references/implementation.md` for the full implementation guide.
## Output
- `${CLAUDE_SKILL_DIR}/src/middleware/request-logger.js` - Structured request/response logging middleware
- `${CLAUDE_SKILL_DIR}/src/middleware/correlation-id.js` - Correlation ID generation and propagation
- `${CLAUDE_SKILL_DIR}/src/utils/pii-redactor.js` - Field-level PII redaction with configurable patterns
- `${CLAUDE_SKILL_DIR}/src/utils/audit-logger.js` - Security audit event logger for sensitive operations
- `${CLAUDE_SKILL_DIR}/src/config/logging.js` - Log level, format, and output destination configuration
- `${CLAUDE_SKILL_DIR}/tests/logging/` - Logging middleware tests including PII redaction verification
## Error Handling
| Error | Cause | Solution |
|-------|-------|----------|
| Log volume overwhelming storage | High-traffic endpoint logging full request/response bodies | Log bodies only for errors; sample successful request bodies at a configurable rate (e.g., 1%) |
| PII leak in logs | New field added to API response containing personal data not covered by redaction rules | Maintain allowlist of loggable fields rather than blocklist; audit log output regularly |
| Correlation ID missing | Upstream service does not propagate X-Request-ID header | Generate new correlation ID when header is absent; log warning about missing upstream propagation |
| Log parsing failure | Log message contains unescaped characters breaking JSON structure | Use structured logging library that handles serialization; never concatenate user input into log strings |
| Audit log gap | Async logging dropped events during high-load period | Use synchronous logging for audit events; implement write-ahead buffer for audit trail completeness |
Refer to `${CLAUDE_SKILL_DIR}/references/errors.md` for comprehensive error patterns.
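The allowlist mitigation in the "PII leak in logs" row can be sketched as follows; the field names in the usage note are illustrative assumptions, not part of the skill.

```javascript
// Allowlist-based redaction: only fields explicitly marked loggable survive;
// everything else is masked. This fails safe when a new field appears in an
// API response, unlike a blocklist that must be updated per field.
function redactByAllowlist(body, allowlist) {
  const out = {};
  for (const [key, value] of Object.entries(body)) {
    out[key] = allowlist.has(key) ? value : "[REDACTED]";
  }
  return out;
}
```

For example, `redactByAllowlist({ id: "usr_456", email: "a@b.com" }, new Set(["id"]))` keeps `id` but masks `email`. A fuller version would recurse into nested objects using the configurable field-path patterns mentioned in step 4.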
## Examples
**Structured JSON log entry**: `{"timestamp":"2026-03-10T14:30:00Z","correlationId":"abc-123","method":"POST","path":"/api/users","status":201,"durationMs":45,"userId":"usr_456","audit":false}`; every field is queryable in log aggregation.
**Distributed tracing correlation**: Propagate `X-Request-ID` from API gateway through 3 microservices, enabling a single Kibana query to show the complete request lifecycle across all services.
**Compliance audit trail**: Tag all data modification operations (POST, PUT, DELETE) with `audit: true`, capturing the authenticated user, modified resource ID, and change summary for SOC 2 compliance evidence.
See `${CLAUDE_SKILL_DIR}/references/examples.md` for additional examples.
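The compliance audit trail example above could produce events shaped like this sketch; the event schema is an assumption for illustration, not a mandated format.

```javascript
// Sketch of an audit event for a data-modifying request (SOC 2 example).
// Field names here are illustrative, not a fixed schema.
function makeAuditEvent({ method, resourceId, userId, summary }) {
  return {
    timestamp: new Date().toISOString(),
    audit: true, // tag for separate indexing and the longer 90-day retention
    action: method,
    resourceId,
    actor: userId,
    summary,
  };
}
```

Emitting these through a synchronous logger (per the "Audit log gap" row in Error Handling) avoids losing events under load.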
## Resources
- Structured logging best practices (12-Factor App: Logs)
- ELK Stack documentation: https://www.elastic.co/guide/
- Pino logger: https://getpino.io/
- OpenTelemetry for distributed tracing context propagation

Related Skills
implementing-database-audit-logging
Process use when you need to track database changes for compliance and security monitoring. This skill implements audit logging using triggers, application-level logging, CDC, or native logs. Trigger with phrases like "implement database audit logging", "add audit trails", "track database changes", or "monitor database activity for compliance".
cloud-logging-sink-setup
Cloud Logging Sink Setup: auto-activating skill for GCP. Triggers on: cloud logging sink setup. Part of the GCP Skills category.
gh-review-requests
Fetch unread GitHub notifications for open PRs where review is requested from a specified team or opened by a team member. Use when asked to "find PRs I need to review", "show my review requests", "what needs my review", "fetch GitHub review requests", or "check team review queue".
go-logging
Use when choosing a logging approach, configuring slog, writing structured log statements, or deciding log levels in Go. Also use when setting up production logging, adding request-scoped context to logs, or migrating from log to slog, even if the user doesn't explicitly mention logging. Does not cover error handling strategy (see go-error-handling).
structured-logging
Guide for writing effective log messages using wide events / canonical log lines. Use when writing logging code, adding instrumentation, improving observability, or reviewing log statements. Teaches high-cardinality, high-dimensionality structured logging that enables debugging.
reviewing-pull-requests
Pull request workflow and review expertise. Auto-invokes when PRs, code review, merge, or pull request operations are mentioned. Integrates with self-improvement plugin for quality validation.
creating-pull-requests
Creates pull requests with generated descriptions. Triggered when: PR creation, pull request, merge request, code review preparation.
golang-logging
Standards for structured logging and observability in Golang. Use when adding structured logging or tracing to Go services. (triggers: go.mod, pkg/logger/**, logging, slog, structured logging, zap)
logging-best-practices
Logging best practices focused on wide events (canonical log lines) for powerful debugging and analytics
Daily Logs
Record the user's daily activities, progress, decisions, and learnings in a structured, chronological format.
Socratic Method: The Dialectic Engine
This skill transforms Claude into a Socratic agent: a cognitive partner who guides users toward discovering knowledge through systematic questioning rather than direct instruction.