anth-observability

Set up observability for Claude API integrations with metrics, logging, and alerting for latency, cost, errors, and token usage. Trigger with phrases like "anthropic monitoring", "claude observability", "anthropic metrics", "track claude usage", "claude dashboard".

25 stars

Best use case

anth-observability is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using anth-observability should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/anth-observability/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/jeremylongshore/claude-code-plugins-plus-skills/anth-observability/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/anth-observability/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How anth-observability Compares

| Feature / Agent | anth-observability | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

It sets up observability for Claude API integrations: structured logging, Prometheus metrics, and cost tracking, with alerting on latency, errors, and token usage.

Where can I find the source code?

The source code lives in the ComeOnOliver/skillshub repository on GitHub; the Installation section above links directly to the raw SKILL.md.

SKILL.md Source

# Anthropic Observability

## Overview

Instrument Claude API calls with structured logging, Prometheus metrics, and cost tracking. Every API response includes `usage` data and rate limit headers — capture these for dashboards and alerting.

## Structured Logging

```python
import anthropic
import logging
import time
import json

logger = logging.getLogger("claude")

def create_with_logging(client: anthropic.Anthropic, **kwargs) -> anthropic.types.Message:
    start = time.monotonic()
    request_meta = {
        "model": kwargs.get("model"),
        "max_tokens": kwargs.get("max_tokens"),
        "tool_count": len(kwargs.get("tools", [])),
        "stream": kwargs.get("stream", False),
    }

    try:
        response = client.messages.create(**kwargs)
        duration_ms = int((time.monotonic() - start) * 1000)

        logger.info(json.dumps({
            "event": "claude.request",
            "request_id": response._request_id,
            "model": response.model,
            "input_tokens": response.usage.input_tokens,
            "output_tokens": response.usage.output_tokens,
            "cache_read_tokens": getattr(response.usage, "cache_read_input_tokens", 0),
            "stop_reason": response.stop_reason,
            "duration_ms": duration_ms,
            "content_blocks": len(response.content),
        }))
        return response

    except anthropic.APIStatusError as e:
        duration_ms = int((time.monotonic() - start) * 1000)
        logger.error(json.dumps({
            "event": "claude.error",
            "status": e.status_code,
            "error_type": getattr(e, "type", "unknown"),
            "duration_ms": duration_ms,
            "request_id": e.response.headers.get("request-id", "unknown"),
        }))
        raise
```
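
A possible call site, assuming `ANTHROPIC_API_KEY` is set in the environment (the prompt and model name here are illustrative):

```python
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = create_with_logging(
    client,
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize our Q3 latency report."}],
)
print(message.content[0].text)  # assumes the first content block is text
```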

## Prometheus Metrics

```python
from prometheus_client import Counter, Histogram, Gauge

claude_requests = Counter(
    "claude_requests_total", "Total Claude API requests",
    ["model", "stop_reason", "status"]
)
claude_latency = Histogram(
    "claude_latency_seconds", "Claude API latency",
    ["model"], buckets=[0.5, 1, 2, 5, 10, 30, 60]
)
claude_tokens = Counter(
    "claude_tokens_total", "Token usage",
    ["model", "direction"]  # direction: input|output|cache_read
)
claude_cost = Counter(
    "claude_cost_usd", "Estimated cost in USD",
    ["model"]
)
claude_rate_limit_remaining = Gauge(
    "claude_rate_limit_remaining", "Remaining rate limit",
    ["dimension"]  # dimension: requests|tokens
)

def track_metrics(response, duration: float):
    model = response.model
    claude_requests.labels(model=model, stop_reason=response.stop_reason, status="ok").inc()
    claude_latency.labels(model=model).observe(duration)
    claude_tokens.labels(model=model, direction="input").inc(response.usage.input_tokens)
    claude_tokens.labels(model=model, direction="output").inc(response.usage.output_tokens)
    # Track cache reads separately, matching the cache_read direction label above
    cache_read = getattr(response.usage, "cache_read_input_tokens", 0) or 0
    if cache_read:
        claude_tokens.labels(model=model, direction="cache_read").inc(cache_read)

    # Cost estimation
    pricing = {"claude-haiku-4-20250514": (0.80, 4.0), "claude-sonnet-4-20250514": (3.0, 15.0)}
    rates = pricing.get(model, (3.0, 15.0))
    cost = (response.usage.input_tokens * rates[0] + response.usage.output_tokens * rates[1]) / 1e6
    claude_cost.labels(model=model).inc(cost)
```
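
The `claude_rate_limit_remaining` gauge above is declared but never populated. One way to feed it is the SDK's `with_raw_response` wrapper, which exposes the HTTP headers alongside the parsed message. A minimal sketch, assuming the `anthropic-ratelimit-*` response header names from the Rate Limits documentation:

```python
def create_with_rate_limit_gauges(client: anthropic.Anthropic, **kwargs) -> anthropic.types.Message:
    # with_raw_response returns the raw HTTP response; .parse() yields the Message
    raw = client.messages.with_raw_response.create(**kwargs)

    for dimension, header in (
        ("requests", "anthropic-ratelimit-requests-remaining"),
        ("tokens", "anthropic-ratelimit-tokens-remaining"),
    ):
        value = raw.headers.get(header)
        if value is not None:
            claude_rate_limit_remaining.labels(dimension=dimension).set(int(value))

    return raw.parse()
```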

## Key Metrics Dashboard

| Metric | Description | Alert Threshold |
|--------|-------------|-----------------|
| `claude_requests_total{status="error"}` | Error count | > 5% of total |
| `claude_latency_seconds` p99 | Tail latency | > 10s |
| `claude_cost_usd` daily | Daily spend | > 80% budget |
| `claude_rate_limit_remaining{dimension="requests"}` | RPM headroom | < 10% remaining |
| `claude_tokens_total{direction="output"}` rate | Output throughput | Spike detection |

## Usage API (Server-Side)

```python
# Anthropic's Usage & Cost API (part of the Admin API) for billing reconciliation:
#   GET https://api.anthropic.com/v1/organizations/usage_report/messages
#   GET https://api.anthropic.com/v1/organizations/cost_report
# Returns bucketed token usage and cost per model; requires an Admin API key
```
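
A rough sketch of pulling bucketed usage for reconciliation, assuming the `requests` library, an Admin API key exported as `ANTHROPIC_ADMIN_KEY` (a hypothetical variable name), and the parameter names shown in the Usage & Cost API docs:

```python
import os

import requests

resp = requests.get(
    "https://api.anthropic.com/v1/organizations/usage_report/messages",
    headers={
        # Admin API key, distinct from a regular workspace API key
        "x-api-key": os.environ["ANTHROPIC_ADMIN_KEY"],
        "anthropic-version": "2023-06-01",
    },
    params={"starting_at": "2025-01-01T00:00:00Z", "bucket_width": "1d"},
    timeout=30,
)
resp.raise_for_status()

for bucket in resp.json().get("data", []):
    print(bucket)  # one entry per time bucket, with per-model token counts
```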

## Error Handling

| Observability Gap | Risk | Fix |
|-------------------|------|-----|
| No request_id logged | Can't debug with support | Capture `response._request_id` |
| Missing cost tracking | Budget surprise | Track per-request cost |
| No latency histogram | Can't spot slow queries | Add Prometheus/Datadog histograms |

## Resources

- [Usage & Cost API](https://docs.anthropic.com/en/api/usage-cost-api)
- [Rate Limits](https://docs.anthropic.com/en/api/rate-limits)
- [API Status](https://status.anthropic.com)

## Next Steps

For incident response, see `anth-incident-runbook`.

Related Skills

exa-observability

Set up monitoring, metrics, and alerting for Exa search integrations. Use when implementing monitoring for Exa operations, building dashboards, or configuring alerting for search quality and latency. Trigger with phrases like "exa monitoring", "exa metrics", "exa observability", "monitor exa", "exa alerts", "exa dashboard".

evernote-observability

Implement observability for Evernote integrations. Use when setting up monitoring, logging, tracing, or alerting for Evernote applications. Trigger with phrases like "evernote monitoring", "evernote logging", "evernote metrics", "evernote observability".

documenso-observability

Implement monitoring, logging, and tracing for Documenso integrations. Use when setting up observability, implementing metrics collection, or debugging production issues. Trigger with phrases like "documenso monitoring", "documenso metrics", "documenso logging", "documenso tracing", "documenso observability".

deepgram-observability

Set up comprehensive observability for Deepgram integrations. Use when implementing monitoring, setting up dashboards, or configuring alerting for Deepgram integration health. Trigger: "deepgram monitoring", "deepgram metrics", "deepgram observability", "monitor deepgram", "deepgram alerts", "deepgram dashboard".

databricks-observability

Set up comprehensive observability for Databricks with metrics, traces, and alerts. Use when implementing monitoring for Databricks jobs, setting up dashboards, or configuring alerting for pipeline health. Trigger with phrases like "databricks monitoring", "databricks metrics", "databricks observability", "monitor databricks", "databricks alerts", "databricks logging".

customerio-observability

Set up Customer.io monitoring and observability. Use when implementing metrics, structured logging, alerting, or Grafana dashboards for Customer.io integrations. Trigger: "customer.io monitoring", "customer.io metrics", "customer.io dashboard", "customer.io alerts", "customer.io observability".

coreweave-observability

Set up GPU monitoring and observability for CoreWeave workloads. Use when implementing GPU metrics dashboards, configuring alerts, or tracking inference latency and throughput. Trigger with phrases like "coreweave monitoring", "coreweave observability", "coreweave gpu metrics", "coreweave grafana".

cohere-observability

Set up comprehensive observability for Cohere API v2 with metrics, traces, and alerts. Use when implementing monitoring for Chat/Embed/Rerank operations, setting up dashboards, or configuring alerts for Cohere integrations. Trigger with phrases like "cohere monitoring", "cohere metrics", "cohere observability", "monitor cohere", "cohere alerts", "cohere tracing".

coderabbit-observability

Monitor CodeRabbit review effectiveness with metrics, dashboards, and alerts. Use when tracking review coverage, measuring comment acceptance rates, or building dashboards for CodeRabbit adoption across your organization. Trigger with phrases like "coderabbit monitoring", "coderabbit metrics", "coderabbit observability", "monitor coderabbit", "coderabbit alerts", "coderabbit dashboard".

clickup-observability

Monitor ClickUp API integrations with metrics, tracing, structured logging, and alerting using Prometheus, OpenTelemetry, and Grafana. Trigger: "clickup monitoring", "clickup metrics", "clickup observability", "monitor clickup", "clickup alerts", "clickup tracing", "clickup dashboard".

clickhouse-observability

Monitor ClickHouse with Prometheus metrics, Grafana dashboards, system table queries, and alerting for query performance, merge health, and resource usage. Use when setting up ClickHouse monitoring, building Grafana dashboards, or configuring alerts for production ClickHouse deployments. Trigger: "clickhouse monitoring", "clickhouse metrics", "clickhouse Grafana", "clickhouse observability", "monitor clickhouse", "clickhouse Prometheus".

clerk-observability

Implement monitoring, logging, and observability for Clerk authentication. Use when setting up monitoring, debugging auth issues in production, or implementing audit logging. Trigger with phrases like "clerk monitoring", "clerk logging", "clerk observability", "clerk metrics", "clerk audit log".