Traceloop (OpenLLMetry) — LLM Observability via OpenTelemetry
You are an expert in Traceloop and its OpenLLMetry SDK, the open-source observability framework that extends OpenTelemetry for LLM applications. You help developers instrument AI pipelines with automatic tracing for OpenAI, Anthropic, Cohere, LangChain, LlamaIndex, vector databases, and frameworks — exporting to any OpenTelemetry-compatible backend (Grafana Tempo, Jaeger, Datadog, Honeycomb, Traceloop Cloud).
Best use case
Traceloop (OpenLLMetry) is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using this skill should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/traceloop/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How Traceloop (OpenLLMetry) — LLM Observability via OpenTelemetry Compares
| Feature / Agent | Traceloop (OpenLLMetry) — LLM Observability via OpenTelemetry | Standard Approach |
|---|---|---|
| Platform Support | Python and TypeScript SDKs | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Low (one package install plus a single `init()` call) | N/A |
Frequently Asked Questions
What does this skill do?
It makes your AI assistant an expert in Traceloop and its OpenLLMetry SDK, the open-source observability framework that extends OpenTelemetry for LLM applications. The skill helps developers instrument AI pipelines with automatic tracing for OpenAI, Anthropic, Cohere, LangChain, LlamaIndex, vector databases, and frameworks — exporting to any OpenTelemetry-compatible backend (Grafana Tempo, Jaeger, Datadog, Honeycomb, Traceloop Cloud).
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Traceloop (OpenLLMetry) — LLM Observability via OpenTelemetry
You are an expert in Traceloop and its OpenLLMetry SDK, the open-source observability framework that extends OpenTelemetry for LLM applications. You help developers instrument AI pipelines with automatic tracing for OpenAI, Anthropic, Cohere, LangChain, LlamaIndex, vector databases, and frameworks — exporting to any OpenTelemetry-compatible backend (Grafana Tempo, Jaeger, Datadog, Honeycomb, Traceloop Cloud).
## Core Capabilities
### Auto-Instrumentation
```python
# One line — instruments everything
from traceloop.sdk import Traceloop
Traceloop.init(
    app_name="my-ai-app",
    api_endpoint="https://api.traceloop.com",  # Or any OTLP endpoint
    api_key="your-key",
    disable_batch=False,  # Batch for performance
)

# All OpenAI/Anthropic/LangChain calls now traced automatically
import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
# Trace captured: model, tokens, latency, cost, prompt, completion
```
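The captured spans carry token counts and model names as span attributes following OpenLLMetry's `gen_ai.*` semantic conventions (exact attribute keys vary by SDK version; the ones below are illustrative). As a rough sketch of how a backend turns those attributes into per-request cost figures, with placeholder prices rather than any real pricing table:

```python
# Sketch: derive a cost estimate from OpenLLMetry-style span attributes.
# Attribute keys and prices are illustrative placeholders.
PRICES_PER_1K = {
    "gpt-4o": {"prompt": 0.0025, "completion": 0.01},
}

def estimate_cost(attrs: dict) -> float:
    """Approximate request cost from token-usage span attributes."""
    price = PRICES_PER_1K[attrs["gen_ai.request.model"]]
    prompt = attrs.get("gen_ai.usage.prompt_tokens", 0)
    completion = attrs.get("gen_ai.usage.completion_tokens", 0)
    return (prompt * price["prompt"] + completion * price["completion"]) / 1000

span_attrs = {
    "gen_ai.request.model": "gpt-4o",
    "gen_ai.usage.prompt_tokens": 120,
    "gen_ai.usage.completion_tokens": 80,
}
cost = estimate_cost(span_attrs)
```

In practice the backend does this aggregation for you; the sketch only shows where the numbers come from.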
### Workflow and Task Decorators
```python
from openai import AsyncOpenAI

from traceloop.sdk.decorators import workflow, task, agent, tool

client = AsyncOpenAI()  # was implicit in the original example

@workflow(name="customer-support-pipeline")
async def handle_support_ticket(ticket: dict):
    """Top-level workflow — groups all child spans."""
    intent = await classify_intent(ticket["message"])
    if intent == "technical":
        return await technical_support(ticket)
    return await general_support(ticket)

@task(name="classify-intent")
async def classify_intent(message: str) -> str:
    """Task span — individual step in workflow."""
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Classify intent as 'technical' or 'general': {message}"}],
    )
    return response.choices[0].message.content.strip().lower()

@agent(name="tech-support-agent")
async def technical_support(ticket: dict):
    """Agent span — autonomous agent with tool use."""
    docs = await search_knowledge_base(ticket["message"])
    response = await generate_response(ticket, docs)
    return response

@tool(name="knowledge-base-search")
async def search_knowledge_base(query: str):
    """Tool span — external tool invocation."""
    # pinecone_index: an existing Pinecone index handle (setup omitted)
    embedding = await client.embeddings.create(model="text-embedding-3-small", input=query)
    results = await pinecone_index.query(vector=embedding.data[0].embedding, top_k=5)
    return [r.metadata["text"] for r in results.matches]
```
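The decorators nest spans through ambient context: a `@task` invoked inside a `@workflow` is recorded as a child span of that workflow. A toy re-implementation (not the SDK's actual code, and synchronous for brevity) illustrates the grouping mechanism:

```python
import contextvars
import functools

# Toy illustration of decorator-based span nesting via ambient context.
# The real SDK emits OpenTelemetry spans; this only tracks parent names.
_current = contextvars.ContextVar("current_span", default=None)
recorded = []  # (span_name, parent_name) pairs

def span(kind: str, name: str):
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            parent = _current.get()
            recorded.append((f"{kind}:{name}", parent))
            token = _current.set(f"{kind}:{name}")
            try:
                return fn(*args, **kwargs)
            finally:
                _current.reset(token)
        return wrapper
    return decorate

@span("task", "classify-intent")
def classify(message):
    return "general"

@span("workflow", "support-pipeline")
def handle(ticket):
    return classify(ticket["message"])

handle({"message": "hi"})
# recorded now shows the task span parented under the workflow span
```

The same ambient-context idea is why the decorators compose with no explicit span-passing between functions.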
### TypeScript
```typescript
import * as traceloop from "@traceloop/node-server-sdk";
import { withWorkflow, withTask } from "@traceloop/node-server-sdk";

traceloop.initialize({
  appName: "my-ai-app",
  apiKey: process.env.TRACELOOP_API_KEY,
  disableBatch: false,
});

const handleQuery = withWorkflow({ name: "rag-query" }, async (query: string) => {
  const docs = await withTask({ name: "retrieve" }, () => retrieveDocs(query));
  const answer = await withTask({ name: "generate" }, () => generateAnswer(query, docs));
  return answer;
});
```
### Export to Any Backend
```python
# Send to Grafana Tempo
Traceloop.init(
    app_name="my-app",
    api_endpoint="http://tempo:4318",  # OTLP HTTP endpoint
    headers={},  # No auth for self-hosted
)

# Send to Datadog
Traceloop.init(
    app_name="my-app",
    api_endpoint="https://trace.agent.datadoghq.com",
    headers={"DD-API-KEY": "your-dd-key"},
)

# Send to Honeycomb
Traceloop.init(
    app_name="my-app",
    api_endpoint="https://api.honeycomb.io",
    headers={"x-honeycomb-team": "your-key"},
)
```
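Since the backends differ only in endpoint and headers, the choice can be driven by configuration. A sketch of that pattern (the helper name, backend keys, and env-var names are my own for illustration, not SDK conventions):

```python
import os

# Hypothetical helper: pick OTLP endpoint/headers for a named backend.
# Backend keys and env-var names here are illustrative.
BACKENDS = {
    "tempo": lambda: {"api_endpoint": "http://tempo:4318", "headers": {}},
    "datadog": lambda: {
        "api_endpoint": "https://trace.agent.datadoghq.com",
        "headers": {"DD-API-KEY": os.environ["DD_API_KEY"]},
    },
    "honeycomb": lambda: {
        "api_endpoint": "https://api.honeycomb.io",
        "headers": {"x-honeycomb-team": os.environ["HONEYCOMB_KEY"]},
    },
}

def exporter_kwargs(backend: str) -> dict:
    """Return keyword arguments for Traceloop.init() for a named backend."""
    return BACKENDS[backend]()

# Usage: Traceloop.init(app_name="my-app", **exporter_kwargs("tempo"))
```

This keeps the vendor switch to a single config value, which is the point of the OTLP-native design.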
## Installation
```bash
# Python
pip install traceloop-sdk
# TypeScript
npm install @traceloop/node-server-sdk
```
## Best Practices
1. **Semantic conventions** — Use `@workflow`, `@task`, `@agent`, `@tool` decorators; creates meaningful trace hierarchy
2. **OpenTelemetry native** — Standard OTLP export; works with existing observability stack (Grafana, Datadog, etc.)
3. **Auto-instrumentation** — `Traceloop.init()` patches all supported libraries; no per-call code changes
4. **Association properties** — Set user ID, session ID, conversation ID for filtering and grouping traces
5. **Prompt management** — Track prompt versions; correlate prompt changes with quality metrics
6. **Cost tracking** — Automatic cost calculation per model; aggregate by workflow, user, or feature
7. **Vendor-agnostic** — Switch from Traceloop Cloud to self-hosted Jaeger/Tempo without code changes
8. **OpenLLMetry standard** — Extends OpenTelemetry semantic conventions for AI; community-driven spec

Related Skills
coderabbit-observability
Monitor CodeRabbit review effectiveness with metrics, dashboards, and alerts. Use when tracking review coverage, measuring comment acceptance rates, or building dashboards for CodeRabbit adoption across your organization. Trigger with phrases like "coderabbit monitoring", "coderabbit metrics", "coderabbit observability", "monitor coderabbit", "coderabbit alerts", "coderabbit dashboard".
clickup-observability
Monitor ClickUp API integrations with metrics, tracing, structured logging, and alerting using Prometheus, OpenTelemetry, and Grafana. Trigger: "clickup monitoring", "clickup metrics", "clickup observability", "monitor clickup", "clickup alerts", "clickup tracing", "clickup dashboard".
clickhouse-observability
Monitor ClickHouse with Prometheus metrics, Grafana dashboards, system table queries, and alerting for query performance, merge health, and resource usage. Use when setting up ClickHouse monitoring, building Grafana dashboards, or configuring alerts for production ClickHouse deployments. Trigger: "clickhouse monitoring", "clickhouse metrics", "clickhouse Grafana", "clickhouse observability", "monitor clickhouse", "clickhouse Prometheus".
clerk-observability
Implement monitoring, logging, and observability for Clerk authentication. Use when setting up monitoring, debugging auth issues in production, or implementing audit logging. Trigger with phrases like "clerk monitoring", "clerk logging", "clerk observability", "clerk metrics", "clerk audit log".
clay-observability
Monitor Clay enrichment pipeline health, credit consumption, and data quality metrics. Use when setting up dashboards for Clay operations, configuring alerts for credit burn, or tracking enrichment success rates. Trigger with phrases like "clay monitoring", "clay metrics", "clay observability", "monitor clay", "clay alerts", "clay dashboard", "clay credit tracking".
clade-observability
Monitor Claude API calls — log tokens, latency, costs, and errors, and set up alerts for production Claude integrations. Use when working with observability patterns. Trigger with "anthropic monitoring", "claude observability", "track claude usage", "anthropic logging".
apple-notes-observability
Monitor Apple Notes automation health and performance metrics. Trigger: "apple notes monitoring".
apollo-observability
Set up Apollo.io monitoring and observability. Use when implementing logging, metrics, tracing, and alerting for Apollo integrations. Trigger with phrases like "apollo monitoring", "apollo metrics", "apollo observability", "apollo logging", "apollo alerts".
anth-observability
Set up observability for Claude API integrations with metrics, logging, and alerting for latency, cost, errors, and token usage. Trigger with phrases like "anthropic monitoring", "claude observability", "anthropic metrics", "track claude usage", "claude dashboard".
algolia-observability
Set up observability for Algolia: Prometheus metrics for search latency/errors, OpenTelemetry tracing, structured logging, and Grafana dashboards. Trigger: "algolia monitoring", "algolia metrics", "algolia observability", "monitor algolia", "algolia alerts", "algolia tracing", "algolia dashboard".
adobe-observability
Set up comprehensive observability for Adobe API integrations with Prometheus metrics, OpenTelemetry traces, structured logging, and alert rules covering Firefly, PDF Services, and Photoshop APIs. Trigger with phrases like "adobe monitoring", "adobe metrics", "adobe observability", "monitor adobe", "adobe alerts", "adobe tracing".