algolia-observability
Set up observability for Algolia: Prometheus metrics for search latency/errors, OpenTelemetry tracing, structured logging, and Grafana dashboards. Trigger: "algolia monitoring", "algolia metrics", "algolia observability", "monitor algolia", "algolia alerts", "algolia tracing", "algolia dashboard".
Best use case
algolia-observability is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using algolia-observability should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/algolia-observability/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How algolia-observability Compares
| Feature / Agent | algolia-observability | Standard Approach |
|---|---|---|
| Platform Support | Claude Code, Cursor, Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Low (single SKILL.md file) | N/A |
Frequently Asked Questions
What does this skill do?
It sets up observability for Algolia: Prometheus metrics for search latency and errors, OpenTelemetry tracing, structured logging, Grafana dashboard queries, and Prometheus alert rules.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Algolia Observability
## Overview
Algolia provides built-in analytics in the dashboard, but production systems need application-level observability: latency histograms, error rate counters, distributed traces, and alerts. This skill instruments the `algoliasearch` v5 client with Prometheus, OpenTelemetry, and structured logging.
## Key Metrics to Track
| Metric | Type | Why It Matters |
|--------|------|---------------|
| Search latency (P50/P95/P99) | Histogram | User experience, SLA compliance |
| Search requests/sec | Counter | Capacity planning, cost tracking |
| Error rate by type | Counter | Detect API issues before users report |
| Index freshness (last updated) | Gauge | Data pipeline health |
| Record count | Gauge | Cost monitoring, data integrity |
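The latency metric is a histogram rather than a gauge because percentiles are computed from cumulative bucket counts at query time. A simplified model of what `histogram_quantile` does with those buckets (a sketch of the interpolation idea, not Prometheus's actual implementation):

```typescript
// Simplified model of histogram_quantile: each bucket records the
// cumulative count of observations at or below its upper bound `le`.
type Bucket = { le: number; count: number };

function estimateQuantile(buckets: Bucket[], q: number): number {
  const total = buckets[buckets.length - 1].count;
  const target = q * total;
  let prevLe = 0;
  let prevCount = 0;
  for (const b of buckets) {
    if (b.count >= target) {
      // Linear interpolation inside the bucket that contains the target rank
      const fraction = (target - prevCount) / (b.count - prevCount);
      return prevLe + (b.le - prevLe) * fraction;
    }
    prevLe = b.le;
    prevCount = b.count;
  }
  return buckets[buckets.length - 1].le;
}

// Example: 100 requests; 50 under 50ms, 90 under 100ms, all under 250ms
const p95 = estimateQuantile(
  [
    { le: 0.05, count: 50 },
    { le: 0.1, count: 90 },
    { le: 0.25, count: 100 },
  ],
  0.95
);
```

This is also why bucket boundaries should bracket your SLO: a P95 target of 500ms is only measurable if buckets exist on both sides of 0.5s, as in the configuration below.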
## Instructions
### Step 1: Instrumented Algolia Client Wrapper
```typescript
// src/algolia/instrumented-client.ts
import { algoliasearch, ApiError } from 'algoliasearch';
import { Counter, Histogram, Gauge, Registry } from 'prom-client';
const registry = new Registry();

const searchLatency = new Histogram({
  name: 'algolia_search_duration_seconds',
  help: 'Algolia search request duration in seconds',
  labelNames: ['index', 'status'],
  buckets: [0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5],
  registers: [registry],
});

const searchTotal = new Counter({
  name: 'algolia_search_requests_total',
  help: 'Total Algolia search requests',
  labelNames: ['index', 'status'],
  registers: [registry],
});

const searchErrors = new Counter({
  name: 'algolia_errors_total',
  help: 'Total Algolia errors by type',
  labelNames: ['index', 'error_type', 'status_code'],
  registers: [registry],
});

const indexRecords = new Gauge({
  name: 'algolia_index_records',
  help: 'Number of records in Algolia index',
  labelNames: ['index'],
  registers: [registry],
});

const client = algoliasearch(process.env.ALGOLIA_APP_ID!, process.env.ALGOLIA_ADMIN_KEY!);

export async function instrumentedSearch<T = any>(
  indexName: string,
  searchParams: Record<string, any>
) {
  const timer = searchLatency.startTimer({ index: indexName });
  try {
    const result = await client.searchSingleIndex<T>({ indexName, searchParams });
    timer({ status: 'success' });
    searchTotal.inc({ index: indexName, status: 'success' });
    return result;
  } catch (error) {
    timer({ status: 'error' });
    searchTotal.inc({ index: indexName, status: 'error' });
    if (error instanceof ApiError) {
      searchErrors.inc({
        index: indexName,
        error_type: error.status === 429 ? 'rate_limit' : 'api_error',
        status_code: String(error.status),
      });
    } else {
      searchErrors.inc({
        index: indexName,
        error_type: 'network',
        status_code: '0',
      });
    }
    throw error;
  }
}

// Periodic index stats collection (run every 5 minutes)
export async function collectIndexMetrics() {
  const { items } = await client.listIndices();
  for (const idx of items) {
    indexRecords.set({ index: idx.name }, idx.entries || 0);
  }
}

export { registry };
```
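The catch-block branching above maps errors to a small, fixed set of label values, which is what keeps metric cardinality bounded. That logic can be factored into a pure helper (a sketch; the function name and shape are illustrative, not part of the skill's API):

```typescript
// Hypothetical helper mirroring the wrapper's catch-block logic: maps an
// error's HTTP status (or the absence of one) to bounded label values.
type ErrorLabels = { error_type: string; status_code: string };

function classifyAlgoliaError(status?: number): ErrorLabels {
  if (status === undefined) {
    // No HTTP response at all: DNS failure, timeout, connection reset
    return { error_type: 'network', status_code: '0' };
  }
  if (status === 429) {
    return { error_type: 'rate_limit', status_code: '429' };
  }
  return { error_type: 'api_error', status_code: String(status) };
}
```

Because every possible input collapses into three `error_type` values plus a status code, the `algolia_errors_total` series count stays small regardless of traffic.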
### Step 2: Prometheus Metrics Endpoint
```typescript
// src/api/metrics.ts (Express example)
import express from 'express';
import { registry, collectIndexMetrics } from '../algolia/instrumented-client';
const app = express();

app.get('/metrics', async (_req, res) => {
  res.set('Content-Type', registry.contentType);
  res.send(await registry.metrics());
});

// Collect index stats every 5 minutes; catch so one failed poll
// doesn't surface as an unhandled promise rejection
setInterval(() => collectIndexMetrics().catch(() => {}), 5 * 60 * 1000);

app.listen(9464); // expose /metrics for Prometheus scraping (port is illustrative)
```
### Step 3: OpenTelemetry Distributed Tracing
```typescript
// src/algolia/tracing.ts
import { trace, SpanStatusCode, type Span } from '@opentelemetry/api';
import { algoliasearch } from 'algoliasearch';

const tracer = trace.getTracer('algolia-service', '1.0.0');

// This module needs its own client reference (alternatively, export
// `client` from instrumented-client.ts and import it here)
const client = algoliasearch(process.env.ALGOLIA_APP_ID!, process.env.ALGOLIA_ADMIN_KEY!);

export async function tracedSearch<T>(
  indexName: string,
  query: string,
  searchParams: Record<string, any> = {}
): Promise<T> {
  return tracer.startActiveSpan(`algolia.search ${indexName}`, async (span: Span) => {
    span.setAttribute('algolia.index', indexName);
    span.setAttribute('algolia.query', query);
    span.setAttribute('algolia.hitsPerPage', searchParams.hitsPerPage || 20);
    try {
      const result = await client.searchSingleIndex<T>({
        indexName,
        searchParams: { query, ...searchParams },
      });
      span.setAttribute('algolia.nbHits', result.nbHits ?? 0);
      span.setAttribute('algolia.processingTimeMS', result.processingTimeMS ?? 0);
      span.setStatus({ code: SpanStatusCode.OK });
      return result as T;
    } catch (error: any) {
      span.setStatus({ code: SpanStatusCode.ERROR, message: error.message });
      span.recordException(error);
      throw error;
    } finally {
      span.end();
    }
  });
}
```
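Attaching the raw query as a span attribute is convenient but unbounded: user input can be arbitrarily long and may carry sensitive text. One approach (an assumption on top of the skill, not part of it) is to cap the attribute value before setting it:

```typescript
// Hypothetical helper: bound the query string before attaching it as
// `algolia.query`, so spans stay small and free-text input is capped.
function boundedQueryAttribute(query: string, maxLen = 64): string {
  return query.length <= maxLen ? query : query.slice(0, maxLen) + '…';
}

// Usage inside tracedSearch:
// span.setAttribute('algolia.query', boundedQueryAttribute(query));
```

The same cap is worth applying anywhere the query leaves your process, including the structured logs in the next step.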
### Step 4: Structured Logging
```typescript
// src/algolia/logger.ts
import pino from 'pino';
const logger = pino({ name: 'algolia', level: process.env.LOG_LEVEL || 'info' });

export function logSearch(params: {
  index: string;
  query: string;
  nbHits: number;
  processingTimeMS: number;
  page: number;
  userId?: string;
}) {
  logger.info({
    event: 'algolia.search',
    index: params.index,
    query: params.query,
    hits: params.nbHits,
    latency_ms: params.processingTimeMS,
    page: params.page,
    user: params.userId,
  });
}

export function logSearchError(params: {
  index: string;
  query: string;
  error: string;
  statusCode?: number;
}) {
  logger.error({
    event: 'algolia.search.error',
    index: params.index,
    query: params.query,
    error: params.error,
    status_code: params.statusCode,
  });
}
```
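Logging every successful search can get expensive at high query volume. One common pattern (an addition here, not part of the skill itself) is to always log errors but sample success logs deterministically, say 1 in N; the metrics above still count every request:

```typescript
// Sketch: deterministic 1-in-N sampler for success logs. `sampleEvery`
// is an illustrative knob; errors should bypass sampling entirely.
function makeSampler(sampleEvery: number) {
  let n = 0;
  return function shouldLog(): boolean {
    n += 1;
    return (n - 1) % sampleEvery === 0; // logs the 1st, (N+1)th, ... call
  };
}

const shouldLogSearch = makeSampler(10);
// if (shouldLogSearch()) logSearch({ ... });
```

Since Prometheus counters remain exact, the sampled logs only need to be representative, not complete.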
### Step 5: Alert Rules (Prometheus AlertManager)
```yaml
# alerts/algolia.yml
groups:
  - name: algolia
    rules:
      - alert: AlgoliaHighErrorRate
        expr: |
          sum by (index) (rate(algolia_errors_total[5m]))
          /
          sum by (index) (rate(algolia_search_requests_total[5m])) > 0.05
        for: 5m
        labels: { severity: warning }
        annotations:
          summary: "Algolia error rate > 5% for 5 minutes"
      - alert: AlgoliaHighLatency
        expr: |
          histogram_quantile(0.95,
            sum by (le) (rate(algolia_search_duration_seconds_bucket[5m]))
          ) > 0.5
        for: 5m
        labels: { severity: warning }
        annotations:
          summary: "Algolia P95 search latency > 500ms"
      - alert: AlgoliaRateLimited
        expr: rate(algolia_errors_total{error_type="rate_limit"}[5m]) > 0
        for: 2m
        labels: { severity: critical }
        annotations:
          summary: "Algolia returning 429 rate limit errors"
      - alert: AlgoliaIndexStale
        expr: algolia_index_records == 0
        for: 10m
        labels: { severity: warning }
        annotations:
          summary: "Algolia index has 0 records — possible sync failure"
```
## Grafana Dashboard Queries
```
# Search rate
rate(algolia_search_requests_total[5m])
# Error rate (aggregate both sides to the same labels before dividing)
sum by (index) (rate(algolia_errors_total[5m])) / sum by (index) (rate(algolia_search_requests_total[5m]))
# P50 latency
histogram_quantile(0.5, sum by (le) (rate(algolia_search_duration_seconds_bucket[5m])))
# P95 latency
histogram_quantile(0.95, sum by (le) (rate(algolia_search_duration_seconds_bucket[5m])))
# Records per index
algolia_index_records
```
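Dividing two counter rates in PromQL only works when both sides carry identical label sets; since the error counter has `error_type` and `status_code` labels the request counter lacks, each side must be aggregated to a shared label (here `index`) first. The intended computation can be modeled in TypeScript (a conceptual sketch, not how Prometheus evaluates queries):

```typescript
// Model of: sum by (index) (rate(errors)) / sum by (index) (rate(requests))
type Series = { labels: Record<string, string>; value: number };

// Collapse a set of series to one total per `index` label value
function sumByIndex(series: Series[]): Map<string, number> {
  const out = new Map<string, number>();
  for (const s of series) {
    out.set(s.labels.index, (out.get(s.labels.index) ?? 0) + s.value);
  }
  return out;
}

function errorRateByIndex(errors: Series[], requests: Series[]): Map<string, number> {
  const e = sumByIndex(errors);
  const r = sumByIndex(requests);
  const out = new Map<string, number>();
  for (const [idx, reqRate] of r) {
    if (reqRate > 0) out.set(idx, (e.get(idx) ?? 0) / reqRate);
  }
  return out;
}
```

Without the aggregation step, the extra labels on the error series would prevent any vector match and the division would silently return no data.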
## Error Handling
| Issue | Cause | Solution |
|-------|-------|----------|
| Missing metrics | Client not instrumented | Use `instrumentedSearch` wrapper |
| High cardinality | Too many label values | Don't use query text as label |
| Trace gaps | Missing context propagation | Ensure OTel context flows through async |
| Alert storms | Thresholds too sensitive | Add `for: 5m` minimum duration |
## Resources
- [Prometheus Client](https://www.npmjs.com/package/prom-client)
- [OpenTelemetry JS](https://opentelemetry.io/docs/languages/js/)
- [Algolia Dashboard Analytics](https://www.algolia.com/doc/guides/getting-analytics/search-analytics/)
- [pino Logger](https://getpino.io/)
## Next Steps
For incident response, see `algolia-incident-runbook`.
Related Skills
exa-observability
Set up monitoring, metrics, and alerting for Exa search integrations. Use when implementing monitoring for Exa operations, building dashboards, or configuring alerting for search quality and latency. Trigger with phrases like "exa monitoring", "exa metrics", "exa observability", "monitor exa", "exa alerts", "exa dashboard".
evernote-observability
Implement observability for Evernote integrations. Use when setting up monitoring, logging, tracing, or alerting for Evernote applications. Trigger with phrases like "evernote monitoring", "evernote logging", "evernote metrics", "evernote observability".
documenso-observability
Implement monitoring, logging, and tracing for Documenso integrations. Use when setting up observability, implementing metrics collection, or debugging production issues. Trigger with phrases like "documenso monitoring", "documenso metrics", "documenso logging", "documenso tracing", "documenso observability".
deepgram-observability
Set up comprehensive observability for Deepgram integrations. Use when implementing monitoring, setting up dashboards, or configuring alerting for Deepgram integration health. Trigger: "deepgram monitoring", "deepgram metrics", "deepgram observability", "monitor deepgram", "deepgram alerts", "deepgram dashboard".
databricks-observability
Set up comprehensive observability for Databricks with metrics, traces, and alerts. Use when implementing monitoring for Databricks jobs, setting up dashboards, or configuring alerting for pipeline health. Trigger with phrases like "databricks monitoring", "databricks metrics", "databricks observability", "monitor databricks", "databricks alerts", "databricks logging".
customerio-observability
Set up Customer.io monitoring and observability. Use when implementing metrics, structured logging, alerting, or Grafana dashboards for Customer.io integrations. Trigger: "customer.io monitoring", "customer.io metrics", "customer.io dashboard", "customer.io alerts", "customer.io observability".
coreweave-observability
Set up GPU monitoring and observability for CoreWeave workloads. Use when implementing GPU metrics dashboards, configuring alerts, or tracking inference latency and throughput. Trigger with phrases like "coreweave monitoring", "coreweave observability", "coreweave gpu metrics", "coreweave grafana".
cohere-observability
Set up comprehensive observability for Cohere API v2 with metrics, traces, and alerts. Use when implementing monitoring for Chat/Embed/Rerank operations, setting up dashboards, or configuring alerts for Cohere integrations. Trigger with phrases like "cohere monitoring", "cohere metrics", "cohere observability", "monitor cohere", "cohere alerts", "cohere tracing".
coderabbit-observability
Monitor CodeRabbit review effectiveness with metrics, dashboards, and alerts. Use when tracking review coverage, measuring comment acceptance rates, or building dashboards for CodeRabbit adoption across your organization. Trigger with phrases like "coderabbit monitoring", "coderabbit metrics", "coderabbit observability", "monitor coderabbit", "coderabbit alerts", "coderabbit dashboard".
clickup-observability
Monitor ClickUp API integrations with metrics, tracing, structured logging, and alerting using Prometheus, OpenTelemetry, and Grafana. Trigger: "clickup monitoring", "clickup metrics", "clickup observability", "monitor clickup", "clickup alerts", "clickup tracing", "clickup dashboard".
clickhouse-observability
Monitor ClickHouse with Prometheus metrics, Grafana dashboards, system table queries, and alerting for query performance, merge health, and resource usage. Use when setting up ClickHouse monitoring, building Grafana dashboards, or configuring alerts for production ClickHouse deployments. Trigger: "clickhouse monitoring", "clickhouse metrics", "clickhouse Grafana", "clickhouse observability", "monitor clickhouse", "clickhouse Prometheus".
clerk-observability
Implement monitoring, logging, and observability for Clerk authentication. Use when setting up monitoring, debugging auth issues in production, or implementing audit logging. Trigger with phrases like "clerk monitoring", "clerk logging", "clerk observability", "clerk metrics", "clerk audit log".