## Best use case
Metrics Definer is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using Metrics Definer should expect more consistent output, faster repeated execution, and less prompt rewriting.
## When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
## When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
## Installation
**Claude Code / Cursor / Codex** (manual installation)

- Download `SKILL.md` from GitHub.
- Place it at `.claude/skills/metrics-definer/SKILL.md` inside your project.
- Restart your AI agent; it will auto-discover the skill.
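The manual steps above can be sketched as shell commands. The `printf` line is a stand-in for the real download step (the GitHub URL is not repeated here); replace it with the actual `SKILL.md` from the repository:

```shell
# Sketch of the manual install; the stub SKILL.md lets the snippet run end to end.
printf '# Metrics Definer\n' > SKILL.md
mkdir -p .claude/skills/metrics-definer
mv SKILL.md .claude/skills/metrics-definer/SKILL.md
ls .claude/skills/metrics-definer   # agents discover skills here on restart
```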
## How Metrics Definer Compares
| Feature | Metrics Definer | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
## Frequently Asked Questions
**What does this skill do?**
Metrics Definer guides you through defining success metrics for a feature or initiative: a primary metric, secondary metrics, guardrail metrics, leading indicators, and anti-metrics, each with a precise definition and measurement method.
**Where can I find the source code?**
You can find the source code on GitHub using the link provided at the top of the page.
## SKILL.md Source
````markdown
# Metrics Definer

## Trigger

Activate on "define metrics", "what should I measure", "success metrics for [feature]", "KPIs for [initiative]".

## Behavior

### Step 1: Get Context

Ask:

1. What feature or initiative?
2. What's the goal?
3. What can we currently measure?

### Step 2: Define Metrics

**Primary Metric** - Name, exact definition, measurement method, target, timeframe

**Secondary Metrics (2-3)** - Name, definition, why it matters

**Guardrail Metrics (2-3)** - What should NOT get worse. Current baseline and acceptable range.

**Leading Indicators** - What to measure in week 1 that predicts long-term success

**Anti-Metrics** - What metric going UP would actually be bad

## Example

**Bad metrics (vague, unmeasurable):**

```
Primary Metric: Engagement
Secondary: User satisfaction
Guardrail: Performance
```

**Good metrics (precise, measurable, useful):**

```
Primary Metric:
- Name: 7-day feature activation rate
- Definition: % of users who complete at least one [action] within 7 days of first exposure to the feature
- Measurement: Event tracking via Mixpanel. Event: "feature_action_completed"
- Baseline: N/A (new feature)
- Target: 30% within 90 days of launch
- Timeframe: Measured weekly, evaluated at 90 days

Guardrail Metrics:
- Overall page load time stays under 2s (p95). Currently: 1.4s. Acceptable range: up to 2.0s. Beyond 2.0s = performance regression, pause rollout.
- Support ticket volume for this feature area stays below 50/week.

Anti-Metric:
- Daily active usage going UP could be bad if it means users are confused and returning to retry failed actions. Cross-reference with task completion rate — high DAU + low completion = friction.
```

## Rules

- Every metric needs a precise definition. "Engagement" without defining what counts is not a metric.
- Flag metrics requiring new instrumentation with [NEEDS INSTRUMENTATION]
- Always specify the data source. No metric exists without a measurement method.
- Anti-metrics are mandatory. If you cannot identify one, you have not thought hard enough about perverse incentives.
````
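To show how directly a precise definition translates into measurement, here is a minimal sketch of computing the 7-day feature activation rate. The event records and names are hypothetical stand-ins; a real pipeline would query Mixpanel or a warehouse:

```python
from datetime import datetime, timedelta

# Hypothetical event log: (user_id, event_name, timestamp).
events = [
    ("u1", "feature_exposed", datetime(2024, 1, 1)),
    ("u1", "feature_action_completed", datetime(2024, 1, 4)),
    ("u2", "feature_exposed", datetime(2024, 1, 2)),
    ("u2", "feature_action_completed", datetime(2024, 1, 12)),  # past the 7-day window
    ("u3", "feature_exposed", datetime(2024, 1, 3)),            # never completed
]

def activation_rate_7d(events):
    """% of exposed users who complete the action within 7 days of first exposure."""
    first_exposure = {}
    for user, name, ts in events:
        if name == "feature_exposed":
            first_exposure[user] = min(ts, first_exposure.get(user, ts))
    activated = {
        user
        for user, name, ts in events
        if name == "feature_action_completed"
        and user in first_exposure
        and timedelta(0) <= ts - first_exposure[user] <= timedelta(days=7)
    }
    return len(activated) / len(first_exposure) if first_exposure else 0.0

print(round(activation_rate_7d(events), 2))  # 0.33: only u1 activated in the window
```

The denominator is users *exposed* to the feature, not all users, matching the definition above; changing that denominator silently changes the metric.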
## Related Skills
**model-evaluation-metrics**
Auto-activating skill for ML training. Triggers on "model evaluation metrics". Part of the ML Training skill category.
**aggregating-performance-metrics**
This skill enables Claude to aggregate and centralize performance metrics from various sources. It is used when the user needs to consolidate metrics from applications, systems, databases, caches, queues, and external services into a central location for monitoring and analysis. The skill is triggered by requests to "aggregate metrics", "centralize performance metrics", or similar phrases related to metrics aggregation and monitoring. It facilitates designing a metrics taxonomy, choosing appropriate aggregation tools, and setting up dashboards and alerts.
**collecting-infrastructure-metrics**
This skill enables Claude to collect comprehensive infrastructure performance metrics across compute, storage, network, containers, load balancers, and databases. It is triggered when the user requests "collect infrastructure metrics", "monitor server performance", "set up performance dashboards", or needs to analyze system resource utilization. The skill configures metrics collection, sets up aggregation, and helps create infrastructure dashboards for health monitoring and capacity tracking. It supports configuration for Prometheus, Datadog, and CloudWatch.
**copilot-usage-metrics**
Retrieve and display GitHub Copilot usage metrics for organizations and enterprises using the GitHub CLI and REST API.
**saas-metrics-coach**
SaaS financial health advisor. Use when a user shares revenue or customer numbers, or mentions ARR, MRR, churn, LTV, CAC, NRR, or asks how their SaaS business is doing.
**startup-metrics-framework**
This skill should be used when the user asks about "key startup metrics", "SaaS metrics", "CAC and LTV", "unit economics", "burn multiple", "rule of 40", "marketplace metrics", or requests guidance on tracking and optimizing business performance metrics.
**risk-metrics-calculation**
Calculate portfolio risk metrics including VaR, CVaR, Sharpe, Sortino, and drawdown analysis. Use when measuring portfolio risk, implementing risk limits, or building risk monitoring systems.
**VictoriaMetrics**
**Product Analytics — Metrics, Funnels, and Growth**
**Azure AI Metrics Advisor Skill**
This skill provides expert guidance for Azure AI Metrics Advisor. Covers decision making, security, configuration, and integrations & coding patterns. It combines local quick-reference content with remote documentation fetching capabilities.
**Statsmodels: Statistical Modeling and Econometrics**
**asc-metrics**
When the user wants to analyze their own app's actual performance data from App Store Connect — real downloads, revenue, IAP, subscriptions, trials, or country breakdowns synced via Appeeky Connect. Use when the user asks about "my downloads", "my revenue", "how is my app performing", "ASC data", "sales and trends", "my subscription numbers", "App Store Connect metrics", or wants to compare periods or top markets. For third-party app estimates, see app-analytics. For subscription analytics depth, see monetization-strategy.