qe-quality-metrics

Measure quality effectively with actionable metrics. Use when establishing quality dashboards, defining KPIs, or evaluating test effectiveness.

Best use case

qe-quality-metrics is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using qe-quality-metrics can expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/qe-quality-metrics/SKILL.md --create-dirs "https://raw.githubusercontent.com/proffesor-for-testing/agentic-qe/main/.kiro/skills/qe-quality-metrics/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/qe-quality-metrics/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How qe-quality-metrics Compares

| Feature | qe-quality-metrics | Standard Approach |
|---------|--------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Measure quality effectively with actionable metrics. Use when establishing quality dashboards, defining KPIs, or evaluating test effectiveness.

Where can I find the source code?

The source code lives in the proffesor-for-testing/agentic-qe repository on GitHub; the installation command above points at the exact file path.

SKILL.md Source

# Quality Metrics

<default_to_action>
When measuring quality or building dashboards:
1. MEASURE outcomes (bug escape rate, MTTD), not activities (test count)
2. FOCUS on the four DORA metrics: Deployment frequency, Lead time for changes, Change failure rate, MTTR (MTTD complements these for detection)
3. AVOID vanity metrics: 100% coverage means nothing if tests don't catch bugs
4. SET thresholds that drive behavior (quality gates block bad code)
5. TREND over time: Direction matters more than absolute numbers

**Quick Metric Selection:**
- Speed: Deployment frequency, lead time for changes
- Stability: Change failure rate, MTTR
- Quality: Bug escape rate, defect density, test effectiveness
- Process: Code review time, flaky test rate

**Critical Success Factors:**
- Metrics without action are theater
- What you measure is what you optimize
- Trends matter more than snapshots
</default_to_action>

## Quick Reference Card

### When to Use
- Building quality dashboards
- Defining quality gates
- Evaluating testing effectiveness
- Justifying quality investments

### Meaningful vs Vanity Metrics
| ✅ Meaningful | ❌ Vanity |
|--------------|-----------|
| Bug escape rate | Test case count |
| MTTD (detection) | Lines of test code |
| MTTR (recovery) | Test executions |
| Change failure rate | Coverage % (alone) |
| Lead time for changes | Requirements traced |

### DORA Metrics
| Metric | Elite | High | Medium | Low |
|--------|-------|------|--------|-----|
| Deploy Frequency | On-demand | Weekly | Monthly | Yearly |
| Lead Time | < 1 hour | < 1 week | < 1 month | > 6 months |
| Change Failure Rate | < 5% | < 15% | < 30% | > 45% |
| MTTR | < 1 hour | < 1 day | < 1 week | > 1 month |
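
A minimal sketch of how these tiers might be encoded for automated classification. The boundaries follow the table above; the type and function names, and the "on-demand means roughly daily or better" interpretation, are illustrative assumptions:

```typescript
// Illustrative tier classification based on the DORA table above.
type DoraTier = 'Elite' | 'High' | 'Medium' | 'Low';

// Deploy frequency, normalized to deployments per year for comparability.
function deployFrequencyTier(deploysPerYear: number): DoraTier {
  if (deploysPerYear >= 365) return 'Elite'; // on-demand (assumed: daily or better)
  if (deploysPerYear >= 52) return 'High';   // weekly
  if (deploysPerYear >= 12) return 'Medium'; // monthly
  return 'Low';                              // yearly
}

function changeFailureRateTier(failurePct: number): DoraTier {
  if (failurePct < 5) return 'Elite';
  if (failurePct < 15) return 'High';
  if (failurePct < 30) return 'Medium';
  return 'Low';
}
```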

### Quality Gate Thresholds
| Metric | Blocking Threshold | Warning |
|--------|-------------------|---------|
| Test pass rate | 100% | - |
| Critical coverage | > 80% | > 70% |
| Security critical | 0 | - |
| Performance p95 | < 200ms | < 500ms |
| Flaky tests | < 2% | < 5% |

---

## Core Metrics

### Bug Escape Rate
```
Bug Escape Rate = (Production Bugs / Total Bugs Found) × 100

Target: < 10% (90% caught before production)
```

### Test Effectiveness
```
Test Effectiveness = (Bugs Found by Tests / Total Bugs) × 100

Target: > 70%
```

### Defect Density
```
Defect Density = Defects / KLOC

Good: < 1 defect per KLOC
```

### Mean Time to Detect (MTTD)
```
MTTD = Time(Bug Reported) - Time(Bug Introduced)

Target: < 1 day for critical, < 1 week for others
```
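
A minimal sketch of these four calculations in code. The input shape is an assumption (counts would come from your issue tracker and test reports); all names are illustrative:

```typescript
// Illustrative implementations of the formulas above.
interface BugCounts {
  productionBugs: number;   // bugs that escaped to production
  preReleaseBugs: number;   // bugs caught before release
  bugsFoundByTests: number; // bugs caught specifically by automated tests
}

function bugEscapeRate(c: BugCounts): number {
  const total = c.productionBugs + c.preReleaseBugs;
  return total === 0 ? 0 : (c.productionBugs / total) * 100; // target: < 10%
}

function testEffectiveness(c: BugCounts): number {
  const total = c.productionBugs + c.preReleaseBugs;
  return total === 0 ? 0 : (c.bugsFoundByTests / total) * 100; // target: > 70%
}

function defectDensity(defects: number, linesOfCode: number): number {
  return defects / (linesOfCode / 1000); // defects per KLOC; good: < 1
}

function mttdDays(introducedAt: Date, reportedAt: Date): number {
  return (reportedAt.getTime() - introducedAt.getTime()) / 86_400_000;
}
```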

---

## Dashboard Design

```typescript
// Agent generates quality dashboard
await Task("Generate Dashboard", {
  metrics: {
    delivery: ['deployment-frequency', 'lead-time', 'change-failure-rate'],
    quality: ['bug-escape-rate', 'test-effectiveness', 'defect-density'],
    stability: ['mttd', 'mttr', 'availability'],
    process: ['code-review-time', 'flaky-test-rate', 'coverage-trend']
  },
  visualization: 'grafana',
  alerts: {
    critical: { bug_escape_rate: '>20%', mttr: '>24h' },
    warning: { coverage: '<70%', flaky_rate: '>5%' }
  }
}, "qe-quality-analyzer");
```

---

## Quality Gate Configuration

```json
{
  "qualityGates": {
    "commit": {
      "coverage": { "min": 80, "blocking": true },
      "lint": { "errors": 0, "blocking": true }
    },
    "pr": {
      "tests": { "pass": "100%", "blocking": true },
      "security": { "critical": 0, "blocking": true },
      "coverage_delta": { "min": 0, "blocking": false }
    },
    "release": {
      "e2e": { "pass": "100%", "blocking": true },
      "performance_p95": { "max_ms": 200, "blocking": true },
      "bug_escape_rate": { "max": "10%", "blocking": false }
    }
  }
}
```
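
A minimal sketch of how the "pr" stage of this config might be evaluated against live metrics. The evaluator, its metric names, and the result shape are illustrative, not part of the skill's API:

```typescript
// Illustrative evaluation of the "pr" quality gates defined above.
interface GateResult { gate: string; pass: boolean; blocking: boolean; }

function evaluatePrGates(metrics: {
  testPassRatePct: number;  // percentage of tests passing
  criticalVulns: number;    // open critical security findings
  coverageDeltaPct: number; // coverage change vs. base branch
}): { blocked: boolean; results: GateResult[] } {
  const results: GateResult[] = [
    { gate: 'tests',          blocking: true,  pass: metrics.testPassRatePct === 100 },
    { gate: 'security',       blocking: true,  pass: metrics.criticalVulns === 0 },
    { gate: 'coverage_delta', blocking: false, pass: metrics.coverageDeltaPct >= 0 },
  ];
  // Any failed blocking gate rejects the change; non-blocking failures only warn.
  return { blocked: results.some(r => r.blocking && !r.pass), results };
}
```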

---

## Agent-Assisted Metrics

```typescript
// Calculate quality trends
await Task("Quality Trend Analysis", {
  timeframe: '90d',
  metrics: ['bug-escape-rate', 'mttd', 'test-effectiveness'],
  compare: 'previous-90d',
  predictNext: '30d'
}, "qe-quality-analyzer");

// Evaluate quality gate
await Task("Quality Gate Evaluation", {
  buildId: 'build-123',
  environment: 'staging',
  metrics: currentMetrics,
  policy: qualityPolicy
}, "qe-quality-gate");
```

---

## Agent Coordination Hints

### Memory Namespace
```
aqe/quality-metrics/
├── dashboards/*         - Dashboard configurations
├── trends/*             - Historical metric data
├── gates/*              - Gate evaluation results
└── alerts/*             - Triggered alerts
```
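
A minimal sketch of writing a gate result into this namespace, assuming a simple key-value memory API. The `MemoryStore` interface and its `set` method are assumptions for illustration, not a documented interface:

```typescript
// Illustrative use of the namespace layout above with a hypothetical
// key-value memory API (the interface is an assumption, not the skill's API).
interface MemoryStore {
  set(key: string, value: unknown): Promise<void>;
  get<T>(key: string): Promise<T | undefined>;
}

async function recordGateResult(memory: MemoryStore, buildId: string, passed: boolean) {
  // Gate evaluation results live under aqe/quality-metrics/gates/*
  await memory.set(`aqe/quality-metrics/gates/${buildId}`, {
    passed,
    evaluatedAt: new Date().toISOString(),
  });
}
```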

### Fleet Coordination
```typescript
const metricsFleet = await FleetManager.coordinate({
  strategy: 'quality-metrics',
  agents: [
    'qe-quality-analyzer',         // Trend analysis
    'qe-test-executor',            // Test metrics
    'qe-coverage-analyzer',        // Coverage data
    'qe-production-intelligence',  // Production metrics
    'qe-quality-gate'              // Gate decisions
  ],
  topology: 'mesh'
});
```

---

## Common Traps

| Trap | Problem | Solution |
|------|---------|----------|
| Coverage worship | 100% coverage, bugs still escape | Measure bug escape rate instead |
| Test count focus | Many tests, slow feedback | Measure execution time |
| Activity metrics | Busy work, no outcomes | Measure outcomes (MTTD, MTTR) |
| Point-in-time | Snapshot without context | Track trends over time |

---

## Related Skills
- [agentic-quality-engineering](../agentic-quality-engineering/) - Agent coordination
- [cicd-pipeline-qe-orchestrator](../cicd-pipeline-qe-orchestrator/) - Quality gates
- [risk-based-testing](../risk-based-testing/) - Risk-informed metrics
- [shift-right-testing](../shift-right-testing/) - Production metrics

---

## Remember

**Measure outcomes, not activities.** Bug escape rate > test count. MTTD/MTTR > coverage %. Trends > snapshots. Set gates that block bad code. What you measure is what you optimize.

**With Agents:** Agents track metrics automatically, analyze trends, trigger alerts, and make gate decisions. Use agents to maintain continuous quality visibility.

Related Skills

All of the following skills are from proffesor-for-testing/agentic-qe:

- qe-verification-quality - Comprehensive truth scoring, code quality verification, and automatic rollback system with 0.95 accuracy threshold for ensuring high-quality agent outputs and codebase reliability.
- qe-code-review-quality - Conduct context-driven code reviews focusing on quality, testability, and maintainability. Use when reviewing code, providing feedback, or establishing review practices.
- qe-agentic-quality-engineering - AI agents as force multipliers for quality work. Core skill for all 19 QE agents using PACT principles.
- verification-quality - Verifies agent outputs against expected results and validates code changes pass quality checks before merge. Use when verifying agent outputs are correct, validating code changes before merge, or configuring automatic rollback for failed quality checks.
- test-metrics-dashboard - Use when querying test history, analyzing flakiness rates, tracking MTTR, or building quality trend dashboards from test execution data.
- quality-metrics - Tracks quality metrics including defect density, test effectiveness ratio, DORA metrics, and mean time to detection. Use when establishing quality dashboards, defining KPIs, evaluating test suite effectiveness, or reporting quality trends to stakeholders.
- qe-quality-assessment - Evaluates code quality through complexity analysis, lint results, code smell detection, and test health metrics. Use when assessing deployment readiness, configuring quality gates, scoring a codebase for release, or generating quality reports with pass/fail verdicts.
- code-review-quality - Conduct context-driven code reviews focusing on quality, testability, and maintainability. Use when reviewing code, providing feedback, or establishing review practices.
- agentic-quality-engineering - Use when orchestrating QE agents, understanding PACT principles, configuring the AQE v3 fleet, or leveraging AI agents as force multipliers for quality work.
- qe-visual-testing-advanced - Advanced visual regression testing with pixel-perfect comparison, AI-powered diff analysis, responsive design validation, and cross-browser visual consistency. Use when detecting UI regressions, validating designs, or ensuring visual consistency.
- qe-testability-scoring - AI-powered testability assessment using 10 principles of intrinsic testability with Playwright and optional Vibium integration. Evaluates web applications against Observability, Controllability, Algorithmic Simplicity, Transparency, Stability, Explainability, Unbugginess, Smallness, Decomposability, and Similarity. Use when assessing software testability, evaluating test readiness, identifying testability improvements, or generating testability reports.
- qe-test-reporting-analytics - Advanced test reporting, quality dashboards, predictive analytics, trend analysis, and executive reporting for QE metrics. Use when communicating quality status, tracking trends, or making data-driven decisions.