coverage-drop-investigator

Use when test coverage has dropped and you need to find which changes caused it and what tests to add. Traces coverage regressions to specific commits and files.

Best use case

coverage-drop-investigator is best used when you need a repeatable AI agent workflow instead of a one-off prompt.


Teams using coverage-drop-investigator should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/coverage-drop-investigator/SKILL.md --create-dirs "https://raw.githubusercontent.com/proffesor-for-testing/agentic-qe/main/.claude/skills/coverage-drop-investigator/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/coverage-drop-investigator/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill
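The three steps above can be sketched as shell; the placeholder file stands in for the actual GitHub download:

```shell
# Stand-in for step 1's GitHub download (replace with the real SKILL.md)
printf '# Coverage Drop Investigator\n' > SKILL.md

# Step 2: place it in the project-local skills directory
mkdir -p .claude/skills/coverage-drop-investigator
mv SKILL.md .claude/skills/coverage-drop-investigator/SKILL.md
```

Restarting the agent (step 3) then picks the skill up automatically.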

How coverage-drop-investigator Compares

| Feature | coverage-drop-investigator | Standard Approach |
|---------|----------------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

It investigates coverage regressions: when test coverage drops, it traces the drop to specific commits and files and recommends which tests to add.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Coverage Drop Investigator

Runbook-style skill for investigating coverage regressions. Identifies which changes caused coverage to drop and recommends targeted tests.

## Activation

```
/coverage-drop-investigator
```

## Investigation Flow

### Step 1: Measure Current Coverage

```bash
# Generate coverage report
npx jest --coverage --coverageReporters=json-summary

# View summary
jq '.total' coverage/coverage-summary.json
```

### Step 2: Find When Coverage Dropped

```bash
# Baseline: coverage on main (commit or stash local changes first;
# git stash alone only compares against HEAD, not against main)
git switch main
npx jest --coverage --coverageReporters=json-summary
mv coverage/coverage-summary.json /tmp/baseline.json

# Current: coverage on your branch
git switch -
npx jest --coverage --coverageReporters=json-summary

# Compare statement coverage
jq -s '.[0].total.statements.pct as $baseline
       | .[1].total.statements.pct as $current
       | {baseline: $baseline, current: $current, delta: ($current - $baseline)}' \
  /tmp/baseline.json coverage/coverage-summary.json
```
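To sanity-check the comparison without re-running the whole suite, the same jq expression can be exercised on a minimal fixture (the numbers below are hypothetical):

```shell
# Two tiny stand-in summaries: baseline from main, current from the branch
cat > /tmp/baseline.json <<'EOF'
{"total":{"statements":{"pct":90}}}
EOF
cat > /tmp/coverage-summary.json <<'EOF'
{"total":{"statements":{"pct":85.5}}}
EOF

# Same comparison as Step 2, compact output (-c)
jq -cs '.[0].total.statements.pct as $baseline
        | .[1].total.statements.pct as $current
        | {baseline: $baseline, current: $current, delta: ($current - $baseline)}' \
  /tmp/baseline.json /tmp/coverage-summary.json
# → {"baseline":90,"current":85.5,"delta":-4.5}
```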

### Step 3: Identify Uncovered Files

```bash
# Find files with lowest coverage
jq 'to_entries | map(select(.key != "total")) | sort_by(.value.statements.pct) | .[0:10] | .[] | {file: .key, statements: .value.statements.pct, branches: .value.branches.pct}' coverage/coverage-summary.json
```

### Step 4: Map to Recent Changes

```bash
# Find recently changed files with low coverage
# (summary keys are usually absolute paths, so match on the path suffix)
git diff --name-only main...HEAD | while read -r file; do
  jq --arg f "$file" 'to_entries[]
    | select(.key | endswith($f))
    | {file: $f, statements: .value.statements.pct}' coverage/coverage-summary.json
done
```

### Step 5: Recommend Tests

For each uncovered file, identify:
1. **Uncovered functions** — need new test cases
2. **Uncovered branches** — need conditional test cases (if/else paths)
3. **Uncovered lines** — may indicate dead code or missing edge cases
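Uncovered functions can be read straight out of Istanbul's detailed report, written to coverage/coverage-final.json when the json reporter is enabled (npx jest --coverage --coverageReporters=json). A sketch on a hand-made fixture; src/parser.ts and the function names are stand-ins:

```shell
# Hypothetical slice of coverage/coverage-final.json: fnMap maps function
# ids to names, f maps the same ids to execution counts
cat > /tmp/coverage-final.json <<'EOF'
{"/repo/src/parser.ts":{"fnMap":{"0":{"name":"parseExpr"},"1":{"name":"lex"}},"f":{"0":0,"1":12}}}
EOF

# Functions with zero executions in a given file
jq -r 'to_entries[]
       | select(.key | endswith("src/parser.ts"))
       | .value as $c
       | $c.fnMap | to_entries[]
       | select($c.f[.key] == 0)
       | .value.name' /tmp/coverage-final.json
# → parseExpr
```

The same pattern works for uncovered branches via branchMap and b.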

### Step 6: Report

```markdown
## Coverage Drop Report
- **Current**: {{current_pct}}%
- **Baseline (main)**: {{baseline_pct}}%
- **Delta**: {{delta}}%
- **Files causing drop**:
  | File | Coverage | Changed Lines | Tests Needed |
  |------|----------|--------------|-------------|
  | {{file}} | {{pct}}% | {{lines}} | {{count}} |
- **Recommended action**: {{write_tests / accept_drop / mark_as_excluded}}
```

## Composition

After investigation:
- **`/qe-test-generation`** — generate tests for uncovered files
- **`/mutation-testing`** — verify existing tests actually catch bugs
- **`/coverage-guard`** — prevent future drops

## Gotchas

- Coverage can drop because NEW code was added without tests, not because tests were removed
- 100% coverage is not always the goal — focus on critical paths, not getters/setters
- Branch coverage drops are more concerning than line coverage drops — branches indicate logic paths
- Coverage tools may count generated code or type definitions — exclude with coveragePathIgnorePatterns
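For the last gotcha, a quick way to preview what an exclusion pattern would remove is to apply it to the summary keys first (fixture and patterns below are illustrative; the same regexes then go into Jest's coveragePathIgnorePatterns option):

```shell
# Hypothetical summary with generated code and a type declaration file
cat > /tmp/summary.json <<'EOF'
{"total":{},"/repo/src/app.ts":{"statements":{"pct":40}},"/repo/src/generated/api.ts":{"statements":{"pct":0}},"/repo/src/types.d.ts":{"statements":{"pct":0}}}
EOF

# Keep only files that survive the exclusion patterns
jq -r 'to_entries[]
       | select(.key != "total")
       | select(.key | test("/generated/|\\.d\\.ts$") | not)
       | .key' /tmp/summary.json
# → /repo/src/app.ts
```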

Related Skills

test-failure-investigator

from proffesor-for-testing/agentic-qe

Use when a test is failing and you need to determine root cause: is it flaky, an environment issue, or a real regression? Traces failure from symptom to fix.

qe-coverage-analysis

from proffesor-for-testing/agentic-qe

Analyzes test coverage data (Istanbul, c8, lcov) to identify uncovered lines, branches, and functions with risk-weighted gap detection. Use when analyzing coverage reports, identifying coverage gaps, comparing coverage between branches, or prioritizing which untested code to cover first.

coverage-guard

from proffesor-for-testing/agentic-qe

Use when you want to prevent coverage regressions during development. Activate with /coverage-guard to warn when coverage drops below threshold after code changes.

qe-visual-testing-advanced

from proffesor-for-testing/agentic-qe

Advanced visual regression testing with pixel-perfect comparison, AI-powered diff analysis, responsive design validation, and cross-browser visual consistency. Use when detecting UI regressions, validating designs, or ensuring visual consistency.

qe-verification-quality

from proffesor-for-testing/agentic-qe

Comprehensive truth scoring, code quality verification, and automatic rollback system with 0.95 accuracy threshold for ensuring high-quality agent outputs and codebase reliability.

qe-testability-scoring

from proffesor-for-testing/agentic-qe

AI-powered testability assessment using 10 principles of intrinsic testability with Playwright and optional Vibium integration. Evaluates web applications against Observability, Controllability, Algorithmic Simplicity, Transparency, Stability, Explainability, Unbugginess, Smallness, Decomposability, and Similarity. Use when assessing software testability, evaluating test readiness, identifying testability improvements, or generating testability reports.

qe-test-reporting-analytics

from proffesor-for-testing/agentic-qe

Advanced test reporting, quality dashboards, predictive analytics, trend analysis, and executive reporting for QE metrics. Use when communicating quality status, tracking trends, or making data-driven decisions.

qe-test-idea-rewriting

from proffesor-for-testing/agentic-qe

Transform passive 'Verify X' test descriptions into active, observable test actions. Use when test ideas lack specificity, use vague language, or fail quality validation. Converts to action-verb format for clearer, more testable descriptions.

qe-test-environment-management

from proffesor-for-testing/agentic-qe

Test environment provisioning, infrastructure as code for testing, Docker/Kubernetes for test environments, service virtualization, and cost optimization. Use when managing test infrastructure, ensuring environment parity, or optimizing testing costs.

qe-test-design-techniques

from proffesor-for-testing/agentic-qe

Systematic test design with boundary value analysis, equivalence partitioning, decision tables, state transition testing, and combinatorial testing. Use when designing comprehensive test cases, reducing redundant tests, or ensuring systematic coverage.

qe-test-data-management

from proffesor-for-testing/agentic-qe

Strategic test data generation, management, and privacy compliance. Use when creating test data, handling PII, ensuring GDPR/CCPA compliance, or scaling data generation for realistic testing scenarios.

qe-test-automation-strategy

from proffesor-for-testing/agentic-qe

Design and implement effective test automation with proper pyramid, patterns, and CI/CD integration. Use when building automation frameworks or improving test efficiency.