coverage-guard
Use when you want to prevent coverage regressions during development. Activate with /coverage-guard to warn when coverage drops below threshold after code changes.
Best use case
coverage-guard is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using coverage-guard should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/coverage-guard/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How coverage-guard Compares
| Feature / Agent | coverage-guard | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
coverage-guard helps prevent coverage regressions during development. Once activated with /coverage-guard, it warns whenever coverage drops below the configured threshold after code changes.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Coverage Guard Mode
When activated, checks coverage after test runs and warns if it drops below the configured threshold.
## What It Does
After any test execution (via Bash tool), compares coverage to the threshold in config.json. Warns (doesn't block) if coverage decreased.
## Activation
```
/coverage-guard
```
## Configuration
Edit `config.json` in this skill directory to set thresholds:
```json
{
"thresholds": {
"statements": 80,
"branches": 70,
"functions": 75,
"lines": 80
},
"coverageCommand": "npx jest --coverage --coverageReporters=json-summary",
"coverageFile": "coverage/coverage-summary.json"
}
```
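The `coverageCommand` above passes `--coverageReporters=json-summary` on the command line; if you run tests some other way, the same reporter can be pinned in Jest's own config instead. A minimal sketch (file name and values are illustrative, adjust to your project):

```javascript
// jest.config.js — minimal sketch for emitting json-summary
module.exports = {
  collectCoverage: true,
  // "json-summary" writes coverage/coverage-summary.json, which
  // check-coverage.sh reads; "text" keeps the console table output
  coverageReporters: ["json-summary", "text"],
  coverageDirectory: "coverage",
};
```

With this in place, a plain `npx jest --coverage` produces the file the hook script expects.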
## Hook Configuration
```json
{
"hooks": {
"PostToolUse": [
{
"matcher": "Bash",
"hook": ".claude/skills/coverage-guard/scripts/check-coverage.sh",
"condition": "command contains 'jest' OR command contains 'vitest' OR command contains 'npm test'"
}
]
}
}
```
## Enforcement Logic
```bash
#!/bin/bash
# check-coverage.sh — warn when coverage falls below configured thresholds
COVERAGE_FILE="coverage/coverage-summary.json"
STATEMENT_THRESHOLD=80
BRANCH_THRESHOLD=70

if [ -f "$COVERAGE_FILE" ]; then
  STATEMENTS=$(jq '.total.statements.pct' "$COVERAGE_FILE")
  BRANCHES=$(jq '.total.branches.pct' "$COVERAGE_FILE")
  if (( $(echo "$STATEMENTS < $STATEMENT_THRESHOLD" | bc -l) )); then
    echo "WARNING: Statement coverage ($STATEMENTS%) below threshold ($STATEMENT_THRESHOLD%)"
    echo "Coverage dropped — check which files lost coverage."
  fi
  if (( $(echo "$BRANCHES < $BRANCH_THRESHOLD" | bc -l) )); then
    echo "WARNING: Branch coverage ($BRANCHES%) below $BRANCH_THRESHOLD%"
  fi
fi
```
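The script reads Istanbul's `json-summary` format, which nests per-metric percentages under a `total` key. An abridged sketch of `coverage/coverage-summary.json` (all numbers illustrative):

```json
{
  "total": {
    "statements": { "total": 200, "covered": 164, "skipped": 0, "pct": 82.0 },
    "branches":   { "total": 80,  "covered": 54,  "skipped": 0, "pct": 67.5 },
    "functions":  { "total": 50,  "covered": 39,  "skipped": 0, "pct": 78.0 },
    "lines":      { "total": 190, "covered": 156, "skipped": 0, "pct": 82.1 }
  }
}
```

With these numbers, statement coverage (82.0) clears the 80% threshold, but branch coverage (67.5) falls below 70% and would trigger the branch warning.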
## Gotchas
- Coverage check runs AFTER the test command — if tests crash, no coverage report is generated
- `coverage-summary.json` must be configured as a reporter output — default Jest config may not include `json-summary`
- Threshold comparisons use floating point — `79.999%` will trigger below `80%` threshold
- Branch coverage is typically 10-15% lower than line coverage — set thresholds accordingly

Related Skills
qe-coverage-analysis
Analyzes test coverage data (Istanbul, c8, lcov) to identify uncovered lines, branches, and functions with risk-weighted gap detection. Use when analyzing coverage reports, identifying coverage gaps, comparing coverage between branches, or prioritizing which untested code to cover first.
coverage-drop-investigator
Use when test coverage has dropped and you need to find which changes caused it and what tests to add. Traces coverage regressions to specific commits and files.
qe-visual-testing-advanced
Advanced visual regression testing with pixel-perfect comparison, AI-powered diff analysis, responsive design validation, and cross-browser visual consistency. Use when detecting UI regressions, validating designs, or ensuring visual consistency.
qe-verification-quality
Comprehensive truth scoring, code quality verification, and automatic rollback system with 0.95 accuracy threshold for ensuring high-quality agent outputs and codebase reliability.
qe-testability-scoring
AI-powered testability assessment using 10 principles of intrinsic testability with Playwright and optional Vibium integration. Evaluates web applications against Observability, Controllability, Algorithmic Simplicity, Transparency, Stability, Explainability, Unbugginess, Smallness, Decomposability, and Similarity. Use when assessing software testability, evaluating test readiness, identifying testability improvements, or generating testability reports.
qe-test-reporting-analytics
Advanced test reporting, quality dashboards, predictive analytics, trend analysis, and executive reporting for QE metrics. Use when communicating quality status, tracking trends, or making data-driven decisions.
qe-test-idea-rewriting
Transform passive 'Verify X' test descriptions into active, observable test actions. Use when test ideas lack specificity, use vague language, or fail quality validation. Converts to action-verb format for clearer, more testable descriptions.
qe-test-environment-management
Test environment provisioning, infrastructure as code for testing, Docker/Kubernetes for test environments, service virtualization, and cost optimization. Use when managing test infrastructure, ensuring environment parity, or optimizing testing costs.
qe-test-design-techniques
Systematic test design with boundary value analysis, equivalence partitioning, decision tables, state transition testing, and combinatorial testing. Use when designing comprehensive test cases, reducing redundant tests, or ensuring systematic coverage.
qe-test-data-management
Strategic test data generation, management, and privacy compliance. Use when creating test data, handling PII, ensuring GDPR/CCPA compliance, or scaling data generation for realistic testing scenarios.
qe-test-automation-strategy
Design and implement effective test automation with proper pyramid, patterns, and CI/CD integration. Use when building automation frameworks or improving test efficiency.
qe-technical-writing
Write clear, engaging technical content from real experience. Use when writing blog posts, documentation, tutorials, or technical articles.