qe-test-reporting-analytics
Advanced test reporting, quality dashboards, predictive analytics, trend analysis, and executive reporting for QE metrics. Use when communicating quality status, tracking trends, or making data-driven decisions.
Best use case
qe-test-reporting-analytics is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using qe-test-reporting-analytics should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/qe-test-reporting-analytics/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How qe-test-reporting-analytics Compares
| Feature / Agent | qe-test-reporting-analytics | Standard Approach |
|---|---|---|
| Platform Support | Claude Code / Cursor / Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Low (single-file manual install) | N/A |
Frequently Asked Questions
What does this skill do?
Advanced test reporting, quality dashboards, predictive analytics, trend analysis, and executive reporting for QE metrics. Use when communicating quality status, tracking trends, or making data-driven decisions.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Test Reporting & Analytics
<default_to_action>
When building test reports:
1. DEFINE audience (dev team vs executives)
2. CHOOSE key metrics (max 5-7)
3. SHOW trends (not just snapshots)
4. HIGHLIGHT actions (what to do about it)
5. AUTOMATE generation
**Dashboard Quick Setup:**
```
+------------------+------------------+------------------+
| Tests Passed | Code Coverage | Flaky Tests |
| 1,247/1,250 ✅ | 82.3% ⬆️ +2.1% | 1.2% ⬇️ -0.3% |
+------------------+------------------+------------------+
| Critical Bugs | Deploy Freq | MTTR |
| 0 open ✅ | 12x/day ⬆️ | 2.3h ⬇️ |
+------------------+------------------+------------------+
```
**Key Metrics by Audience:**
- **Dev Team**: Pass rate, flaky %, execution time, coverage gaps
- **QE Team**: Defect detection rate, test velocity, automation ROI
- **Leadership**: Escaped defects, deployment frequency, quality cost
</default_to_action>
## Quick Reference Card
### Essential Metrics
| Category | Metric | Target |
|----------|--------|--------|
| **Execution** | Pass Rate | >98% |
| **Execution** | Flaky Test % | <2% |
| **Execution** | Suite Duration | <10 min |
| **Coverage** | Line Coverage | >80% |
| **Coverage** | Branch Coverage | >70% |
| **Quality** | Escaped Defects | <5/release |
| **Quality** | MTTR | <4 hours |
| **Efficiency** | Automation Rate | >90% |
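The targets above are straightforward to wire into an automated check. A minimal sketch in TypeScript; the metric names and the `evaluate` helper are illustrative assumptions, not part of any published API:

```typescript
// Evaluate current metrics against the Essential Metrics targets above.
type Direction = "min" | "max";
interface Threshold { metric: string; target: number; direction: Direction; }

const thresholds: Threshold[] = [
  { metric: "passRate", target: 98, direction: "min" },       // >98%
  { metric: "flakyRate", target: 2, direction: "max" },       // <2%
  { metric: "suiteMinutes", target: 10, direction: "max" },   // <10 min
  { metric: "lineCoverage", target: 80, direction: "min" },   // >80%
  { metric: "branchCoverage", target: 70, direction: "min" }, // >70%
  { metric: "escapedDefects", target: 5, direction: "max" },  // <5/release
  { metric: "mttrHours", target: 4, direction: "max" },       // <4 hours
  { metric: "automationRate", target: 90, direction: "min" }, // >90%
];

// Return the names of reported metrics that breach their threshold.
function evaluate(current: Record<string, number>): string[] {
  return thresholds
    .filter(t => t.metric in current)
    .filter(t => {
      const value = current[t.metric];
      return t.direction === "min" ? value < t.target : value > t.target;
    })
    .map(t => t.metric);
}

console.log(evaluate({ passRate: 99.8, flakyRate: 1.2, lineCoverage: 82.3, mttrHours: 2.3 }));
// → [] (no breaches)
```

Metrics absent from the input are skipped rather than treated as failures, so the same checker works for partial reports.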
### Trend Indicators
| Symbol | Meaning | Action |
|--------|---------|--------|
| ⬆️ | Improving | Continue current approach |
| ⬇️ | Declining | Investigate root cause |
| ➡️ | Stable | Maintain or improve |
| ⚠️ | Threshold breach | Immediate attention |
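These indicators can be assigned programmatically, following the table's improving/declining semantics. A sketch; the `higherIsBetter` flag (to tell coverage-style metrics from flaky-rate-style ones) is an assumption for illustration:

```typescript
// Map a metric's change to the trend symbols in the table above.
// A breached threshold overrides the trend, per the table.
function trendSymbol(delta: number, higherIsBetter: boolean, breached = false): string {
  if (breached) return "⚠️";     // Threshold breach: immediate attention
  if (delta === 0) return "➡️";  // Stable
  const improving = higherIsBetter ? delta > 0 : delta < 0;
  return improving ? "⬆️" : "⬇️";
}

console.log(trendSymbol(2.1, true));   // coverage up 2.1 points → improving
console.log(trendSymbol(0.4, false));  // flaky rate up 0.4 points → declining
```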
---
## Report Types
### Real-Time Dashboard
```
Live quality status for CI/CD
- Build status (green/red)
- Test results (pass/fail counts)
- Coverage delta
- Flaky test alerts
```
### Sprint Summary
```markdown
## Sprint 47 Quality Summary
### Metrics
| Metric | Value | Trend |
|--------|-------|-------|
| Tests Added | +47 | ⬆️ |
| Coverage | 82.3% | ⬆️ +2.1% |
| Bugs Found | 12 | ➡️ |
| Escaped | 0 | ✅ |
### Highlights
- ✅ Zero escaped defects
- ⚠️ E2E suite now 45min (target: 30min)
### Actions
1. Optimize slow E2E tests
2. Add coverage for payment module
```
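Summaries like this are good candidates for automated generation. A minimal sketch that renders the metrics table from structured data; the `MetricRow` shape and `sprintSummary` helper are illustrative, not an existing API:

```typescript
// Render a sprint-summary metrics table like the one above from data.
interface MetricRow { name: string; value: string; trend: string; }

function sprintSummary(sprint: number, rows: MetricRow[]): string {
  const lines = [
    `## Sprint ${sprint} Quality Summary`,
    "",
    "| Metric | Value | Trend |",
    "|--------|-------|-------|",
    ...rows.map(r => `| ${r.name} | ${r.value} | ${r.trend} |`),
  ];
  return lines.join("\n");
}

console.log(sprintSummary(47, [
  { name: "Tests Added", value: "+47", trend: "⬆️" },
  { name: "Coverage", value: "82.3%", trend: "⬆️ +2.1%" },
]));
```

Feeding this from CI artifacts rather than hand-edited numbers keeps the report consistent sprint to sprint.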
### Executive Report
```markdown
## Monthly Quality Report - Oct 2025
### Executive Summary
✅ Production uptime: 99.97% (target: 99.95%)
✅ Deploy frequency: 12x/day (up from 8x)
⚠️ Coverage: 82.3% (target: 85%)
### Business Impact
- Automation saves 120 hrs/month
- Bug cost: $150/bug found vs $5,000 escaped
- Estimated annual savings: $450K
### Recommendations
1. Invest in performance testing tooling
2. Hire senior QE for mobile coverage
```
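The Business Impact figures rest on simple arithmetic worth making explicit. A sketch using the report's per-bug costs; the hourly rate and monthly bug count are assumed inputs (the report does not state them), so the totals here are illustrative and will not match the report's $450K estimate:

```typescript
// Back-of-envelope quality ROI using the per-bug costs from the report above.
const hoursSavedPerMonth = 120;  // from "Automation saves 120 hrs/month"
const costPerBugFound = 150;     // from "$150/bug found"
const costPerEscapedBug = 5000;  // from "vs $5,000 escaped"
const hourlyRate = 100;          // assumed loaded cost per engineer-hour
const bugsCaughtPerMonth = 30;   // assumed

const annualLaborSavings = hoursSavedPerMonth * hourlyRate * 12;
const annualDefectSavings = bugsCaughtPerMonth * (costPerEscapedBug - costPerBugFound) * 12;

console.log(`Labor: $${annualLaborSavings}, defect avoidance: $${annualDefectSavings}`);
// Labor: $144000, defect avoidance: $1746000
```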
---
## Predictive Analytics
```typescript
// Predict test failures
const prediction = await Task("Predict Failures", {
  codeChanges: prDiff,
  historicalData: last90Days,
  model: 'gradient-boosting'
}, "qe-quality-analyzer");
// Returns:
// {
//   failureProbability: 0.73,
//   likelyFailingTests: ['payment.test.ts'],
//   suggestedAction: 'Review payment module carefully',
//   confidence: 0.89
// }

// Trend analysis with anomaly detection
const trends = await Task("Analyze Trends", {
  metrics: ['passRate', 'coverage', 'flakyRate'],
  period: '30d',
  detectAnomalies: true
}, "qe-quality-analyzer");
```
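The agent calls above treat anomaly detection as a black box; the internals of `qe-quality-analyzer` are not specified here. A standalone sketch of one common approach, flagging points whose z-score exceeds a threshold:

```typescript
// Flag anomalous points in a metric series (e.g. daily pass rate)
// using a simple z-score test against the series mean.
function detectAnomalies(series: number[], zThreshold = 3): number[] {
  const mean = series.reduce((a, b) => a + b, 0) / series.length;
  const variance = series.reduce((a, b) => a + (b - mean) ** 2, 0) / series.length;
  const std = Math.sqrt(variance);
  if (std === 0) return []; // flat series: nothing to flag
  return series
    .map((value, i) => ({ i, z: Math.abs(value - mean) / std }))
    .filter(p => p.z > zThreshold)
    .map(p => p.i);
}

const passRates = [98.2, 98.5, 98.1, 98.4, 91.0, 98.3];
console.log(detectAnomalies(passRates, 2)); // index 4 (the 91.0 dip) is flagged
```

A production analyzer would use a rolling window and a more robust estimator, but the principle (deviation from recent baseline) is the same.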
---
## Agent Integration
```typescript
// Generate comprehensive quality report
const report = await Task("Generate Quality Report", {
  period: 'sprint',
  audience: 'executive',
  includeROI: true,
  includeTrends: true
}, "qe-quality-analyzer");

// Real-time quality gate check
const gateResult = await Task("Quality Gate Check", {
  metrics: currentMetrics,
  thresholds: qualityPolicy,
  environment: 'production'
}, "qe-quality-gate");
```
---
## Agent Coordination Hints
### Memory Namespace
```
aqe/reporting/
├── dashboards/* - Dashboard configurations
├── reports/* - Generated reports
├── trends/* - Trend analysis data
└── predictions/* - Predictive model outputs
```
### Fleet Coordination
```typescript
const reportingFleet = await FleetManager.coordinate({
  strategy: 'quality-reporting',
  agents: [
    'qe-quality-analyzer',     // Metrics aggregation
    'qe-quality-gate',         // Threshold validation
    'qe-deployment-readiness'  // Release readiness
  ],
  topology: 'parallel'
});
```
---
## Related Skills
- [quality-metrics](../quality-metrics/) - Metric definitions
- [shift-right-testing](../shift-right-testing/) - Production metrics
- [consultancy-practices](../consultancy-practices/) - Client reporting
---
## Remember
**Measure to improve. Report to communicate.**
Good reports:
- Answer "so what?" (actionable insights)
- Show trends (not just snapshots)
- Match audience needs
- Automate where possible
**Data without action is noise. Action without data is guessing.**