analyzing-test-quality

Automatically activated when user asks about test quality, code coverage, test reliability, test maintainability, or wants to analyze their test suite. Provides framework-agnostic test quality analysis and improvement recommendations. Does NOT provide framework-specific patterns - use jest-testing or playwright-testing for those.

25 stars

Best use case

analyzing-test-quality is best used when you need a repeatable AI agent workflow instead of a one-off prompt.


Teams using analyzing-test-quality should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/analyzing-test-quality/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/aiskillstore/marketplace/c0ntr0lledcha0s/analyzing-test-quality/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/analyzing-test-quality/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How analyzing-test-quality Compares

| Feature / Agent | analyzing-test-quality | Standard Approach |
| --- | --- | --- |
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

It provides framework-agnostic test quality analysis and improvement recommendations, covering coverage, reliability, and maintainability. It activates automatically when you ask about any of those topics. It does not provide framework-specific patterns; use jest-testing or playwright-testing for those.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.


SKILL.md Source

# Analyzing Test Quality

You are an expert in test quality analysis with deep knowledge of testing principles, patterns, and metrics that apply across all testing frameworks.

## Your Capabilities

1. **Quality Metrics**: Coverage, mutation score, test effectiveness
2. **Test Patterns**: AAA, GWT, fixtures, factories, page objects
3. **Anti-Patterns**: Flaky tests, test pollution, over-mocking
4. **Maintainability**: DRY, readability, test organization
5. **Reliability**: Determinism, isolation, independence
6. **Coverage Analysis**: Statement, branch, function, line coverage

## When to Use This Skill

Claude should automatically invoke this skill when:
- The user asks about test quality or test effectiveness
- Code coverage reports or metrics are discussed
- Test reliability or flakiness is mentioned
- Test organization or refactoring is needed
- General test improvement is requested

## How to Use This Skill

### Accessing Resources

Use `{baseDir}` to reference files in this skill directory:
- Scripts: `{baseDir}/scripts/`
- Documentation: `{baseDir}/references/`
- Templates: `{baseDir}/assets/`

## Available Resources

This skill includes ready-to-use resources in `{baseDir}`:

- **references/quality-checklist.md** - Printable test quality checklist with scoring guide
- **assets/quality-report.template.md** - Complete template for test quality assessment reports
- **scripts/calculate-metrics.sh** - Calculates test metrics (test count, ratios, patterns, assertions)

## Test Quality Dimensions

### 1. Correctness
Tests accurately verify intended behavior:
- Tests match requirements
- Assertions are complete
- Edge cases are covered
- Error scenarios are tested

### 2. Readability
Tests are easy to understand:
- Clear naming (what is being tested)
- Proper structure (AAA/GWT pattern)
- Minimal setup noise
- Self-documenting code

### 3. Maintainability
Tests are easy to modify:
- DRY with appropriate helpers
- Focused tests (single responsibility)
- Proper abstraction level
- Clear dependencies

### 4. Reliability
Tests produce consistent results:
- No timing dependencies
- Proper isolation
- Deterministic data
- Independent execution

### 5. Speed
Tests run efficiently:
- Appropriate test pyramid
- Efficient setup/teardown
- Proper mocking strategy
- Parallel execution

## Test Quality Checklist

### Structure
- [ ] Uses AAA (Arrange-Act-Assert) or GWT pattern
- [ ] One logical assertion per test
- [ ] Descriptive test names
- [ ] Proper describe/context nesting
- [ ] Appropriate setup/teardown

### Coverage
- [ ] Happy path scenarios
- [ ] Error/edge cases
- [ ] Boundary conditions
- [ ] Integration points
- [ ] Security scenarios

### Reliability
- [ ] No timing dependencies
- [ ] Proper async handling
- [ ] Isolated tests (no shared state)
- [ ] Deterministic data
- [ ] Order-independent

### Maintainability
- [ ] Reusable fixtures/factories
- [ ] Clear variable naming
- [ ] Focused assertions
- [ ] Appropriate abstraction
- [ ] No magic numbers/strings

## Common Anti-Patterns

### Test Pollution
```typescript
// BAD: Shared mutable state
let count = 0;
beforeEach(() => count++);

// GOOD: Reset in setup
let count: number;
beforeEach(() => { count = 0; });
```

### Over-Mocking

Mocking too much hides bugs and makes tests brittle.

```typescript
// BAD: Mock everything - test only verifies mocks
// Jest
jest.mock('./dep1');
jest.mock('./dep2');
jest.mock('./dep3');

// Vitest
vi.mock('./dep1');
vi.mock('./dep2');
vi.mock('./dep3');

// GOOD: Mock boundaries only
// Mock external services, keep internal logic real
jest.mock('./api'); // or vi.mock('./api') in Vitest - external service only
// Test actual business logic against real internal modules
```

### Flaky Assertions
```typescript
// BAD: Timing dependent
await delay(100);
expect(element).toBeVisible();

// GOOD: Wait for condition
// Testing Library
await waitFor(() => expect(element).toBeVisible());

// Playwright
await expect(element).toBeVisible();
```

### Mystery Guest
```typescript
// BAD: Hidden dependencies
test('should process', () => {
  const result = process(); // Uses global data
  expect(result).toBe(42);
});

// GOOD: Explicit setup
test('should process input', () => {
  const input = createInput({ value: 21 });
  const result = process(input);
  expect(result).toBe(42);
});
```

### Assertion Roulette
```typescript
// BAD: Multiple unrelated assertions
test('should work', () => {
  expect(user.name).toBe('John');
  expect(items.length).toBe(3);
  expect(total).toBe(100);
});

// GOOD: Focused assertions
test('should set user name', () => {
  expect(user.name).toBe('John');
});

test('should have correct item count', () => {
  expect(items).toHaveLength(3);
});
```

## Mutation Testing

Mutation testing validates test effectiveness by modifying code and checking if tests catch the changes.

### Concept

1. **Mutants** are created by modifying source code (changing operators, values, etc.)
2. **Tests run** against each mutant
3. **Killed mutants** = tests caught the change (good!)
4. **Survived mutants** = tests missed the change (weak tests)

### Stryker Setup

```bash
# Install Stryker
npm install -D @stryker-mutator/core

# For specific frameworks
npm install -D @stryker-mutator/jest-runner      # Jest
npm install -D @stryker-mutator/vitest-runner    # Vitest
npm install -D @stryker-mutator/mocha-runner     # Mocha

# Initialize configuration
npx stryker init
```

### Stryker Configuration

```javascript
// stryker.conf.js
module.exports = {
  packageManager: 'npm',
  reporters: ['html', 'clear-text', 'progress'],
  testRunner: 'jest',
  coverageAnalysis: 'perTest',

  // What to mutate
  mutate: [
    'src/**/*.ts',
    '!src/**/*.test.ts',
    '!src/**/*.spec.ts',
  ],

  // Mutation types to use
  mutator: {
    excludedMutations: [
      'StringLiteral', // Skip string mutations
    ],
  },

  // Thresholds
  thresholds: {
    high: 80,
    low: 60,
    break: 50, // Fail CI if below this
  },
};
```

### Interpreting Results

```
Mutation score: 85%
Killed: 170 | Survived: 30 | Timeout: 5 | No coverage: 10
```

- **High (>80%)**: tests are effective
- **Medium (60-80%)**: some weak areas exist
- **Low (<60%)**: tests need significant improvement

### Common Surviving Mutations

**Boundary mutations**: `<` changed to `<=`
```typescript
// Mutation survives if tests don't check boundary
if (value < 10) { ... }  // Changed to: value <= 10
```

**Arithmetic mutations**: `+` changed to `-`
```typescript
// Mutation survives if result isn't precisely checked
return a + b;  // Changed to: a - b
```

**Boolean mutations**: `&&` changed to `||`
```typescript
// Mutation survives if both conditions aren't tested
if (a && b) { ... }  // Changed to: a || b
```

### CI Integration

```yaml
# GitHub Actions
- name: Run mutation tests
  run: npx stryker run

- name: Upload Stryker report
  uses: actions/upload-artifact@v3
  with:
    name: stryker-report
    path: reports/mutation/
```

## Coverage Metrics

### Types of Coverage
- **Statement**: individual statements executed
- **Branch**: decision paths (if/else, switch) taken
- **Function**: functions called at least once
- **Line**: source lines executed (differs from statement coverage only when one line holds multiple statements)

### Coverage Thresholds
```javascript
// Recommended minimums
{
  statements: 80,
  branches: 75,
  functions: 80,
  lines: 80
}
```
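
If you run Jest, these minimums can be enforced directly in its configuration (a sketch assuming Jest's `coverageThreshold` option; other runners expose analogous settings):

```typescript
// jest.config.ts - fail the run if coverage drops below the minimums
// (coverageThreshold is Jest's name for this; adapt for other runners)
export default {
  collectCoverage: true,
  coverageThreshold: {
    global: { statements: 80, branches: 75, functions: 80, lines: 80 },
  },
};
```

Enforcing thresholds in CI keeps coverage from silently eroding, but remember the pitfalls below: the number is a floor, not a target.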

### Coverage Pitfalls
- High coverage ≠ good tests
- Can miss logical errors
- Doesn't test interactions
- Can incentivize bad tests

## Mutation Operator Types

Mutation tools typically apply these classes of change:
- Arithmetic operators (`+`, `-`, `*`, `/`)
- Comparison operators (`<`, `>`, `==`)
- Boolean operators (`&&`, `||`, `!`)
- Return values
- Constants

## Test Pyramid

### Unit Tests (Base)
- Fast execution
- Isolated components
- High coverage
- Many tests

### Integration Tests (Middle)
- Component interactions
- Database/API calls
- Moderate coverage
- Medium quantity

### E2E Tests (Top)
- Full user flows
- Real browser
- Critical paths only
- Few tests

## Analysis Workflow

When analyzing test quality:

1. **Gather Metrics**
   - Run coverage report
   - Count test/code ratio
   - Measure test execution time

2. **Identify Patterns**
   - Check test structure
   - Look for anti-patterns
   - Assess naming quality

3. **Evaluate Reliability**
   - Check for flaky indicators
   - Assess isolation
   - Review async handling

4. **Provide Recommendations**
   - Prioritize by impact
   - Give specific examples
   - Include code samples

## Examples

### Example 1: Coverage Analysis
When analyzing coverage:
1. Run coverage tool
2. Identify uncovered lines
3. Prioritize critical paths
4. Suggest test cases

### Example 2: Reliability Audit
When auditing for reliability:
1. Search for timing patterns
2. Check shared state usage
3. Review async assertions
4. Identify order dependencies

## Important Notes

- Quality is more important than quantity
- Coverage is a starting point, not a goal
- Fast feedback enables TDD
- Readable tests serve as documentation
- Test maintenance cost should be low

Related Skills

All related skills below are from ComeOnOliver/skillshub.

**analyzing-logs**: Enables Claude to analyze logs for performance insights and issue detection. Triggered by requests for log analysis, performance troubleshooting, or debugging assistance; identifies slow requests, error patterns, resource warnings, and other key performance indicators. Use when the user mentions "analyze logs", "performance issues", "error patterns in logs", "slow requests", or "log aggregation".

**locust-test-creator**: Auto-activating skill for Performance Testing. Triggers on "locust test creator".

**load-testing-apis**: Executes comprehensive load and stress testing to validate API performance and scalability. Trigger with phrases like "load test the API", "stress test API", or "benchmark API performance".

**load-test-scenario-planner**: Auto-activating skill for Performance Testing. Triggers on "load test scenario planner".

**running-load-tests**: Creates and executes load tests for performance validation, generating scripts for tools like k6, JMeter, and Artillery. Use for "load test", "performance testing", or "stress test" requests; helps define performance thresholds and provides execution instructions.

**testing-load-balancers**: Tests load balancing strategies, validating traffic distribution across backend servers, failover when servers become unavailable, sticky sessions, and health checks. Use for "test load balancer", "validate traffic distribution", "test failover", "verify sticky sessions", or "test health checks"; designed for the `load-balancer-tester` plugin.

**keyboard-navigation-tester**: Auto-activating skill for Frontend Development. Triggers on "keyboard navigation tester".

**jmeter-test-plan-creator**: Auto-activating skill for Performance Testing. Triggers on "jmeter test plan creator".

**jest-test-generator**: Auto-activating skill for Test Automation. Triggers on "jest test generator".

**integration-test-setup**: Auto-activating skill for Test Automation. Triggers on "integration test setup".

**running-integration-tests**: Runs and manages integration test suites, automating environment setup, database seeding, service orchestration, and cleanup. Triggered by "run integration tests", "/run-integration", "/rit", or similar requests; handles database creation, migrations, seeding, and dependent service management.

**integration-test-generator**: Auto-activating skill for API Integration. Triggers on "integration test generator".