test-fixing
Run tests and systematically fix all failing tests using smart error grouping. Use when user asks to fix failing tests, mentions test failures, runs test suite and failures occur, or requests to make tests pass.
Best use case
test-fixing is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using test-fixing should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in .claude/skills/test-fixing/SKILL.md inside your project
- Restart your AI agent — it will auto-discover the skill
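A shell sketch of those manual steps (the URL below is a placeholder, not the project's actual raw GitHub link):

```bash
# Create the directory the agent scans for project-level skills.
mkdir -p .claude/skills/test-fixing

# <RAW_SKILL_MD_URL> is a placeholder for the SKILL.md raw link on GitHub.
curl -fsSL -o .claude/skills/test-fixing/SKILL.md "<RAW_SKILL_MD_URL>"
```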
How test-fixing Compares
| Feature / Agent | test-fixing | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Run tests and systematically fix all failing tests using smart error grouping. Use when user asks to fix failing tests, mentions test failures, runs test suite and failures occur, or requests to make tests pass.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Test Fixing
Systematically identify and fix all failing tests using smart grouping strategies.
## When to Use
Use this skill when the user:
- Explicitly asks to fix tests ("fix these tests", "make tests pass")
- Reports test failures ("tests are failing", "test suite is broken")
- Completes implementation and wants tests passing
- Mentions CI/CD failures due to tests
## Systematic Approach
### 1. Initial Test Run
Run `make test` to identify all failing tests.
Analyze output for:
- Total number of failures
- Error types and patterns
- Affected modules/files
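A minimal sketch of this step, assuming `make test` wraps pytest and the run ends with pytest's usual "short test summary info" lines:

```bash
# Capture the full test output for later analysis; '|| true' keeps a failing
# suite from aborting shells running with 'set -e -o pipefail'.
make test 2>&1 | tee test_output.log || true

# Total number of failing tests reported in the short summary.
grep -c '^FAILED' test_output.log
```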
### 2. Smart Error Grouping
Group similar failures by:
- **Error type**: ImportError, AttributeError, AssertionError, etc.
- **Module/file**: Same file causing multiple test failures
- **Root cause**: Missing dependencies, API changes, refactoring impacts
Prioritize groups by:
- Number of affected tests (highest impact first)
- Dependency order (fix infrastructure before functionality)
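As a sketch of the grouping itself (assuming pytest's default `FAILED path::test - ExceptionType: message` summary lines in the captured output):

```bash
# Failures grouped by exception type, most common first.
grep '^FAILED' test_output.log | sed 's/^FAILED [^ ]* - //' | cut -d: -f1 | sort | uniq -c | sort -rn

# Failures grouped by test file, to spot modules with many broken tests.
grep '^FAILED' test_output.log | awk '{print $2}' | cut -d: -f1 | sort | uniq -c | sort -rn
```

The counts give the prioritization order directly: start with the largest group.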
### 3. Systematic Fixing Process
For each group (starting with highest impact):
1. **Identify root cause**
- Read relevant code
- Check recent changes with `git diff` (see the example after this list)
- Understand the error pattern
2. **Implement fix**
- Use Edit tool for code changes
- Follow project conventions (see CLAUDE.md)
- Make minimal, focused changes
3. **Verify fix**
- Run subset of tests for this group
- Use pytest markers or file patterns:
```bash
uv run pytest tests/path/to/test_file.py -v
uv run pytest -k "pattern" -v
```
- Ensure group passes before moving on
4. **Move to next group**
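When a group's root cause is not obvious from the error alone, diffing against the last known-good state usually narrows it down (a sketch; replace `HEAD~1` with whatever revision last had a green suite):

```bash
# Files touched since the previous commit, often the source of an entire failure group.
git diff HEAD~1 --stat

# Recent history for context on what changed and why.
git log --oneline -5
```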
### 4. Fix Order Strategy
**Infrastructure first:**
- Import errors
- Missing dependencies
- Configuration issues
**Then API changes:**
- Function signature changes
- Module reorganization
- Renamed variables/functions
**Finally, logic issues:**
- Assertion failures
- Business logic bugs
- Edge case handling
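Infrastructure problems can be surfaced cheaply before running anything, since import errors and missing dependencies already break test collection (a sketch using the project's `uv run pytest` invocation):

```bash
# Collection only: import errors, missing dependencies, and broken conftest files
# show up here without executing a single test.
uv run pytest --collect-only -q
```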
### 5. Final Verification
After all groups fixed:
- Run complete test suite: `make test`
- Verify no regressions
- Check test coverage remains intact
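A final pass might look like the following; the coverage command assumes the pytest-cov plugin is installed, so adjust it to the project's actual coverage setup:

```bash
# Full suite through the project's own entry point.
make test

# Coverage report to confirm the fixes did not silently drop coverage.
uv run pytest --cov --cov-report=term-missing
```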
## Best Practices
- Fix one group at a time
- Run focused tests after each fix
- Use `git diff` to understand recent changes
- Look for patterns in failures
- Don't move to next group until current passes
- Keep changes minimal and focused
## Example Workflow
User: "The tests are failing after my refactor"
1. Run `make test` → 15 failures identified
2. Group errors:
- 8 ImportErrors (module renamed)
- 5 AttributeErrors (function signature changed)
- 2 AssertionErrors (logic bugs)
3. Fix ImportErrors first → Run subset → Verify
4. Fix AttributeErrors → Run subset → Verify
5. Fix AssertionErrors → Run subset → Verify
6. Run full suite → All pass ✓
Related Skills
vitest-test-creator
Vitest Test Creator - Auto-activating skill for Test Automation. Triggers on: vitest test creator. Part of the Test Automation skill category.
performing-visual-regression-testing
This skill enables Claude to execute visual regression tests using tools like Percy, Chromatic, and BackstopJS. It captures screenshots, compares them against baselines, and analyzes visual differences to identify unintended UI changes. Use this skill when the user requests visual testing, UI change verification, or regression testing for a web application or component. Trigger phrases include "visual test," "UI regression," "check visual changes," or "/visual-test".
generating-unit-tests
This skill enables Claude to automatically generate comprehensive unit tests from source code. It is triggered when the user requests unit tests, test cases, or test suites for specific files or code snippets. The skill supports multiple testing frameworks including Jest, pytest, JUnit, and others, intelligently detecting the appropriate framework or using one specified by the user. Use this skill when the user asks to "generate tests", "create unit tests", or uses the shortcut "gut" followed by a file path.
train-test-splitter
Train Test Splitter - Auto-activating skill for ML Training. Triggers on: train test splitter. Part of the ML Training skill category.
test-retry-config
Test Retry Config - Auto-activating skill for Test Automation. Triggers on: test retry config. Part of the Test Automation skill category.
generating-test-reports
This skill generates comprehensive test reports with coverage metrics, trends, and stakeholder-friendly formats (HTML, PDF, JSON). It aggregates test results from various frameworks, calculates key metrics (coverage, pass rate, duration), and performs trend analysis. Use this skill when the user requests a test report, coverage analysis, failure analysis, or historical comparisons of test runs. Trigger terms include "test report", "coverage report", "testing trends", "failure analysis", and "historical test data".
test-parallelizer
Test Parallelizer - Auto-activating skill for Test Automation. Triggers on: test parallelizer. Part of the Test Automation skill category.
test-organization-helper
Test Organization Helper - Auto-activating skill for Test Automation. Triggers on: test organization helper. Part of the Test Automation skill category.
test-naming-enforcer
Test Naming Enforcer - Auto-activating skill for Test Automation. Triggers on: test naming enforcer. Part of the Test Automation skill category.
managing-test-environments
This skill enables Claude to manage isolated test environments using Docker Compose, Testcontainers, and environment variables. It is used to create consistent, reproducible testing environments for software projects. Claude should use this skill when the user needs to set up a test environment with specific configurations, manage Docker Compose files for test infrastructure, set up programmatic container management with Testcontainers, manage environment variables for tests, or ensure cleanup after tests. Trigger terms include "test environment", "docker compose", "testcontainers", "environment variables", "isolated environment", "env-setup", and "test setup".
generating-test-doubles
This skill uses the test-doubles-generator plugin to automatically create mocks, stubs, spies, and fakes for unit testing. It analyzes dependencies in the code and generates appropriate test doubles based on the chosen testing framework, such as Jest, Sinon, or others. Use this skill when you need to generate test doubles, mocks, stubs, spies, or fakes to isolate units of code during testing. Trigger this skill by requesting test double generation or using the `/gen-doubles` or `/gd` command.
generating-test-data
This skill enables Claude to generate realistic test data for software development. It uses the test-data-generator plugin to create users, products, orders, and custom schemas for comprehensive testing. Use this skill when you need to populate databases, simulate user behavior, or create fixtures for automated tests. Trigger phrases include "generate test data", "create fake users", "populate database", "generate product data", "create test orders", or "generate data based on schema". This skill is especially useful for populating testing environments or creating sample data for demonstrations.