test-fixing
Run tests and systematically fix all failing tests using smart error grouping. Use when the user asks to fix failing tests, mentions test failures, runs the test suite and failures occur, or requests to make tests pass.
Best use case
test-fixing is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using test-fixing should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in .claude/skills/test-fixing/SKILL.md inside your project
- Restart your AI agent; it will auto-discover the skill
How test-fixing Compares
| Feature / Agent | test-fixing | Standard Approach |
|---|---|---|
| Platform Support | Claude Code, Cursor, Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Low (single SKILL.md file) | N/A |
Frequently Asked Questions
What does this skill do?
Run tests and systematically fix all failing tests using smart error grouping. Use when the user asks to fix failing tests, mentions test failures, runs the test suite and failures occur, or requests to make tests pass.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Test Fixing
Systematically identify and fix all failing tests using smart grouping strategies.
## When to Use
Use this skill when the user:
- Explicitly asks to fix tests ("fix these tests", "make tests pass")
- Reports test failures ("tests are failing", "test suite is broken")
- Completes an implementation and wants the tests passing
- Mentions CI/CD failures due to tests
## Systematic Approach
### 1. Initial Test Run
Run `make test` to identify all failing tests.
Analyze output for:
- Total number of failures
- Error types and patterns
- Affected modules/files
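This tally can be scripted. A minimal sketch, assuming the suite is pytest-based and that `make test` prints pytest's short test summary (lines like `FAILED tests/test_api.py::test_get - ImportError: ...`); `tally_failures` is a hypothetical helper, not part of the skill:

```python
import re
import subprocess
from collections import Counter

# Hypothetical parser for the initial run. Assumes failures appear in
# pytest's short test summary as:
#   FAILED tests/test_api.py::test_get - ImportError: cannot import ...
def tally_failures() -> Counter:
    result = subprocess.run(["make", "test"], capture_output=True, text=True)
    pattern = re.compile(r"^FAILED\s+(\S+)\s+-\s+(\w+)", re.MULTILINE)
    counts = Counter()
    for test_id, error_type in pattern.findall(result.stdout):
        counts[error_type] += 1
    return counts

print(tally_failures())  # e.g. Counter({'ImportError': 8, 'AttributeError': 5})
```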
### 2. Smart Error Grouping
Group similar failures by:
- **Error type**: ImportError, AttributeError, AssertionError, etc.
- **Module/file**: Same file causing multiple test failures
- **Root cause**: Missing dependencies, API changes, refactoring impacts
Prioritize groups by:
- Number of affected tests (highest impact first)
- Dependency order (fix infrastructure before functionality)
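A minimal sketch of this grouping-and-prioritization step, assuming failure records are `(test_id, error_type)` pairs such as a parser like the one above might produce:

```python
from collections import defaultdict

# Hypothetical failure records from the initial test run.
failures = [
    ("tests/test_api.py::test_get", "ImportError"),
    ("tests/test_api.py::test_post", "ImportError"),
    ("tests/test_core.py::test_sum", "AssertionError"),
]

# Group by error type, then order groups by impact (largest first).
groups: dict[str, list[str]] = defaultdict(list)
for test_id, error_type in failures:
    groups[error_type].append(test_id)

for error_type, tests in sorted(groups.items(), key=lambda g: -len(g[1])):
    print(f"{error_type}: {len(tests)} failing test(s)")
```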
### 3. Systematic Fixing Process
For each group (starting with highest impact):
1. **Identify root cause**
- Read relevant code
- Check recent changes with `git diff`
- Understand the error pattern
2. **Implement fix**
- Use Edit tool for code changes
- Follow project conventions (see CLAUDE.md)
- Make minimal, focused changes
3. **Verify fix**
- Run subset of tests for this group
- Use pytest markers or file patterns:
```bash
uv run pytest tests/path/to/test_file.py -v
uv run pytest -k "pattern" -v
```
   - Ensure the group passes before moving on (see the scripted check after this list)
4. **Move to next group**
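The verification gate in step 3 can be scripted around the same `uv run pytest` commands. A minimal sketch, relying on pytest exiting with code 0 only when every selected test passes; `group_passes` is a hypothetical helper:

```python
import subprocess

# Re-run only the current group's tests and block progress until they
# pass. pytest exits 0 only if all selected tests pass.
def group_passes(test_ids: list[str]) -> bool:
    result = subprocess.run(["uv", "run", "pytest", "-v", *test_ids])
    return result.returncode == 0

# Example: do not move to the next group until this returns True.
# group_passes(["tests/test_api.py::test_get", "tests/test_api.py::test_post"])
```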
### 4. Fix Order Strategy
**Infrastructure first:**
- Import errors
- Missing dependencies
- Configuration issues
**Then API changes:**
- Function signature changes
- Module reorganization
- Renamed variables/functions
**Finally, logic issues:**
- Assertion failures
- Business logic bugs
- Edge case handling
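One way to encode this ordering is a tier map consulted when sorting error groups. A minimal sketch; the tier assignments are illustrative, not prescribed by the skill:

```python
# Hypothetical tier map for the fix-order strategy: lower tiers are
# fixed first; unknown error types default to last.
FIX_ORDER = {
    "ImportError": 0,          # infrastructure
    "ModuleNotFoundError": 0,  # infrastructure
    "AttributeError": 1,       # API changes
    "TypeError": 1,            # API changes
    "AssertionError": 2,       # logic issues
}

def fix_priority(error_type: str) -> int:
    return FIX_ORDER.get(error_type, 2)

# Order groups by tier, breaking ties by impact (group size, descending).
group_sizes = {"AssertionError": 2, "ImportError": 8, "AttributeError": 5}
ordered = sorted(group_sizes, key=lambda e: (fix_priority(e), -group_sizes[e]))
print(ordered)  # ['ImportError', 'AttributeError', 'AssertionError']
```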
### 5. Final Verification
After all groups are fixed:
- Run the complete test suite: `make test`
- Verify no regressions
- Check test coverage remains intact
## Best Practices
- Fix one group at a time
- Run focused tests after each fix
- Use `git diff` to understand recent changes
- Look for patterns in failures
- Don't move to next group until current passes
- Keep changes minimal and focused
## Example Workflow
User: "The tests are failing after my refactor"
1. Run `make test` → 15 failures identified
2. Group errors:
- 8 ImportErrors (module renamed)
- 5 AttributeErrors (function signature changed)
- 2 AssertionErrors (logic bugs)
3. Fix ImportErrors first → Run subset → Verify
4. Fix AttributeErrors → Run subset → Verify
5. Fix AssertionErrors → Run subset → Verify
6. Run full suite → All pass ✓
Related Skills
webapp-testing
Toolkit for interacting with and testing local web applications using Playwright. Supports verifying frontend functionality, debugging UI behavior, capturing browser screenshots, and viewing browser logs.
web3-testing
Test smart contracts comprehensively using Hardhat and Foundry with unit tests, integration tests, and mainnet forking. Use when testing Solidity contracts, setting up blockchain test suites, or va...
vue-testing-best-practices
Use for Vue.js testing. Covers Vitest, Vue Test Utils, component testing, mocking, testing patterns, and Playwright for E2E testing.
screen-reader-testing
Test web applications with screen readers including VoiceOver, NVDA, and JAWS. Use when validating screen reader compatibility, debugging accessibility issues, or ensuring assistive technology support.
unit-testing-test-generate
Generate comprehensive, maintainable unit tests across languages with strong coverage and edge case focus.
test-strategy
Generates a test strategy document for a project or module. Analyzes existing coverage, identifies gaps, prioritizes what to test, and recommends test types (unit, integration, e2e) per module. Use before add-tests for informed testing decisions.
tdd:test-driven-development
Use when implementing any feature or bugfix, before writing implementation code - write the test first, watch it fail, write minimal code to pass; ensures tests actually verify behavior by requiring failure first
test-automator
Master AI-powered test automation with modern frameworks, self-healing tests, and comprehensive quality engineering. Build scalable testing strategies with advanced CI/CD integration.
perf-theory-tester
Use when running controlled perf experiments to validate hypotheses.
pentest-commands
This skill should be used when the user asks to "run pentest commands", "scan with nmap", "use metasploit exploits", "crack passwords with hydra or john", "scan web vulnerabilities ...
pentest-checklist
This skill should be used when the user asks to "plan a penetration test", "create a security assessment checklist", "prepare for penetration testing", "define pentest scope", "foll...
ab-test-setup
Structured guide for setting up A/B tests with mandatory gates for hypothesis, metrics, and execution readiness.