test-orchestrator

Coordinates testing strategy and execution across all test types. Use when creating test plans, implementing tests (unit/integration/E2E), or enforcing coverage requirements (80% minimum). Applies testing-requirements.md.

25 stars

Best use case

test-orchestrator is best used when you need a repeatable AI agent workflow instead of a one-off prompt.


Teams using test-orchestrator can expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/test-orchestrator/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/aiskillstore/marketplace/brownbull/test-orchestrator/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/test-orchestrator/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How test-orchestrator Compares

| Feature / Agent | test-orchestrator | Standard Approach |
| --- | --- | --- |
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Coordinates testing strategy and execution across all test types. Use when creating test plans, implementing tests (unit/integration/E2E), or enforcing coverage requirements (80% minimum). Applies testing-requirements.md.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Test Orchestrator Skill

## Role
Acts as QA Lead, coordinating all testing activities across the system.

## Responsibilities

1. **Test Strategy**
   - Define test plans
   - Coordinate test execution
   - Manage test environments
   - Track coverage metrics

2. **Test Automation**
   - Unit test coordination
   - Integration test suites
   - E2E test scenarios
   - Performance testing

3. **Quality Gates**
   - Define acceptance criteria
   - Enforce coverage thresholds
   - Block failing builds
   - Report quality metrics

4. **Context Maintenance**
   ```
   ai-state/active/testing/
   ├── test-plans.json     # Test strategies
   ├── coverage.json       # Coverage metrics
   ├── results.json        # Test results
   └── tasks/             # Active test tasks
   ```

## Skill Coordination

### Available Test Skills
- `unit-test-skill` - Unit test creation
- `integration-test-skill` - Integration testing
- `e2e-test-skill` - End-to-end scenarios
- `performance-test-skill` - Load/stress testing
- `security-test-skill` - Security validation

### Context Package to Skills
```yaml
context:
  task_id: "task-004-testing"
  component: "authentication"
  test_requirements:
    unit: ["all public methods", ">80% coverage"]
    integration: ["database operations", "API calls"]
    e2e: ["login flow", "password reset"]
    performance: ["100 concurrent users", "<200ms response"]
  standards:
    - "testing-requirements.md"
  existing_tests:
    coverage: 65%
    failing: ["test_login_invalid"]
```
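One way the orchestrator might act on this package: compare the existing coverage against the unit-test target to decide whether more unit tests are needed. This is a minimal sketch, assuming the package arrives as a Python dict mirroring the YAML above; `needs_more_unit_tests` is a hypothetical helper:

```python
def needs_more_unit_tests(context, target=80):
    """Return True when existing coverage is below the unit-test target.

    `context` mirrors the YAML package above; coverage may arrive as
    a string like "65%" or as a bare number.
    """
    raw = context["existing_tests"]["coverage"]
    value = float(str(raw).rstrip("%"))
    return value < target

context = {
    "task_id": "task-004-testing",
    "existing_tests": {"coverage": "65%", "failing": ["test_login_invalid"]},
}
```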

## Task Processing Flow

1. **Receive Task**
   - Identify component
   - Review requirements
   - Check existing tests

2. **Create Test Plan**
   - Define test scenarios
   - Set coverage goals
   - Identify test data

3. **Assign to Skills**
   - Distribute test types
   - Set priorities
   - Define timelines

4. **Execute Tests**
   - Run test suites
   - Monitor execution
   - Collect results

5. **Validate Quality**
   - Check coverage
   - Review failures
   - Verify fixes
   - Generate reports
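The five steps above can be sketched as a single pipeline function. Assumed, not prescribed by the skill: `skills` maps a test type to a planner callable that turns the task into a suite, and `runner` executes a suite:

```python
def process_testing_task(task, skills, runner):
    """Walk the flow above: plan (steps 1-3), execute (4), validate (5)."""
    plan = {t: planner(task) for t, planner in skills.items()}       # steps 1-3
    results = {t: runner(suite) for t, suite in plan.items()}        # step 4
    passed = all(r.get("failed", 0) == 0 for r in results.values())  # step 5
    return {"plan": plan, "results": results, "passed": passed}
```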

## Test Categories

### Unit Testing
- [ ] All public methods tested
- [ ] Edge cases covered
- [ ] Mocks properly used
- [ ] Fast execution (<1s)
- [ ] Isolated tests
- [ ] Clear assertions

### Integration Testing
- [ ] Component interactions
- [ ] Database operations
- [ ] API integrations
- [ ] Message queues
- [ ] File operations
- [ ] External services

### E2E Testing
- [ ] User workflows
- [ ] Critical paths
- [ ] Cross-browser
- [ ] Mobile responsive
- [ ] Error scenarios
- [ ] Recovery flows

### Performance Testing
- [ ] Load testing
- [ ] Stress testing
- [ ] Spike testing
- [ ] Volume testing
- [ ] Endurance testing
- [ ] Scalability testing

## Test Standards

### Test Quality Checklist
- [ ] Descriptive test names
- [ ] AAA pattern (Arrange, Act, Assert)
- [ ] Single assertion focus
- [ ] No test interdependencies
- [ ] Deterministic results
- [ ] Meaningful failures

### Coverage Requirements
- **Unit Tests:** >80% code coverage
- **Integration:** All APIs tested
- **E2E:** Critical paths covered
- **Performance:** Meets SLAs
- **Security:** OWASP top 10
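A quality gate enforcing the unit-coverage minimum above might look like this sketch; the function name and message format are assumptions, not part of the skill:

```python
THRESHOLDS = {"unit": 80}  # minimum from the requirements above

def coverage_gate(coverage_percent, threshold=THRESHOLDS["unit"]):
    """Return (ok, message); CI blocks the build when ok is False."""
    ok = coverage_percent >= threshold
    verdict = "meets" if ok else "is below"
    return ok, f"coverage {coverage_percent}% {verdict} the {threshold}% threshold"
```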

## Integration Points

### With Development Orchestrators
- Test requirements from tasks
- Failure feedback loops
- Coverage reporting
- Quality gates

### With CI/CD Pipeline
- Automated test execution
- Build blocking on failures
- Test result reporting
- Coverage trends

### With Human-Docs
Updates testing documentation:
- Test plan changes
- Coverage reports
- Quality metrics
- Test guidelines

## Event Communication

### Listening For
```json
{
  "event": "code.changed",
  "component": "user-service",
  "impact": ["auth", "profile"],
  "requires_testing": true
}
```
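A handler for this event could route it like the sketch below: ignore anything that is not a `code.changed` event requiring testing, otherwise turn it into a test request. `handle_event` and the return shape are illustrative:

```python
import json

def handle_event(raw):
    """Route an incoming event: return a test request, or None to ignore."""
    event = json.loads(raw)
    if event.get("event") == "code.changed" and event.get("requires_testing"):
        return {"component": event["component"], "suites": event.get("impact", [])}
    return None
```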

### Broadcasting
```json
{
  "event": "tests.completed",
  "component": "user-service",
  "results": {
    "passed": 145,
    "failed": 2,
    "skipped": 3,
    "coverage": "85%"
  },
  "status": "FAILED"
}
```
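Assembling this payload can be done with a small helper like the one below (a sketch; the function name is an assumption). Note the status is `FAILED` whenever any test failed, as in the example above:

```python
def build_completion_event(component, passed, failed, skipped, coverage):
    """Assemble the tests.completed payload; FAILED whenever any test failed."""
    return {
        "event": "tests.completed",
        "component": component,
        "results": {
            "passed": passed,
            "failed": failed,
            "skipped": skipped,
            "coverage": f"{coverage}%",
        },
        "status": "FAILED" if failed else "PASSED",
    }
```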

## Test Execution Strategy

### Parallel Execution
```python
class TestOrchestrator:
    def run_tests(self, suites):
        """Run independent suites in parallel, then aggregate and report."""
        # 1. Identify independent tests
        # 2. Distribute across workers
        # 3. Collect results
        # 4. Aggregate coverage
        # 5. Generate report
        ...
```
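A runnable version of that outline, assuming `run_suite` is any callable that executes one suite and returns `{"passed": int, "failed": int, "coverage": float}` (a stand-in for a real test runner):

```python
from concurrent.futures import ThreadPoolExecutor

def run_suites_in_parallel(suites, run_suite, max_workers=4):
    """Run independent suites concurrently and aggregate their results."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(run_suite, suites))
    return {
        "passed": sum(r["passed"] for r in results),
        "failed": sum(r["failed"] for r in results),
        # simple average; real coverage aggregation weighs suites by line count
        "coverage": sum(r["coverage"] for r in results) / len(results),
    }
```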

### Test Retry Logic
```python
def retry_failed_tests(failures):
    MAX_RETRIES = 3
    for test in failures:
        for attempt in range(MAX_RETRIES):
            if run_test(test).passed:
                break
        else:  # for/else: runs only when no retry passed
            mark_as_flaky(test)
```

## Success Metrics

- Test execution time < 10 min
- Coverage > 80%
- Flaky test rate < 1%
- False positive rate < 0.1%
- Test maintenance time < 10%

## Test Data Management

### Strategies
1. **Fixtures** - Predefined test data
2. **Factories** - Dynamic data generation
3. **Snapshots** - Baseline comparisons
4. **Mocks** - External service simulation
5. **Stubs** - Simplified implementations
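As an illustration of the factory strategy, the sketch below generates realistic users with unique emails and overridable fields; `make_user` and its defaults are invented for this example:

```python
import itertools

_seq = itertools.count(1)

def make_user(**overrides):
    """Factory: realistic defaults, unique email per call, overridable fields."""
    n = next(_seq)
    user = {"email": f"user{n}@example.com", "role": "member", "active": True}
    user.update(overrides)
    return user
```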

### Best Practices
- Isolate test data
- Clean up after tests
- Use realistic data
- Version test data
- Document data requirements

## Common Testing Patterns

### Page Object Pattern (E2E)
```typescript
class LoginPage {
  async login(email: string, password: string) {
    await this.emailInput.fill(email);
    await this.passwordInput.fill(password);
    await this.submitButton.click();
  }
}
```

### Test Builder Pattern
```python
def test_user_creation():
    user = (
        UserBuilder()
        .with_email("test@example.com")
        .with_role("admin")
        .build()
    )

    assert user.is_valid()
```
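For completeness, a minimal builder that would satisfy the test above might look like this; the `User` fields and validity rule are assumptions for illustration:

```python
class UserBuilder:
    """Minimal builder sketch; each with_* step returns self for chaining."""
    def __init__(self):
        self._data = {"email": None, "role": "member"}

    def with_email(self, email):
        self._data["email"] = email
        return self

    def with_role(self, role):
        self._data["role"] = role
        return self

    def build(self):
        return User(**self._data)

class User:
    def __init__(self, email, role):
        self.email, self.role = email, role

    def is_valid(self):
        # hypothetical rule: a user is valid when it has a plausible email
        return bool(self.email and "@" in self.email)
```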

## Anti-Patterns to Avoid

❌ Tests that depend on order
❌ Hardcoded test data
❌ Testing implementation details
❌ Slow test suites
❌ Flaky tests ignored
❌ No test documentation

Related Skills

All related skills below are from ComeOnOliver/skillshub.

robotics-testing

Testing strategies, patterns, and tools for robotics software. Use this skill when writing unit tests, integration tests, simulation tests, or hardware-in-the-loop tests for robot systems. Trigger whenever the user mentions testing ROS nodes, pytest with ROS, launch_testing, simulation testing, CI/CD for robotics, test fixtures for sensors, mock hardware, deterministic replay, regression testing for robot behaviors, or validating perception/planning/control pipelines. Also covers property-based testing for kinematics, fuzz testing for message handlers, and golden-file testing for trajectories.

Test Skill B

A test skill in scoped pkg-b

Test Skill C

A test skill in pkg-c (not in package.json)

skill-tester

Skill Tester

master-orchestrator

Fully automated master orchestrator: an end-to-end skill that chains trending-topic capture, content generation, and viral-hit validation.

testing-strategies

Design comprehensive testing strategies for software quality assurance. Use when planning test coverage, implementing test pyramids, or setting up testing infrastructure. Handles unit testing, integration testing, E2E testing, TDD, and testing best practices.

bmad-orchestrator

Orchestrates BMAD workflows for structured AI-driven development. Routes work across Analysis, Planning, Solutioning, and Implementation phases.

backend-testing

Write comprehensive backend tests including unit tests, integration tests, and API tests. Use when testing REST APIs, database operations, authentication flows, or business logic. Handles Jest, Pytest, Mocha, testing strategies, mocking, and test coverage.

qa-test-planner

Generate comprehensive test plans, manual test cases, regression test suites, and bug reports for QA engineers. Includes Figma MCP integration for design validation.