tdd-workflows-tdd-cycle
Run a comprehensive Test-Driven Development workflow with strict red-green-refactor discipline, fail-first verification, and continuous refactoring.
Best use case
tdd-workflows-tdd-cycle is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using tdd-workflows-tdd-cycle can expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/tdd-workflows-tdd-cycle/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
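The manual steps above can be sketched as shell commands. The raw GitHub URL below is a placeholder, not the real link; substitute the repository link from the top of this page.

```shell
# Create the skill directory the agent scans for
mkdir -p .claude/skills/tdd-workflows-tdd-cycle

# Download SKILL.md (placeholder URL -- replace with the actual raw GitHub link)
# curl -fsSL "https://raw.githubusercontent.com/<org>/<repo>/main/SKILL.md" \
#   -o .claude/skills/tdd-workflows-tdd-cycle/SKILL.md

# Verify the layout the agent expects, then restart the agent
ls .claude/skills/tdd-workflows-tdd-cycle
```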
How tdd-workflows-tdd-cycle Compares
| Feature | tdd-workflows-tdd-cycle | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Low (copy one SKILL.md file) | N/A |
Frequently Asked Questions
What does this skill do?
It runs a strict red-green-refactor TDD cycle: write failing tests first, implement the minimal code needed to make them pass, then refactor while keeping the tests green.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
## Use this skill when
- Working on tdd workflows tdd cycle tasks or workflows
- Needing guidance, best practices, or checklists for tdd workflows tdd cycle

## Do not use this skill when
- The task is unrelated to tdd workflows tdd cycle
- You need a different domain or tool outside this scope

## Instructions
- Clarify goals, constraints, and required inputs.
- Apply relevant best practices and validate outcomes.
- Provide actionable steps and verification.
- If detailed examples are required, open `resources/implementation-playbook.md`.

Execute a comprehensive Test-Driven Development (TDD) workflow with strict red-green-refactor discipline:

[Extended thinking: This workflow enforces test-first development through coordinated agent orchestration. Each phase of the TDD cycle is strictly enforced with fail-first verification, incremental implementation, and continuous refactoring. The workflow supports both single test and test suite approaches with configurable coverage thresholds.]

## Configuration

### Coverage Thresholds
- Minimum line coverage: 80%
- Minimum branch coverage: 75%
- Critical path coverage: 100%

### Refactoring Triggers
- Cyclomatic complexity > 10
- Method length > 20 lines
- Class length > 200 lines
- Duplicate code blocks > 3 lines

## Phase 1: Test Specification and Design

### 1. Requirements Analysis
- Use Task tool with subagent_type="comprehensive-review::architect-review"
- Prompt: "Analyze requirements for: $ARGUMENTS. Define acceptance criteria, identify edge cases, and create test scenarios. Output a comprehensive test specification."
- Output: Test specification, acceptance criteria, edge case matrix
- Validation: Ensure all requirements have corresponding test scenarios

### 2. Test Architecture Design
- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Design test architecture for: $ARGUMENTS based on test specification. Define test structure, fixtures, mocks, and test data strategy. Ensure testability and maintainability."
- Output: Test architecture, fixture design, mock strategy
- Validation: Architecture supports isolated, fast, reliable tests

## Phase 2: RED - Write Failing Tests

### 3. Write Unit Tests (Failing)
- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Write FAILING unit tests for: $ARGUMENTS. Tests must fail initially. Include edge cases, error scenarios, and happy paths. DO NOT implement production code."
- Output: Failing unit tests, test documentation
- **CRITICAL**: Verify all tests fail with expected error messages

### 4. Verify Test Failure
- Use Task tool with subagent_type="tdd-workflows::code-reviewer"
- Prompt: "Verify that all tests for: $ARGUMENTS are failing correctly. Ensure failures are for the right reasons (missing implementation, not test errors). Confirm no false positives."
- Output: Test failure verification report
- **GATE**: Do not proceed until all tests fail appropriately

## Phase 3: GREEN - Make Tests Pass

### 5. Minimal Implementation
- Use Task tool with subagent_type="backend-development::backend-architect"
- Prompt: "Implement MINIMAL code to make tests pass for: $ARGUMENTS. Focus only on making tests green. Do not add extra features or optimizations. Keep it simple."
- Output: Minimal working implementation
- Constraint: No code beyond what's needed to pass tests

### 6. Verify Test Success
- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Run all tests for: $ARGUMENTS and verify they pass. Check test coverage metrics. Ensure no tests were accidentally broken."
- Output: Test execution report, coverage metrics
- **GATE**: All tests must pass before proceeding

## Phase 4: REFACTOR - Improve Code Quality

### 7. Code Refactoring
- Use Task tool with subagent_type="tdd-workflows::code-reviewer"
- Prompt: "Refactor implementation for: $ARGUMENTS while keeping tests green. Apply SOLID principles, remove duplication, improve naming, and optimize performance. Run tests after each refactoring."
- Output: Refactored code, refactoring report
- Constraint: Tests must remain green throughout

### 8. Test Refactoring
- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Refactor tests for: $ARGUMENTS. Remove test duplication, improve test names, extract common fixtures, and enhance test readability. Ensure tests still provide same coverage."
- Output: Refactored tests, improved test structure
- Validation: Coverage metrics unchanged or improved

## Phase 5: Integration and System Tests

### 9. Write Integration Tests (Failing First)
- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Write FAILING integration tests for: $ARGUMENTS. Test component interactions, API contracts, and data flow. Tests must fail initially."
- Output: Failing integration tests
- Validation: Tests fail due to missing integration logic

### 10. Implement Integration
- Use Task tool with subagent_type="backend-development::backend-architect"
- Prompt: "Implement integration code for: $ARGUMENTS to make integration tests pass. Focus on component interaction and data flow."
- Output: Integration implementation
- Validation: All integration tests pass

## Phase 6: Continuous Improvement Cycle

### 11. Performance and Edge Case Tests
- Use Task tool with subagent_type="unit-testing::test-automator"
- Prompt: "Add performance tests and additional edge case tests for: $ARGUMENTS. Include stress tests, boundary tests, and error recovery tests."
- Output: Extended test suite
- Metric: Increased test coverage and scenario coverage

### 12. Final Code Review
- Use Task tool with subagent_type="comprehensive-review::architect-review"
- Prompt: "Perform comprehensive review of: $ARGUMENTS. Verify TDD process was followed, check code quality, test quality, and coverage. Suggest improvements."
- Output: Review report, improvement suggestions
- Action: Implement critical suggestions while maintaining green tests

## Incremental Development Mode

For test-by-test development:
1. Write ONE failing test
2. Make ONLY that test pass
3. Refactor if needed
4. Repeat for next test

Use this approach by adding the `--incremental` flag to focus on one test at a time.

## Test Suite Mode

For comprehensive test suite development:
1. Write ALL tests for a feature/module (failing)
2. Implement code to pass ALL tests
3. Refactor entire module
4. Add integration tests

Use this approach by adding the `--suite` flag for batch test development.

## Validation Checkpoints

### RED Phase Validation
- [ ] All tests written before implementation
- [ ] All tests fail with meaningful error messages
- [ ] Test failures are due to missing implementation
- [ ] No test passes accidentally

### GREEN Phase Validation
- [ ] All tests pass
- [ ] No extra code beyond test requirements
- [ ] Coverage meets minimum thresholds
- [ ] No test was modified to make it pass

### REFACTOR Phase Validation
- [ ] All tests still pass after refactoring
- [ ] Code complexity reduced
- [ ] Duplication eliminated
- [ ] Performance improved or maintained
- [ ] Test readability improved

## Coverage Reports

Generate coverage reports after each phase:
- Line coverage
- Branch coverage
- Function coverage
- Statement coverage

## Failure Recovery

If TDD discipline is broken:
1. **STOP** immediately
2. Identify which phase was violated
3. Rollback to last valid state
4. Resume from correct phase
5. Document lesson learned

## TDD Metrics Tracking

Track and report:
- Time in each phase (Red/Green/Refactor)
- Number of test-implementation cycles
- Coverage progression
- Refactoring frequency
- Defect escape rate

## Anti-Patterns to Avoid

- Writing implementation before tests
- Writing tests that already pass
- Skipping the refactor phase
- Writing multiple features without tests
- Modifying tests to make them pass
- Ignoring failing tests
- Writing tests after implementation

## Success Criteria

- 100% of code written test-first
- All tests pass continuously
- Coverage exceeds thresholds
- Code complexity within limits
- Zero defects in covered code
- Clear test documentation
- Fast test execution (< 5 seconds for unit tests)

## Notes

- Enforce strict RED-GREEN-REFACTOR discipline
- Each phase must be completed before moving to the next
- Tests are the specification
- If a test is hard to write, the design needs improvement
- Refactoring is NOT optional
- Keep test execution fast
- Tests should be independent and isolated

TDD implementation for: $ARGUMENTS
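As a concrete illustration of the red-green cycle the phases above enforce, here is a minimal, hypothetical `slugify` example (not part of the skill itself). In the RED phase only the test class exists and the suite fails; the GREEN phase adds just enough code to make it pass.

```python
import unittest

# RED phase: the tests are written first, against a function that does not
# exist yet. Running `python -m unittest` at that point fails with a
# NameError -- exactly the fail-first condition the workflow's GATE checks.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  TDD Cycle  "), "tdd-cycle")

# GREEN phase: the minimal implementation -- only enough code to turn the
# suite green, with no speculative features or optimizations.
def slugify(text: str) -> str:
    return "-".join(text.lower().split())
```

With the suite green, the REFACTOR phase can then improve naming, structure, and test readability while rerunning `python -m unittest` after every change to confirm the tests stay green.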
Related Skills
tdd-workflows-tdd-refactor
Refactor code while keeping tests green in the TDD refactor phase.
tdd-workflows-tdd-red
Generate failing tests for the TDD red phase to define expected behavior and edge cases.
tdd-workflows-tdd-green
Implement the minimal code needed to make failing tests pass in the TDD green phase.