test-scaffolding

Generate test file scaffolds from source analysis with language-appropriate templates.

25 stars

Best use case

test-scaffolding is best used when you need a repeatable AI agent workflow rather than a one-off prompt: it generates test file scaffolds from source analysis with language-appropriate templates.

Teams using test-scaffolding should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/test-scaffolding/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/aiskillstore/marketplace/consiliency/test-scaffolding/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/test-scaffolding/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How test-scaffolding Compares

| Feature / Agent | test-scaffolding | Standard Approach |
|-----------------|------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Generate test file scaffolds from source analysis with language-appropriate templates.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Test Scaffolding Skill

Generate test file scaffolds for source files, enabling TDD workflows. Scaffolds contain TODO stubs that the test-engineer agent fills during lane execution.

## Variables

| Variable | Default | Description |
|----------|---------|-------------|
| SOURCE_FILES | [] | List of source files to scaffold tests for |
| TEST_FRAMEWORK | auto | Framework to use (auto-detects from manifest) |
| OUTPUT_DIR | tests/ | Where to place generated test files |
| NAMING_CONVENTION | language-default | Test file naming pattern |
| INCLUDE_FIXTURES | true | Generate fixture stubs |
| STUB_STYLE | todo | `todo` (TODO comments) or `skip` (skip markers) |

## Workflow (Mandatory)

1. **Detect stack**: Read package manifest (`pyproject.toml`, `package.json`, `go.mod`, `Cargo.toml`)
2. **Identify framework**: Match test dependencies (pytest, vitest, jest, testing, cargo test)
3. **Analyze sources**: Extract public functions, classes, methods from each source file
4. **Map to tests**: Apply naming convention to determine test file paths
5. **Generate scaffolds**: Use language template, insert TODO stubs for each testable unit
6. **Return manifest**: JSON with generated files, skipped files, and unit counts

## Supported Frameworks

| Language | Frameworks | Detection |
|----------|------------|-----------|
| Python | pytest, unittest | `pyproject.toml` → `[tool.pytest]` or `pytest` in deps |
| TypeScript | vitest, jest | `package.json` → `vitest` or `jest` in devDeps |
| JavaScript | vitest, jest | `package.json` → `vitest` or `jest` in devDeps |
| Go | testing | `go.mod` → built-in testing package |
| Rust | cargo test | `Cargo.toml` → built-in test harness |
| Dart | flutter_test, test | `pubspec.yaml` → `flutter_test` or `test` in dev_deps |
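The detection step in the table above can be sketched as follows. The manifest file names and dependency keys come from the table; the function itself is an illustrative sketch, not the skill's actual implementation:

```python
import json
from pathlib import Path

def detect_framework(project_root: str):
    """Map a package manifest to a test framework, per the detection table."""
    root = Path(project_root)
    pkg = root / "package.json"
    if pkg.exists():
        dev_deps = json.loads(pkg.read_text()).get("devDependencies", {})
        for fw in ("vitest", "jest"):
            if fw in dev_deps:
                return fw
    pyproject = root / "pyproject.toml"
    if pyproject.exists():
        # a [tool.pytest] section or a pytest dependency both signal pytest
        text = pyproject.read_text()
        return "pytest" if "pytest" in text else "unittest"
    if (root / "go.mod").exists():
        return "testing"      # Go's built-in testing package
    if (root / "Cargo.toml").exists():
        return "cargo test"   # Rust's built-in test harness
    return None               # no manifest found: prompt the user (see Red Flags)
```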

## Naming Conventions

| Language | Source | Test File |
|----------|--------|-----------|
| Python | `src/auth/login.py` | `tests/auth/test_login.py` |
| TypeScript | `src/auth/login.ts` | `src/auth/login.test.ts` or `tests/auth/login.test.ts` |
| Go | `pkg/auth/login.go` | `pkg/auth/login_test.go` |
| Rust | `src/auth/login.rs` | inline `#[cfg(test)]` module |
| Dart | `lib/auth/login.dart` | `test/auth/login_test.dart` |

## Source Analysis Heuristics

### Python
- Detect `def function_name(` where name doesn't start with `_`
- Detect `class ClassName:` for public classes
- Extract method signatures within classes
- Skip `__init__`, `__str__`, etc. (dunder methods)
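The Python heuristics above can be implemented robustly with the stdlib `ast` module instead of line matching; this is one possible sketch, not the skill's mandated approach:

```python
import ast

def public_units(source_code: str) -> list:
    """Collect public functions, classes, and methods per the heuristics above."""
    tree = ast.parse(source_code)
    units = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            if not node.name.startswith("_"):
                units.append(node.name)
        elif isinstance(node, ast.ClassDef) and not node.name.startswith("_"):
            units.append(node.name)
            for item in node.body:
                if isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    # skips dunder (__init__, __str__) and private methods alike
                    if not item.name.startswith("_"):
                        units.append(f"{node.name}.{item.name}")
    return units
```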

### TypeScript/JavaScript
- Detect `export function`, `export const`, `export class`
- Detect `export default function/class`
- Parse JSDoc/TSDoc for parameter types

### Go
- Detect exported functions (capitalized names)
- Detect exported methods on structs
- Detect exported types

### Rust
- Detect `pub fn`, `pub struct`, `pub enum`
- Detect `impl` blocks with public methods

## Output Schema

```json
{
  "format": "scaffold-manifest/v1",
  "generated_at": "<ISO-8601 UTC>",
  "framework": "pytest",
  "generated": [
    {
      "source": "src/auth/login.py",
      "test": "tests/auth/test_login.py",
      "units": ["login", "logout", "refresh_token"],
      "unit_count": 3
    }
  ],
  "skipped": [
    {
      "source": "src/auth/utils.py",
      "reason": "test file exists"
    }
  ],
  "total_units": 12
}
```
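Assembling the manifest in this schema is mechanical; a minimal sketch (the builder function is hypothetical, `total_units` is the sum of per-file counts):

```python
import datetime

def build_manifest(framework: str, generated: list, skipped: list) -> dict:
    """Build a scaffold-manifest/v1 dict matching the schema above."""
    return {
        "format": "scaffold-manifest/v1",
        "generated_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "framework": framework,
        "generated": generated,
        "skipped": skipped,
        "total_units": sum(entry["unit_count"] for entry in generated),
    }
```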

## Red Flags (Stop & Verify)

- No package manifest found → prompt user for framework
- Source file has no public functions → skip with warning
- Test file already exists → skip unless `--force` specified
- Unable to parse source file → log warning, continue with others

## Integration Points

### With `/ai-dev-kit:plan-phase`
- Called to auto-populate `Tests Owned Files` column
- Uses `Owned Artifacts` from impl tasks as source files

### With `/ai-dev-kit:execute-lane`
- Called before test-engineer agent runs
- Scaffolds committed with `chore(P{n}-{lane}): scaffold test files`

### With `test-engineer` agent
- Agent detects TODO markers in scaffolds
- Fills in test implementations
- Removes TODO markers when complete

## Provider Notes

- Use this skill when `/ai-dev-kit:scaffold-tests` is invoked
- Prefer TODO-style stubs over skip markers for visibility
- Preserve source file structure in test organization
- Include proper imports based on detected framework

Related Skills

vitest-test-creator (25 stars, from ComeOnOliver/skillshub)

Vitest Test Creator - Auto-activating skill for Test Automation. Triggers on: "vitest test creator". Part of the Test Automation skill category.

performing-visual-regression-testing (25 stars, from ComeOnOliver/skillshub)

This skill enables Claude to execute visual regression tests using tools like Percy, Chromatic, and BackstopJS. It captures screenshots, compares them against baselines, and analyzes visual differences to identify unintended UI changes. Use this skill when the user requests visual testing, UI change verification, or regression testing for a web application or component. Trigger phrases include "visual test," "UI regression," "check visual changes," or "/visual-test".

generating-unit-tests (25 stars, from ComeOnOliver/skillshub)

This skill enables Claude to automatically generate comprehensive unit tests from source code. It is triggered when the user requests unit tests, test cases, or test suites for specific files or code snippets. The skill supports multiple testing frameworks including Jest, pytest, JUnit, and others, intelligently detecting the appropriate framework or using one specified by the user. Use this skill when the user asks to "generate tests", "create unit tests", or uses the shortcut "gut" followed by a file path.

train-test-splitter (25 stars, from ComeOnOliver/skillshub)

Train Test Splitter - Auto-activating skill for ML Training. Triggers on: "train test splitter". Part of the ML Training skill category.

test-retry-config (25 stars, from ComeOnOliver/skillshub)

Test Retry Config - Auto-activating skill for Test Automation. Triggers on: "test retry config". Part of the Test Automation skill category.

generating-test-reports (25 stars, from ComeOnOliver/skillshub)

This skill generates comprehensive test reports with coverage metrics, trends, and stakeholder-friendly formats (HTML, PDF, JSON). It aggregates test results from various frameworks, calculates key metrics (coverage, pass rate, duration), and performs trend analysis. Use this skill when the user requests a test report, coverage analysis, failure analysis, or historical comparisons of test runs. Trigger terms include "test report", "coverage report", "testing trends", "failure analysis", and "historical test data".

test-parallelizer (25 stars, from ComeOnOliver/skillshub)

Test Parallelizer - Auto-activating skill for Test Automation. Triggers on: "test parallelizer". Part of the Test Automation skill category.

test-organization-helper (25 stars, from ComeOnOliver/skillshub)

Test Organization Helper - Auto-activating skill for Test Automation. Triggers on: "test organization helper". Part of the Test Automation skill category.

test-naming-enforcer (25 stars, from ComeOnOliver/skillshub)

Test Naming Enforcer - Auto-activating skill for Test Automation. Triggers on: "test naming enforcer". Part of the Test Automation skill category.

managing-test-environments (25 stars, from ComeOnOliver/skillshub)

This skill enables Claude to manage isolated test environments using Docker Compose, Testcontainers, and environment variables. It is used to create consistent, reproducible testing environments for software projects. Claude should use this skill when the user needs to set up a test environment with specific configurations, manage Docker Compose files for test infrastructure, set up programmatic container management with Testcontainers, manage environment variables for tests, or ensure cleanup after tests. Trigger terms include "test environment", "docker compose", "testcontainers", "environment variables", "isolated environment", "env-setup", and "test setup".

generating-test-doubles (25 stars, from ComeOnOliver/skillshub)

This skill uses the test-doubles-generator plugin to automatically create mocks, stubs, spies, and fakes for unit testing. It analyzes dependencies in the code and generates appropriate test doubles based on the chosen testing framework, such as Jest, Sinon, or others. Use this skill when you need to generate test doubles, mocks, stubs, spies, or fakes to isolate units of code during testing. Trigger this skill by requesting test double generation or using the `/gen-doubles` or `/gd` command.

generating-test-data (25 stars, from ComeOnOliver/skillshub)

This skill enables Claude to generate realistic test data for software development. It uses the test-data-generator plugin to create users, products, orders, and custom schemas for comprehensive testing. Use this skill when you need to populate databases, simulate user behavior, or create fixtures for automated tests. Trigger phrases include "generate test data", "create fake users", "populate database", "generate product data", "create test orders", or "generate data based on schema". This skill is especially useful for populating testing environments or creating sample data for demonstrations.