project-testing

Custom test patterns and fixtures for {project}. Covers E2E, integration, and specialized testing requirements.

231 stars

Installation

Claude Code / Cursor / Codex

```shell
curl -o ~/.claude/skills/project-testing/SKILL.md --create-dirs "https://raw.githubusercontent.com/aiskillstore/marketplace/main/skills/consiliency/project-testing/SKILL.md"
```

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/project-testing/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How project-testing Compares

| Feature / Agent | project-testing | Standard Approach |
|-----------------|-----------------|-------------------|
| Platform Support | Multi | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Custom test patterns and fixtures for {project}. Covers E2E, integration, and specialized testing requirements.

Which AI agents support this skill?

This skill is compatible with multiple AI agents, including Claude Code, Cursor, and Codex.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

<!-- Generated by ai-dev-kit:recommend-skills on {date} -->
<!-- This skill can be safely deleted if no longer needed -->

# {project} Testing Patterns

Custom testing patterns, fixtures, and strategies for this project.

## Variables

| Variable | Default | Description |
|----------|---------|-------------|
| COVERAGE_TARGET | 80 | Minimum coverage percentage |
| E2E_TIMEOUT | 30000 | E2E test timeout in ms |
| PARALLEL_TESTS | true | Run tests in parallel when possible |
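One way to honor these variables is to read them from the environment with the table's defaults as fallbacks. A minimal sketch (the variable names come from the table above; the helper function is hypothetical):

```python
import os

def get_test_config():
    """Read the skill's test variables from the environment,
    falling back to the defaults documented in the table above."""
    return {
        "coverage_target": int(os.environ.get("COVERAGE_TARGET", "80")),
        "e2e_timeout_ms": int(os.environ.get("E2E_TIMEOUT", "30000")),
        "parallel_tests": os.environ.get("PARALLEL_TESTS", "true").lower() == "true",
    }
```

A test runner or conftest can call `get_test_config()` once and pass the values down, so overriding a variable in CI is a one-line environment change.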

## Instructions

1. Identify test type needed (unit/integration/e2e)
2. Use appropriate fixtures and patterns
3. Follow project naming conventions
4. Ensure proper cleanup

## Red Flags - STOP and Reconsider

If you're about to:
- Write a test without proper isolation
- Skip cleanup in fixtures
- Hardcode test data instead of using fixtures
- Write flaky tests (timing-dependent, order-dependent)

**STOP** -> Use proper fixtures -> Ensure isolation -> Then write test
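The isolation-and-cleanup rule above can be sketched with a plain stdlib context manager: each test gets its own workspace, and teardown runs even when the test fails (the helper name and directory prefix are illustrative, not from the project):

```python
import contextlib
import os
import shutil
import tempfile

@contextlib.contextmanager
def isolated_workspace():
    """Give each test a private temp directory and always remove it,
    so no state leaks between tests regardless of pass/fail."""
    path = tempfile.mkdtemp(prefix="test-ws-")
    try:
        yield path
    finally:
        shutil.rmtree(path, ignore_errors=True)

# usage: the directory exists only inside the block
with isolated_workspace() as ws:
    with open(os.path.join(ws, "data.txt"), "w") as f:
        f.write("fixture data")
```

The same shape maps directly onto a pytest fixture with a `yield` in the middle.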

## Test Categories

### Unit Tests

Location: `tests/unit/`

Patterns:
- Test single functions/methods in isolation
- Mock external dependencies
- Fast execution (< 100ms each)
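As a sketch of the unit-test pattern, the example below tests a single function with its external dependency mocked out, keeping execution well under the 100 ms budget (the function and service names are hypothetical):

```python
from unittest.mock import Mock

def get_display_name(user_service, user_id):
    """Function under test: formats a name fetched from an external service."""
    user = user_service.fetch(user_id)
    return f"{user['first']} {user['last']}".strip()

def test_get_display_name_formats_full_name():
    # Mock the external dependency so the test is fast and isolated
    service = Mock()
    service.fetch.return_value = {"first": "Ada", "last": "Lovelace"}
    assert get_display_name(service, 42) == "Ada Lovelace"
    service.fetch.assert_called_once_with(42)
```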

### Integration Tests

Location: `tests/integration/`

Patterns:
- Test component interactions
- Use test database/fixtures
- May have external dependencies
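The integration pattern above might look like the following sketch, which exercises a real database interaction using an in-memory SQLite database as a stand-in for the project's test database (the table and function names are illustrative):

```python
import sqlite3

def create_order(conn, user_id, total):
    """Component under test: persists an order row and returns its id."""
    cur = conn.execute(
        "INSERT INTO orders (user_id, total) VALUES (?, ?)", (user_id, total)
    )
    conn.commit()
    return cur.lastrowid

def test_create_order_persists_row():
    # The in-memory database is the fixture; it disappears on close
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL)"
    )
    order_id = create_order(conn, user_id=1, total=9.99)
    row = conn.execute(
        "SELECT user_id, total FROM orders WHERE id = ?", (order_id,)
    ).fetchone()
    assert row == (1, 9.99)
    conn.close()
```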

### E2E Tests

Location: `tests/e2e/` or `playwright/`

Patterns:
- Test full user workflows
- Use browser automation
- Longer execution time acceptable

## Fixtures

### Database Fixtures

Location: `tests/fixtures/`

Usage:
```python
# Python example
from tests.fixtures import sample_user, sample_order

def test_order_creation(sample_user, sample_order):
    # Test uses pre-configured fixtures
    pass
```
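The fixtures imported above would need matching definitions; one possible shape for `tests/fixtures/__init__.py`, assuming pytest and illustrative field names:

```python
# tests/fixtures/__init__.py -- a sketch, not the project's actual definitions
import pytest

@pytest.fixture
def sample_user():
    return {"id": 1, "name": "Test User", "email": "test@example.com"}

@pytest.fixture
def sample_order(sample_user):
    # Fixtures can depend on each other: the order references the user's id
    return {"id": 100, "user_id": sample_user["id"], "total": 9.99}
```

With definitions like these, pytest injects the fixtures by parameter name, as shown in the usage example above.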

### Mock Services

Location: `tests/mocks/`

Available mocks:
- [TODO: List project-specific mocks]
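The list above is left for the project to fill in, but a mock service in `tests/mocks/` often follows a call-recording shape like this sketch (the class and method names are hypothetical):

```python
class MockPaymentGateway:
    """In-memory stand-in for an external payment API.
    Records every call so tests can assert on what was charged."""

    def __init__(self):
        self.charges = []

    def charge(self, user_id, amount):
        self.charges.append((user_id, amount))
        return {"status": "ok", "charge_id": len(self.charges)}
```

Tests then assert on `gateway.charges` instead of hitting a live service.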

## Naming Conventions

| Test Type | File Pattern | Function Pattern |
|-----------|-------------|------------------|
| Unit | `test_*.py` | `test_<function>_<scenario>` |
| Integration | `test_*_integration.py` | `test_<component>_<action>` |
| E2E | `*.spec.ts` | `test('<feature> - <scenario>')` |

## Coverage Requirements

| Component | Minimum Coverage |
|-----------|-----------------|
| Core logic | 90% |
| API routes | 80% |
| Utilities | 70% |
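These thresholds can be enforced mechanically. As one hedged example, assuming coverage.py, the global floor (matching COVERAGE_TARGET) can live in `pyproject.toml`; the per-component minimums above would require separate coverage runs or reports:

```toml
# pyproject.toml -- enforces only the overall minimum, not per-component targets
[tool.coverage.report]
fail_under = 80
```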

## CI Integration

Tests run in CI:
- On PR: unit + integration
- On merge: all including E2E
- Nightly: full regression suite

## Customization

Edit this file to add:
- New fixture definitions
- Additional mock services
- Custom test patterns
- Coverage exceptions