pytest-mastery

Python testing with pytest using uv package manager. Use when: (1) Running Python tests, (2) Writing test files or test functions, (3) Setting up fixtures, (4) Parametrizing tests, (5) Generating coverage reports, (6) Testing FastAPI applications, (7) Debugging test failures, (8) Configuring pytest options. Triggers: "run tests", "write tests", "test coverage", "pytest", "unit test", "integration test", "test FastAPI".

242 stars

Best use case

pytest-mastery is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It packages the pytest-with-uv workflow described above (running tests, writing tests, managing fixtures and parametrization, generating coverage reports, testing FastAPI applications, debugging failures, and configuring pytest) so the agent applies it consistently on every run.


Users should expect more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.

Practical example

Example input

Use the "pytest-mastery" skill to help with this workflow task. Context: Python testing with pytest using uv package manager. Use when: (1) Running Python tests, (2) Writing test files or test functions, (3) Setting up fixtures, (4) Parametrizing tests, (5) Generating coverage reports, (6) Testing FastAPI applications, (7) Debugging test failures, (8) Configuring pytest options. Triggers: "run tests", "write tests", "test coverage", "pytest", "unit test", "integration test", "test FastAPI".

Example output

A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.

When to use this skill

  • Use this skill when you want a reusable workflow rather than writing the same prompt again and again.

When not to use this skill

  • Do not use this when you only need a one-off answer and do not need a reusable workflow.
  • Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/pytest-mastery/SKILL.md --create-dirs "https://raw.githubusercontent.com/aiskillstore/marketplace/main/skills/abdullahmalik17/pytest-mastery/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/pytest-mastery/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How pytest-mastery Compares

| Feature | pytest-mastery | Standard Approach |
|---------|----------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Python testing with pytest using uv package manager. Use when: (1) Running Python tests, (2) Writing test files or test functions, (3) Setting up fixtures, (4) Parametrizing tests, (5) Generating coverage reports, (6) Testing FastAPI applications, (7) Debugging test failures, (8) Configuring pytest options. Triggers: "run tests", "write tests", "test coverage", "pytest", "unit test", "integration test", "test FastAPI".

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# pytest Testing with uv

## Quick Reference

```bash
# Run all tests
uv run pytest

# Run with verbose output
uv run pytest -v

# Run specific file
uv run pytest tests/test_example.py

# Run specific test function
uv run pytest tests/test_example.py::test_function_name

# Run tests matching pattern
uv run pytest -k "pattern"

# Run with coverage
uv run pytest --cov=src --cov-report=html
```

## Installation

```bash
# Add pytest as dev dependency
uv add --dev pytest

# Add coverage support
uv add --dev pytest-cov

# Add async support (for FastAPI)
uv add --dev pytest-asyncio httpx
```
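
To confirm the dev dependencies resolved into the project environment:

```bash
uv run pytest --version
```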

## Test Discovery

pytest automatically discovers tests following these conventions:

- Files: `test_*.py` or `*_test.py`
- Functions: `test_*`
- Classes: `Test*` (no `__init__` method)
- Methods: `test_*` inside `Test*` classes
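
A minimal file pytest would collect under these conventions (the names here are illustrative):

```python
# tests/test_discovery.py
def test_addition():              # collected: function name starts with test_
    assert 1 + 1 == 2

class TestMath:                   # collected: class name starts with Test, no __init__
    def test_multiplication(self):
        assert 2 * 3 == 6
```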

Standard project structure:
```
project/
├── src/
│   └── myapp/
├── tests/
│   ├── __init__.py
│   ├── conftest.py      # Shared fixtures
│   ├── test_unit.py
│   └── integration/
│       └── test_api.py
└── pyproject.toml
```
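
With this src/ layout, tests import `myapp` from the project environment that `uv run` provides. If collection fails with import errors, one common fix is pytest's `pythonpath` option (available since pytest 7):

```toml
[tool.pytest.ini_options]
pythonpath = ["src"]
```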

## Fixtures

Fixtures provide reusable test setup/teardown:

```python
import pytest

@pytest.fixture
def sample_user():
    return {"id": 1, "name": "Test User"}

@pytest.fixture
def db_connection():
    conn = create_connection()  # placeholder: your own connection setup
    yield conn  # Test runs here
    conn.close()  # Teardown

def test_user_name(sample_user):
    assert sample_user["name"] == "Test User"
```

### Fixture Scopes

```python
@pytest.fixture(scope="function")  # Default: new instance per test
@pytest.fixture(scope="class")     # Once per test class
@pytest.fixture(scope="module")    # Once per module
@pytest.fixture(scope="session")   # Once per test session
```
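
Scope determines how often setup runs. A sketch of a module-scoped fixture (names are illustrative): both tests below receive the same instance, so the setup cost is paid once per module.

```python
import pytest

@pytest.fixture(scope="module")
def expensive_resource():
    resource = {"created": True}  # built once for the whole module
    yield resource

def test_first(expensive_resource):
    assert expensive_resource["created"]

def test_second(expensive_resource):
    # Same object as in test_first, because scope="module"
    assert expensive_resource["created"]
```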

### Shared Fixtures (conftest.py)

Place in `tests/conftest.py` for automatic availability:

```python
# tests/conftest.py
import pytest
from fastapi.testclient import TestClient

from myapp.main import app  # adjust to your application's import path

@pytest.fixture
def api_client():
    return TestClient(app)
```

## Parametrization

Run same test with multiple inputs:

```python
import pytest

@pytest.mark.parametrize("input,expected", [
    (1, 2),
    (2, 4),
    (3, 6),
])
def test_double(input, expected):
    assert input * 2 == expected

@pytest.mark.parametrize("value", [None, "", [], {}])
def test_falsy_values(value):
    assert not value
```
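
Stacking `parametrize` decorators runs the cross-product of the cases; a short sketch:

```python
import pytest

# 2 x 2 = 4 test cases: (0, 10), (0, 20), (1, 10), (1, 20)
@pytest.mark.parametrize("x", [0, 1])
@pytest.mark.parametrize("y", [10, 20])
def test_combinations(x, y):
    assert x + y >= 10
```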

## Common Options

| Option | Description |
|--------|-------------|
| `-v` | Verbose output |
| `-vv` | More verbose |
| `-q` | Quiet mode |
| `-x` | Stop on first failure |
| `--lf` | Run last failed tests only |
| `--ff` | Run failures first |
| `-k "expr"` | Filter by name expression |
| `-m "mark"` | Run marked tests only |
| `--tb=short` | Shorter traceback |
| `--tb=no` | No traceback |
| `-s` | Show print statements |
| `--durations=10` | Show 10 slowest tests |
| `-n auto` | Parallel execution (pytest-xdist) |
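
These flags combine; a typical debugging loop:

```bash
# Quiet output, stop at the first failure, re-run only what failed last time
uv run pytest -q -x --lf
```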

## Coverage Reports

```bash
# Terminal report
uv run pytest --cov=src

# HTML report (creates htmlcov/)
uv run pytest --cov=src --cov-report=html

# With minimum threshold (fails if below)
uv run pytest --cov=src --cov-fail-under=80

# Multiple report formats
uv run pytest --cov=src --cov-report=term --cov-report=xml
```

## Markers

```python
import pytest

@pytest.mark.slow
def test_slow_operation():
    ...

@pytest.mark.skip(reason="Not implemented")
def test_future_feature():
    ...

@pytest.mark.skipif(condition, reason="...")
def test_conditional():
    ...

@pytest.mark.xfail(reason="Known bug")
def test_known_failure():
    ...
```

Register custom markers such as `slow` and `integration` in `pyproject.toml` (see the configuration section below) so pytest does not warn about unknown marks. Run by marker:
```bash
uv run pytest -m "not slow"
uv run pytest -m "integration"
```

## pyproject.toml Configuration

```toml
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
python_functions = ["test_*"]
addopts = "-v --tb=short"
markers = [
    "slow: marks tests as slow",
    "integration: integration tests",
]

[tool.coverage.run]
source = ["src"]
omit = ["tests/*", "*/__init__.py"]

[tool.coverage.report]
exclude_lines = [
    "pragma: no cover",
    "if TYPE_CHECKING:",
]
```

## FastAPI Testing

See [references/fastapi-testing.md](references/fastapi-testing.md) for comprehensive FastAPI testing patterns including:
- TestClient setup
- Async testing with httpx
- Database fixture patterns
- Dependency overrides
- Authentication testing
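
For orientation before the full reference, a minimal synchronous TestClient sketch (the app and `/health` route here are illustrative):

```python
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/health")
def health():
    return {"status": "ok"}

client = TestClient(app)

def test_health():
    response = client.get("/health")
    assert response.status_code == 200
    assert response.json() == {"status": "ok"}
```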

## Debugging Failed Tests

```bash
# Run with full traceback
uv run pytest --tb=long

# Drop into debugger on failure
uv run pytest --pdb

# Show local variables in traceback
uv run pytest -l

# Run only previously failed
uv run pytest --lf
```

Related Skills

All of the following are from aiskillstore/marketplace.

feedback-mastery
Navigate difficult conversations and deliver constructive feedback using structured frameworks. Covers the Preparation-Delivery-Follow-up model and the Situation-Behavior-Impact (SBI) feedback technique. Use when preparing for difficult conversations, giving feedback, or managing conflicts.

javascript-mastery
Comprehensive JavaScript reference covering 33+ essential concepts every developer should know, from fundamentals like primitives and closures to advanced patterns like async/await and functional programming. Use when explaining JS concepts, debugging JavaScript issues, or teaching JavaScript fundamentals.

tailwind-css-v4-mastery
Expert guidance for leveraging Tailwind CSS V4's new Oxide engine, CSS-first configuration, and modern styling paradigms. Turns Claude into a Tailwind V4 architecture specialist for designing component systems, optimizing performance, and executing complex styling challenges.

pytest
Python testing framework for writing simple, scalable, and powerful tests.

pytest-recording
Work with pytest-recording (VCR.py) for recording and replaying HTTP interactions in tests. Use when writing VCR tests, managing cassettes, configuring VCR options, filtering sensitive data, or debugging recorded HTTP responses.

pytest-mock-guide
Guide for using the pytest-mock plugin to write tests with mocking. Use when writing pytest tests that need mocking, patching, spying, or stubbing. Covers mocker fixture usage, patch methods, spy/stub patterns, and assertion helpers.

fastapi-mastery
Comprehensive FastAPI development skill covering REST API creation, routing, request/response handling, validation, authentication, database integration, middleware, and deployment. Use when building APIs, implementing CRUD operations, setting up authentication/authorization, integrating SQL or NoSQL databases, adding middleware, handling WebSockets, or deploying FastAPI applications.

tdd-pytest
Python/pytest TDD specialist for test-driven development workflows. Use when writing tests, auditing test quality, running pytest, or generating test reports. Integrates with uv and pyproject.toml configuration.

python-pytest-patterns
pytest testing patterns for Python. Triggers on: pytest, fixture, mark, parametrize, mock, conftest, test coverage, unit test, integration test, pytest.raises.

azure-quotas
Check and manage Azure quotas and usage across providers, for deployment planning, capacity validation, and region selection. Use when checking quotas, service limits, current usage, regional availability, or requesting quota increases.

raindrop-io
Manage Raindrop.io bookmarks with AI assistance: save and organize bookmarks, search your collection, manage reading lists, and organize research materials.

zlibrary-to-notebooklm
Automatically downloads books from Z-Library and uploads them to Google NotebookLM. Supports PDF/EPUB formats, automatic conversion, and one-click knowledge-base creation.