python-pytest-patterns
pytest testing patterns for Python. Triggers on: pytest, fixture, mark, parametrize, mock, conftest, test coverage, unit test, integration test, pytest.raises.
Best use case
python-pytest-patterns is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It is especially useful for teams that repeatedly write pytest-focused prompts and want consistent patterns for fixtures, parametrization, mocking, and coverage.
Users should expect more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.
Practical example
Example input
Use the "python-pytest-patterns" skill to help with this workflow task. Context: pytest testing patterns for Python. Triggers on: pytest, fixture, mark, parametrize, mock, conftest, test coverage, unit test, integration test, pytest.raises.
Example output
A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.
When to use this skill
- Use this skill when you want a reusable workflow rather than writing the same prompt again and again.
When not to use this skill
- Do not use this when you only need a one-off answer and do not need a reusable workflow.
- Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in
`.claude/skills/python-pytest-patterns/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How python-pytest-patterns Compares
| Feature / Agent | python-pytest-patterns | Standard Approach |
|---|---|---|
| Platform Support | Claude Code, Cursor, Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Single SKILL.md file | N/A |
Frequently Asked Questions
What does this skill do?
It provides modern pytest testing patterns for Python: fixtures, parametrization, markers, mocking, conftest.py organization, and exception testing with pytest.raises.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Python pytest Patterns
Modern pytest patterns for effective testing.
## Basic Test Structure
```python
import pytest

def test_basic():
    """Simple assertion test."""
    assert 1 + 1 == 2

def test_with_description():
    """Descriptive name and docstring."""
    result = calculate_total([1, 2, 3])
    assert result == 6, "Sum should equal 6"
```
## Fixtures
```python
import pytest

@pytest.fixture
def sample_user():
    """Create test user."""
    return {"id": 1, "name": "Test User"}

@pytest.fixture
def db_connection():
    """Fixture with setup and teardown."""
    conn = create_connection()
    yield conn
    conn.close()

def test_user(sample_user):
    """Fixtures injected by name."""
    assert sample_user["name"] == "Test User"
```
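Built-in fixtures follow the same injection-by-name rule. A minimal sketch using pytest's built-in `tmp_path` fixture (the `write_report` helper is hypothetical):

```python
def write_report(path, lines):
    """Hypothetical helper under test: writes one line per entry."""
    path.write_text("\n".join(lines))

def test_write_report(tmp_path):
    # tmp_path is a built-in pytest fixture: a fresh pathlib.Path
    # to a unique temporary directory for each test
    out = tmp_path / "report.txt"
    write_report(out, ["alpha", "beta"])
    assert out.read_text() == "alpha\nbeta"
```

Because `tmp_path` is per-test, tests never see each other's files and no manual cleanup is needed.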
### Fixture Scopes
```python
@pytest.fixture(scope="function") # Default - per test
@pytest.fixture(scope="class") # Per test class
@pytest.fixture(scope="module") # Per test file
@pytest.fixture(scope="session") # Entire test run
```
## Parametrize
```python
@pytest.mark.parametrize("value,expected", [
    (1, 2),
    (2, 4),
    (3, 6),
])
def test_double(value, expected):
    assert double(value) == expected

# Stacking parametrize produces the cross product
@pytest.mark.parametrize("x", [1, 2])
@pytest.mark.parametrize("y", [10, 20])
def test_multiply(x, y):  # 4 test combinations
    assert x * y > 0
```
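Individual cases can carry readable ids and per-case marks via `pytest.param`. A sketch (the `double` helper is assumed to be the function under test from above):

```python
import pytest

def double(x):
    return x * 2

@pytest.mark.parametrize("value,expected", [
    pytest.param(1, 2, id="one"),
    pytest.param(0, 0, id="zero"),
    # A single case can carry its own marker, e.g. a custom "slow" mark
    pytest.param(10**6, 2 * 10**6, id="large", marks=pytest.mark.slow),
])
def test_double_ids(value, expected):
    assert double(value) == expected
```

The ids show up in `pytest -v` output and can be selected with `-k`, e.g. `pytest -k "zero"`.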
## Exception Testing
```python
def test_raises():
    with pytest.raises(ValueError) as exc_info:
        raise ValueError("Invalid input")
    assert "Invalid" in str(exc_info.value)

def test_raises_match():
    with pytest.raises(ValueError, match=r".*[Ii]nvalid.*"):
        raise ValueError("Invalid input")
```
## Markers
```python
import sys

import pytest

@pytest.mark.skip(reason="Not implemented yet")
def test_future_feature():
    pass

@pytest.mark.skipif(sys.platform == "win32", reason="Unix only")
def test_unix_feature():
    pass

@pytest.mark.xfail(reason="Known bug")
def test_buggy():
    assert broken_function() == expected

@pytest.mark.slow
def test_performance():
    """Custom marker - register in pytest.ini."""
    pass
```
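Custom markers such as `slow` above should be registered, otherwise pytest emits an unknown-mark warning. A minimal `pytest.ini` sketch:

```ini
[pytest]
markers =
    slow: marks tests as slow (deselect with -m "not slow")
```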
## Mocking
```python
from unittest.mock import Mock, patch

def test_with_mock():
    mock_api = Mock()
    mock_api.get.return_value = {"status": "ok"}
    result = mock_api.get("/endpoint")
    assert result["status"] == "ok"

@patch("module.external_api")
def test_with_patch(mock_api):
    mock_api.return_value = {"data": []}
    result = function_using_api()
    mock_api.assert_called_once()
```
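`side_effect` makes a mock raise or return different values on successive calls, which is useful for testing retry logic. A sketch with a hypothetical `fetch_with_retry` helper:

```python
from unittest.mock import Mock

def fetch_with_retry(api, retries=2):
    """Hypothetical helper under test: retries transient failures."""
    for attempt in range(retries):
        try:
            return api.get("/endpoint")
        except ConnectionError:
            if attempt == retries - 1:
                raise

def test_retry_on_transient_error():
    api = Mock()
    # First call raises; second call returns a value
    api.get.side_effect = [ConnectionError("down"), {"status": "ok"}]
    assert fetch_with_retry(api)["status"] == "ok"
    assert api.get.call_count == 2
```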
### pytest-mock (Recommended)
```python
def test_with_mocker(mocker):
    mock_api = mocker.patch("module.api_call")
    mock_api.return_value = {"success": True}
    result = process_data()
    assert result["success"]
```
## conftest.py
```python
# tests/conftest.py - Shared fixtures
import pytest

@pytest.fixture(scope="session")
def app():
    """Application fixture available to all tests."""
    return create_app(testing=True)

@pytest.fixture
def client(app):
    """Test client fixture."""
    return app.test_client()
```
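A conftest.py in a subdirectory can override a same-named fixture from a parent conftest; tests in that directory then receive the overriding version. A sketch (the `make_settings` helper and file layout are hypothetical):

```python
# tests/integration/conftest.py
import pytest

def make_settings(integration):
    """Hypothetical helper so the override is easy to see."""
    return {"integration": integration}

@pytest.fixture(scope="session")
def settings():
    # Overrides a parent conftest's `settings` fixture
    # for tests under tests/integration/ only
    return make_settings(integration=True)
```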
## Quick Reference
| Command | Description |
|---------|-------------|
| `pytest` | Run all tests |
| `pytest -v` | Verbose output |
| `pytest -x` | Stop on first failure |
| `pytest -k "test_name"` | Run matching tests |
| `pytest -m slow` | Run marked tests |
| `pytest --lf` | Rerun last failed |
| `pytest --cov=src` | Coverage report |
| `pytest -n auto` | Parallel (pytest-xdist) |
## Additional Resources
- `./references/fixtures-advanced.md` - Factory fixtures, autouse, conftest patterns
- `./references/mocking-patterns.md` - Mock, patch, MagicMock, side_effect
- `./references/async-testing.md` - pytest-asyncio patterns
- `./references/coverage-strategies.md` - pytest-cov, branch coverage, reports
- `./references/integration-testing.md` - Database fixtures, API testing, testcontainers
- `./references/property-testing.md` - Hypothesis framework, strategies, shrinking
- `./references/test-architecture.md` - Test pyramid, organization, isolation strategies
## Scripts
- `./scripts/run-tests.sh` - Run tests with recommended options
- `./scripts/generate-conftest.sh` - Generate conftest.py boilerplate
## Assets
- `./assets/pytest.ini.template` - Recommended pytest configuration
- `./assets/conftest.py.template` - Common fixture patterns
---
## See Also
**Related Skills:**
- `python-typing-patterns` - Type-safe test code
- `python-async-patterns` - Async test patterns (pytest-asyncio)
**Testing specific frameworks:**
- `python-fastapi-patterns` - TestClient, API testing
- `python-database-patterns` - Database fixtures, transactions

Related Skills
python-design-patterns
Python design patterns including KISS, Separation of Concerns, Single Responsibility, and composition over inheritance. Use when making architecture decisions, refactoring code structure, or evaluating when abstractions are appropriate.
design-system-patterns
Build scalable design systems with design tokens, theming infrastructure, and component architecture patterns. Use when creating design tokens, implementing theme switching, building component libraries, or establishing design system foundations.
vercel-composition-patterns
React composition patterns that scale. Use when refactoring components with boolean prop proliferation, building flexible component libraries, or designing reusable APIs. Triggers on tasks involving compound components, render props, context providers, or component architecture.
ui-component-patterns
Build reusable, maintainable UI components following modern design patterns. Use when creating component libraries, implementing design systems, or building scalable frontend architectures. Handles React patterns, composition, prop design, TypeScript, and component best practices.
zapier-make-patterns
No-code automation democratizes workflow building. Zapier and Make (formerly Integromat) let non-developers automate business processes without writing code. But no-code doesn't mean no-complexity: these platforms have their own patterns, pitfalls, and breaking points. This skill covers when to use which platform, how to build reliable automations, and when to graduate to code-based solutions. Key insight: Zapier optimizes for simplicity and integrations (7000+ apps); Make optimizes for power.
workflow-patterns
Use this skill when implementing tasks according to Conductor's TDD workflow, handling phase checkpoints, managing git commits for tasks, or understanding the verification protocol.
workflow-orchestration-patterns
Design durable workflows with Temporal for distributed systems. Covers workflow vs activity separation, saga patterns, state management, and determinism constraints. Use when building long-running processes, distributed transactions, or microservice orchestration.
wcag-audit-patterns
Conduct WCAG 2.2 accessibility audits with automated testing, manual verification, and remediation guidance. Use when auditing websites for accessibility, fixing WCAG violations, or implementing accessible design patterns.
unity-ecs-patterns
Master Unity ECS (Entity Component System) with DOTS, Jobs, and Burst for high-performance game development. Use when building data-oriented games, optimizing performance, or working with large entity counts.
temporal-python-testing
Test Temporal workflows with pytest, time-skipping, and mocking strategies. Covers unit testing, integration testing, replay testing, and local development setup. Use when implementing Temporal workflow tests or debugging test failures.
temporal-python-pro
Master Temporal workflow orchestration with Python SDK. Implements durable workflows, saga patterns, and distributed transactions. Covers async/await, testing strategies, and production deployment. Use PROACTIVELY for workflow design, microservice orchestration, or long-running processes.
stride-analysis-patterns
Apply STRIDE methodology to systematically identify threats. Use when analyzing system security, conducting threat modeling sessions, or creating security documentation.