pytest
Python testing framework for writing simple, scalable, and powerful tests
Best use case
pytest is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Users should expect more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.
Practical example
Example input
Use the "pytest" skill to help with this workflow task. Context: Python testing framework for writing simple, scalable, and powerful tests
Example output
A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.
When to use this skill
- Use this skill when you want a reusable workflow rather than writing the same prompt again and again.
When not to use this skill
- Do not use this when you only need a one-off answer and do not need a reusable workflow.
- Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/pytest/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How pytest Compares
| Feature | pytest | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
It gives your AI agent structured guidance for pytest, a Python testing framework for writing simple, scalable, and powerful tests.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Pytest Testing Framework
Pytest is a mature Python testing framework that makes it easy to write small tests while scaling to support complex functional testing.
## Quick Start
### Basic Test Structure
```python
# test_example.py

def test_addition():
    assert 2 + 2 == 4

def test_string_operations():
    assert "hello".upper() == "HELLO"
    assert "world" in "hello world"
```
### Running Tests
```bash
# Run all tests
uv run pytest
# Run with verbose output
uv run pytest -v
# Run specific test file
uv run pytest test_example.py
# Run specific test function
uv run pytest test_example.py::test_addition
```
## Common Patterns
### Fixtures
**Basic fixture definition:**
```python
import pytest

@pytest.fixture
def sample_data():
    return {"name": "Alice", "age": 30}

def test_user_data(sample_data):
    assert sample_data["name"] == "Alice"
    assert sample_data["age"] == 30
```
**Fixture with setup and teardown:**
```python
@pytest.fixture
def database_connection():
    # Setup: runs before the test
    conn = create_database_connection()
    yield conn
    # Teardown: runs after the test finishes, even if it failed
    conn.close()

def test_database_query(database_connection):
    result = database_connection.query("SELECT * FROM users")
    assert len(result) > 0
```
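**Built-in fixtures:** pytest also ships fixtures you can request by name just like the custom ones above; a minimal sketch using the built-in `tmp_path`:

```python
def test_writes_report(tmp_path):
    # tmp_path is a built-in pytest fixture: a pathlib.Path pointing
    # to a fresh temporary directory created for each test
    report = tmp_path / "report.txt"
    report.write_text("done")
    assert report.read_text() == "done"
```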
**Fixture scopes:**
```python
@pytest.fixture(scope="function")  # Default: created once per test
def temp_file():
    pass

@pytest.fixture(scope="module")  # Created once per module
def module_resource():
    pass

@pytest.fixture(scope="session")  # Created once per test session
def session_resource():
    pass
```
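**Session scope in practice** (an illustrative sketch; the counter just stands in for an expensive resource such as a database):

```python
import pytest

_created = {"count": 0}

@pytest.fixture(scope="session")
def shared_resource():
    # With scope="session" this body runs once; both tests below
    # receive the same value
    _created["count"] += 1
    return _created["count"]

def test_first_use(shared_resource):
    assert shared_resource == 1

def test_second_use(shared_resource):
    # Still 1: the fixture was not re-created for this test
    assert shared_resource == 1
```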
### Parametrization
**Basic parametrization:**
```python
@pytest.mark.parametrize("input,expected", [
    ("3+5", 8),
    ("2+4", 6),
    ("6*9", 54),
])
def test_eval(input, expected):
    assert eval(input) == expected
```
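Individual cases can also carry readable IDs or marks via `pytest.param`; a sketch (the expressions are arbitrary, and `slow` is the custom marker registered later in this document):

```python
@pytest.mark.parametrize("expr,expected", [
    pytest.param("3+5", 8, id="addition"),
    pytest.param("2*3", 6, id="multiplication"),
    pytest.param("10/4", 2.5, id="division", marks=pytest.mark.slow),
])
def test_eval_with_ids(expr, expected):
    assert eval(expr) == expected
```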
**Parametrized fixtures:**
```python
@pytest.fixture(params=["mysql", "postgresql", "sqlite"])
def database(request):
    if request.param == "mysql":
        return MySQLConnection()
    elif request.param == "postgresql":
        return PostgreSQLConnection()
    else:
        return SQLiteConnection()

def test_database_operations(database):
    # Test runs 3 times, once for each database type
    result = database.execute("SELECT 1")
    assert result == 1
```
**Stacking parametrization for combinatorial testing:**
```python
@pytest.mark.parametrize("x", [0, 1])
@pytest.mark.parametrize("y", [2, 3])
def test_combinations(x, y):
# Runs 4 times: (0,2), (0,3), (1,2), (1,3)
assert x + y > 1
```
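A related pattern is indirect parametrization, where each parameter value is routed through a fixture before reaching the test. A sketch, with a hypothetical `make_user` helper standing in for real construction logic:

```python
import pytest

def make_user(role):
    # Hypothetical helper, not part of the skill
    return {"role": role, "active": True}

@pytest.fixture
def user(request):
    # With indirect=True, request.param carries each value
    # from the parametrize list below
    return make_user(request.param)

@pytest.mark.parametrize("user", ["admin", "guest"], indirect=True)
def test_user_is_active(user):
    assert user["active"]
```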
### Async Testing
**Basic async test:**
```python
import pytest

@pytest.mark.asyncio
async def test_async_function():
    result = await async_operation()
    assert result is not None
```
**Async fixtures:**
```python
@pytest.fixture
async def async_client():
    client = AsyncClient()
    await client.connect()
    yield client
    await client.disconnect()

@pytest.mark.asyncio
async def test_async_api(async_client):
    response = await async_client.get("/api/data")
    assert response.status_code == 200
```
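Note that recent pytest-asyncio releases default to strict mode, where async fixtures must be declared with `pytest_asyncio.fixture` rather than `pytest.fixture`. A sketch of the strict-mode equivalent (`AsyncClient` is the same placeholder as above):

```python
import pytest
import pytest_asyncio

@pytest_asyncio.fixture
async def async_client():
    client = AsyncClient()  # placeholder client from the example above
    await client.connect()
    yield client
    await client.disconnect()

@pytest.mark.asyncio
async def test_async_api_strict(async_client):
    response = await async_client.get("/api/data")
    assert response.status_code == 200
```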
### Test Organization
**Using conftest.py for shared fixtures:**
```python
# conftest.py
import pytest

@pytest.fixture
def authenticated_client():
    client = create_test_client()
    client.login("testuser", "password")
    return client

@pytest.fixture(scope="session")
def test_database():
    db = create_test_database()
    yield db
    db.cleanup()
```
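Fixtures defined in `conftest.py` are discovered automatically: test files in the same directory (or below it) can request them without any import. For example, assuming the fixtures sketched above:

```python
# test_profile.py -- no import of conftest needed
def test_profile_requires_login(authenticated_client):
    response = authenticated_client.get("/profile")
    assert response.status_code == 200
```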
**Test classes:**
```python
class TestUserAPI:
    def test_create_user(self, authenticated_client):
        response = authenticated_client.post("/users", json={"name": "John"})
        assert response.status_code == 201

    def test_get_user(self, authenticated_client):
        user_id = create_test_user()
        response = authenticated_client.get(f"/users/{user_id}")
        assert response.status_code == 200
```
### Mocking and Patching
**Using monkeypatch fixture:**
```python
def test_environment_variable(monkeypatch):
    monkeypatch.setenv("API_KEY", "test-key")
    assert get_api_key() == "test-key"

def test_file_operations(monkeypatch, tmp_path):
    test_file = tmp_path / "test.txt"
    test_file.write_text("test content")
    monkeypatch.setattr("module.FILE_PATH", str(test_file))
    assert read_file_content() == "test content"
```
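For replacing callables with full mock objects (recorded calls, canned return values), the standard library's `unittest.mock` combines naturally with pytest. A hedged sketch; `myapp.api` and `fetch_status` are illustrative names, not part of the skill:

```python
from unittest.mock import patch

def test_fetch_status_mocked():
    # Patch requests.get as seen from inside the hypothetical myapp.api module
    with patch("myapp.api.requests.get") as mock_get:
        mock_get.return_value.status_code = 200
        mock_get.return_value.json.return_value = {"ok": True}

        from myapp.api import fetch_status  # hypothetical function under test
        assert fetch_status() == {"ok": True}
        mock_get.assert_called_once()
```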
### Markers and Selection
**Custom markers:**
```ini
# pytest.ini
[pytest]
markers =
    slow: marks tests as slow
    integration: marks tests as integration tests
    unit: marks tests as unit tests
```

```python
# test_file.py
@pytest.mark.slow
def test_expensive_operation():
    pass

@pytest.mark.integration
def test_database_integration():
    pass
```
**Running tests by marker:**
```bash
# Run only unit tests
uv run pytest -m unit
# Skip slow tests
uv run pytest -m "not slow"
# Run integration or unit tests
uv run pytest -m "integration or unit"
```
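Beyond custom markers, pytest ships built-in markers for skipping tests and recording expected failures; a brief sketch (`buggy_function` is a placeholder):

```python
import sys
import pytest

@pytest.mark.skip(reason="not implemented yet")
def test_future_feature():
    ...

@pytest.mark.skipif(sys.version_info < (3, 11), reason="requires Python 3.11+")
def test_new_syntax():
    ...

@pytest.mark.xfail(reason="known bug, tracked separately")
def test_known_bug():
    assert buggy_function() == "expected"  # placeholder callable
```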
## Practical Code Snippets
### API Testing
```python
import pytest
from fastapi.testclient import TestClient
from myapp import app

@pytest.fixture
def client():
    return TestClient(app)

@pytest.fixture
def test_user():
    return {"username": "testuser", "email": "test@example.com"}

def test_create_user(client, test_user):
    response = client.post("/users/", json=test_user)
    assert response.status_code == 201
    assert response.json()["username"] == test_user["username"]

def test_get_user(client, test_user):
    # Create user first
    create_response = client.post("/users/", json=test_user)
    user_id = create_response.json()["id"]
    # Get user
    response = client.get(f"/users/{user_id}")
    assert response.status_code == 200
    assert response.json()["email"] == test_user["email"]
```
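For async endpoints, `httpx` (listed in the requirements below) can drive the app directly through its ASGI transport, with no running server. A sketch assuming the same `myapp.app` and pytest-asyncio:

```python
import httpx
import pytest
from myapp import app

@pytest.mark.asyncio
async def test_create_user_async():
    # ASGITransport routes requests straight to the ASGI app in-process
    transport = httpx.ASGITransport(app=app)
    async with httpx.AsyncClient(transport=transport, base_url="http://test") as client:
        response = await client.post(
            "/users/", json={"username": "testuser", "email": "test@example.com"}
        )
    assert response.status_code == 201
```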
### Database Testing
```python
import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from myapp.models import Base, User  # your declarative base and model

@pytest.fixture(scope="function")
def test_db():
    engine = create_engine("sqlite:///:memory:")
    Session = sessionmaker(bind=engine)
    Base.metadata.create_all(engine)
    session = Session()
    yield session
    session.close()

def test_user_creation(test_db):
    user = User(name="John", email="john@example.com")
    test_db.add(user)
    test_db.commit()
    retrieved_user = test_db.query(User).filter_by(name="John").first()
    assert retrieved_user.email == "john@example.com"
```
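When tests share a persistent database (rather than a fresh in-memory one per test), a common variation wraps each test in a transaction and rolls it back afterwards so tests cannot see each other's writes. A sketch under that assumption; full isolation across `session.commit()` calls additionally needs the SAVEPOINT recipe from the SQLAlchemy docs:

```python
import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

@pytest.fixture
def rollback_session():
    engine = create_engine("sqlite:///:memory:")  # stand-in for a shared DB URL
    Base.metadata.create_all(engine)  # Base: the declarative base imported above
    connection = engine.connect()
    transaction = connection.begin()  # outer transaction owned by the fixture
    session = sessionmaker(bind=connection)()
    yield session
    session.close()
    transaction.rollback()  # discard everything the test wrote
    connection.close()
```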
### Error Handling Testing
```python
import pytest

def test_invalid_input_raises_error():
    with pytest.raises(ValueError, match="Invalid input"):
        process_input("invalid")

def test_file_not_found():
    with pytest.raises(FileNotFoundError):
        read_nonexistent_file()

def test_custom_exception():
    with pytest.raises(CustomAPIError) as exc_info:
        call_api_endpoint()
    assert exc_info.value.status_code == 404
    assert "not found" in str(exc_info.value)
```
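The same context-manager style works for warnings via `pytest.warns`; a short sketch with a hypothetical deprecated `old_function`:

```python
def test_deprecation_warning():
    with pytest.warns(DeprecationWarning, match="use new_function"):
        old_function()  # hypothetical deprecated callable
```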
## Requirements
- Python 3.7+
- pytest (`uv add --dev pytest`)
- For async testing: pytest-asyncio (`uv add --dev pytest-asyncio`)
- For API testing: a web framework test client (e.g., `uv add --dev httpx` for async HTTP tests)

Related Skills
pytest-recording
Work with pytest-recording (VCR.py) for recording and replaying HTTP interactions in tests. Use when writing VCR tests, managing cassettes, configuring VCR options, filtering sensitive data, or debugging recorded HTTP responses.
pytest-mock-guide
Guide for using pytest-mock plugin to write tests with mocking. Use when writing pytest tests that need mocking, patching, spying, or stubbing. Covers mocker fixture usage, patch methods, spy/stub patterns, and assertion helpers.
pytest-mastery
Python testing with pytest using uv package manager. Use when: (1) Running Python tests, (2) Writing test files or test functions, (3) Setting up fixtures, (4) Parametrizing tests, (5) Generating coverage reports, (6) Testing FastAPI applications, (7) Debugging test failures, (8) Configuring pytest options. Triggers: "run tests", "write tests", "test coverage", "pytest", "unit test", "integration test", "test FastAPI".
tdd-pytest
Python/pytest TDD specialist for test-driven development workflows. Use when writing tests, auditing test quality, running pytest, or generating test reports. Integrates with uv and pyproject.toml configuration.
python-pytest-patterns
pytest testing patterns for Python. Triggers on: pytest, fixture, mark, parametrize, mock, conftest, test coverage, unit test, integration test, pytest.raises.
obsidian-helper
Obsidian 智能笔记助手。当用户提到 obsidian、日记、笔记、知识库、capture、review 时激活。 【激活后必须执行】: 1. 先完整阅读本 SKILL.md 文件 2. 理解 AI 写入三条硬规矩(00_Inbox/AI/、追加式、白名单字段) 3. 按 STEP 0 → STEP 1 → ... 顺序执行 4. 不要跳过任何步骤,不要自作主张 【禁止行为】: - 禁止不读 SKILL.md 就开始工作 - 禁止跳过用户确认步骤 - 禁止在非 00_Inbox/AI/ 位置创建新笔记(除非用户明确指定)