pytest-recording
Work with pytest-recording (VCR.py) for recording and replaying HTTP interactions in tests. Use when writing VCR tests, managing cassettes, configuring VCR options, filtering sensitive data, or debugging recorded HTTP responses.
Best use case
pytest-recording is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using pytest-recording should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/pytest-recording/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
Work with pytest-recording (VCR.py) for recording and replaying HTTP interactions in tests. Use when writing VCR tests, managing cassettes, configuring VCR options, filtering sensitive data, or debugging recorded HTTP responses.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# pytest-recording (VCR.py) Testing
## Overview
pytest-recording wraps VCR.py to record HTTP interactions as YAML cassettes, enabling deterministic tests without live API calls.
## Quick Reference
### Running Tests
```bash
# Run all tests (uses existing cassettes)
uv run pytest tests/

# Run a single test
uv run pytest tests/test_module.py::test_function

# Rewrite all cassettes with fresh responses
uv run pytest tests/ --record-mode=rewrite

# Record only missing cassettes
uv run pytest tests/ --record-mode=new_episodes

# Disable VCR (make live requests)
uv run pytest tests/ --disable-recording
```
### Recording Modes
| Mode | Flag | Behavior |
|------|------|----------|
| `none` | `--record-mode=none` (default) | Only replay, fail if no cassette |
| `once` | `--record-mode=once` | Record if no cassette exists, then replay |
| `new_episodes` | `--record-mode=new_episodes` | Record new requests, keep existing |
| `all` | `--record-mode=all` | Always record, overwrite existing |
| `rewrite` | `--record-mode=rewrite` | Delete and re-record all cassettes |
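The `vcr` marker also accepts keyword arguments that are merged over the `vcr_config` fixture, so a single test can override the suite-wide record mode. A sketch; the test name and endpoint are hypothetical:

```python
import pytest

# Keyword arguments on the marker are merged over the vcr_config
# fixture, so this one test records new interactions while the
# rest of the suite keeps the suite-wide mode.
@pytest.mark.vcr(record_mode="new_episodes")
def test_unstable_endpoint():
    ...
```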
### Writing VCR Tests
Basic test with VCR:
```python
import pytest

@pytest.mark.vcr()
def test_api_call():
    response = my_api_function()
    assert response.status_code == 200
```
Custom cassette name:
```python
@pytest.mark.vcr("custom_cassette_name.yaml")
def test_with_custom_cassette():
    pass
```
Multiple cassettes:
```python
@pytest.mark.vcr("cassette1.yaml", "cassette2.yaml")
def test_with_multiple_cassettes():
    pass
```
### VCR Configuration in conftest.py
The `vcr_config` fixture controls VCR behavior:
```python
import pytest

@pytest.fixture(scope="module")
def vcr_config():
    return {
        # Filter sensitive headers from recordings
        "filter_headers": ["authorization", "api-key", "x-api-key"],
        # Filter query parameters
        "filter_query_parameters": ["key", "api_key", "token"],
        # Match requests by these criteria
        "match_on": ["method", "scheme", "host", "port", "path", "query"],
        # Ignore certain hosts (don't record)
        "ignore_hosts": ["localhost", "127.0.0.1"],
        # Record mode
        "record_mode": "once",
    }
```
### Filtering Sensitive Data
For LLM providers, filter authentication:
```python
import pytest

@pytest.fixture(scope="module")
def vcr_config():
    return {
        "filter_headers": [
            "authorization",   # OpenAI, Anthropic
            "api-key",         # Azure OpenAI
            "x-api-key",       # Anthropic
            "x-goog-api-key",  # Google AI
        ],
        "filter_query_parameters": ["key"],
    }
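VCR.py also accepts `(name, replacement)` tuples in `filter_headers`, substituting a placeholder instead of deleting the header entirely. A sketch:

```python
import pytest

@pytest.fixture(scope="module")
def vcr_config():
    # A (name, value) tuple replaces the header's value rather than
    # removing it, so cassettes stay readable without leaking keys
    return {
        "filter_headers": [
            ("authorization", "REDACTED"),
            ("x-api-key", "REDACTED"),
        ],
    }
```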
```
### Response Processing
Use `pytest_recording_configure` for advanced processing:
```python
def pytest_recording_configure(config, vcr):
    vcr.serializer = "yaml"
    vcr.decode_compressed_response = True

    # Sanitize response headers
    def sanitize_response(response):
        response["headers"]["Set-Cookie"] = "REDACTED"
        return response

    vcr.before_record_response = sanitize_response
```
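The request side works the same way via `before_record_request`; returning `None` drops the interaction from the cassette entirely. A sketch, with a hypothetical internal hostname:

```python
def scrub_request(request):
    # Returning None tells VCR not to record this interaction at all;
    # "internal.example.com" is a placeholder for your own hosts
    if "internal.example.com" in request.uri:
        return None
    return request

def pytest_recording_configure(config, vcr):
    vcr.before_record_request = scrub_request
```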
### Cassette Location
Cassettes are stored in `tests/cassettes/` by default, organized by test module:
```
tests/
├── cassettes/
│   └── test_module/
│       └── test_function.yaml
└── test_module.py
```
## Debugging
### Cassette Not Found
If tests fail with "Can't find cassette":
1. Run with `--record-mode=once` to create missing cassettes
2. Check cassette path matches test location
3. Verify cassette file exists and is valid YAML
### Request Mismatch
If VCR can't match requests:
1. Check `match_on` criteria in `vcr_config`
2. Compare request details in cassette vs actual request
3. Use `--record-mode=new_episodes` to add missing interactions
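For POST-heavy APIs, a common fix is adding `body` to `match_on`, since two requests to the same URL with different payloads would otherwise replay the same recorded interaction. A sketch:

```python
import pytest

@pytest.fixture(scope="module")
def vcr_config():
    # Include "body" so POSTs with different payloads are matched to
    # different recorded interactions instead of the first URL hit
    return {
        "match_on": ["method", "scheme", "host", "port", "path", "query", "body"],
    }
```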
### Stale Cassettes
When API responses change:
1. Delete specific cassette file and re-run test
2. Or use `--record-mode=rewrite` to refresh all cassettes
### View Cassette Contents
```bash
# View a cassette file
cat tests/cassettes/test_module/test_function.yaml

# Search for specific content in cassettes
grep -r "error" tests/cassettes/
```
## Adding New LLM Providers
When adding a new provider:
1. Identify authentication headers (check provider docs)
2. Add headers to `filter_headers` in `vcr_config`
3. Add any query param auth to `filter_query_parameters`
4. Test with `--record-mode=once` to create cassettes
5. Verify cassettes don't contain secrets
Common provider authentication:
| Provider | Headers to Filter |
|----------|-------------------|
| OpenAI | `authorization` |
| Anthropic | `x-api-key`, `authorization` |
| Azure OpenAI | `api-key` |
| Google AI | `x-goog-api-key` |
| Cohere | `authorization` |
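A single config covering the table above might look like this sketch; filtering a header a given provider never sends is harmless:

```python
import pytest

@pytest.fixture(scope="module")
def vcr_config():
    # Union of the auth headers listed above
    return {
        "filter_headers": [
            "authorization",   # OpenAI, Anthropic, Cohere
            "x-api-key",       # Anthropic
            "api-key",         # Azure OpenAI
            "x-goog-api-key",  # Google AI
        ],
        "filter_query_parameters": ["key"],  # Google AI query-param auth
    }
```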
## Best Practices
1. **Never commit secrets**: Always filter auth headers/params
2. **Use descriptive test names**: Cassette names derive from test names
3. **Keep cassettes small**: Mock only what you need to test
4. **Review cassettes in PRs**: Check for sensitive data leaks
5. **Regenerate periodically**: API responses may change over time
6. **Use scope appropriately**: `scope="module"` for shared fixtures