py-pydantic-patterns

Pydantic v2 patterns for validation and serialization. Use when creating schemas, validating data, or working with request/response models.

25 stars

Best use case

py-pydantic-patterns is best used when you need a repeatable AI agent workflow instead of a one-off prompt.


Teams using py-pydantic-patterns should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/py-pydantic-patterns/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/aiskillstore/marketplace/cjharmath/py-pydantic-patterns/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/py-pydantic-patterns/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How py-pydantic-patterns Compares

| Feature | py-pydantic-patterns | Standard Approach |
|---------|----------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Pydantic v2 patterns for validation and serialization. Use when creating schemas, validating data, or working with request/response models.

Where can I find the source code?

You can find the source code in the ComeOnOliver/skillshub repository on GitHub.

SKILL.md Source

# Pydantic v2 Patterns

## Problem Statement

Pydantic v2 has significant API changes from v1. This codebase uses v2. Wrong patterns cause validation failures, serialization bugs, and frontend integration issues.

---

## Pattern: v1 to v2 Migration

**Critical changes to know:**

```python
# ❌ v1 (OLD - don't use)
from pydantic import BaseModel, validator
class Model(BaseModel):
    class Config:
        orm_mode = True
    
    @validator("email")
    def validate_email(cls, v):
        return v.lower()
    
    def dict(self):
        ...

# ✅ v2 (CURRENT)
from pydantic import BaseModel, ConfigDict, field_validator
class Model(BaseModel):
    model_config = ConfigDict(from_attributes=True)
    
    @field_validator("email")
    @classmethod
    def validate_email(cls, v: str) -> str:
        return v.lower()
    
    def model_dump(self):
        ...
```

**Quick reference:**

| v1 | v2 |
|----|-----|
| `class Config` | `model_config = ConfigDict(...)` |
| `orm_mode = True` | `from_attributes=True` |
| `.dict()` | `.model_dump()` |
| `.json()` | `.model_dump_json()` |
| `@validator` | `@field_validator` |
| `@root_validator` | `@model_validator` |
| `parse_obj()` | `model_validate()` |
| `update_forward_refs()` | `model_rebuild()` |

---

## Pattern: Field Validators

```python
from pydantic import BaseModel, field_validator, ValidationInfo

class AssessmentCreate(BaseModel):
    title: str
    skill_areas: list[str]
    max_score: int
    
    # Single field validator
    @field_validator("title")
    @classmethod
    def title_not_empty(cls, v: str) -> str:
        if not v.strip():
            raise ValueError("Title cannot be empty")
        return v.strip()
    
    # Validator with access to other, already-validated fields via info.data
    @field_validator("max_score")
    @classmethod
    def validate_max_score(cls, v: int, info: ValidationInfo) -> int:
        if v < 1:
            raise ValueError("Max score must be positive")
        # info.data holds fields validated before this one, e.g. info.data.get("title")
        return v
    
    # Multiple fields
    @field_validator("skill_areas")
    @classmethod
    def validate_skill_areas(cls, v: list[str]) -> list[str]:
        valid = {"fundamentals", "advanced", "strategy"}
        for area in v:
            if area not in valid:
                raise ValueError(f"Invalid skill area: {area}")
        return v
```
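A quick check of the empty-title rule, using a trimmed-down hypothetical `TitleOnly` model with the same validator:

```python
from pydantic import BaseModel, ValidationError, field_validator

class TitleOnly(BaseModel):
    title: str

    @field_validator("title")
    @classmethod
    def title_not_empty(cls, v: str) -> str:
        if not v.strip():
            raise ValueError("Title cannot be empty")
        return v.strip()

print(TitleOnly(title="  Mid-term  ").title)  # stripped to "Mid-term"

try:
    TitleOnly(title="   ")
except ValidationError as e:
    print(e.errors()[0]["msg"])  # message includes "Title cannot be empty"
```

Note that raising `ValueError` inside a validator surfaces as a `ValidationError` on the model, so callers catch one exception type regardless of which field failed.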

---

## Pattern: Model Validators

```python
from datetime import datetime

from pydantic import BaseModel, model_validator

class DateRange(BaseModel):
    start_date: datetime
    end_date: datetime
    
    # Before validation (raw input)
    @model_validator(mode="before")
    @classmethod
    def parse_dates(cls, data: dict) -> dict:
        # Handle string dates
        if isinstance(data.get("start_date"), str):
            data["start_date"] = datetime.fromisoformat(data["start_date"])
        return data
    
    # After validation (validated model)
    @model_validator(mode="after")
    def validate_range(self) -> "DateRange":
        if self.end_date < self.start_date:
            raise ValueError("end_date must be after start_date")
        return self
```
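In practice, Pydantic v2 already coerces ISO-8601 strings to `datetime`, so the `mode="before"` hook above is only needed for nonstandard date formats. A runnable sketch of the `mode="after"` check on its own (hypothetical `Range` model):

```python
from datetime import datetime

from pydantic import BaseModel, ValidationError, model_validator

class Range(BaseModel):
    start_date: datetime
    end_date: datetime

    @model_validator(mode="after")
    def validate_range(self) -> "Range":
        if self.end_date < self.start_date:
            raise ValueError("end_date must be after start_date")
        return self

# ISO strings are parsed to datetime without a custom before-hook
r = Range(start_date="2024-01-01T00:00:00", end_date="2024-06-01T00:00:00")

try:
    Range(start_date="2024-06-01T00:00:00", end_date="2024-01-01T00:00:00")
except ValidationError:
    print("reversed range rejected")
```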

---

## Pattern: Model Configuration

```python
from datetime import datetime
from uuid import UUID

from pydantic import BaseModel, ConfigDict

class UserRead(BaseModel):
    # Configure model behavior
    model_config = ConfigDict(
        from_attributes=True,      # Allow from ORM objects
        str_strip_whitespace=True, # Strip strings
        str_min_length=1,          # No empty strings by default
        validate_default=True,     # Validate default values
        extra="forbid",            # Error on extra fields
        frozen=False,              # Allow mutation
    )
    
    id: UUID
    email: str
    created_at: datetime

# Usage with SQLModel objects
user_db = await session.get(User, user_id)
user_read = UserRead.model_validate(user_db)  # Works due to from_attributes
```

---

## Pattern: Field Definitions

```python
from datetime import datetime
from typing import Annotated

from pydantic import BaseModel, Field

class AssessmentCreate(BaseModel):
    # Basic constraints
    title: str = Field(min_length=1, max_length=200)
    score: int = Field(ge=0, le=100)  # 0 <= score <= 100
    rating: float = Field(gt=0, lt=5.5)  # 0 < rating < 5.5
    
    # With description (shows in OpenAPI)
    skill_areas: list[str] = Field(
        min_length=1,
        description="List of skill areas to assess",
        examples=[["fundamentals", "strategy"]],
    )
    
    # Optional with default
    notes: str | None = Field(default=None, max_length=1000)
    
    # Computed default (note: datetime.utcnow is deprecated since Python 3.12;
    # prefer a timezone-aware factory in new code)
    created_at: datetime = Field(default_factory=datetime.utcnow)

# Reusable type with constraints
PositiveInt = Annotated[int, Field(gt=0)]
Rating = Annotated[float, Field(ge=1.0, le=5.5)]

class Result(BaseModel):
    count: PositiveInt
    rating: Rating
```
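The bound semantics (`ge`/`le` inclusive, `gt`/`lt` exclusive) are easy to verify with a small hypothetical `Score` model:

```python
from typing import Annotated

from pydantic import BaseModel, Field, ValidationError

PositiveInt = Annotated[int, Field(gt=0)]

class Score(BaseModel):
    count: PositiveInt
    value: int = Field(ge=0, le=100)

Score(count=1, value=100)      # ge/le bounds are inclusive, so 0 and 100 pass

try:
    Score(count=0, value=50)   # gt=0 is exclusive, so zero fails
except ValidationError:
    print("count must be positive")
```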

---

## Pattern: Discriminated Unions

**Problem:** Polymorphic responses where type depends on a field.

```python
from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field

class TextQuestion(BaseModel):
    type: Literal["text"] = "text"
    prompt: str
    max_length: int

class MultipleChoiceQuestion(BaseModel):
    type: Literal["multiple_choice"] = "multiple_choice"
    prompt: str
    options: list[str]

class RatingQuestion(BaseModel):
    type: Literal["rating"] = "rating"
    prompt: str
    min_value: int
    max_value: int

# Discriminated union - Pydantic uses 'type' field to determine class
Question = Annotated[
    Union[TextQuestion, MultipleChoiceQuestion, RatingQuestion],
    Field(discriminator="type"),
]

class Assessment(BaseModel):
    questions: list[Question]

# Pydantic automatically deserializes to correct type
data = {
    "questions": [
        {"type": "text", "prompt": "Describe...", "max_length": 500},
        {"type": "rating", "prompt": "Rate...", "min_value": 1, "max_value": 5},
    ]
}
assessment = Assessment.model_validate(data)
# assessment.questions[0] is TextQuestion
# assessment.questions[1] is RatingQuestion
```

---

## Pattern: Custom Types

```python
import re
from typing import Annotated
from uuid import UUID

from pydantic import AfterValidator, BaseModel, BeforeValidator

# Email normalization
def normalize_email(v: str) -> str:
    return v.lower().strip()

Email = Annotated[str, AfterValidator(normalize_email)]

# Phone validation
def validate_phone(v: str) -> str:
    cleaned = re.sub(r"[^\d+]", "", v)
    if not re.match(r"^\+?1?\d{10,14}$", cleaned):
        raise ValueError("Invalid phone number")
    return cleaned

PhoneNumber = Annotated[str, BeforeValidator(validate_phone)]

# UUID from string
def parse_uuid(v: str | UUID) -> UUID:
    if isinstance(v, str):
        return UUID(v)
    return v

UUIDStr = Annotated[UUID, BeforeValidator(parse_uuid)]

class User(BaseModel):
    email: Email
    phone: PhoneNumber | None = None
    id: UUIDStr
```
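A runnable sketch of the `Email` type (hypothetical `Account` model). Worth noting: v2 parses UUID strings natively, so the `UUIDStr` before-validator above is mainly useful when you need extra normalization beyond the default coercion:

```python
from typing import Annotated
from uuid import UUID

from pydantic import AfterValidator, BaseModel

def normalize_email(v: str) -> str:
    return v.lower().strip()

Email = Annotated[str, AfterValidator(normalize_email)]

class Account(BaseModel):
    email: Email
    id: UUID  # UUID strings are coerced without a custom type

a = Account(email="  Alice@Example.COM ", id="12345678-1234-5678-1234-567812345678")
print(a.email)  # "alice@example.com" — lowered and stripped after str validation
```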

---

## Pattern: Serialization Control

```python
from datetime import datetime
from uuid import UUID

from pydantic import BaseModel, computed_field, field_serializer

class User(BaseModel):
    id: UUID
    email: str
    created_at: datetime
    
    # Custom serialization
    @field_serializer("created_at")
    def serialize_datetime(self, dt: datetime) -> str:
        return dt.isoformat()
    
    @field_serializer("id")
    def serialize_uuid(self, id: UUID) -> str:
        return str(id)
    
    # Computed field (included in serialization)
    @computed_field
    @property
    def display_name(self) -> str:
        return self.email.split("@")[0]

# Serialization options
user.model_dump()                          # Full dict
user.model_dump(exclude={"created_at"})    # Exclude fields
user.model_dump(include={"id", "email"})   # Include only
user.model_dump(exclude_none=True)         # Skip None values
user.model_dump(by_alias=True)             # Use field aliases
user.model_dump_json()                     # JSON string
```
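A self-contained sketch showing that `field_serializer` applies to `model_dump()` as well as JSON output, and that computed fields appear in the dump (hypothetical `Event` model):

```python
from datetime import datetime

from pydantic import BaseModel, computed_field, field_serializer

class Event(BaseModel):
    name: str
    at: datetime

    @field_serializer("at")
    def serialize_at(self, dt: datetime) -> str:
        return dt.isoformat()

    @computed_field
    @property
    def slug(self) -> str:
        return self.name.lower().replace(" ", "-")

e = Event(name="Launch Day", at=datetime(2024, 5, 1, 12, 0))
print(e.model_dump())
# 'at' is serialized to an ISO string and 'slug' is included alongside the stored fields
```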

---

## Pattern: Schema Inheritance

```python
from datetime import datetime
from uuid import UUID

from pydantic import BaseModel, ConfigDict

class UserBase(BaseModel):
    email: str
    name: str

class UserCreate(UserBase):
    password: str  # Only for creation

class UserRead(UserBase):
    id: UUID
    created_at: datetime
    
    model_config = ConfigDict(from_attributes=True)

class UserUpdate(BaseModel):
    # All optional for partial updates
    email: str | None = None
    name: str | None = None
    password: str | None = None
```

---

## Common Issues

| Issue | Likely Cause | Solution |
|-------|--------------|----------|
| "X is not a valid dict" | Using `.dict()` (v1) | Use `.model_dump()` |
| "Unable to parse ORM object" | Missing `from_attributes` | Add `ConfigDict(from_attributes=True)` |
| "@validator not recognized" | v1 decorator | Use `@field_validator` with `@classmethod` |
| "Extra fields not permitted" | `extra="forbid"` | Remove extra fields or change config |
| Validation not running | Default value not validated | Add `validate_default=True` |

---

## Detection Commands

```bash
# Find v1 patterns
grep -rn "class Config:" --include="*.py"
grep -rn "@validator" --include="*.py"
grep -rn "\.dict()" --include="*.py"
grep -rn "orm_mode" --include="*.py"
```

Related Skills

All of the following related skills are from ComeOnOliver/skillshub:

  • exa-sdk-patterns — Apply production-ready exa-js SDK patterns with type safety, singletons, and wrappers. Use when implementing Exa integrations, refactoring SDK usage, or establishing team coding standards for Exa. Trigger with phrases like "exa SDK patterns", "exa best practices", "exa code patterns", "idiomatic exa", "exa wrapper".

  • exa-reliability-patterns — Implement Exa reliability patterns: query fallback chains, circuit breakers, and graceful degradation. Use when building fault-tolerant Exa integrations, implementing fallback strategies, or adding resilience to production search services. Trigger with phrases like "exa reliability", "exa circuit breaker", "exa fallback", "exa resilience", "exa graceful degradation".

  • evernote-sdk-patterns — Advanced Evernote SDK patterns and best practices. Use when implementing complex note operations, batch processing, search queries, or optimizing SDK usage. Trigger with phrases like "evernote sdk patterns", "evernote best practices", "evernote advanced", "evernote batch operations".

  • elevenlabs-sdk-patterns — Apply production-ready ElevenLabs SDK patterns for TypeScript and Python. Use when implementing ElevenLabs integrations, refactoring SDK usage, or establishing team coding standards for audio AI applications. Trigger: "elevenlabs SDK patterns", "elevenlabs best practices", "elevenlabs code patterns", "idiomatic elevenlabs", "elevenlabs typescript".

  • documenso-sdk-patterns — Apply production-ready Documenso SDK patterns for TypeScript and Python. Use when implementing Documenso integrations, refactoring SDK usage, or establishing team coding standards for Documenso. Trigger with phrases like "documenso SDK patterns", "documenso best practices", "documenso code patterns", "idiomatic documenso".

  • deepgram-sdk-patterns — Apply production-ready Deepgram SDK patterns for TypeScript and Python. Use when implementing Deepgram integrations, refactoring SDK usage, or establishing team coding standards for Deepgram. Trigger: "deepgram SDK patterns", "deepgram best practices", "deepgram code patterns", "idiomatic deepgram", "deepgram typescript".

  • databricks-sdk-patterns — Apply production-ready Databricks SDK patterns for Python and REST API. Use when implementing Databricks integrations, refactoring SDK usage, or establishing team coding standards for Databricks. Trigger with phrases like "databricks SDK patterns", "databricks best practices", "databricks code patterns", "idiomatic databricks".

  • customerio-sdk-patterns — Apply production-ready Customer.io SDK patterns. Use when implementing typed clients, retry logic, event batching, or singleton management for customerio-node. Trigger: "customer.io best practices", "customer.io patterns", "production customer.io", "customer.io architecture", "customer.io singleton".

  • customerio-reliability-patterns — Implement Customer.io reliability and fault-tolerance patterns. Use when building circuit breakers, fallback queues, idempotency, or graceful degradation for Customer.io integrations. Trigger: "customer.io reliability", "customer.io resilience", "customer.io circuit breaker", "customer.io fault tolerance".

  • coreweave-sdk-patterns — Production-ready patterns for CoreWeave GPU workload management with kubectl and Python. Use when building inference clients, managing GPU deployments programmatically, or creating reusable CoreWeave deployment templates. Trigger with phrases like "coreweave patterns", "coreweave client", "coreweave Python", "coreweave deployment template".

  • cohere-sdk-patterns — Apply production-ready Cohere SDK patterns for TypeScript and Python. Use when implementing Cohere integrations, refactoring SDK usage, or establishing team coding standards for Cohere API v2. Trigger with phrases like "cohere SDK patterns", "cohere best practices", "cohere code patterns", "idiomatic cohere", "cohere wrapper".

  • coderabbit-sdk-patterns — Apply production-ready CodeRabbit automation patterns using GitHub API and PR comments. Use when building automation around CodeRabbit reviews, processing review feedback programmatically, or integrating CodeRabbit into custom workflows. Trigger with phrases like "coderabbit automation", "coderabbit API patterns", "automate coderabbit", "coderabbit github api", "process coderabbit reviews".