## Best use case
Schema Versioning is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
## Overview
Teams using Schema Versioning should expect more consistent output, faster repeated execution, and less prompt rewriting.
## When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
## When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
## Installation
Works with Claude Code, Cursor, and Codex.

### Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/schema-versioning/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
## How Schema Versioning Compares
| Feature / Agent | Schema Versioning | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
## Frequently Asked Questions
**What does this skill do?**

It helps you establish a reliable database schema versioning workflow: generating timestamped migration files, testing them against a shadow database, integrating schema checks into CI/CD, and rolling back safely when deployments fail.
**Where can I find the source code?**
You can find the source code on GitHub using the link provided at the top of the page.
## SKILL.md Source
# Schema Versioning
## Overview
This skill helps you establish a reliable database schema versioning workflow: generating timestamped migration files, testing them against a shadow database, integrating schema checks into CI/CD, and rolling back safely when deployments fail. It works with any migration tool (Prisma, Knex, TypeORM, Flyway, Alembic) and focuses on patterns rather than vendor lock-in.
## Instructions
### 1. Initialize migration infrastructure
Set up the migration directory structure and configuration:
```bash
# For Knex.js
npx knex init
npx knex migrate:make initial_schema
# For Prisma
npx prisma init
npx prisma migrate dev --name initial_schema
# For Alembic (Python)
alembic init migrations
alembic revision --autogenerate -m "initial_schema"
```
Create a shadow database for testing migrations before applying to production:
```yaml
# docker-compose.shadow-db.yml
services:
  shadow-db:
    image: postgres:16
    environment:
      POSTGRES_DB: app_shadow
      POSTGRES_PASSWORD: shadow_test
    ports:
      - "5433:5432"
```
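Start the shadow database before running migration tests, for example:
```bash
# Bring up the shadow Postgres container defined above
docker compose -f docker-compose.shadow-db.yml up -d
```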
### 2. Generate migration files from schema changes
When models change, generate the migration diff:
```typescript
// Knex migration example: 20250217_add_orders_table.ts
import { Knex } from "knex";

export async function up(knex: Knex): Promise<void> {
  await knex.schema.createTable("orders", (table) => {
    table.uuid("id").primary().defaultTo(knex.fn.uuid());
    table.uuid("user_id").notNullable().references("id").inTable("users");
    table.decimal("total", 10, 2).notNullable();
    table.enum("status", ["pending", "paid", "shipped", "cancelled"]).defaultTo("pending");
    table.timestamps(true, true);
    table.index(["user_id", "status"]);
  });
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.dropTableIfExists("orders");
}
```
### 3. Test migrations safely
Always test against a shadow database before production:
```bash
# Apply all pending migrations to shadow database
DATABASE_URL="postgres://localhost:5433/app_shadow" npx knex migrate:latest
# Verify the migration is reversible
DATABASE_URL="postgres://localhost:5433/app_shadow" npx knex migrate:rollback
# Re-apply to confirm idempotency
DATABASE_URL="postgres://localhost:5433/app_shadow" npx knex migrate:latest
```
### 4. Implement rollback procedures
Create a rollback script that handles both schema and data rollbacks:
```bash
#!/bin/bash
# rollback.sh — Roll back the last N migrations
set -euo pipefail
ROLLBACK_COUNT=${1:-1}
ENVIRONMENT=${2:-staging}
echo "Rolling back $ROLLBACK_COUNT migration(s) on $ENVIRONMENT..."
# Capture current state for audit
npx knex migrate:currentVersion --env "$ENVIRONMENT" > /tmp/pre_rollback_version.txt
# Execute rollback one migration at a time (migrate:rollback only reverts whole batches)
for ((i = 0; i < ROLLBACK_COUNT; i++)); do
  npx knex migrate:down --env "$ENVIRONMENT"
done
# Verify post-rollback state
npx knex migrate:currentVersion --env "$ENVIRONMENT" > /tmp/post_rollback_version.txt
echo "Rolled back from $(cat /tmp/pre_rollback_version.txt) to $(cat /tmp/post_rollback_version.txt)"
```
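The script takes the number of migrations and the target environment as positional arguments, for example:
```bash
# Roll back the two most recent migrations on staging
./rollback.sh 2 staging
```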
### 5. CI/CD integration
Add migration checks to your pipeline:
```yaml
# .github/workflows/migration-check.yml
name: migration-check
on: [pull_request]  # example trigger; adjust to your branching strategy

jobs:
  migration-check:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:16
        env:
          POSTGRES_DB: test_db
          POSTGRES_PASSWORD: test
        ports: ["5432:5432"]
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - name: Apply all migrations from scratch
        run: npx knex migrate:latest
        env:
          DATABASE_URL: postgres://postgres:test@localhost:5432/test_db
      - name: Verify rollback works
        run: npx knex migrate:rollback --all
        env:
          DATABASE_URL: postgres://postgres:test@localhost:5432/test_db
      - name: Re-apply to confirm clean state
        run: npx knex migrate:latest
        env:
          DATABASE_URL: postgres://postgres:test@localhost:5432/test_db
```
## Examples
### Example 1: Adding a column with backfill
**Prompt:** "Generate a migration that adds an `email_verified` boolean column to the users table, defaulting to false, and backfills it to true for all users who have logged in within the last 90 days."
**Agent output:**
- Creates `migrations/20250217143000_add_email_verified_to_users.ts`
- The `up` function adds the column with a default, then runs a data backfill query using `last_login_at > NOW() - INTERVAL '90 days'`
- The `down` function drops the column
- Includes a note: "Backfill runs in batches of 1,000 to avoid locking the table"
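A minimal sketch of what that generated migration might look like with Knex, assuming `users` has the `last_login_at` column referenced above; the batched update uses a keyed subquery and the exact query shape will vary with your schema:

```typescript
// migrations/20250217143000_add_email_verified_to_users.ts (illustrative sketch)
import { Knex } from "knex";

export async function up(knex: Knex): Promise<void> {
  await knex.schema.alterTable("users", (table) => {
    table.boolean("email_verified").notNullable().defaultTo(false);
  });

  // Backfill in batches of 1,000 rows until no recently active user remains unverified
  let updated: number;
  do {
    updated = await knex("users")
      .whereIn(
        "id",
        knex("users")
          .select("id")
          .where("email_verified", false)
          .andWhereRaw("last_login_at > NOW() - INTERVAL '90 days'")
          .limit(1000)
      )
      .update({ email_verified: true });
  } while (updated > 0);
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.alterTable("users", (table) => {
    table.dropColumn("email_verified");
  });
}
```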
### Example 2: Renaming a table safely
**Prompt:** "I need to rename the `customers` table to `clients` without breaking the app during deployment. Generate a zero-downtime migration strategy."
**Agent output:**
- Migration 1: Create `clients` table, add trigger to sync writes from `customers` to `clients`
- Migration 2: Backfill existing data from `customers` to `clients`
- Migration 3: Create a view `customers` pointing to `clients` (backwards compatibility)
- Migration 4: Drop the view and old table after all application code references `clients`
- Each migration has a working `down` function
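A sketch of what Migration 1 could look like in Knex with PostgreSQL trigger syntax; the column list for `clients` is illustrative and should match your real `customers` schema:

```typescript
// Migration 1 (sketch): create clients and mirror writes from customers into it
import { Knex } from "knex";

export async function up(knex: Knex): Promise<void> {
  await knex.schema.createTable("clients", (table) => {
    table.uuid("id").primary();
    table.string("name").notNullable();
    table.timestamps(true, true);
  });

  // Keep clients in sync with any writes that still hit customers
  await knex.raw(`
    CREATE OR REPLACE FUNCTION sync_customers_to_clients() RETURNS trigger AS $$
    BEGIN
      INSERT INTO clients (id, name, created_at, updated_at)
      VALUES (NEW.id, NEW.name, NEW.created_at, NEW.updated_at)
      ON CONFLICT (id) DO UPDATE
        SET name = EXCLUDED.name, updated_at = EXCLUDED.updated_at;
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER customers_sync
      AFTER INSERT OR UPDATE ON customers
      FOR EACH ROW EXECUTE FUNCTION sync_customers_to_clients();
  `);
}

export async function down(knex: Knex): Promise<void> {
  await knex.raw("DROP TRIGGER IF EXISTS customers_sync ON customers");
  await knex.raw("DROP FUNCTION IF EXISTS sync_customers_to_clients()");
  await knex.schema.dropTableIfExists("clients");
}
```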
## Guidelines
- **Every migration must have a working `down` function** — untested rollbacks fail when you need them most.
- **Never modify a migration that has been applied** to any shared environment — create a new migration instead.
- **Use transactions** for DDL when your database supports it (PostgreSQL does, MySQL does not for most DDL).
- **Test the full sequence**: migrate up, roll back, migrate up again — this catches hidden state dependencies.
- **Keep migrations small** — one logical change per file. A 500-line migration is a red flag.
- **Add indexes in separate migrations** from table creation to avoid long locks on large tables (see the sketch after this list).
- **Timestamp your migration filenames** — sequential integers cause merge conflicts in teams.
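A hypothetical sketch of such an index-only migration (index name and column are examples). `CREATE INDEX CONCURRENTLY` cannot run inside a transaction, so the per-migration transaction is disabled via Knex's migration config:

```typescript
// migrations/20250218091500_add_orders_created_at_index.ts (illustrative sketch)
import { Knex } from "knex";

// CREATE INDEX CONCURRENTLY cannot run inside a transaction block
export const config = { transaction: false };

export async function up(knex: Knex): Promise<void> {
  await knex.raw(
    "CREATE INDEX CONCURRENTLY IF NOT EXISTS orders_created_at_idx ON orders (created_at)"
  );
}

export async function down(knex: Knex): Promise<void> {
  await knex.raw("DROP INDEX CONCURRENTLY IF EXISTS orders_created_at_idx");
}
```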
## Related Skills

- **graphql-schema-generator**: Auto-activating skill for API development. Triggers on "graphql schema generator". Part of the API Development skill category.
- **designing-database-schemas**: Use when you need to work with database schema design. Provides schema design and migrations with comprehensive guidance and automation. Trigger with phrases like "design schema", "create migration", or "model database".
- **database-schema-visualizer**: Auto-activating skill for visual content. Triggers on "database schema visualizer". Part of the Visual Content skill category.
- **comparing-database-schemas**: Leverages the database-diff-tool plugin to compare database schemas, generate migration scripts, and provide rollback procedures. Triggered when the user requests database schema comparisons, migration script generation, or database synchronization. Use when asked to identify differences between database schemas (PostgreSQL or MySQL), create safe migration scripts with transaction safety, validate changes before deployment, or generate rollback procedures. Activated by requests involving terms like "database diff", "schema comparison", "generate migration script", "database synchronization", or `/db-diff`.
- **electric-schema-shapes**: Design Postgres schema and Electric shape definitions together for a new feature. Covers the single-table shape constraint, cross-table joins using multiple shapes, WHERE clause design for tenant isolation, column selection for bandwidth optimization, replica mode choice (default vs full for old_value), enum casting in WHERE clauses, and txid handshake setup with pg_current_xact_id() for optimistic writes. Load when designing database tables for use with Electric shapes.
- **apify-generate-output-schema**: Generate output schemas (dataset_schema.json, output_schema.json, key_value_store_schema.json) for an Apify Actor by analyzing its source code. Use when creating or updating Actor output schemas.
- **database-schema-design**: Design and optimize database schemas for SQL and NoSQL databases. Use when creating new databases, designing tables, defining relationships, indexing strategies, or database migrations. Handles PostgreSQL, MySQL, MongoDB, normalization, and performance optimization.
- **add-malli-schemas**: Efficiently add Malli schemas to API endpoints in the Metabase codebase with proper patterns, validation timing, and error handling.
- **schema-visualizer**: Generate database schema diagrams, ERDs, and documentation from database schemas.
- **schema-markup**: Use when the user wants to add, fix, or optimize schema markup and structured data on their site, or mentions "schema markup," "structured data," "JSON-LD," "rich snippets," "schema.org," "FAQ schema," "product schema," "review schema," or "breadcrumb schema." For broader SEO issues, see seo-audit.
- **graphql-schema**: GraphQL queries, mutations, and code generation patterns. Use when creating GraphQL operations, working with Apollo Client, or generating types.
- **understanding-db-schema**: Deep expertise in Logseq's Datascript database schema. Auto-invokes when users ask about the Logseq DB schema, Datascript attributes, built-in classes, property types, entity relationships, schema validation, or the node/block/page data model. Provides authoritative knowledge of the DB graph architecture.