verify-output
Pattern for verifying your output matches required schema before completing. Use before writing final output to ensure validity.
Best use case
verify-output is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using verify-output should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at .claude/skills/verify-output/SKILL.md inside your project
- Restart your AI agent — it will auto-discover the skill
How verify-output Compares
| Feature / Agent | verify-output | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Pattern for verifying your output matches required schema before completing. Use before writing final output to ensure validity.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Verify Output Skill
Pattern for ensuring outputs match required schemas using automated validation.
## When to Use
- Before writing final output file
- After completing a task
- When producing structured JSON output
## Quick Reference
**Validate before writing:**
```bash
./scripts/validate.sh <schema_name> <file_path>
```
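A thin wrapper around this call might look like the following Python sketch. The script path and its exit-code convention (zero meaning valid) are assumptions about `validate.sh`, not documented behavior:

```python
import subprocess
import sys

def run_validator(schema: str, path: str,
                  script: str = "./scripts/validate.sh") -> bool:
    """Invoke the validation script; a zero exit code is assumed to mean valid."""
    result = subprocess.run([script, schema, path],
                           capture_output=True, text=True)
    if result.returncode != 0:
        # Surface the validator's error output so the agent can fix the file.
        sys.stderr.write(result.stdout + result.stderr)
    return result.returncode == 0
```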
## Available Schemas
| Schema | Used By | Output Path |
|--------|---------|-------------|
| `demand` | PM | `memory/reports/demand.json` |
| `design` | Architect | `memory/reports/designs/*.json` |
| `task-output` | Implementer | `memory/tasks/*/output.json` |
| `verification` | Verifier | `memory/tasks/*/verification.json` |
| `reflection` | Reflector | `memory/reflections/*.json` |
| `evolution-proposal` | Evolver | `memory/evolution/*.json` |
| `contract` | Executor | `memory/contracts/*.json` |
## Validation Process
### Step 1: Determine Your Schema
Based on your agent role:
```
PM agent → demand.schema.json
Architect agent → design.schema.json
Implementer agent → task-output.schema.json
Verifier agent → verification.schema.json
Reflector agent → reflection.schema.json
Evolver agent → evolution-proposal.schema.json
```
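The role-to-schema mapping above can be kept as a small lookup table. This sketch uses the short schema names accepted by `validate.sh` (from the Available Schemas table) rather than the `.schema.json` file names:

```python
# Maps an agent role to the schema name passed to the validator.
ROLE_TO_SCHEMA = {
    "pm": "demand",
    "architect": "design",
    "implementer": "task-output",
    "verifier": "verification",
    "reflector": "reflection",
    "evolver": "evolution-proposal",
}

def schema_for_role(role: str) -> str:
    return ROLE_TO_SCHEMA[role.lower()]
```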
### Step 2: Write Output to Temp Location
```
Write(memory/tasks/{id}/output.json.tmp, content)
```
### Step 3: Validate
```bash
./scripts/validate.sh task-output memory/tasks/{id}/output.json.tmp
```
### Step 4: If Valid, Move to Final Location
```bash
mv memory/tasks/{id}/output.json.tmp memory/tasks/{id}/output.json
```
### Step 5: If Invalid, Fix and Retry
Common validation errors and fixes:
| Error | Fix |
|-------|-----|
| `'X' is a required property` | Add the missing field |
| `'Y' is not one of ['a', 'b']` | Use valid enum value |
| `'Z' is not of type 'array'` | Wrap value in array: `[value]` |
| `Additional properties not allowed` | Remove extra fields |
## Output Format: Compact JSON
All agent outputs MUST be compact JSON (single line, no extra whitespace):
```json
{"task_id":"001","status":"pre_complete","knowledge_updates":[],"reflection":{"what_worked":[],"what_failed":[],"patterns_noticed":[]}}
```
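In Python, compact output falls out of `json.dumps` with custom separators; a minimal sketch:

```python
import json

def to_compact_json(obj) -> str:
    # separators=(",", ":") drops the spaces json.dumps inserts by default.
    return json.dumps(obj, separators=(",", ":"))

output = {"task_id": "001", "status": "pre_complete", "knowledge_updates": []}
print(to_compact_json(output))
# → {"task_id":"001","status":"pre_complete","knowledge_updates":[]}
```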
## Mandatory Fields (All Agents)
Every output MUST include:
```json
{"knowledge_updates":[{"category":"codebase","content":"string","confidence":"certain"}],"reflection":{"what_worked":["string"],"what_failed":["string"],"patterns_noticed":["string"]}}
```
Or empty arrays if no updates:
```json
{"knowledge_updates":[],"reflection":{"what_worked":[],"what_failed":[],"patterns_noticed":[]}}
```
Valid values:
- `category`: `"codebase"` | `"convention"` | `"decision"` | `"gotcha"`
- `confidence`: `"certain"` | `"likely"` | `"uncertain"`
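A pre-flight check for these mandatory fields can be done in plain Python before invoking the full schema validator. The field and enum names below come from the lists above; the error wording merely mimics typical validator messages:

```python
CATEGORIES = {"codebase", "convention", "decision", "gotcha"}
CONFIDENCES = {"certain", "likely", "uncertain"}

def check_mandatory(output: dict) -> list:
    """Return a list of problems; an empty list means the mandatory fields look OK."""
    errors = []
    updates = output.get("knowledge_updates")
    if not isinstance(updates, list):
        errors.append("'knowledge_updates' is a required property (array)")
    else:
        for u in updates:
            if u.get("category") not in CATEGORIES:
                errors.append(f"{u.get('category')!r} is not one of {sorted(CATEGORIES)}")
            if u.get("confidence") not in CONFIDENCES:
                errors.append(f"{u.get('confidence')!r} is not one of {sorted(CONFIDENCES)}")
    reflection = output.get("reflection")
    if not isinstance(reflection, dict):
        errors.append("'reflection' is a required property (object)")
    else:
        for key in ("what_worked", "what_failed", "patterns_noticed"):
            if not isinstance(reflection.get(key), list):
                errors.append(f"'{key}' is not of type 'array'")
    return errors
```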
## Pre-Write Validation Pattern
Recommended pattern for agents:
```
1. Construct output object in memory
2. Write to {output_path}.tmp
3. Run: ./scripts/validate.sh {schema} {output_path}.tmp
4. IF validation passes:
→ mv {output_path}.tmp {output_path}
→ Log: "Output validated and written"
5. ELSE:
→ Read validation errors
→ Fix output object
→ Retry from step 2
→ Max 3 retries, then report validation failure
```
## Common Mistakes
- Forgetting `knowledge_updates` (even if empty array)
- Forgetting `reflection` fields
- Using invalid enum values (check schema for allowed values)
- Missing required nested fields
- Wrong type (string instead of array, etc.)
- Using YAML instead of JSON
- Pretty-printing JSON (use compact format)
## Principles
1. **Validate before write** - Never output invalid data
2. **Schema is law** - Missing fields = rejection by executor
3. **Empty is valid** - `"knowledge_updates":[]` is okay
4. **Fail fast** - Catch errors before they propagate
5. **Compact JSON** - Single line, no formatting
Related Skills
apify-generate-output-schema
Generate output schemas (dataset_schema.json, output_schema.json, key_value_store_schema.json) for an Apify Actor by analyzing its source code. Use when creating or updating Actor output schemas.
generating-output-styles
Creates custom output styles for Claude Code that modify system prompts and behavior. Use when the user asks to create output styles, customize Claude's response format, generate output-style files, or mentions output style configuration.
when-verifying-quality-use-verification-quality
Comprehensive quality verification and validation through static analysis, dynamic testing, integration validation, and certification gates
quality-verify
Verify the final deliverable meets all quality criteria before delivery. Use as the final validation step to ensure the output meets the user's quality standards across all 6 dimensions.
generate-output
Create the deliverable (code, documentation, tests, content) following the user's standards and best practices. Use after validation passes to actually build the work product.
Verify Skill
Run full verification before committing or creating a PR.
postgrid-verify-automation
Automate Postgrid Verify tasks via Rube MCP (Composio). Always search tools first for current schemas.
emaillistverify-automation
Automate Emaillistverify tasks via Rube MCP (Composio). Always search tools first for current schemas.