contract-first-agents
Contract-First Map-Reduce coordination protocol for native TeamCreate multi-agent teams. Wraps TeamCreate, Task (teammates), and SendMessage with an upfront shared-contract phase that eliminates 75% of integration errors. Based on research spanning 400+ controlled experiments showing a 52.5% quality improvement over naive coordination.
Best use case
contract-first-agents is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using contract-first-agents should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/contract-first-agents/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How contract-first-agents Compares
| Feature / Agent | contract-first-agents | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Contract-First Map-Reduce coordination protocol for native TeamCreate multi-agent teams. Wraps TeamCreate, Task (teammates), and SendMessage with an upfront shared-contract phase that eliminates 75% of integration errors. Based on research spanning 400+ controlled experiments showing a 52.5% quality improvement over naive coordination.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Contract-First Map-Reduce Agent Coordination
> Wraps native TeamCreate/Task/SendMessage with a proven 4-phase protocol
> that eliminates 75% of integration errors in multi-agent tasks.
## Research Basis
Based on 400+ controlled experiments comparing 7 coordination strategies:
- **Naive multi-agent**: 0.571 composite quality score
- **Contract-First Map-Reduce**: 0.871 composite quality score (+52.5%)
- The contract alone accounts for the ENTIRE quality improvement
- Validated across 2-64 agent configurations
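The headline figure follows directly from the two composite scores above, as a quick arithmetic check confirms:

```python
# Relative quality improvement of Contract-First Map-Reduce over naive coordination
naive = 0.571
contract_first = 0.871

improvement = (contract_first - naive) / naive
print(f"{improvement:.1%}")  # 52.5%
```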
## The Protocol
### Phase 1: CONTRACT GENERATION (Team Lead, ~5% of time)
Before spawning ANY worker agent, the team lead MUST create a Contract Document.
**The Contract Document must contain:**
```markdown
# === SHARED CONTRACT: [Project Name] ===
## 1. MODULE MANIFEST
For EVERY module/section an agent will produce:
- Exact filename
- Purpose (1 sentence)
- ALL exported names (exact spelling, exact case)
## 2. INTERFACE DEFINITIONS
For EVERY cross-module reference:
- Exact function/class name
- Exact parameter names and types
- Exact return type
- Source module -> consuming module(s)
## 3. SHARED TYPES
For EVERY data structure shared across modules:
- Exact field names and types
- Validation rules
- Serialization format
## 4. STYLE GUIDE
- Naming: snake_case for functions/variables, PascalCase for classes
- Indentation: [N] spaces
- Docstrings: [Google/NumPy/Sphinx] style on all public functions/classes
- Error handling: exact exception types
- Import convention: "from module import Name"
- Language-specific conventions
## 5. DEPENDENCY MAP
- Which modules import from which
- Execution order constraints (if any)
- Shared state / global configuration
## 6. SECTION BOUNDARIES
For EACH section assigned to an agent:
- What it must produce (exact deliverables)
- What it imports from other sections (exact names)
- What it exports for other sections (exact names)
- How it connects to adjacent sections (transitions, API calls, imports)
```
**Contract Quality Checklist:**
- [ ] Every exported name is spelled out exactly
- [ ] Every cross-module import has a matching export
- [ ] Style guide is specific (not "be consistent" but "use snake_case")
- [ ] Shared types have exact field names and types
- [ ] No ambiguity - an agent reading only the contract can produce correct code
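The checklist's import/export rule can be enforced mechanically. A minimal sketch, assuming a contract parsed into a simple in-memory form (the `Module` structure and field names here are illustrative, not part of the skill):

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    """One entry in the contract's module manifest (illustrative structure)."""
    filename: str
    exports: set[str] = field(default_factory=set)
    imports: dict[str, set[str]] = field(default_factory=dict)  # source file -> names

def check_contract(modules: list[Module]) -> list[str]:
    """Return violations of 'every cross-module import has a matching export'."""
    exports_by_file = {m.filename: m.exports for m in modules}
    errors = []
    for m in modules:
        for source, names in m.imports.items():
            missing = names - exports_by_file.get(source, set())
            for name in sorted(missing):
                errors.append(
                    f"{m.filename} imports {name!r} from {source}, but it is not exported"
                )
    return errors

auth = Module("auth.py", exports={"login", "AuthError"})
api = Module("api.py", exports={"create_app"}, imports={"auth.py": {"login", "logout"}})
print(check_contract([auth, api]))
# ["api.py imports 'logout' from auth.py, but it is not exported"]
```

Running this before spawning any workers catches mismatches while they are still cheap to fix.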
### Phase 2: PARALLEL EXECUTION (Worker Agents)
Spawn worker agents using the Task tool with `team_name` parameter. Each worker's prompt MUST include:
```
[FULL CONTRACT TEXT - every worker gets the COMPLETE contract]
---
YOUR ASSIGNMENT: [Specific section/module]
You are Agent [N] of [Total]. You are producing [section name].
CRITICAL INSTRUCTIONS:
1. Follow the contract EXACTLY. Do not deviate from specified names, types, or conventions.
2. Your section will be merged with outputs from other agents working in parallel.
3. Use ONLY the exported names specified in the contract when referencing other modules.
4. Do NOT rename, reorganize, or "improve" the interface - follow the contract.
5. Write your output to: [exact file path]
```
**Spawn all workers in parallel** using a single message with multiple Task tool calls.
### Phase 3: AUTOMATED VALIDATION (Team Lead or Script)
After all workers complete, the team lead validates the merged output:
```python
# Validation checks (in priority order):
# 1. Syntax validity   - can the output be parsed? (ast.parse, eslint --no-fix, etc.)
# 2. Import resolution - do all imports match actual exports?
# 3. Name consistency  - is the naming convention uniform throughout?
# 4. Completeness      - are all contracted exports present?
# 5. Style consistency - indent, docstrings, error handling patterns
# 6. Cross-references  - do function calls use correct signatures?
```
Save validation results. If all pass -> done. If issues found -> Phase 4.
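For Python output, the first two checks can be scripted with the standard library alone. A minimal sketch, assuming worker outputs are collected into a filename-to-source mapping (the example files are illustrative):

```python
# Sketch of checks 1 (syntax validity) and 2 (import resolution) for Python output.
import ast

def validate_sources(sources: dict[str, str]) -> list[str]:
    """sources maps filename -> file contents for every worker's output."""
    errors = []
    trees = {}
    for filename, code in sources.items():
        try:
            trees[filename] = ast.parse(code)
        except SyntaxError as exc:
            errors.append(f"{filename}: syntax error at line {exc.lineno}")
    # Treat top-level function/class definitions as the file's exports
    exports = {
        filename: {node.name for node in tree.body
                   if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))}
        for filename, tree in trees.items()
    }
    # Resolve "from module import Name" against sibling worker outputs
    for filename, tree in trees.items():
        for node in ast.walk(tree):
            if isinstance(node, ast.ImportFrom) and node.module:
                target = f"{node.module}.py"
                if target in exports:
                    for alias in node.names:
                        if alias.name not in exports[target]:
                            errors.append(f"{filename}: imports {alias.name!r} "
                                          f"from {node.module}, not exported")
    return errors

outputs = {
    "auth.py": "def login(user, password):\n    return True\n",
    "api.py": "from auth import login, logout\n",
}
print(validate_sources(outputs))
# ["api.py: imports 'logout' from auth, not exported"]
```

Checks 3-6 need the contract itself as input; the same structure extends naturally once the contract's names and signatures are parsed.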
### Phase 4: TARGETED FIX (Fixer Agent, only if needed)
Spawn ONE fixer agent that receives:
- The merged output
- The specific validation errors (exact line numbers, exact issues)
- The original contract
**The fixer ONLY fixes the specific issues found. It does NOT regenerate or restructure.**
## Implementation with Native Tools
```python
# Step 1: Create team
TeamCreate(team_name="my-project", description="Building X with contract-first protocol")

# Step 2: Team lead generates the contract
# Save contract to a shared file: ~/.claude/teams/{team-name}/contract.md

# Step 3: Spawn workers with full contract in their prompts
Task(
    name="worker-auth",
    team_name="my-project",
    subagent_type="general-purpose",
    prompt=f"""
{FULL_CONTRACT_TEXT}

YOUR ASSIGNMENT: auth module
Write to: /workspace/project/auth.py
Follow the contract exactly.
""",
)
# ... spawn all workers in parallel

# Step 4: After workers complete, validate
# Team lead reads all output files and runs validation checks

# Step 5: If issues found, spawn fixer
Task(
    name="fixer",
    team_name="my-project",
    subagent_type="general-purpose",
    prompt=f"""
Fix these specific issues in the merged output:
{VALIDATION_ERRORS}

Original contract: {CONTRACT}

Do NOT restructure. Only fix the specific issues listed.
""",
)

# Step 6: Cleanup
SendMessage(type="shutdown_request", recipient="worker-auth")
# ... shutdown all workers
TeamDelete()
```
## When to Use This Protocol
**ALWAYS use for:**
- Any task requiring 3+ agents
- Tasks producing code that must interoperate
- Large document generation (multiple sections that reference each other)
- Any task where agents produce outputs that will be merged
**SKIP the contract for:**
- Completely independent tasks (no cross-references)
- Single-agent tasks
- Research/exploration tasks (no integration needed)
## Key Principles
1. **The contract IS the coordination** - review phases add <5% value if contract is good
2. **Every agent gets the FULL contract** - not just their section's part
3. **Specific beats general** - "use snake_case" beats "be consistent"
4. **Parallel beats sequential** - contract enables parallel work; pipelines sacrifice speed for marginal gain
5. **Targeted fixes beat regeneration** - fix specific issues, don't redo entire sections
6. **10+ agents need validation** - probability of ANY error increases with agent count