multi-agent-orchestration
Orchestrate tasks across multiple AI providers (Claude, OpenAI, Gemini, Cursor, OpenCode, Ollama). Use when delegating tasks to specialized providers, routing based on capabilities, or implementing fallback strategies.
Best use case
multi-agent-orchestration is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It is especially useful for teams that work across multiple AI providers and need tasks delegated to specialized providers, routed based on capabilities, or covered by fallback strategies.
Users can expect more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.
Practical example
Example input
Use the "multi-agent-orchestration" skill to help with this workflow task. Context: Orchestrate tasks across multiple AI providers (Claude, OpenAI, Gemini, Cursor, OpenCode, Ollama). Use when delegating tasks to specialized providers, routing based on capabilities, or implementing fallback strategies.
Example output
A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.
When to use this skill
- Use this skill when you want a reusable workflow rather than writing the same prompt again and again.
When not to use this skill
- Do not use this when you only need a one-off answer and do not need a reusable workflow.
- Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/multi-agent-orchestration/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
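The manual steps above can be sketched as a short shell session. The raw download URL is a placeholder (use the GitHub link referenced at the top of the page), so the `curl` line is left commented:

```shell
# Create the skill directory inside your project
mkdir -p .claude/skills/multi-agent-orchestration

# Download SKILL.md into place.
# SKILL_URL is a placeholder; substitute the raw GitHub URL for this skill.
# curl -fsSL "$SKILL_URL" -o .claude/skills/multi-agent-orchestration/SKILL.md

# Verify the directory exists where the agent will look for it
ls .claude/skills/multi-agent-orchestration/
```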
How multi-agent-orchestration Compares
| Feature / Agent | multi-agent-orchestration | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Orchestrate tasks across multiple AI providers (Claude, OpenAI, Gemini, Cursor, OpenCode, Ollama). Use when delegating tasks to specialized providers, routing based on capabilities, or implementing fallback strategies.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Multi-Agent Orchestration Skill

Route and delegate tasks to the most appropriate AI provider based on task characteristics and provider capabilities.

## Variables

| Variable | Default | Description |
|----------|---------|-------------|
| ENABLED_CLAUDE | true | Enable Claude Code as provider |
| ENABLED_OPENAI | true | Enable OpenAI/Codex as provider |
| ENABLED_GEMINI | true | Enable Gemini as provider |
| ENABLED_CURSOR | true | Enable Cursor as provider |
| ENABLED_OPENCODE | true | Enable OpenCode as provider |
| ENABLED_OLLAMA | true | Enable local Ollama as provider |
| DEFAULT_PROVIDER | claude | Fallback when routing is uncertain |
| CHECK_COST_STATUS | true | Check usage before delegating |

## Instructions

**MANDATORY** - Follow the Workflow steps below in order. Do not skip steps.

- Before delegating, understand the task characteristics
- Use the model-discovery skill for current model names
- Check cost/usage status before high-volume delegation

## Quick Decision Tree

```
What type of task is this?
│
├─ Needs conversation history? ─────────► Keep in Claude (no delegation)
│
├─ Needs sandboxed execution? ──────────► OpenAI/Codex
│
├─ Large context (>100k tokens)? ───────► Gemini
│
├─ Multimodal (images/video)? ──────────► Gemini
│
├─ Needs web search? ───────────────────► Gemini
│
├─ Quick IDE edit? ─────────────────────► Cursor
│
├─ Privacy required / offline? ─────────► Ollama
│
├─ Provider-agnostic fallback? ─────────► OpenCode
│
└─ General reasoning / coding? ─────────► Claude (default)
```

## Red Flags - STOP and Reconsider

If you're about to:

- Delegate without checking provider availability
- Use hardcoded model names (use model-discovery skill instead)
- Send sensitive data to a provider without user consent
- Delegate a task that requires your conversation history
- Skip the routing decision and guess which provider

**STOP** -> Read the appropriate cookbook file -> Check provider status -> Then proceed

## Workflow

1. [ ] Analyze the task: What capabilities are required?
2. [ ] **CHECKPOINT**: Consult `reference/provider-matrix.md` for routing decision
3. [ ] Check provider availability: Run provider-check and cost-status if CHECK_COST_STATUS is true
4. [ ] Read the appropriate cookbook file for the selected provider
5. [ ] **CHECKPOINT**: Confirm API key / auth is configured
6. [ ] Execute delegation with proper context
7. [ ] Parse and summarize results for the user

## Cookbook

### Claude Code (Orchestrator)
- IF: Task requires complex reasoning, multi-file analysis, or conversation history
- THEN: Keep task in Claude Code (you are the orchestrator)
- WHY: Best for architecture decisions, complex refactoring

### OpenAI / Codex
- IF: Task needs sandboxed execution OR security-sensitive operations
- THEN: Read and execute `cookbook/openai-codex.md`
- REQUIRES: `OPENAI_API_KEY` or Codex subscription

### Google Gemini
- IF: Task involves large context (>100k tokens), multimodal (images/video), OR web search
- THEN: Read and execute `cookbook/gemini-cli.md`
- REQUIRES: `GEMINI_API_KEY` or Gemini subscription

### Cursor
- IF: Task is quick IDE edits, simple codegen, or rename/refactor
- THEN: Read and execute `cookbook/cursor-agent.md`
- REQUIRES: Cursor installed and configured

### OpenCode
- IF: Need provider-agnostic execution or a fallback CLI
- THEN: Read and execute `cookbook/opencode-cli.md`
- REQUIRES: OpenCode CLI installed and configured

### Ollama (Local)
- IF: Task needs privacy, offline operation, or cost-free inference
- THEN: Read and execute `cookbook/ollama-local.md`
- REQUIRES: Ollama running with models pulled

## Model Names

**Do not hardcode model version numbers** - they become stale quickly.

For current model names, use the `model-discovery` skill:

```bash
python .claude/ai-dev-kit/skills/model-discovery/scripts/fetch_models.py
```

Or read: `.claude/ai-dev-kit/skills/model-discovery/SKILL.md`

## Quick Reference

| Task Type | Primary | Fallback |
|-----------|---------|----------|
| Complex reasoning | Claude | OpenAI |
| Sandboxed execution | OpenAI | Cursor |
| Large context (>100k) | Gemini | Claude |
| Multimodal | Gemini | Claude |
| Quick codegen | Cursor | Claude |
| Web search | Gemini | (web tools) |
| Privacy/offline | Ollama | Claude |

See `reference/provider-matrix.md` for detailed routing guidance.

## Tool Discovery

Orchestration tools are available in `.claude/ai-dev-kit/dev-tools/orchestration/`:

```bash
# Check provider status and usage
.claude/ai-dev-kit/dev-tools/orchestration/monitoring/cost-status.sh

# Check CLI availability (optional apply)
.claude/ai-dev-kit/dev-tools/orchestration/monitoring/provider-check.py

# Intelligent task routing
.claude/ai-dev-kit/dev-tools/orchestration/routing/route-task.py "your task"

# Direct provider execution
.claude/ai-dev-kit/dev-tools/orchestration/providers/claude-code/spawn.sh "task"
.claude/ai-dev-kit/dev-tools/orchestration/providers/codex/execute.sh "task"
.claude/ai-dev-kit/dev-tools/orchestration/providers/gemini/query.sh "task"
.claude/ai-dev-kit/dev-tools/orchestration/providers/cursor/agent.sh "task"
.claude/ai-dev-kit/dev-tools/orchestration/providers/opencode/execute.sh "task"
.claude/ai-dev-kit/dev-tools/orchestration/providers/ollama/query.sh "task"
```

## Output

Delegation results should be:

1. Parsed from provider's response format
2. Summarized for the user
3. Integrated back into the conversation context

```markdown
## Delegation Result

**Provider**: [provider name]
**Task**: [brief description]
**Status**: Success / Partial / Failed

### Summary
[Key findings or outputs]

### Details
[Full response if relevant]
```
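The decision tree in the SKILL.md source can be sketched as a small shell function. This is a minimal illustration only, not the bundled `routing/route-task.py`; the argument names are assumptions for the sketch, and the first matching branch wins, mirroring the tree:

```shell
#!/bin/sh
# Pick a provider from task characteristics (first matching branch wins).
# Arguments: needs_history needs_sandbox context_tokens quick_edit needs_privacy
route_task() {
  needs_history=$1; needs_sandbox=$2; context_tokens=$3
  quick_edit=$4; needs_privacy=$5
  if [ "$needs_history" = yes ]; then echo claude; return; fi  # keep in orchestrator
  if [ "$needs_sandbox" = yes ]; then echo openai; return; fi  # sandboxed execution
  if [ "$context_tokens" -gt 100000 ]; then echo gemini; return; fi  # large context
  if [ "$quick_edit" = yes ]; then echo cursor; return; fi     # quick IDE edit
  if [ "$needs_privacy" = yes ]; then echo ollama; return; fi  # privacy / offline
  echo claude  # DEFAULT_PROVIDER fallback
}

route_task no yes 0 no no       # prints: openai
route_task no no 250000 no no   # prints: gemini
```

A real router would also consult provider availability (`provider-check.py`) and cost status before committing to a branch.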
Related Skills
workflow-orchestration-patterns
Design durable workflows with Temporal for distributed systems. Covers workflow vs activity separation, saga patterns, state management, and determinism constraints. Use when building long-running processes, distributed transactions, or microservice orchestration.
saga-orchestration
Implement saga patterns for distributed transactions and cross-aggregate workflows. Use when coordinating multi-step business processes, handling compensating transactions, or managing long-running workflows.
performance-testing-review-multi-agent-review
Use when running a multi-agent review of performance testing results.
multi-platform-apps-multi-platform
Build and deploy the same feature consistently across web, mobile, and desktop platforms using API-first architecture and parallel implementation strategies.
multi-cloud-architecture
Design multi-cloud architectures using a decision framework to select and integrate services across AWS, Azure, and GCP. Use when building multi-cloud systems, avoiding vendor lock-in, or leveraging best-of-breed services from multiple providers.
multi-agent-brainstorming
Use this skill when a design or idea requires higher confidence, risk reduction, or formal review. This skill orchestrates a structured, sequential multi-agent design review where each agent has a strict, non-overlapping role. It prevents blind spots, false confidence, and premature convergence.
multiplayer
Multiplayer game development principles. Architecture, networking, synchronization.
full-stack-orchestration-full-stack-feature
Use when orchestrating full-stack feature development across frontend, backend, and infrastructure.
error-debugging-multi-agent-review
Use when running a multi-agent review to debug errors.
design-orchestration
Orchestrates design workflows by routing work through brainstorming, multi-agent review, and execution readiness in the correct order. Prevents premature implementation, skipped validation, and unreviewed high-risk designs.
agent-orchestration-multi-agent-optimize
Optimize multi-agent systems with coordinated profiling, workload distribution, and cost-aware orchestration. Use when improving agent performance, throughput, or reliability.
agent-orchestration-improve-agent
Systematic improvement of existing agents through performance analysis, prompt engineering, and continuous iteration.