multi-agent-orchestration

Orchestrate tasks across multiple AI providers (Claude, OpenAI, Gemini, Cursor, OpenCode, Ollama). Use when delegating tasks to specialized providers, routing based on capabilities, or implementing fallback strategies.

25 stars

Best use case

multi-agent-orchestration is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using multi-agent-orchestration should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/multi-agent-orchestration/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/aiskillstore/marketplace/consiliency/multi-agent-orchestration/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/multi-agent-orchestration/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How multi-agent-orchestration Compares

| Feature | multi-agent-orchestration | Standard Approach |
|---------|---------------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Orchestrate tasks across multiple AI providers (Claude, OpenAI, Gemini, Cursor, OpenCode, Ollama). Use when delegating tasks to specialized providers, routing based on capabilities, or implementing fallback strategies.

Where can I find the source code?

The source code lives in the ComeOnOliver/skillshub repository on GitHub; the installation command above fetches the raw SKILL.md directly from it.

SKILL.md Source

# Multi-Agent Orchestration Skill

Route and delegate tasks to the most appropriate AI provider based on task characteristics and provider capabilities.

## Variables

| Variable | Default | Description |
|----------|---------|-------------|
| ENABLED_CLAUDE | true | Enable Claude Code as provider |
| ENABLED_OPENAI | true | Enable OpenAI/Codex as provider |
| ENABLED_GEMINI | true | Enable Gemini as provider |
| ENABLED_CURSOR | true | Enable Cursor as provider |
| ENABLED_OPENCODE | true | Enable OpenCode as provider |
| ENABLED_OLLAMA | true | Enable local Ollama as provider |
| DEFAULT_PROVIDER | claude | Fallback when routing is uncertain |
| CHECK_COST_STATUS | true | Check usage before delegating |
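One way to honor these defaults while allowing per-run overrides is to read them from the environment. A minimal sketch (the variable names and defaults come from the table above; the helper functions are hypothetical):

```python
import os

# Defaults mirror the Variables table; environment variables override them.
DEFAULTS = {
    "ENABLED_CLAUDE": "true",
    "ENABLED_OPENAI": "true",
    "ENABLED_GEMINI": "true",
    "ENABLED_CURSOR": "true",
    "ENABLED_OPENCODE": "true",
    "ENABLED_OLLAMA": "true",
    "DEFAULT_PROVIDER": "claude",
    "CHECK_COST_STATUS": "true",
}

def get_setting(name: str) -> str:
    """Read a setting, falling back to the documented default."""
    return os.environ.get(name, DEFAULTS[name])

def enabled_providers() -> list[str]:
    """List providers whose ENABLED_* flag is truthy."""
    return [
        key.removeprefix("ENABLED_").lower()
        for key in DEFAULTS
        if key.startswith("ENABLED_") and get_setting(key) == "true"
    ]
```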

## Instructions

**MANDATORY** - Follow the Workflow steps below in order. Do not skip steps.

- Before delegating, understand the task characteristics
- Use the model-discovery skill for current model names
- Check cost/usage status before high-volume delegation

## Quick Decision Tree

```
What type of task is this?
│
├─ Needs conversation history? ─────────► Keep in Claude (no delegation)
│
├─ Needs sandboxed execution? ──────────► OpenAI/Codex
│
├─ Large context (>100k tokens)? ───────► Gemini
│
├─ Multimodal (images/video)? ──────────► Gemini
│
├─ Needs web search? ───────────────────► Gemini
│
├─ Quick IDE edit? ─────────────────────► Cursor
│
├─ Privacy required / offline? ─────────► Ollama
│
├─ Provider-agnostic fallback? ─────────► OpenCode
│
└─ General reasoning / coding? ─────────► Claude (default)
```
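The tree reads top-down, with the first matching branch winning. A minimal sketch of that ordering (the task-field names are illustrative, not part of the skill's interface):

```python
def route(task: dict) -> str:
    """Mirror the decision tree above: first matching rule wins."""
    if task.get("needs_history"):
        return "claude"        # keep in the orchestrator, no delegation
    if task.get("needs_sandbox"):
        return "openai"
    if task.get("context_tokens", 0) > 100_000:
        return "gemini"
    if task.get("multimodal") or task.get("needs_web_search"):
        return "gemini"
    if task.get("quick_ide_edit"):
        return "cursor"
    if task.get("privacy") or task.get("offline"):
        return "ollama"
    if task.get("provider_agnostic_fallback"):
        return "opencode"
    return "claude"            # general reasoning / coding default
```

Because the checks are ordered, a task that both needs history and needs a sandbox stays in Claude: history wins.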

## Red Flags - STOP and Reconsider

If you're about to:
- Delegate without checking provider availability
- Use hardcoded model names (use model-discovery skill instead)
- Send sensitive data to a provider without user consent
- Delegate a task that requires your conversation history
- Skip the routing decision and guess which provider

**STOP** -> Read the appropriate cookbook file -> Check provider status -> Then proceed

## Workflow

1. [ ] Analyze the task: What capabilities are required?
2. [ ] **CHECKPOINT**: Consult `reference/provider-matrix.md` for routing decision
3. [ ] Check provider availability: Run provider-check and cost-status if CHECK_COST_STATUS is true
4. [ ] Read the appropriate cookbook file for the selected provider
5. [ ] **CHECKPOINT**: Confirm API key / auth is configured
6. [ ] Execute delegation with proper context
7. [ ] Parse and summarize results for the user
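Steps 3 and 5 amount to preflight checks that must exit cleanly before any delegation happens. A sketch wiring them to the monitoring scripts listed under Tool Discovery (the `run_checks` helper is hypothetical):

```python
import subprocess

def run_checks(commands: list[list[str]]) -> bool:
    """Return True only if every preflight command exits 0."""
    return all(subprocess.run(cmd).returncode == 0 for cmd in commands)

# Script paths come from the Tool Discovery section below.
PREFLIGHT = [
    ["python3", ".claude/ai-dev-kit/dev-tools/orchestration/monitoring/provider-check.py"],
    ["bash", ".claude/ai-dev-kit/dev-tools/orchestration/monitoring/cost-status.sh"],
]
```

A failing check should abort the delegation rather than guess; that is the "STOP and reconsider" rule in executable form.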

## Cookbook

### Claude Code (Orchestrator)
- IF: Task requires complex reasoning, multi-file analysis, or conversation history
- THEN: Keep task in Claude Code (you are the orchestrator)
- WHY: Best for architecture decisions, complex refactoring

### OpenAI / Codex
- IF: Task needs sandboxed execution OR security-sensitive operations
- THEN: Read and execute `cookbook/openai-codex.md`
- REQUIRES: `OPENAI_API_KEY` or Codex subscription

### Google Gemini
- IF: Task involves large context (>100k tokens), multimodal (images/video), OR web search
- THEN: Read and execute `cookbook/gemini-cli.md`
- REQUIRES: `GEMINI_API_KEY` or Gemini subscription

### Cursor
- IF: Task is quick IDE edits, simple codegen, or rename/refactor
- THEN: Read and execute `cookbook/cursor-agent.md`
- REQUIRES: Cursor installed and configured

### OpenCode
- IF: Need provider-agnostic execution or a fallback CLI
- THEN: Read and execute `cookbook/opencode-cli.md`
- REQUIRES: OpenCode CLI installed and configured

### Ollama (Local)
- IF: Task needs privacy, offline operation, or cost-free inference
- THEN: Read and execute `cookbook/ollama-local.md`
- REQUIRES: Ollama running with models pulled

## Model Names

**Do not hardcode model version numbers** - they become stale quickly.

For current model names, use the `model-discovery` skill:
```bash
python .claude/ai-dev-kit/skills/model-discovery/scripts/fetch_models.py
```

Or read: `.claude/ai-dev-kit/skills/model-discovery/SKILL.md`

## Quick Reference

| Task Type | Primary | Fallback |
|-----------|---------|----------|
| Complex reasoning | Claude | OpenAI |
| Sandboxed execution | OpenAI | Cursor |
| Large context (>100k) | Gemini | Claude |
| Multimodal | Gemini | Claude |
| Quick codegen | Cursor | Claude |
| Web search | Gemini | (web tools) |
| Privacy/offline | Ollama | Claude |

See `reference/provider-matrix.md` for detailed routing guidance.
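The primary/fallback pairs above reduce to a small lookup. A sketch assuming you already know which providers are currently available (the `pick_provider` helper and task-type keys are hypothetical names, not part of the skill):

```python
# Primary/fallback pairs from the Quick Reference table.
ROUTES = {
    "complex_reasoning": ("claude", "openai"),
    "sandboxed_execution": ("openai", "cursor"),
    "large_context": ("gemini", "claude"),
    "multimodal": ("gemini", "claude"),
    "quick_codegen": ("cursor", "claude"),
    "privacy_offline": ("ollama", "claude"),
}

def pick_provider(task_type: str, available: set[str]) -> str:
    """Try the primary, then the fallback, then the claude default."""
    primary, fallback = ROUTES.get(task_type, ("claude", "claude"))
    for provider in (primary, fallback, "claude"):
        if provider in available:
            return provider
    raise RuntimeError("no enabled provider can handle this task")
```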

## Tool Discovery

Orchestration tools are available in `.claude/ai-dev-kit/dev-tools/orchestration/`:

```bash
# Check provider status and usage
.claude/ai-dev-kit/dev-tools/orchestration/monitoring/cost-status.sh

# Check CLI availability (optional apply)
.claude/ai-dev-kit/dev-tools/orchestration/monitoring/provider-check.py

# Intelligent task routing
.claude/ai-dev-kit/dev-tools/orchestration/routing/route-task.py "your task"

# Direct provider execution
.claude/ai-dev-kit/dev-tools/orchestration/providers/claude-code/spawn.sh "task"
.claude/ai-dev-kit/dev-tools/orchestration/providers/codex/execute.sh "task"
.claude/ai-dev-kit/dev-tools/orchestration/providers/gemini/query.sh "task"
.claude/ai-dev-kit/dev-tools/orchestration/providers/cursor/agent.sh "task"
.claude/ai-dev-kit/dev-tools/orchestration/providers/opencode/execute.sh "task"
.claude/ai-dev-kit/dev-tools/orchestration/providers/ollama/query.sh "task"
```

## Output

Delegation results should be:
1. Parsed from provider's response format
2. Summarized for the user
3. Integrated back into the conversation context

```markdown
## Delegation Result

**Provider**: [provider name]
**Task**: [brief description]
**Status**: Success / Partial / Failed

### Summary
[Key findings or outputs]

### Details
[Full response if relevant]
```
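A result in that shape could be assembled with a small helper, sketched here only to make the template concrete (the function is hypothetical):

```python
def format_result(provider: str, task: str, status: str,
                  summary: str, details: str = "") -> str:
    """Render the delegation-result template shown above."""
    lines = [
        "## Delegation Result",
        "",
        f"**Provider**: {provider}",
        f"**Task**: {task}",
        f"**Status**: {status}",
        "",
        "### Summary",
        summary,
    ]
    if details:
        lines += ["", "### Details", details]
    return "\n".join(lines)
```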

Related Skills

orchestrating-multi-agent-systems

25
from ComeOnOliver/skillshub

Execute orchestrate multi-agent systems with handoffs, routing, and workflows across AI providers. Use when building complex AI systems requiring agent collaboration, task delegation, or workflow coordination. Trigger with phrases like "create multi-agent system", "orchestrate agents", or "coordinate agent workflows".

exa-multi-env-setup

Configure Exa across development, staging, and production environments. Use when setting up multi-environment search pipelines, managing API key isolation, or configuring per-environment search limits and caching. Trigger with phrases like "exa environments", "exa staging", "exa dev prod", "exa environment setup", "exa multi-env".

evernote-multi-env-setup

Configure multi-environment setup for Evernote integrations. Use when setting up dev, staging, and production environments, or managing environment-specific configurations. Trigger with phrases like "evernote environments", "evernote staging", "evernote dev setup", "multiple environments evernote".

documenso-multi-env-setup

Configure Documenso across multiple environments (dev, staging, production). Use when setting up environment-specific configurations, managing API keys, or implementing environment promotion workflows. Trigger with phrases like "documenso environments", "documenso staging", "documenso dev setup", "multi-environment documenso".

deepgram-multi-env-setup

Configure Deepgram multi-environment setup for dev, staging, and production. Use when setting up environment-specific configurations, managing multiple Deepgram projects, or implementing environment isolation. Trigger: "deepgram environments", "deepgram staging", "deepgram dev prod", "multi-environment deepgram", "deepgram config management".

databricks-multi-env-setup

Configure Databricks across development, staging, and production environments. Use when setting up multi-environment deployments, configuring per-environment secrets, or implementing environment-specific Databricks configurations. Trigger with phrases like "databricks environments", "databricks staging", "databricks dev prod", "databricks environment setup", "databricks config by env".

customerio-multi-env-setup

Configure Customer.io multi-environment setup with workspace isolation. Use when setting up dev/staging/prod workspaces, environment-aware clients, or Kubernetes config overlays. Trigger: "customer.io environments", "customer.io staging", "customer.io dev prod", "customer.io workspace isolation".

cursor-multi-repo

Work with multiple repositories in Cursor: multi-root workspaces, monorepo patterns, selective indexing, and cross-project context. Triggers on "cursor multi repo", "cursor multiple projects", "cursor monorepo", "cursor workspace", "multi-root workspace".

cohere-multi-env-setup

Configure Cohere across development, staging, and production environments. Use when setting up multi-environment deployments, configuring per-environment API keys, model selection, and rate limit strategies. Trigger with phrases like "cohere environments", "cohere staging", "cohere dev prod", "cohere environment setup", "cohere config by env".

coderabbit-multi-env-setup

Configure CodeRabbit review behavior per branch and environment using path instructions and base branches. Use when setting different review profiles per branch, configuring stricter reviews for release branches, or customizing CodeRabbit behavior across dev/staging/prod workflows. Trigger with phrases like "coderabbit environments", "coderabbit staging", "coderabbit per-branch config", "coderabbit release review", "coderabbit environment setup".

clickup-multi-env-setup

Configure ClickUp API access across dev, staging, and production environments with per-environment tokens and workspace isolation. Trigger: "clickup environments", "clickup staging", "clickup dev prod", "clickup environment setup", "clickup config by env", "clickup multi-env".

clickhouse-multi-env-setup

Configure ClickHouse across dev, staging, and production with environment-specific settings, secrets management, and infrastructure-as-code patterns. Use when setting up per-environment ClickHouse instances, managing connection configs, or deploying to multiple environments. Trigger: "clickhouse environments", "clickhouse dev staging prod", "clickhouse multi-env", "clickhouse environment config", "clickhouse staging setup".