parallel-agents
Dispatch multiple agents to work on independent problems concurrently. Use when facing 3+ independent failures or tasks.
Best use case
parallel-agents is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It is especially useful when you face three or more independent failures or tasks that separate agents can investigate concurrently without interfering with each other.
Users should expect a more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.
Practical example
Example input
Use the "parallel-agents" skill to help with this workflow task. Context: Dispatch multiple agents to work on independent problems concurrently. Use when facing 3+ independent failures or tasks.
Example output
A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.
When to use this skill
- Use this skill when you want a reusable workflow rather than writing the same prompt again and again.
When not to use this skill
- Do not use this when you only need a one-off answer and do not need a reusable workflow.
- Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/parallel-agents/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
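The manual installation steps can be sketched as a short shell sequence. This is a minimal sketch: the download URL is not given on this page, so the copy step below uses an illustrative local path rather than a real source location.

```shell
# Create the skill directory inside your project
SKILL_DIR=".claude/skills/parallel-agents"
mkdir -p "$SKILL_DIR"

# Copy the SKILL.md you downloaded from GitHub into place.
# The source path below is illustrative; use wherever you saved the file:
# cp ~/Downloads/SKILL.md "$SKILL_DIR/SKILL.md"

# Confirm the directory exists, then restart your AI agent so it
# auto-discovers the skill
ls -d "$SKILL_DIR"
```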
How parallel-agents Compares
| Feature / Agent | parallel-agents | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Dispatch multiple agents to work on independent problems concurrently. Use when facing 3+ independent failures or tasks.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Dispatching Parallel Agents

Assign separate agents to independent problem domains simultaneously for faster resolution.

## When to Use

- 3+ test failures across different files/subsystems
- Multiple independent tasks that don't share state
- Investigations that won't interfere with each other
- Failures from unrelated root causes

## When NOT to Use

- Failures are interconnected
- Tasks share state or create conflicts
- Agents would modify the same files
- You lack context to properly scope tasks

## Implementation Steps

### 1. Group by Domain

Organize failures/tasks into independent categories:

```markdown
Group A: Authentication tests (3 failures)
Group B: API endpoint tests (2 failures)
Group C: UI component tests (4 failures)
```

### 2. Define Focused Tasks

Each agent receives:

| Field | Description |
| ----------- | -------------------------------- |
| Scope | Specific files/tests to focus on |
| Goal | Clear success criteria |
| Constraints | What NOT to change |
| Output | Expected deliverable |

### 3. Dispatch Concurrently

**IMPORTANT**: Launch all tasks in a **single message** (no `run_in_background`). Multiple Task calls in the same message automatically run in parallel, and Claude waits for all to complete.

```
# All three tasks run in parallel automatically when in the same message
Task(test-engineer, prompt="Fix auth test failures in src/auth/*.test.ts")
Task(test-engineer, prompt="Fix API test failures in src/api/*.test.ts")
Task(frontend-developer, prompt="Fix UI test failures in src/components/*.test.tsx")

# Claude waits for all to complete, then continues
```

**Avoid `run_in_background: true`** unless you need to do other work while waiting. Task IDs must be captured and used within the same response.

### 4. Integrate Results

1. Review all agent outputs (available after parallel completion)
2. Verify no conflicts between changes
3. Run full test suite
4. Merge changes

## Effective Agent Prompts

**Good prompt:**

```
Fix the 3 failing tests in src/auth/login.test.ts:
- "should reject invalid email format"
- "should require password min length"
- "should handle network errors"

Error messages attached. Identify root causes - don't just increase timeouts.

Constraints: Don't modify src/api/* files.
Output: Summary of fixes with test results.
```

**Bad prompt:**

```
Fix all the tests
```

## Prompt Template

```markdown
## Task: [Specific description]

**Scope:** [Files/tests to focus on]

**Failures:**
- [Test name]: [Error message]
- [Test name]: [Error message]

**Goal:** [What success looks like]

**Constraints:**
- Don't modify [files]
- Preserve [behavior]

**Output:**
- Summary of root causes found
- Changes made
- Verification results
```

## Common Pitfalls

| Mistake | Problem | Solution |
| --------------- | ------------------------- | ---------------------- |
| Vague scope | Agent changes wrong files | Specify exact paths |
| Missing context | Agent can't diagnose | Include error messages |
| No constraints | Conflicting changes | Define boundaries |
| Unclear output | Can't verify success | Specify deliverables |

## Benefits

- Reduces investigation time through parallelization
- Each agent maintains narrow focus
- Minimizes cross-agent interference
- Solves multiple problems concurrently

## Background Execution

For long-running tasks where you need to continue working, use `run_in_background: true`.
### Pattern: Background + Foreground

```
# Long-running audit in background
audit_task = Task(security-auditor, prompt="Full security audit", run_in_background: true)

# Continue with implementation work
Task(frontend-developer, prompt="Build login form")

# Later, get audit results
TaskOutput(audit_task.id, block: true)
```

### Pattern: Multiple Background Tasks

```
# Launch multiple background tasks
task1 = Task(test-engineer, prompt="...", run_in_background: true)
task2 = Task(code-reviewer, prompt="...", run_in_background: true)

# Do other work...

# Collect all results
result1 = TaskOutput(task1.id, block: true)
result2 = TaskOutput(task2.id, block: true)
```

### When to Use Background vs Foreground

| Scenario | Mode | Why |
| -------------------------- | ----------------------- | -------------------------- |
| Quick tasks (< 1 min) | Foreground | Simpler, immediate results |
| Long audit/analysis | Background | Continue working |
| Multiple independent tasks | Foreground (parallel) | Auto-waits for all |
| Security + Implementation | Background + Foreground | Overlap work |

### Important Notes

- Task IDs are only valid within the same response
- Always use `block: true` when retrieving results with TaskOutput
- Prefer foreground parallel (single message, multiple Tasks) when possible
- Background tasks should be collected before the response ends
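The "Group by Domain" and "Prompt Template" steps above can be sketched in code. This is a minimal illustration, not part of the skill itself: the function names, the two-levels-deep path heuristic, and the sample failure list are all assumptions chosen for the example.

```python
from collections import defaultdict


def group_failures_by_domain(failures):
    """Group failing tests by top-level source directory.

    `failures` is a list of (test_path, error_message) tuples. Tests under
    the same subsystem (e.g. src/auth vs. src/api) land in the same group,
    mirroring the "Group by Domain" step.
    """
    groups = defaultdict(list)
    for path, error in failures:
        # Heuristic: use the first two path segments (e.g. "src/auth")
        # as the domain key. Adjust to your project layout.
        domain = "/".join(path.split("/")[:2])
        groups[domain].append((path, error))
    return dict(groups)


def build_agent_prompt(domain, failures):
    """Render one focused prompt per domain, following the template above."""
    lines = [f"## Task: Fix {len(failures)} failing tests in {domain}", "", "**Failures:**"]
    lines += [f"- {path}: {error}" for path, error in failures]
    lines += [
        "",
        f"**Constraints:** Don't modify files outside {domain}/",
        "**Output:** Summary of root causes, changes made, verification results",
    ]
    return "\n".join(lines)


# Sample failures (illustrative)
failures = [
    ("src/auth/login.test.ts", "should reject invalid email format"),
    ("src/auth/login.test.ts", "should require password min length"),
    ("src/api/users.test.ts", "should return 404 for missing user"),
]
groups = group_failures_by_domain(failures)
prompts = {domain: build_agent_prompt(domain, fails) for domain, fails in groups.items()}
```

Each rendered prompt would then become the `prompt` argument of one Task call, all launched in a single message so they run in parallel.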
Related Skills
voice-agents
Voice agents represent the frontier of AI interaction - humans speaking naturally with AI systems. The challenge isn't just speech recognition and synthesis, it's achieving natural conversation flow with sub-800ms latency while handling interruptions, background noise, and emotional nuance. This skill covers two architectures: speech-to-speech (OpenAI Realtime API, lowest latency, most natural) and pipeline (STT→LLM→TTS, more control, easier to debug). Key insight: latency is the constraint.
m365-agents-ts
Microsoft 365 Agents SDK for TypeScript/Node.js. Build multichannel agents for Teams/M365/Copilot Studio with AgentApplication routing, Express hosting, streaming responses, and Copilot Studio client integration. Triggers: "Microsoft 365 Agents SDK", "@microsoft/agents-hosting", "AgentApplication", "startServer", "streamingResponse", "Copilot Studio client", "@microsoft/agents-copilotstudio-client".
m365-agents-py
Microsoft 365 Agents SDK for Python. Build multichannel agents for Teams/M365/Copilot Studio with aiohttp hosting, AgentApplication routing, streaming responses, and MSAL-based auth. Triggers: "Microsoft 365 Agents SDK", "microsoft_agents", "AgentApplication", "start_agent_process", "TurnContext", "Copilot Studio client", "CloudAdapter".
m365-agents-dotnet
Microsoft 365 Agents SDK for .NET. Build multichannel agents for Teams/M365/Copilot Studio with ASP.NET Core hosting, AgentApplication routing, and MSAL-based auth. Triggers: "Microsoft 365 Agents SDK", "Microsoft.Agents", "AddAgentApplicationOptions", "AgentApplication", "AddAgentAspNetAuthentication", "Copilot Studio client", "IAgentHttpAdapter".
hosted-agents-v2-py
Build hosted agents using Azure AI Projects SDK with ImageBasedHostedAgentDefinition. Use when creating container-based agents that run custom code in Azure AI Foundry. Triggers: "ImageBasedHostedAgentDefinition", "hosted agent", "container agent", "create_version", "ProtocolVersionRecord", "AgentProtocol.RESPONSES".
computer-use-agents
Build AI agents that interact with computers like humans do - viewing screens, moving cursors, clicking buttons, and typing text. Covers Anthropic's Computer Use, OpenAI's Operator/CUA, and open-source alternatives. Critical focus on sandboxing, security, and handling the unique challenges of vision-based control. Use when: computer use, desktop automation agent, screen control AI, vision-based agent, GUI automation.
azure-ai-agents-persistent-java
Azure AI Agents Persistent SDK for Java. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools. Triggers: "PersistentAgentsClient", "persistent agents java", "agent threads java", "agent runs java", "streaming agents java".
azure-ai-agents-persistent-dotnet
Azure AI Agents Persistent SDK for .NET. Low-level SDK for creating and managing AI agents with threads, messages, runs, and tools. Use for agent CRUD, conversation threads, streaming responses, function calling, file search, and code interpreter. Triggers: "PersistentAgentsClient", "persistent agents", "agent threads", "agent runs", "streaming agents", "function calling agents .NET".
autonomous-agents
Autonomous agents are AI systems that can independently decompose goals, plan actions, execute tools, and self-correct without constant human guidance. The challenge isn't making them capable - it's making them reliable. Every extra decision multiplies failure probability. This skill covers agent loops (ReAct, Plan-Execute), goal decomposition, reflection patterns, and production reliability. Key insight: compounding error rates kill autonomous agents. A 95% success rate per step drops to 60% by step 10.
ai-agents-architect
Expert in designing and building autonomous AI agents. Masters tool use, memory systems, planning strategies, and multi-agent orchestration. Use when: build agent, AI agent, autonomous agent, tool use, function calling.
agents-v2-py
Build container-based Foundry Agents using Azure AI Projects SDK with ImageBasedHostedAgentDefinition. Use when creating hosted agents that run custom code in Azure AI Foundry with your own container images. Triggers: "ImageBasedHostedAgentDefinition", "hosted agent", "container agent", "Foundry Agent", "create_version", "ProtocolVersionRecord", "AgentProtocol.RESPONSES", "custom agent image".
testing-skills-with-subagents
Use when creating or editing skills, before deployment, to verify they work under pressure and resist rationalization - applies RED-GREEN-REFACTOR cycle to process documentation by running baseline without skill, writing to address failures, iterating to close loopholes