parallel-agents
Dispatch multiple agents to work on independent problems concurrently. Use when facing 3+ independent failures or tasks.
Best use case
parallel-agents is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using parallel-agents can expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in .claude/skills/parallel-agents/SKILL.md inside your project
- Restart your AI agent — it will auto-discover the skill
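The placement step can be scripted. Below is a minimal Python sketch, assuming SKILL.md has already been downloaded into the working directory; the stand-in file content exists only to make the example self-contained.

```python
import shutil
from pathlib import Path

# Stand-in for the real download step: assume SKILL.md is already in cwd.
downloaded = Path("SKILL.md")
downloaded.write_text("# Dispatching Parallel Agents\n")  # placeholder content

# Place the file where the agent auto-discovers skills.
target = Path(".claude/skills/parallel-agents/SKILL.md")
target.parent.mkdir(parents=True, exist_ok=True)
shutil.copy(downloaded, target)
```

After restarting the agent, the skill is picked up from that path automatically.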
How parallel-agents Compares
| Feature / Agent | parallel-agents | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Dispatch multiple agents to work on independent problems concurrently. Use when facing 3+ independent failures or tasks.
Where can I find the source code?
The source code is available on GitHub via the repository link on this skill's listing page.
SKILL.md Source
# Dispatching Parallel Agents

Assign separate agents to independent problem domains simultaneously for faster resolution.

## When to Use

- 3+ test failures across different files/subsystems
- Multiple independent tasks that don't share state
- Investigations that won't interfere with each other
- Failures from unrelated root causes

## When NOT to Use

- Failures are interconnected
- Tasks share state or create conflicts
- Agents would modify the same files
- You lack context to properly scope tasks

## Implementation Steps

### 1. Group by Domain

Organize failures/tasks into independent categories:

```markdown
Group A: Authentication tests (3 failures)
Group B: API endpoint tests (2 failures)
Group C: UI component tests (4 failures)
```

### 2. Define Focused Tasks

Each agent receives:

| Field | Description |
| ----------- | -------------------------------- |
| Scope | Specific files/tests to focus on |
| Goal | Clear success criteria |
| Constraints | What NOT to change |
| Output | Expected deliverable |

### 3. Dispatch Concurrently

**IMPORTANT**: Launch all tasks in a **single message** (no `run_in_background`). Multiple Task calls in the same message automatically run in parallel, and Claude waits for all to complete.

```
# All three tasks run in parallel automatically when in the same message
Task(test-engineer, prompt="Fix auth test failures in src/auth/*.test.ts")
Task(test-engineer, prompt="Fix API test failures in src/api/*.test.ts")
Task(frontend-developer, prompt="Fix UI test failures in src/components/*.test.tsx")

# Claude waits for all to complete, then continues
```

**Avoid `run_in_background: true`** unless you need to do other work while waiting. Task IDs must be captured and used within the same response.

### 4. Integrate Results

1. Review all agent outputs (available after parallel completion)
2. Verify no conflicts between changes
3. Run full test suite
4. Merge changes

## Effective Agent Prompts

**Good prompt:**

```
Fix the 3 failing tests in src/auth/login.test.ts:
- "should reject invalid email format"
- "should require password min length"
- "should handle network errors"

Error messages attached. Identify root causes - don't just increase timeouts.

Constraints: Don't modify src/api/* files.
Output: Summary of fixes with test results.
```

**Bad prompt:**

```
Fix all the tests
```

## Prompt Template

```markdown
## Task: [Specific description]

**Scope:** [Files/tests to focus on]

**Failures:**
- [Test name]: [Error message]
- [Test name]: [Error message]

**Goal:** [What success looks like]

**Constraints:**
- Don't modify [files]
- Preserve [behavior]

**Output:**
- Summary of root causes found
- Changes made
- Verification results
```

## Common Pitfalls

| Mistake | Problem | Solution |
| --------------- | ------------------------- | ---------------------- |
| Vague scope | Agent changes wrong files | Specify exact paths |
| Missing context | Agent can't diagnose | Include error messages |
| No constraints | Conflicting changes | Define boundaries |
| Unclear output | Can't verify success | Specify deliverables |

## Benefits

- Reduces investigation time through parallelization
- Each agent maintains narrow focus
- Minimizes cross-agent interference
- Solves multiple problems concurrently

## Background Execution

For long-running tasks where you need to continue working, use `run_in_background: true`.
### Pattern: Background + Foreground

```
# Long-running audit in background
audit_task = Task(security-auditor, prompt="Full security audit", run_in_background: true)

# Continue with implementation work
Task(frontend-developer, prompt="Build login form")

# Later, get audit results
TaskOutput(audit_task.id, block: true)
```

### Pattern: Multiple Background Tasks

```
# Launch multiple background tasks
task1 = Task(test-engineer, prompt="...", run_in_background: true)
task2 = Task(code-reviewer, prompt="...", run_in_background: true)

# Do other work...

# Collect all results
result1 = TaskOutput(task1.id, block: true)
result2 = TaskOutput(task2.id, block: true)
```

### When to Use Background vs Foreground

| Scenario | Mode | Why |
| -------------------------- | ------------------------- | ---------------------------- |
| Quick tasks (< 1 min) | Foreground | Simpler, immediate results |
| Long audit/analysis | Background | Continue working |
| Multiple independent tasks | Foreground (parallel) | Auto-waits for all |
| Security + Implementation | Background + Foreground | Overlap work |

### Important Notes

- Task IDs are only valid within the same response
- Always use `block: true` when retrieving results with TaskOutput
- Prefer foreground parallel (single message, multiple Tasks) when possible
- Background tasks should be collected before the response ends
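The `Task`/`TaskOutput` calls above are pseudocode for the agent runtime, not a real Python API. As a plain-Python analogy of the dispatch-then-collect flow (all names here are illustrative stand-ins), `concurrent.futures` captures the same shape: submit every independent task up front, then block until all results are in before integrating.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(agent: str, prompt: str) -> str:
    """Stand-in for a Task() dispatch; a real runtime would invoke the agent."""
    return f"{agent} finished: {prompt}"

# Independent, scoped tasks, one per problem domain (Steps 1-2).
tasks = [
    ("test-engineer", "Fix auth test failures in src/auth/*.test.ts"),
    ("test-engineer", "Fix API test failures in src/api/*.test.ts"),
    ("frontend-developer", "Fix UI test failures in src/components/*.test.tsx"),
]

# Dispatch everything at once and block until all tasks complete (Step 3),
# mirroring multiple Task calls issued in a single message.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    futures = [pool.submit(run_agent, agent, prompt) for agent, prompt in tasks]
    results = [f.result() for f in futures]  # analogous to TaskOutput(..., block: true)

# Step 4: integrate only after every output is available.
for result in results:
    print(result)
```

The key design point the analogy preserves: each future is collected before the scope ends, just as background Task IDs must be collected before the response ends.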
Related Skills
test-parallelizer
Test Parallelizer - Auto-activating skill for Test Automation. Triggers on: test parallelizer. Part of the Test Automation skill category.
contract-first-agents
Contract-First Map-Reduce coordination protocol for native TeamCreate multi-agent teams. Wraps TeamCreate, Task (teammates), and SendMessage with an upfront shared-contract phase that eliminates 75% of integration errors. Based on research across 400+ experiments showing a 52.5% quality improvement over naive coordination.
hosted-agents
This skill should be used when the user asks to "build background agent", "create hosted coding agent", "set up sandboxed execution", "implement multiplayer agent", or mentions background agents, sandboxed VMs, agent infrastructure, Modal sandboxes, self-spawning agents, or remote coding environments.
suggest-awesome-github-copilot-agents
Suggest relevant GitHub Copilot Custom Agents files from the awesome-copilot repository based on current repository context and chat history, avoiding duplicates with existing custom agents in this repository, and identifying outdated agents that need updates.
mcp-deploy-manage-agents
Skill converted from mcp-deploy-manage-agents.prompt.md
declarative-agents
Complete development kit for Microsoft 365 Copilot declarative agents with three comprehensive workflows (basic, advanced, validation), TypeSpec support, and Microsoft 365 Agents Toolkit integration
create-agentsmd
Prompt for generating an AGENTS.md file for a repository
agents-md
This skill should be used when the user asks to "create AGENTS.md", "update AGENTS.md", "maintain agent docs", "set up CLAUDE.md", or needs to keep agent instructions concise. Enforces research-backed best practices for minimal, high-signal agent documentation.