ultrawork
Parallel execution engine for high-throughput task completion
Best use case
ultrawork is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Users should expect more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.
Practical example
Example input
Use the "ultrawork" skill to help with this workflow task. Context: Parallel execution engine for high-throughput task completion
Example output
A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.
When to use this skill
- Use this skill when you want a reusable workflow rather than writing the same prompt again and again.
When not to use this skill
- Do not use this when you only need a one-off answer and do not need a reusable workflow.
- Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/ultrawork/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How ultrawork Compares
| Feature / Agent | ultrawork | Standard Approach |
|---|---|---|
| Platform Support | Claude Code, Cursor, Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Single SKILL.md file | N/A |
Frequently Asked Questions
What does this skill do?
Parallel execution engine for high-throughput task completion
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
Related Guides
Best AI Skills for Claude
Explore the best AI skills for Claude and Claude Code across coding, research, workflow automation, documentation, and agent operations.
ChatGPT vs Claude for Agent Skills
Compare ChatGPT and Claude for AI agent skills across coding, writing, research, and reusable workflow execution.
AI Agents for Coding
Browse AI agent skills for coding, debugging, testing, refactoring, code review, and developer workflows across Claude, Cursor, and Codex.
SKILL.md Source
<Purpose>
Ultrawork is a parallel execution engine that runs multiple agents simultaneously for independent tasks. It is a component, not a standalone persistence mode -- it provides parallelism and smart model routing but not persistence, verification loops, or state management.
</Purpose>
<Use_When>
- Multiple independent tasks can run simultaneously
- User says "ulw", "ultrawork", or wants parallel execution
- You need to delegate work to multiple agents at once
- Task benefits from concurrent execution but the user will manage completion themselves
</Use_When>
<Do_Not_Use_When>
- Task requires guaranteed completion with verification -- use `ralph` instead (ralph includes ultrawork)
- Task requires a full autonomous pipeline -- use `autopilot` instead (autopilot includes ralph which includes ultrawork)
- There is only one sequential task with no parallelism opportunity -- delegate directly to an executor agent
- User needs session persistence for resume -- use `ralph` which adds persistence on top of ultrawork
</Do_Not_Use_When>
<Why_This_Exists>
Sequential task execution wastes time when tasks are independent. Ultrawork enables firing multiple agents simultaneously and routing each to the right model tier, reducing total execution time while controlling token costs. It is designed as a composable component that ralph and autopilot layer on top of.
</Why_This_Exists>
<Execution_Policy>
- Fire all independent agent calls simultaneously -- never serialize independent work
- Always pass the `model` parameter explicitly when delegating
- Read `docs/shared/agent-tiers.md` before first delegation for agent selection guidance
- Use `run_in_background: true` for operations over ~30 seconds (installs, builds, tests)
- Run quick commands (git status, file reads, simple checks) in the foreground
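The fire-everything-at-once policy can be sketched outside any agent framework. Below is a minimal illustration using Python's standard library, where `run_agent` is a hypothetical stand-in for a `Task(...)` delegation, not a real API:

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(model: str, prompt: str) -> str:
    # Hypothetical stand-in for Task(subagent_type=..., model=..., prompt=...)
    return f"[{model}] done: {prompt}"

tasks = [
    ("haiku", "Add missing type export"),
    ("sonnet", "Implement /api/users endpoint"),
    ("sonnet", "Add auth middleware tests"),
]

# Fire all independent calls simultaneously instead of awaiting each in turn.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(run_agent, model, prompt) for model, prompt in tasks]
    results = [f.result() for f in futures]
```

The point is structural: all three submissions happen before any result is awaited, which is what "never serialize independent work" means in practice.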
</Execution_Policy>
<Steps>
1. **Read agent reference**: Load `docs/shared/agent-tiers.md` for tier selection
2. **Classify tasks by independence**: Identify which tasks can run in parallel vs which have dependencies
3. **Route to correct tiers**:
- Simple lookups/definitions: LOW tier (Haiku)
- Standard implementation: MEDIUM tier (Sonnet)
- Complex analysis/refactoring: HIGH tier (Opus)
4. **Fire independent tasks simultaneously**: Launch all parallel-safe tasks at once
5. **Run dependent tasks sequentially**: Wait for prerequisites before launching dependent work
6. **Background long operations**: Builds, installs, and test suites use `run_in_background: true`
7. **Verify when all tasks complete** (lightweight):
- Build/typecheck passes
- Affected tests pass
- No new errors introduced
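Steps 2 through 5 amount to batching tasks by dependency: every task whose prerequisites are already done launches together, and dependent work waits for the wave before it. A minimal sketch of that grouping, with task names and dependencies invented for illustration:

```python
def batch_by_dependency(deps: dict[str, set[str]]) -> list[list[str]]:
    """Group tasks into waves: each wave holds tasks whose
    prerequisites all completed in earlier waves (topological batching)."""
    remaining = {task: set(d) for task, d in deps.items()}
    waves = []
    while remaining:
        ready = {task for task, d in remaining.items() if not d}
        if not ready:
            raise ValueError("circular dependency between tasks")
        waves.append(sorted(ready))  # sorted only for stable output
        for task in ready:
            del remaining[task]
        for d in remaining.values():
            d -= ready  # mark completed prerequisites as satisfied
    return waves

# "tests" depends on "endpoint"; the other two tasks are independent.
deps = {
    "type-export": set(),
    "endpoint": set(),
    "tests": {"endpoint"},
}
print(batch_by_dependency(deps))
# -> [['endpoint', 'type-export'], ['tests']]
```

Each inner list is one parallel wave: everything in it can be fired simultaneously, and the next wave launches only after the previous one finishes.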
</Steps>
<Tool_Usage>
- Use `Task(subagent_type="oh-my-claudecode:executor", model="haiku", ...)` for simple changes
- Use `Task(subagent_type="oh-my-claudecode:executor", model="sonnet", ...)` for standard work
- Use `Task(subagent_type="oh-my-claudecode:executor", model="opus", ...)` for complex work
- Use `run_in_background: true` for package installs, builds, and test suites
- Use foreground execution for quick status checks and file operations
</Tool_Usage>
<Examples>
<Good>
Three independent tasks fired simultaneously:
```
Task(subagent_type="oh-my-claudecode:executor", model="haiku", prompt="Add missing type export for Config interface")
Task(subagent_type="oh-my-claudecode:executor", model="sonnet", prompt="Implement the /api/users endpoint with validation")
Task(subagent_type="oh-my-claudecode:executor", model="sonnet", prompt="Add integration tests for the auth middleware")
```
Why good: Independent tasks at appropriate tiers, all fired at once.
</Good>
<Good>
Correct use of background execution:
```
Task(subagent_type="oh-my-claudecode:executor", model="sonnet", prompt="npm install && npm run build", run_in_background=true)
Task(subagent_type="oh-my-claudecode:executor", model="haiku", prompt="Update the README with new API endpoints")
```
Why good: Long build runs in background while short task runs in foreground.
</Good>
<Bad>
Sequential execution of independent work:
```
result1 = Task(executor, "Add type export") # wait...
result2 = Task(executor, "Implement endpoint") # wait...
result3 = Task(executor, "Add tests") # wait...
```
Why bad: These tasks are independent. Running them sequentially wastes time.
</Bad>
<Bad>
Wrong tier selection:
```
Task(subagent_type="oh-my-claudecode:executor", model="opus", prompt="Add a missing semicolon")
```
Why bad: Opus is expensive overkill for a trivial fix. Use executor with Haiku instead.
</Bad>
</Examples>
<Escalation_And_Stop_Conditions>
- When ultrawork is invoked directly (not via ralph), apply lightweight verification only -- build passes, tests pass, no new errors
- For full persistence and comprehensive architect verification, recommend switching to `ralph` mode
- If a task fails repeatedly across retries, report the issue rather than retrying indefinitely
- Escalate to the user when tasks have unclear dependencies or conflicting requirements
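The bounded-retry rule above can be expressed as a small guard. This is a sketch, assuming some `attempt(task)` callable that raises on failure; all names here are illustrative:

```python
def run_with_retry_limit(attempt, task, max_retries=2):
    """Try a task up to max_retries + 1 times; report instead of looping forever."""
    for _ in range(max_retries + 1):
        try:
            return attempt(task)
        except Exception as err:
            last_error = err
    # Escalate to the user rather than retrying indefinitely.
    return f"FAILED after {max_retries + 1} attempts: {task} ({last_error})"

calls = []
def flaky(task):
    calls.append(task)
    raise RuntimeError("build broke")

result = run_with_retry_limit(flaky, "npm run build")
print(result)
# -> FAILED after 3 attempts: npm run build (build broke)
```

The key property is that failure produces a report for the user instead of an unbounded retry loop.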
</Escalation_And_Stop_Conditions>
<Final_Checklist>
- [ ] All parallel tasks completed
- [ ] Build/typecheck passes
- [ ] Affected tests pass
- [ ] No new errors introduced
</Final_Checklist>
<Advanced>
## Relationship to Other Modes
```
ralph (persistence wrapper)
\-- includes: ultrawork (this skill)
    \-- provides: parallel execution only
autopilot (autonomous execution)
\-- includes: ralph
    \-- includes: ultrawork (this skill)
Ultrawork is the parallelism layer. Ralph adds persistence and verification. Autopilot adds the full lifecycle pipeline.
</Advanced>
Related Skills
writer-memory
Agentic memory system for writers - track characters, relationships, scenes, and themes
visual-verdict
Structured visual QA verdict for screenshot-to-reference comparisons
ultraqa
QA cycling workflow - test, verify, fix, repeat until goal met
trace
Evidence-driven tracing lane that orchestrates competing tracer hypotheses in Claude built-in team mode
team
N coordinated agents on shared task list using Claude Code native teams
skill
Manage local skills - list, add, remove, search, edit, setup wizard
setup
Use first for install/update routing — sends setup, doctor, or MCP requests to the correct OMC setup flow
sciomc
Orchestrate parallel scientist agents for comprehensive analysis with AUTO mode
release
Automated release workflow for oh-my-claudecode
ralplan
Consensus planning entrypoint that auto-gates vague ralph/autopilot/team requests before execution
ralph
Self-referential loop until task completion with configurable verification reviewer
project-session-manager
Worktree-first dev environment manager for issues, PRs, and features with optional tmux sessions