PicoClaw Fleet
Orchestrate a fleet of remote PicoClaw workers over SSH for fast, ephemeral one-shot tasks, enabling parallel execution and management of distributed AI workloads.
About this skill
This skill lets an AI agent manage and orchestrate a fleet of remote PicoClaw workers over SSH. It automates deploying the PicoClaw binary to hosts, dispatching one-shot AI tasks, and fanning work out in parallel across machines. This is most useful for batch or compute-heavy jobs that benefit from distribution: summarizing large log files, processing many documents at once, or running distinct agent prompts in different environments. The skill supports both initial deployment and ongoing task dispatch, with optional teardown after completion for ephemeral use. Host details and default AI provider settings live in a local configuration file (~/.openclaw/workspace/config/picoclaw-fleet.json), so fleets are reusable across runs. By hiding the details of SSH connections, binary deployment, and task execution, the skill lets the agent focus on orchestrating the work itself, yielding faster task completion and better utilization of available compute for distributed AI workflows.
Best use case
PicoClaw Fleet is best used when you need a repeatable devops & infrastructure workflow instead of a one-off prompt. It is especially useful for teams that run work across multiple machines and want fast, ephemeral one-shot task execution with parallel fan-out over SSH.
Users should expect a more consistent devops & infrastructure output, faster repeated execution, and less time spent rewriting prompts from scratch.
Practical example
Example input
Use the "PicoClaw Fleet" skill to help with this devops & infrastructure task. Context: Orchestrate a fleet of remote PicoClaw workers over SSH for fast, ephemeral one-shot tasks, enabling parallel execution and management of distributed AI workloads.
Example output
A structured devops & infrastructure result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.
When to use this skill
- Use this skill when you want a reusable workflow rather than writing the same prompt again and again.
- Use it when you are solving a devops & infrastructure task and want a more structured operating flow.
- Use it when you can invest a small amount of setup effort for a more repeatable workflow.
When not to use this skill
- Do not use this when you only need a one-off answer and do not need a reusable workflow.
- Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/picoclaw-fleet/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How PicoClaw Fleet Compares
| Feature / Agent | PicoClaw Fleet | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Medium | N/A |
Frequently Asked Questions
What does this skill do?
Orchestrate a fleet of remote PicoClaw workers over SSH for fast, ephemeral one-shot tasks, enabling parallel execution and management of distributed AI workloads.
How difficult is it to install?
The installation complexity is rated as medium. You can find the installation instructions above.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
Related Guides
Top AI Agents for Productivity
See the top AI agent skills for productivity, workflow automation, operational systems, documentation, and everyday task execution.
AI Agents for Coding
Browse AI agent skills for coding, debugging, testing, refactoring, code review, and developer workflows across Claude, Cursor, and Codex.
Cursor vs Codex for AI Workflows
Compare Cursor and Codex for AI coding workflows, repository assistance, debugging, refactoring, and reusable developer skills.
SKILL.md Source
# PicoClaw Fleet
Orchestrate a fleet of remote PicoClaw workers over SSH for fast, ephemeral one-shot tasks.
## Purpose
Use this skill to deploy PicoClaw to remote machines, dispatch one-shot tasks, fan out work in parallel, and optionally tear workers down after completion.
## Skill Files
- `scripts/deploy.sh` — install/update PicoClaw on a host
- `scripts/dispatch.sh` — run `picoclaw agent -m "TASK"` on a host and return stdout
- `scripts/fleet-status.sh` — check host reachability and install readiness
## 1) Always read fleet config first
Fleet config path:
- `~/.openclaw/workspace/config/picoclaw-fleet.json`
If missing, create it with this default template:
```json
{
  "hosts": [
    {
      "name": "darth",
      "host": "192.168.50.57",
      "user": "eric",
      "arch": "arm64",
      "ssh_key": "~/.ssh/id_rsa"
    }
  ],
  "defaults": {
    "provider": "anthropic",
    "api_key_env": "ANTHROPIC_API_KEY"
  }
}
```
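The "create it if missing" step can be sketched as a small helper. This is a sketch, not part of the skill's scripts: the config path matches the one above, but the `ensure_fleet_config` function name and the `FLEET_CONFIG` override variable are assumptions for illustration.

```shell
#!/usr/bin/env bash
# Sketch: ensure the fleet config exists, writing the default template if missing.
# FLEET_CONFIG can be overridden for testing; it defaults to the documented path.
set -euo pipefail

FLEET_CONFIG="${FLEET_CONFIG:-$HOME/.openclaw/workspace/config/picoclaw-fleet.json}"

ensure_fleet_config() {
  if [ ! -f "$FLEET_CONFIG" ]; then
    mkdir -p "$(dirname "$FLEET_CONFIG")"
    # Minimal template: empty host list plus the documented defaults block
    cat > "$FLEET_CONFIG" <<'EOF'
{
  "hosts": [],
  "defaults": {
    "provider": "anthropic",
    "api_key_env": "ANTHROPIC_API_KEY"
  }
}
EOF
    echo "created template at $FLEET_CONFIG" >&2
  fi
}
```

Subsequent runs see the file and skip creation, so the helper is safe to call unconditionally at the start of every workflow.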
## 2) Deploy PicoClaw to a host
Use `scripts/deploy.sh <host> <user> <arch> [ssh_key]`.
Expected behavior:
- Resolve latest release from `EricGrill/picoclaw` GitHub releases
- Select architecture asset (`amd64`, `arm64`, `riscv64`)
- Install binary remotely to `~/.local/bin/picoclaw`
- Create `~/.picoclaw/.env` with provider + API key env value
- Run `picoclaw onboard`
Required envs before deploy:
- `ANTHROPIC_API_KEY` (default) or whichever env is set by `defaults.api_key_env`
- Optional: `PROVIDER` (defaults to `anthropic`)
- Optional: `API_KEY_ENV` override
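The architecture-selection step can be sketched as a pure function. The `picoclaw-linux-<arch>` asset naming scheme is an assumption; check the actual EricGrill/picoclaw release assets and adjust the pattern before relying on it.

```shell
#!/usr/bin/env bash
# Sketch: map a target architecture to a release asset name.
# The naming scheme below is assumed, not confirmed against real releases.
set -euo pipefail

release_asset() {
  local arch="$1"
  case "$arch" in
    amd64|arm64|riscv64)
      echo "picoclaw-linux-${arch}"   # assumed asset naming convention
      ;;
    *)
      echo "unsupported arch: ${arch}" >&2
      return 1
      ;;
  esac
}
```

Rejecting unknown architectures early keeps a typo in the fleet config from turning into a confusing download failure mid-deploy.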
## 3) Dispatch one-shot work
Use `scripts/dispatch.sh <host> <user> <task> [timeout_seconds]`.
Behavior:
- SSH into host
- Run `picoclaw agent -m "TASK"`
- Enforce timeout (default 120s)
- Return stdout directly (clean output for inline display)
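The dispatch behavior above can be sketched in a few lines. `build_remote_cmd` and `dispatch` are illustrative names, not the actual contents of `scripts/dispatch.sh`; the sketch assumes coreutils `timeout` and a standard `ssh` client are available.

```shell
#!/usr/bin/env bash
# Sketch: compose and run the remote one-shot command with a timeout.
set -euo pipefail

build_remote_cmd() {
  local task="$1"
  # printf %q quotes the task so arbitrary text survives the remote shell intact
  printf 'picoclaw agent -m %q' "$task"
}

dispatch() {
  local host="$1" user="$2" task="$3" timeout_s="${4:-120}"
  # coreutils `timeout` kills the SSH session if the remote task overruns;
  # stdout passes through directly for clean inline display
  timeout "$timeout_s" ssh "${user}@${host}" "$(build_remote_cmd "$task")"
}
```

Quoting the task once, on the local side, avoids remote shell injection from task text that contains quotes or metacharacters.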
## 4) Run tasks in parallel across hosts
For multi-task batches, dispatch to multiple hosts in background and wait for all:
```bash
scripts/dispatch.sh 192.168.50.57 eric "summarize logs" 120 > /tmp/darth.out 2>&1 &
scripts/dispatch.sh 192.168.50.58 eric "extract action items" 120 > /tmp/lobot.out 2>&1 &
wait
```
Then print each host output inline.
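The two-host example above generalizes to a loop. In this sketch `run_task` is a stub standing in for `scripts/dispatch.sh` so the pattern is self-contained; `fan_out` takes `host=task` pairs, which is an illustrative convention, not part of the skill.

```shell
#!/usr/bin/env bash
# Sketch: fan host=task pairs out in parallel, then print outputs in order.
set -euo pipefail

run_task() {  # placeholder for: scripts/dispatch.sh "$host" "$user" "$task" 120
  host="$1"; task="$2"
  echo "[$host] done: $task"
}

fan_out() {
  outdir="$(mktemp -d)"
  i=0
  for spec in "$@"; do
    host="${spec%%=*}"
    task="${spec#*=}"
    # each dispatch writes to its own file so outputs never interleave
    run_task "$host" "$task" > "$outdir/$i.out" 2>&1 &
    i=$((i + 1))
  done
  wait                 # block until every background dispatch finishes
  cat "$outdir"/*.out  # then print each host's output inline
}
```

Writing each host's output to its own file before `wait` keeps concurrent streams from interleaving, which matters once the fleet grows past two hosts.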
## 5) Host selection policy
For single tasks:
1. Run `scripts/fleet-status.sh`
2. Prefer reachable hosts where picoclaw is installed
3. Pick least-loaded host when load data exists; otherwise pick first available
If selected host is missing PicoClaw, run deploy first.
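The selection policy above can be sketched as a first-match scan. `host_ready` here is a stub for the real check that `scripts/fleet-status.sh` performs (SSH reachability plus an installed `picoclaw` binary); the least-loaded tie-break would replace the first-match rule when load data exists.

```shell
#!/usr/bin/env bash
# Sketch: pick the first ready host from a candidate list.
set -euo pipefail

host_ready() {
  # placeholder: real check would SSH in and run `command -v picoclaw`;
  # this stub treats names ending in "-up" as ready, for illustration only
  case "$1" in *-up) return 0 ;; *) return 1 ;; esac
}

pick_host() {
  for h in "$@"; do
    if host_ready "$h"; then
      echo "$h"
      return 0
    fi
  done
  echo "no ready host" >&2
  return 1
}
```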
## 6) Teardown (optional)
To remove PicoClaw after one-shot jobs:
```bash
ssh -i ~/.ssh/id_rsa eric@HOST 'rm -f ~/.local/bin/picoclaw ~/.picoclaw/.env'
```
Use teardown only when explicitly requested or for strict ephemeral execution workflows.
## Failure handling
- SSH failure: report host as unreachable and continue with other hosts
- Deploy failure on one host: continue dispatching to healthy hosts
- Timeout: return timeout status with partial output if present
- Missing config: create template, then re-run
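The failure cases above map cleanly onto exit codes when dispatch uses `timeout` and `ssh`: coreutils `timeout` exits 124 on overrun, and the OpenSSH client exits 255 when the connection itself fails, so other non-zero codes come from the remote task. A minimal classifier, as a sketch:

```shell
#!/usr/bin/env bash
# Sketch: turn a dispatch exit code into a status label for reporting.
set -euo pipefail

classify_exit() {
  case "$1" in
    0)   echo "ok" ;;           # task completed
    124) echo "timeout" ;;      # coreutils `timeout` killed the run
    255) echo "unreachable" ;;  # ssh could not connect
    *)   echo "task-error" ;;   # the remote task itself failed
  esac
}
```

With this, the orchestrator can report "unreachable" hosts and continue dispatching to healthy ones, matching the failure-handling rules above.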
## Recommended workflow
1. Load/validate fleet config
2. Check health: `scripts/fleet-status.sh`
3. Deploy missing hosts: `scripts/deploy.sh ...`
4. Dispatch task(s): `scripts/dispatch.sh ...`
5. Aggregate outputs and return inline
6. Optional teardown if requested
Related Skills
linux-shell-scripting
Provide production-ready shell script templates for common Linux system administration tasks including backups, monitoring, user management, log analysis, and automation. These scripts serve as building blocks for security operations and penetration testing environments.
iterate-pr
Iterate on a PR until CI passes. Use when you need to fix CI failures, address review feedback, or continuously push fixes until all checks are green. Automates the feedback-fix-push-wait cycle.
istio-traffic-management
Comprehensive guide to Istio traffic management for production service mesh deployments.
incident-runbook-templates
Production-ready templates for incident response runbooks covering detection, triage, mitigation, resolution, and communication.
incident-response-smart-fix
[Extended thinking: This workflow implements a sophisticated debugging and resolution pipeline that leverages AI-assisted debugging tools and observability platforms to systematically diagnose and res
incident-responder
Expert SRE incident responder specializing in rapid problem resolution, modern observability, and comprehensive incident management.
expo-cicd-workflows
Helps understand and write EAS workflow YAML files for Expo projects. Use this skill when the user asks about CI/CD or workflows in an Expo or EAS context, mentions .eas/workflows/, or wants help with EAS build pipelines or deployment automation.
error-diagnostics-error-trace
You are an error tracking and observability expert specializing in implementing comprehensive error monitoring solutions. Set up error tracking systems, configure alerts, implement structured logging,
error-debugging-error-trace
You are an error tracking and observability expert specializing in implementing comprehensive error monitoring solutions. Set up error tracking systems, configure alerts, implement structured logging, and ensure teams can quickly identify and resolve production issues.
error-debugging-error-analysis
You are an expert error analysis specialist with deep expertise in debugging distributed systems, analyzing production incidents, and implementing comprehensive observability solutions.
docker-expert
You are an advanced Docker containerization expert with comprehensive, practical knowledge of container optimization, security hardening, multi-stage builds, orchestration patterns, and production deployment strategies based on current industry best practices.
devops-troubleshooter
Expert DevOps troubleshooter specializing in rapid incident response, advanced debugging, and modern observability.