wt
Manage LlamaFarm worktrees for isolated parallel development. Create, start, stop, and clean up worktrees.
About this skill
The `wt` skill gives AI agents the ability to manage LlamaFarm worktrees: isolated development environments built on Git worktrees. Each worktree maintains its own services, ports, and data directories, preventing conflicts when running multiple parallel development sessions or testing different features concurrently. Agents can use `wt` to create a dedicated environment for each new task, switch between ongoing projects, and keep changes in one worktree from affecting others. The skill includes commands for creating worktrees, listing existing ones, checking service health, starting and stopping services, viewing logs, deleting worktrees, and diagnosing issues. The result is a clean, reproducible setup for each task, with no port conflicts between concurrent LlamaFarm instances. It is well suited to agents that manage a dynamic set of development contexts within the LlamaFarm ecosystem, whether for feature development, bug fixing, or testing.
Best use case
Primarily used by AI agents working on LlamaFarm projects to create, manage, and tear down isolated development environments for features, bug fixes, or parallel testing without interfering with other work. This supports concurrent coding sessions and suits agents that coordinate complex, multi-component development tasks.
The agent can create, manage, and remove isolated LlamaFarm development environments, enabling smooth parallel development without resource conflicts and giving each task a clean workspace.
Practical example
Example input
Create a new isolated LlamaFarm worktree for a feature called 'user-auth-flow', start its services, and then provide a status update.
Example output
Worktree 'feat/user-auth-flow' created and services started on auto-assigned ports. Current worktree status:

NAME                 STATUS   SERVER  DESIGNER  RUNTIME
feat-user-auth-flow  running  8160    5160      11160
main                 running  8100    5100      11100
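The tabular status output above is straightforward for an agent to parse. A minimal sketch, assuming the whitespace-separated columns shown in the example (the `lookup_ports` helper is illustrative, not part of `wt` itself):

```shell
# Look up a worktree's ports in wt-style status output.
# Column layout assumed from the example above: NAME STATUS SERVER DESIGNER RUNTIME.
lookup_ports() {
  name="$1"
  awk -v name="$name" 'NR > 1 && $1 == name { print $3, $4, $5 }'
}

status_output="NAME STATUS SERVER DESIGNER RUNTIME
feat-user-auth-flow running 8160 5160 11160
main running 8100 5100 11100"

printf '%s\n' "$status_output" | lookup_ports feat-user-auth-flow
# prints: 8160 5160 11160
```

In practice the sample text would be replaced by piping `wt status` (or `wt list`) directly into the helper.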
When to use this skill
- Starting isolated work on a new feature or task that requires dedicated running services.
- Running parallel coding sessions or testing multiple LlamaFarm instances concurrently.
- Testing changes or experimenting without affecting the main development environment.
- Avoiding port conflicts between concurrent LlamaFarm instances managed by an agent.
When not to use this skill
- For general Git operations unrelated to LlamaFarm worktree management.
- When working on non-LlamaFarm projects or ecosystems.
- If a single, non-isolated development environment is sufficient for the task.
- When the agent does not require managing isolated services or specific port assignments.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/wt/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
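The manual steps above can be sketched as a shell snippet. The page does not give the raw download URL, so `SKILL_URL` below is a placeholder you must fill in yourself:

```shell
# Manual installation sketch. SKILL_URL is a placeholder -- substitute
# the raw GitHub URL of SKILL.md from the wt repository.
SKILL_URL="${SKILL_URL:-}"

# Expected skill location inside the project
mkdir -p .claude/skills/wt

if [ -n "$SKILL_URL" ]; then
  curl -fsSL "$SKILL_URL" -o .claude/skills/wt/SKILL.md
else
  echo "Set SKILL_URL to the raw SKILL.md URL, then re-run." >&2
fi
```

After the file is in place, restart the agent so it re-scans the skills directory.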
How wt Compares
| Feature / Agent | wt | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Medium | N/A |
Frequently Asked Questions
What does this skill do?
Manage LlamaFarm worktrees for isolated parallel development. Create, start, stop, and clean up worktrees.
How difficult is it to install?
The installation complexity is rated as medium. You can find the installation instructions above.
Where can I find the source code?
You can find the source code in the project's GitHub repository.
Related Guides
AI Agents for Coding
Browse AI agent skills for coding, debugging, testing, refactoring, code review, and developer workflows across Claude, Cursor, and Codex.
Cursor vs Codex for AI Workflows
Compare Cursor and Codex for AI coding workflows, repository assistance, debugging, refactoring, and reusable developer skills.
Top AI Agents for Productivity
See the top AI agent skills for productivity, workflow automation, operational systems, documentation, and everyday task execution.
SKILL.md Source
# wt - Worktree Manager Skill

Manages isolated LlamaFarm development environments using git worktrees. Each worktree has its own services, ports, and data directories - enabling parallel agent sessions without conflicts.

## Full Documentation

For complete documentation, architecture details, and advanced usage: @scripts/wt/README.md

---

## Quick Reference

| Task | Command |
|------|---------|
| Create worktree and start services | `wt create feat/my-feature` |
| Create and cd into worktree | `wt create --go feat/my-feature` |
| List all worktrees with status | `wt list` |
| Check service health | `wt status` or `wt health` |
| Start/stop services | `wt start` / `wt stop` |
| View service logs | `wt logs [server\|rag\|runtime\|designer\|all]` |
| Delete worktree | `wt delete <name>` |
| Switch to worktree | `wt switch <name>` |
| Open Designer in browser | `wt open` |
| Diagnose issues | `wt doctor` |
| Clean orphaned data | `wt gc` |

---

## When to Use wt

Use `wt` when:

- Starting isolated work on a feature/task that needs running services
- Running parallel coding sessions (multiple agents or terminals)
- Testing changes without affecting the main development environment
- Avoiding port conflicts between concurrent LlamaFarm instances

---

## Common Workflows

### Starting a New Task

```bash
# Create isolated environment with services running
wt create --go feat/my-task

# Work in the worktree...
# Services are already running on auto-assigned ports

# Check status anytime
wt status

# View logs if needed
wt logs server
```

### Checking What's Running

```bash
# List all worktrees with their port assignments
wt list

# Example output:
# NAME          STATUS   SERVER  DESIGNER  RUNTIME
# feat-my-task  running  8150    5150      11150
# fix-bug       stopped  8234    5234      11234
```

### Cleaning Up

```bash
# Stop services and remove a worktree
wt delete feat-my-task

# Remove data for worktrees that no longer exist
wt gc

# Remove worktrees for branches merged to main
wt prune
```

### Troubleshooting

```bash
# Diagnose common issues (ports, stale PIDs, missing tools)
wt doctor

# Restart stuck services
wt stop && wt start

# Force delete if normal delete fails
wt delete my-worktree --force
```

---

## Service URLs

Each worktree gets unique ports. Check URLs with:

```bash
wt url
# Outputs:
# Server:   http://localhost:8150
# Designer: http://localhost:5150
# Runtime:  http://localhost:11150
```

If the Caddy proxy is running, use port-free URLs:

```
http://server.feat-my-task.localhost
http://designer.feat-my-task.localhost
```

---

## Key Environment Details

- **Worktrees location**: `~/worktrees/llamafarm/`
- **Data directories**: `~/.llamafarm/worktrees/<name>/`
- **Port allocation**: Deterministic hash of worktree name (14345+offset, 5000+offset, 11000+offset)
- **Logs**: `~/.llamafarm/worktrees/<name>/logs/`

---

## Notes for the Agent

1. **Always use `wt create --go`** when setting up a new task environment - it handles everything (branch, deps, build, services)
2. **Check `wt list` first** before creating a new worktree to see what already exists
3. **Use `wt status`** to verify services are healthy before running tests or making API calls
4. **Run `wt doctor`** when encountering unexplained service issues
5. **Clean up with `wt delete`** when a task is complete to free resources
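The "Key Environment Details" above say ports come from a deterministic hash of the worktree name. The exact hash and base ports are internal to `wt` (the documented server base differs from the example ports shown earlier), so the sketch below is purely illustrative: it uses a CRC via `cksum` as a stand-in hash and bases matching the 8xxx/5xxx/11xxx example ports:

```shell
# Illustrative sketch of deterministic port allocation: hash the worktree
# name to a stable offset, then add it to each service's base port.
# The real wt hash and bases are internal details; cksum stands in here.
ports_for() {
  name="$1"
  offset=$(( $(printf '%s' "$name" | cksum | cut -d' ' -f1) % 1000 ))
  echo "server=$((8000 + offset)) designer=$((5000 + offset)) runtime=$((11000 + offset))"
}

ports_for feat-my-task
```

The key property this models is stability: the same worktree name always maps to the same ports, so two agents never race over port assignments.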
Related Skills
botlearn-healthcheck
botlearn-healthcheck — BotLearn autonomous health inspector for OpenClaw instances across 5 domains (hardware, config, security, skills, autonomy); triggers on system check, health report, diagnostics, or scheduled heartbeat inspection.
Incident Postmortem Generator
Generate blameless incident postmortems from raw notes, Slack threads, or bullet points.
Post-Mortem & Incident Review Framework
Run structured post-mortems that actually prevent repeat failures. Blameless analysis, root cause identification, and action tracking.
afrexai-performance-engineering
Complete performance engineering system — profiling, optimization, load testing, capacity planning, and performance culture. Use when diagnosing slow applications, optimizing code/queries/infrastructure, load testing before launch, planning capacity, or building performance into CI/CD. Covers Node.js, Python, Go, Java, databases, APIs, and frontend.
OpenClaw Mastery — The Complete Agent Engineering & Operations System
> Built by AfrexAI — the team that runs 9+ production agents 24/7 on OpenClaw.
Legacy System Modernization Engine
Complete methodology for assessing, planning, and executing legacy system modernization — from monolith decomposition to cloud migration. Works for any tech stack, any scale.
Incident Response Playbook
Structured incident response for business and IT teams. Guides you through detection, triage, containment, resolution, and post-mortem — with auto-generated timelines and action items.
Git Engineering & Repository Strategy
You are a Git Engineering expert. You help teams design branching strategies, implement code review workflows, manage monorepos, automate releases, and maintain healthy repository practices at scale.
Django Production Engineering
Complete methodology for building, scaling, and operating production Django applications. From project structure to deployment, security to performance — every decision framework a Django team needs.
IT Disaster Recovery Plan Generator
Build production-ready disaster recovery plans that actually get followed when things break.
afrexai-api-architect
Design, build, test, document, and secure production-grade APIs. Covers the full lifecycle from schema design through deployment, monitoring, and versioning. Use when designing new APIs, reviewing existing ones, generating OpenAPI specs, building test suites, or debugging production issues.
Agent Ops Runbook
Generate a production-ready operations runbook for deploying AI agents. Covers pre-deployment checklists, shadow mode → supervised → autonomous rollout stages, monitoring dashboards, rollback procedures, cost management, and incident response templates.