slfg
Full autonomous engineering workflow using swarm mode for parallel execution
Best use case
slfg is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using slfg should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/slfg/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
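The manual steps above can be sketched in the shell. The file created here is only a stand-in for illustration; the real SKILL.md comes from the GitHub link referenced on this page:

```shell
# Manual installation sketch, assuming SKILL.md has already been downloaded.
# The printf below creates a stand-in file in place of the actual download.
printf '# slfg\n' > SKILL.md                 # stand-in for the downloaded file
mkdir -p .claude/skills/slfg                 # directory the agent scans for skills
mv SKILL.md .claude/skills/slfg/SKILL.md     # place it where the agent expects it
```

After restarting the agent, the skill should appear in its discovered-skills list.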
How slfg Compares
| Feature / Agent | slfg | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Full autonomous engineering workflow using swarm mode for parallel execution
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
Related Guides
Best AI Skills for Claude
Explore the best AI skills for Claude and Claude Code across coding, research, workflow automation, documentation, and agent operations.
ChatGPT vs Claude for Agent Skills
Compare ChatGPT and Claude for AI agent skills across coding, writing, research, and reusable workflow execution.
Cursor vs Codex for AI Workflows
Compare Cursor and Codex for AI coding workflows, repository assistance, debugging, refactoring, and reusable developer skills.
SKILL.md Source
Swarm-enabled LFG. Run these steps in order, parallelizing where indicated. Do not stop between steps — complete every step through to the end.

## Sequential Phase

1. **Optional:** If the `ralph-loop` skill is available, run `/ralph-loop:ralph-loop "finish all slash commands" --completion-promise "DONE"`. If not available or it fails, skip and continue to step 2 immediately.
2. `/ce:plan $ARGUMENTS` — **Record the plan file path** from `docs/plans/` for steps 4 and 6.
3. `/ce:work` — **Use swarm mode**: make a Task list and launch an army of swarm subagents to build the plan.

## Parallel Phase

After work completes, launch steps 4 and 5 as **parallel swarm agents** (both only need code to be written):

4. `/ce:review mode:report-only plan:<plan-path-from-step-2>` — spawn as background Task agent
5. `/compound-engineering:test-browser` — spawn as background Task agent

Wait for both to complete before continuing.

## Autofix Phase

6. `/ce:review mode:autofix plan:<plan-path-from-step-2>` — run sequentially after the parallel phase so it can safely mutate the checkout, apply `safe_auto` fixes, and emit residual todos for step 7.

## Finalize Phase

7. `/compound-engineering:todo-resolve` — resolve findings, compound on learnings, clean up completed todos.
8. `/compound-engineering:feature-video` — record the final walkthrough and add it to the PR.
9. Output `<promise>DONE</promise>` when the video is in the PR.

Start with step 1 now.
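The phase ordering in the SKILL.md source can be sketched in shell. The `agent` function here is a hypothetical stand-in for dispatching a slash command to the AI agent, and the plan path is an illustrative example; the `&`/`wait` pair mirrors the parallel phase:

```shell
# Hypothetical dispatcher: in reality each command runs inside the agent.
agent() { echo "running: $1"; }

PLAN="docs/plans/example-plan.md"                   # recorded from /ce:plan (illustrative path)

agent "/ce:plan my-feature"                         # sequential phase
agent "/ce:work"                                    # swarm build

agent "/ce:review mode:report-only plan:$PLAN" &    # parallel phase: both spawn
agent "/compound-engineering:test-browser" &        # as background jobs
wait                                                # block until both finish

agent "/ce:review mode:autofix plan:$PLAN"          # autofix phase (sequential, mutates checkout)
agent "/compound-engineering:todo-resolve"          # finalize phase
agent "/compound-engineering:feature-video"
echo "<promise>DONE</promise>"
```

The key structural point is that steps 4 and 5 only read the written code, so they can run concurrently, while the autofix review must wait for both so it can safely modify the checkout.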
Related Skills
skill-one
Sample skill
disabled-skill
A skill with model invocation disabled
default-skill
No description provided.
custom-skill
No description provided.
todo-triage
Use when reviewing pending todos for approval, prioritizing code review findings, or interactively categorizing work items
todo-resolve
Use when batch-resolving approved todos, especially after code review or triage sessions
todo-create
Use when creating durable work items, managing todo lifecycle, or tracking findings across sessions in the file-based todo system
test-xcode
Build and test iOS apps on simulator using XcodeBuildMCP. Use after making iOS code changes, before creating a PR, or when verifying app behavior and checking for crashes on simulator.
test-browser
Run browser tests on pages affected by current PR or branch
setup
Configure project-level settings for compound-engineering workflows. Currently a placeholder — review agent selection is handled automatically by ce:review.
resolve-pr-feedback
Resolve PR review feedback by evaluating validity and fixing issues in parallel. Use when addressing PR review comments, resolving review threads, or fixing code review feedback.
reproduce-bug
Systematically reproduce and investigate a bug from a GitHub issue. Use when the user provides a GitHub issue number or URL for a bug they want reproduced or investigated.