retrospective

Analyze completed tasks to improve the Ralph system. Saves learnings to living knowledge vault and coordinates insights across 6 ralph-* teammates.

108 stars

Best use case

retrospective is best used when you need a repeatable AI agent workflow instead of a one-off prompt.


Teams using retrospective should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/retrospective/SKILL.md --create-dirs "https://raw.githubusercontent.com/alfredolopez80/multi-agent-ralph-loop/main/.claude/skills/retrospective/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/retrospective/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill
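The manual steps above can be sketched as a short script run against a throwaway directory (the placeholder content here stands in for the SKILL.md you download in step 1):

```shell
# Create the skill directory inside a temporary "project".
PROJECT=$(mktemp -d)
SKILL_DIR="$PROJECT/.claude/skills/retrospective"
mkdir -p "$SKILL_DIR"
# Placeholder for the file downloaded in step 1.
printf '# Skill: Retrospective & Self-Improvement\n' > "$SKILL_DIR/SKILL.md"
ls "$SKILL_DIR"
```

After a restart, the agent looks under `.claude/skills/` and auto-discovers any `SKILL.md` placed there.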

How retrospective Compares

| Feature / Agent | retrospective | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Analyze completed tasks to improve the Ralph system. Saves learnings to living knowledge vault and coordinates insights across 6 ralph-* teammates.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Skill: Retrospective & Self-Improvement

**ultrathink** - Take a deep breath. We're not here to write code. We're here to make a dent in the universe.

## v2.88 Key Changes (MODEL-AGNOSTIC)

- **Model-agnostic**: Uses model configured in `~/.claude/settings.json` or CLI/env vars
- **No flags required**: Works with the configured default model
- **Flexible**: Works with GLM-5, Claude, Minimax, or any configured model
- **Settings-driven**: Model selection via `ANTHROPIC_DEFAULT_*_MODEL` env vars
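As a sketch, the settings-driven selection above might pin a model per tier through environment variables. The variable names expand the `ANTHROPIC_DEFAULT_*_MODEL` pattern the skill cites (check your agent's docs for the exact set), and the model names are placeholders:

```shell
# Pin default models per tier; any configured model works (GLM-5, Claude, Minimax, ...).
export ANTHROPIC_DEFAULT_OPUS_MODEL="glm-5"
export ANTHROPIC_DEFAULT_SONNET_MODEL="glm-5"
echo "opus-tier default: $ANTHROPIC_DEFAULT_OPUS_MODEL"
```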

## The Vision
Every retrospective should make the system inevitable and better.

## Your Work, Step by Step
1. **Summarize outcomes**: Task, complexity, iterations, models.
2. **Analyze effectiveness**: Routing, clarification, and agents.
3. **Identify gaps**: Missed checks or friction.
4. **Propose improvements**: Concrete, minimal changes.

## Ultrathink Principles in Practice
- **Think Different**: Question the status quo.
- **Obsess Over Details**: Use evidence, not guesses.
- **Plan Like Da Vinci**: Structure feedback before writing.
- **Craft, Don't Code**: Keep recommendations actionable.
- **Iterate Relentlessly**: Apply learnings immediately.
- **Simplify Ruthlessly**: Focus on the few changes that matter.

## Purpose
Analyze completed tasks to improve the Ralph Wiggum system.

## When to Use
MANDATORY after every task completion, before declaring VERIFIED_DONE.

## Analysis Categories

### 1. Routing Effectiveness
- Was the complexity classification accurate?
- Did the chosen model perform well?
- Should routing thresholds change?

## Agent Teams Integration (v2.88)

**Optimal Scenario**: Scenario A, Pure Agent Teams (Native)

This skill uses Pure Agent Teams with native coordination; no custom subagent specialization is needed.

### Why Scenario A for This Skill
- Retrospective is primarily analytical and sequential
- Read/Grep tools available to all native agents
- Analysis doesn't require specialized tool restrictions
- Native agent types sufficient for metric gathering
- Lower complexity, faster execution

### Configuration
1. **TeamCreate**: Optional, for simple retrospective tasks
2. **Task**: Use native agent types (no ralph-* needed)
3. **Hooks**: TeammateIdle + TaskCompleted available if needed
4. **Simple**: Minimal setup overhead

### Workflow Pattern
```
TeamCreate (optional)
  → Task(analyze completed work)
  → Native agent gathers metrics
  → Complete with improvement proposals
```

### When This Is Sufficient
- Single-task retrospective analysis
- Simple metric gathering workflows
- No specialized analysis needed
- Quick post-task reviews preferred

### 2. Clarification Quality
- Were the right questions asked?
- Did any missed clarifications cause rework?
- Should question templates be updated?

### 3. Agent Performance
- Which subagents were most useful?
- Any agents that didn't add value?
- New agent patterns needed?

### 4. Quality Gate Effectiveness
- Did gates catch real issues?
- Any false positives/negatives?
- Missing validations?

### 5. Iteration Efficiency
- How many iterations were used?
- Could it have been done faster?
- Any wasted iterations?

## Output Format

````markdown
## 📊 Task Retrospective

### Summary
- Task: [description]
- Complexity: [classified] → [actual]
- Iterations: [used] / [limit]
- Models: [list used]

### What Went Well
- [positive 1]
- [positive 2]

### Improvement Opportunities
1. **[Category]**: [description]
   - Current: [what happens now]
   - Proposed: [improvement]
   - Impact: [low/medium/high]
   - Risk: [low/medium/high]

### Proposed Changes
```json
{
  "type": "routing_adjustment|clarification_enhancement|agent_behavior|new_command|delegation_update|quality_gate",
  "file": "[path to modify]",
  "change": "[description]",
  "justification": "[why]"
}
```
````

## Improvement Types

| Type | Example |
|------|---------|
| routing_adjustment | Change complexity thresholds |
| clarification_enhancement | Add new question templates |
| agent_behavior | Modify agent instructions |
| new_command | Create new slash command |
| delegation_update | Change model assignments |
| quality_gate | Add/modify validations |
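As an illustration, a proposal from the table above can be emitted in the JSON schema shown under "Proposed Changes" and sanity-checked against the documented types. The payload values here are hypothetical:

```shell
# Hypothetical routing_adjustment proposal in the "Proposed Changes" schema.
change='{"type": "routing_adjustment", "file": ".claude/settings.json", "change": "Raise the escalation threshold from complexity 4 to 5", "justification": "Two tasks classified 4 finished in one iteration"}'
# Extract the type and check it against the documented improvement types.
valid='routing_adjustment clarification_enhancement agent_behavior new_command delegation_update quality_gate'
type=$(printf '%s\n' "$change" | sed -n 's/.*"type": *"\([^"]*\)".*/\1/p')
case " $valid " in
  *" $type "*) echo "valid type: $type" ;;
  *)           echo "unknown type: $type" ;;
esac
```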

Related Skills

worktree-pr

108
from alfredolopez80/multi-agent-ralph-loop

Manage git worktrees with PR workflow and multi-agent review (Claude + Codex). Use when developing features in isolation with easy rollback.

vercel-react-best-practices

React and Next.js performance optimization guidelines from Vercel Engineering. Use when writing, reviewing, or refactoring React/Next.js code. Triggers on tasks involving React components, Next.js pages, data fetching, bundle optimization, or performance improvements.

vault

Living knowledge base management. Actions: search (query vault), save (store learning), index (update indices), compile (raw->wiki->rules graduation), init (create vault structure). Follows Karpathy pipeline: ingest->compile->query. Use when: (1) searching accumulated knowledge, (2) saving learnings, (3) compiling raw notes into wiki, (4) initializing a new vault. Triggers: /vault, 'vault search', 'knowledge base', 'save learning'.

testing-anti-patterns

Custom skill for testing-anti-patterns

task-visualizer

Visualize task dependencies and progress (Gastown-style)

task-classifier

Classifies task complexity (1-10) for model and agent routing

task-batch

Autonomous batch task execution with PRD parsing, task decomposition, and continuous execution until all tasks complete. Uses /orchestrator internally. Stops only for major failures (no internet, token limit, system crash). Use when: (1) processing task lists autonomously, (2) PRD-driven development, (3) batch feature implementation. Triggers: /task-batch, 'batch tasks', 'process PRD', 'run task queue'.

tap-explorer

Tree of Attacks with Pruning for systematic code analysis

stop-slop

A skill for removing AI-generated writing patterns ('slop') from prose. Eliminates telltale signs of AI writing like filler phrases, excessive hedging, overly formal language, and mechanical sentence structures. Use when: writing content that should sound human and natural, editing AI-generated drafts, cleaning up prose for publication, or any content that needs to sound authentic rather than AI-generated. Triggers: 'stop-slop', 'remove AI tells', 'clean up prose', 'make it sound human', 'edit AI writing'.

spec

Produce a verifiable technical specification before coding. 6 mandatory sections: Interfaces, Behaviors, Invariants (from Aristotle Phase 2), File Plan, Test Plan, Exit Criteria (executable bash commands + expected results). Use when: (1) before implementing features with complexity > 4, (2) as Step 1.5 in orchestrator workflow, (3) when requirements need formalization. Triggers: /spec, 'create spec', 'write specification', 'technical spec'.

smart-fork

Smart Forking - Find and fork from relevant historical sessions using parallel memory search across vault, memvid, handoffs, and ledgers

ship

Pre-launch shipping checklist orchestrating /gates, /security, /browser-test, /perf. Ensures nothing ships without passing all quality checks. Use when: (1) before deploying, (2) before merging to main, (3) before release. Triggers: /ship, 'ship it', 'ready to deploy', 'pre-launch check'.