competitive-review
Dispatch two competing reviewers (arch-reviewer and impl-reviewer) before deep analysis. Competition produces more thorough results. Use before creating code, modifying architecture, making technical decisions, or answering codebase questions.
Best use case
competitive-review is best used when you need a repeatable AI agent workflow instead of a one-off prompt. It dispatches two competing reviewers (arch-reviewer and impl-reviewer) before deep analysis; competition produces more thorough results. Use it before creating code, modifying architecture, making technical decisions, or answering codebase questions.
Users can expect more consistent workflow output, faster repeated execution, and less time spent rewriting prompts from scratch.
Practical example
Example input
Use the "competitive-review" skill to help with this workflow task. Context: Dispatch two competing reviewers (arch-reviewer and impl-reviewer) before deep analysis. Competition produces more thorough results. Use before creating code, modifying architecture, making technical decisions, or answering codebase questions.
Example output
A structured workflow result with clearer steps, more consistent formatting, and an output that is easier to reuse in the next run.
When to use this skill
- Use this skill when you want a reusable workflow rather than writing the same prompt again and again.
When not to use this skill
- Do not use this when you only need a one-off answer and do not need a reusable workflow.
- Do not use it if you cannot install or maintain the related files, repository context, or supporting tools.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/competitive-review/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How competitive-review Compares
| Feature / Agent | competitive-review | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Dispatch two competing reviewers (arch-reviewer and impl-reviewer) before deep analysis. Competition produces more thorough results. Use before creating code, modifying architecture, making technical decisions, or answering codebase questions.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
Related Guides
AI Agents for Coding
Browse AI agent skills for coding, debugging, testing, refactoring, code review, and developer workflows across Claude, Cursor, and Codex.
AI Agent for Product Research
Browse AI agent skills for product research, competitive analysis, customer discovery, and structured product decision support.
Best AI Skills for Claude
Explore the best AI skills for Claude and Claude Code across coding, research, workflow automation, documentation, and agent operations.
SKILL.md Source
# Competitive Review
Dispatch two competing reviewers before deep analysis. Competition produces more thorough results.
## Purpose
Different perspectives catch different issues. Architecture reviewers find structural problems;
implementation reviewers find code-level bugs and fact-check claims. Running them in competition
("whoever finds more issues gets promoted") increases thoroughness.
## Triggers
Use before ANY complex task involving:
- Creating new code
- Modifying existing architecture
- Making technical decisions
- Answering questions about a codebase
- Building new features
## Protocol
### Step 1: Announce the Competition
Say: **"I'm dispatching two competing reviewers to analyze this."**
### Step 2: Spawn Both Agents IN PARALLEL
```text
Task(agent="arch-reviewer", prompt="[full user question + context]")
Task(agent="impl-reviewer", prompt="[full user question + context]")
```
Tell each agent:
> "You are competing against another agent. Whoever finds more valid issues gets promoted. Be thorough."
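As a minimal sketch, the parallel dispatch in Step 2 could look like the following Python. The `run_agent` helper is hypothetical, standing in for whatever `Task(agent=..., prompt=...)` call your agent framework actually provides:

```python
from concurrent.futures import ThreadPoolExecutor

COMPETITION_NOTE = (
    "You are competing against another agent. "
    "Whoever finds more valid issues gets promoted. Be thorough."
)

def run_agent(name, prompt):
    # Hypothetical stand-in for Task(agent=..., prompt=...);
    # a real implementation would invoke your agent framework here.
    return {"agent": name, "issues": []}

def dispatch_reviewers(question_and_context):
    # Prepend the competition framing to the full user question + context.
    prompt = f"{COMPETITION_NOTE}\n\n{question_and_context}"
    # Spawn both reviewers in parallel rather than sequentially.
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [
            pool.submit(run_agent, name, prompt)
            for name in ("arch-reviewer", "impl-reviewer")
        ]
        return [f.result() for f in futures]

results = dispatch_reviewers("Should we add extension methods to the shared project?")
```

The thread pool only illustrates the "IN PARALLEL" requirement; in an agent runtime the two Task calls would simply be issued in the same turn.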
### Step 3: Collect Results
Wait for both agents to return their analysis.
### Step 4: Merge & Score
```markdown
## Review Competition Results
| Reviewer | Issues Found | HIGH | MED | LOW |
|----------|--------------|------|-----|-----|
| arch-reviewer | X | X | X | X |
| impl-reviewer | Y | Y | Y | Y |
**Winner: [agent with more HIGH severity issues]**
### Combined Issues (deduplicated)
[Merge both lists]
### Verified Facts
[From impl-reviewer's fact-checking]
```
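The merge-and-score step above can be sketched in Python. The issue dictionaries and their `severity`/`text` fields are illustrative assumptions, not a prescribed schema:

```python
from collections import Counter

SEVERITY_ORDER = ("HIGH", "MED", "LOW")

def score_reviewer(issues):
    # Count issues per severity for the results table.
    counts = Counter(issue["severity"] for issue in issues)
    return {sev: counts.get(sev, 0) for sev in SEVERITY_ORDER}

def pick_winner(results):
    # Winner = reviewer with more HIGH-severity issues; ties broken by total count.
    def key(r):
        scores = score_reviewer(r["issues"])
        return (scores["HIGH"], sum(scores.values()))
    return max(results, key=key)["agent"]

def merge_issues(results):
    # Deduplicate by issue text, keeping the first occurrence.
    seen, merged = set(), []
    for r in results:
        for issue in r["issues"]:
            if issue["text"] not in seen:
                seen.add(issue["text"])
                merged.append(issue)
    return merged

results = [
    {"agent": "arch-reviewer", "issues": [
        {"severity": "MED", "text": "Extensions belong in shared project"},
        {"severity": "LOW", "text": "Missing XML documentation"},
    ]},
    {"agent": "impl-reviewer", "issues": [
        {"severity": "HIGH", "text": "Standard extension methods suffice"},
        {"severity": "MED", "text": "Extensions belong in shared project"},
    ]},
]
winner = pick_winner(results)
merged = merge_issues(results)
```

Deduplicating by exact text is a simplification; a real merge would likely judge semantic overlap between the two issue lists.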
### Step 5: Feed to Deep Think
ONLY NOW spawn deep-think-partner with:
- Original question
- Combined issues list
- Verified facts from impl-reviewer
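The handoff in Step 5 amounts to packaging the three items above into a single prompt. A sketch, with illustrative field names (`severity`, `source`, `text`) that are assumptions rather than a fixed format:

```python
def build_deep_think_prompt(question, issues, facts):
    # Package everything deep-think-partner needs into one prompt.
    issue_lines = "\n".join(
        f"- {i['severity']} [{i['source']}]: {i['text']}" for i in issues
    )
    fact_lines = "\n".join(f"- {f}" for f in facts)
    return (
        f"Original question:\n{question}\n\n"
        f"Combined issues (deduplicated):\n{issue_lines}\n\n"
        f"Verified facts:\n{fact_lines}"
    )

prompt = build_deep_think_prompt(
    "Should we add extension methods to the shared project?",
    [{"severity": "HIGH", "source": "impl", "text": "Standard extension methods suffice"}],
    ["Target framework is net10.0 per the .csproj"],
)
```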
## Why Competition Works
1. **Agents try harder** when told they're competing
2. **Different perspectives** catch different issues
3. **The "promotion" framing** creates urgency
4. **Parallel execution** saves time
5. **Merge step** deduplicates and prioritizes
## Example Output
```markdown
## Review Competition Results
| Reviewer | Issues Found | HIGH | MED | LOW |
|----------|--------------|------|-----|-----|
| arch-reviewer | 3 | 0 | 2 | 1 |
| impl-reviewer | 4 | 1 | 2 | 1 |
**Winner: impl-reviewer** (1 HIGH vs 0 HIGH)
### Combined Issues
1. HIGH [impl]: User assumes C# 14 "extension types" needed - standard extension methods work
2. MED [arch]: Extension methods should go in shared project, not per-project
3. MED [impl]: Need to verify target framework in .csproj
4. MED [arch]: Consider source generators for compile-time safety
5. LOW [impl]: Should use file-scoped namespaces
6. LOW [arch]: Missing XML documentation
### Verified Facts
- .NET 10 is LTS (November 2025), not preview
- C# 14 extension types are optional, standard works
### Feeding to deep-think-partner...
```
## Integration with Other Skills
```text
[using-superpowers] - activates chain
|
[epistemic-checkpoint] - verifies facts
|
[competitive-review] - THIS SKILL
|
+-- arch-reviewer (parallel)
+-- impl-reviewer (parallel)
|
[deep-think-partner] - receives verified context
|
[verification-before-completion] - validates result
```
Related Skills
woocommerce-code-review
Review WooCommerce code changes for coding standards compliance. Use when reviewing code locally, performing automated PR reviews, or checking code quality.
security-review
Use this skill when adding authentication, handling user input, working with secrets, creating API endpoints, or implementing payment/sensitive features. Provides comprehensive security checklist and patterns.
performance-testing-review-multi-agent-review
Use when working with multi-agent performance testing reviews
performance-testing-review-ai-review
You are an expert AI-powered code review specialist combining automated static analysis, intelligent pattern recognition, and modern DevOps practices. Leverage AI tools (GitHub Copilot, Qodo, GPT-5, C
fix-review
Verify fix commits address audit findings without new bugs
error-debugging-multi-agent-review
Use when working with multi-agent error debugging reviews
comprehensive-review-pr-enhance
You are a PR optimization expert specializing in creating high-quality pull requests that facilitate efficient code reviews. Generate comprehensive PR descriptions, automate review processes, and ensure PRs follow best practices for clarity, size, and reviewability.
comprehensive-review-full-review
Use when running a comprehensive full review
competitive-landscape
This skill should be used when the user asks to "analyze competitors", "assess competitive landscape", "identify differentiation", "evaluate market positioning", "apply Porter's Five Forces", or requests competitive strategy analysis.
codex-review
Professional code review with auto CHANGELOG generation, integrated with Codex AI
code-review-excellence
Master effective code review practices to provide constructive feedback, catch bugs early, and foster knowledge sharing while maintaining team morale. Use when reviewing pull requests, establishing review standards, or mentoring developers.
code-review-checklist
Comprehensive checklist for conducting thorough code reviews covering functionality, security, performance, and maintainability