product-analysis

Multi-path parallel product analysis with cross-model test-time compute scaling. Spawns parallel agents (Claude Code agent teams + Codex CLI) to explore the product from multiple perspectives, then synthesizes findings into actionable optimization plans. Can invoke competitors-analysis for competitive benchmarking. Use when "product audit", "self-review", "发布前审查" (pre-launch review), "产品分析" (product analysis), "analyze our product", "UX audit", or "信息架构审计" (information architecture audit).

25 stars

Best use case

product-analysis is best used when you need a repeatable AI agent workflow instead of a one-off prompt.


Teams using product-analysis should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/product-analysis/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/daymade/claude-code-skills/product-analysis/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/product-analysis/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How product-analysis Compares

| Feature | product-analysis | Standard Approach |
|---------|------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

It runs multi-path parallel product analysis with cross-model test-time compute scaling: parallel Claude Code agent teams (plus Codex CLI, when available) explore the product from multiple perspectives, and the findings are synthesized into an actionable, prioritized optimization plan. It can also invoke competitors-analysis for competitive benchmarking.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.


SKILL.md Source

# Product Analysis

Multi-path parallel product analysis that combines **Claude Code agent teams** and **Codex CLI** for cross-model test-time compute scaling.

**Core principle**: Same analysis task, multiple AI perspectives, deep synthesis.

## How It Works

```
/product-analysis full
         │
         ├─ Step 0: Auto-detect available tools (codex? competitors?)
         │
    ┌────┼──────────────┐
    │    │              │
 Claude Code         Codex CLI (auto-detected)
 Task Agents         (background Bash)
 (Explore ×3-5)      (×2-3 parallel)
    │                   │
    └────────┬──────────┘
             │
      Synthesis (main context)
             │
      Structured Report
```

## Step 0: Auto-Detect Available Tools

Before launching any agents, detect what tools are available:

```bash
# Check if Codex CLI is installed
which codex 2>/dev/null && codex --version
```

**Decision logic**:
- If `codex` is found: Inform the user — "Codex CLI detected (version X). Will run cross-model analysis for richer perspectives."
- If `codex` is not found: Silently proceed with Claude Code agents only. Do NOT ask the user to install anything.
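
A minimal sketch of this decision logic in shell (the `CODEX_AVAILABLE` flag name is illustrative, not part of the skill):

```bash
# Detect Codex CLI and record the result; never prompt the user to install it.
if command -v codex >/dev/null 2>&1; then
  CODEX_AVAILABLE=1
  echo "Codex CLI detected ($(codex --version)). Will run cross-model analysis."
else
  CODEX_AVAILABLE=0   # fall back silently to Claude Code agents only
fi
```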

Also detect the project type to tailor agent prompts:
```bash
# Detect project type
ls package.json 2>/dev/null    # Node.js/React
ls pyproject.toml 2>/dev/null  # Python
ls Cargo.toml 2>/dev/null      # Rust
ls go.mod 2>/dev/null          # Go
```
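
The four manifest checks above can be collapsed into one helper. A minimal sketch, assuming the `detect_project_type` name and its output labels (which are illustrative, not part of the skill):

```bash
# Echo a project-type label based on which manifest file exists in a directory.
detect_project_type() {
  dir="${1:-.}"
  if   [ -f "$dir/package.json" ];   then echo "node"
  elif [ -f "$dir/pyproject.toml" ]; then echo "python"
  elif [ -f "$dir/Cargo.toml" ];     then echo "rust"
  elif [ -f "$dir/go.mod" ];         then echo "go"
  else echo "unknown"
  fi
}
```

The first match wins, so a repo with both `package.json` and `go.mod` reports `node`; adjust the ordering if a different precedence fits the project.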

## Scope Modes

Parse `$ARGUMENTS` to determine analysis scope:

| Scope | What it covers | Typical agents |
|-------|---------------|----------------|
| `full` | UX + API + Architecture + Docs (default) | 5 Claude + Codex (if available) |
| `ux` | Frontend navigation, information density, user journey, empty state, onboarding | 3 Claude + Codex (if available) |
| `api` | Backend API coverage, endpoint health, error handling, consistency | 2 Claude + Codex (if available) |
| `arch` | Module structure, dependency graph, code duplication, separation of concerns | 2 Claude + Codex (if available) |
| `compare X Y` | Self-audit + competitive benchmarking (invokes `/competitors-analysis`) | 3 Claude + competitors-analysis |

## Phase 1: Parallel Exploration

Launch all exploration agents simultaneously using the Task tool (background mode).

### Claude Code Agents (always)

For each dimension, spawn a Task agent with `subagent_type: Explore` and `run_in_background: true`:

**Agent A — Frontend Navigation & Information Density**
```
Explore the frontend navigation structure and entry points:
1. App.tsx: How many top-level components are mounted simultaneously?
2. Left sidebar: How many buttons/entries? What does each link to?
3. Right sidebar: How many tabs? How many sections per tab?
4. Floating panels: How many drawers/modals? Which overlap in functionality?
5. Count total first-screen interactive elements for a new user.
6. Identify duplicate entry points (same feature accessible from 2+ places).
Give specific file paths, line numbers, and element counts.
```

**Agent B — User Journey & Empty State**
```
Explore the new user experience:
1. Empty state page: What does a user with no sessions see? Count clickable elements.
2. Onboarding flow: How many steps? What information is presented?
3. Prompt input area: How many buttons/controls surround the input box? Which are high-frequency vs low-frequency?
4. Mobile adaptation: How many nav items? How does it differ from desktop?
5. Estimate: Can a new user complete their first conversation in 3 minutes?
Give specific file paths, line numbers, and UX assessment.
```

**Agent C — Backend API & Health**
```
Explore the backend API surface:
1. List ALL API endpoints (method + path + purpose).
2. Identify endpoints that are unused or have no frontend consumer.
3. Check error handling consistency (do all endpoints return structured errors?).
4. Check authentication/authorization patterns (which endpoints require auth?).
5. Identify any endpoints that duplicate functionality.
Give specific file paths and line numbers.
```

**Agent D — Architecture & Module Structure** (full/arch scope only)
```
Explore the module structure and dependencies:
1. Map the module dependency graph (which modules import which).
2. Identify circular dependencies or tight coupling.
3. Find code duplication across modules (same pattern in 3+ places).
4. Check separation of concerns (does each module have a single responsibility?).
5. Identify dead code or unused exports.
Give specific file paths and line numbers.
```

**Agent E — Documentation & Config Consistency** (full scope only)
```
Explore documentation and configuration:
1. Compare README claims vs actual implemented features.
2. Check config file consistency (base.yaml vs .env.example vs code defaults).
3. Find outdated documentation (references to removed features/files).
4. Check test coverage gaps (which modules have no tests?).
Give specific file paths and line numbers.
```

### Codex CLI Agents (auto-detected)

If Codex CLI was detected in Step 0, launch parallel Codex analyses via background Bash.

Each Codex invocation gets the same dimensional prompt but from a different model's perspective:

```bash
codex -m o4-mini \
  -c model_reasoning_effort="high" \
  --full-auto \
  "Analyze the frontend navigation structure of this project. Count all interactive elements visible to a new user on first screen. Identify duplicate entry points where the same feature is accessible from 2+ places. Give specific file paths and counts."
```

Run 2-3 Codex commands in parallel (background Bash), one per major dimension.

**Important**: Codex runs in the project's working directory. It has full filesystem access. The `--full-auto` flag (or `--dangerously-bypass-approvals-and-sandbox` for older versions) enables autonomous execution.
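
The fan-out-and-wait pattern can be sketched with background jobs. Here `run_dim` is a stand-in for the real `codex` invocation shown above, and the log paths are illustrative:

```bash
# Launch one analysis per dimension in the background, then wait for all.
run_dim() {
  # In the skill, this body would be the codex command shown above.
  echo "codex analysis for dimension: $1"
}

outdir=$(mktemp -d)
for dim in navigation api architecture; do
  run_dim "$dim" > "$outdir/$dim.log" 2>&1 &
done
wait   # blocks until every background job has exited
```

Writing each job's output to its own log file keeps the parallel streams separate for the synthesis phase.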

## Phase 2: Competitive Benchmarking (compare scope only)

When scope is `compare`, invoke the competitors-analysis skill for each competitor:

```
Use the Skill tool to invoke: /competitors-analysis {competitor-name} {competitor-url}
```

This delegates to the orthogonal `competitors-analysis` skill which handles:
- Repository cloning and validation
- Evidence-based code analysis (file:line citations)
- Competitor profile generation

## Phase 3: Synthesis

After all agents complete, synthesize findings in the main conversation context.

### Cross-Validation

Compare findings across agents (Claude vs Claude, Claude vs Codex):
- **Agreement** = high confidence finding
- **Disagreement** = investigate deeper (one agent may have missed context)
- **Codex-only finding** = different model perspective, validate manually
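
Agreement checking between two agents' normalized finding lists can be sketched with `comm` (the finding strings and temp-file handling here are hypothetical; `comm` requires sorted input):

```bash
# Two hypothetical finding lists, one normalized finding per line.
a=$(mktemp); b=$(mktemp)
printf 'duplicate nav entry\nunused endpoint /v1/ping\n' | sort > "$a"
printf 'duplicate nav entry\nmissing empty state\n'      | sort > "$b"

comm -12 "$a" "$b"   # lines in both lists  -> high-confidence findings
comm -3  "$a" "$b"   # lines in one list    -> investigate deeper
```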

### Quantification

Extract hard numbers from agent reports:

| Metric | What to measure |
|--------|----------------|
| First-screen interactive elements | Total count of buttons/links/inputs visible to new user |
| Feature entry point duplication | Number of features with 2+ entry points |
| API endpoints without frontend consumer | Count of unused backend routes |
| Onboarding steps to first value | Steps from launch to first successful action |
| Module coupling score | Number of circular or bi-directional dependencies |

### Structured Output

Produce a layered optimization report:

```markdown
## Product Analysis Report

### Executive Summary
[1-2 sentences: key finding]

### Quantified Findings
| Metric | Value | Assessment |
|--------|-------|------------|
| ... | ... | ... |

### P0: Critical (block launch)
[Issues that prevent basic usability]

### P1: High Priority (launch week)
[Issues that significantly degrade experience]

### P2: Medium Priority (next sprint)
[Issues worth addressing but not blocking]

### Cross-Model Insights
[Findings that only one model identified — worth investigating]

### Competitive Position (if compare scope)
[How we compare on key dimensions]
```

## Workflow Checklist

- [ ] Parse `$ARGUMENTS` for scope
- [ ] Auto-detect Codex CLI availability (`which codex`)
- [ ] Auto-detect project type (package.json / pyproject.toml / etc.)
- [ ] Launch Claude Code Explore agents (3-5 parallel, background)
- [ ] Launch Codex CLI commands (2-3 parallel, background) if detected
- [ ] Invoke `/competitors-analysis` if `compare` scope
- [ ] Collect all agent results
- [ ] Cross-validate findings
- [ ] Quantify metrics
- [ ] Generate structured report with P0/P1/P2 priorities

## References

- [references/analysis_dimensions.md](references/analysis_dimensions.md) — Detailed audit dimension definitions and prompts
- [references/synthesis_methodology.md](references/synthesis_methodology.md) — How to weight and merge multi-agent findings
- [references/codex_patterns.md](references/codex_patterns.md) — Codex CLI invocation patterns and flag reference

Related Skills

All of the following are from ComeOnOliver/skillshub.

Betting Analysis

Before writing queries, consult `references/api-reference.md` for odds formats, command parameters, and key concepts.

performing-regression-analysis

This skill empowers Claude to perform regression analysis and modeling using the regression-analysis-tool plugin. It analyzes datasets, generates appropriate regression models (linear, polynomial, etc.), validates the models, and provides performance metrics. Use this skill when the user explicitly requests regression analysis, prediction based on data, or mentions terms like "linear regression," "polynomial regression," "regression model," or "predictive modeling." It is also helpful when the user needs to understand the relationship between variables in a dataset.

regression-analysis-helper

Auto-activating skill for Data Analytics. Triggers on: "regression analysis helper". Part of the Data Analytics skill category.

product-brief

Structured product brief and PRD creation assistant. Use when the user needs to write a product brief, PRD, feature spec, or any document that defines what to build and why. Triggers include "product brief", "PRD", "spec", "feature doc", "write a brief", "define this feature", or when scoping work for engineering.

log-analysis-security

Auto-activating skill for Security Advanced. Triggers on: "log analysis security". Part of the Security Advanced skill category.

impact-analysis-helper

Auto-activating skill for Enterprise Workflows. Triggers on: "impact analysis helper". Part of the Enterprise Workflows skill category.

genkit-production-expert

Build production Firebase Genkit applications including RAG systems, multi-step flows, and tool calling for Node.js/Python/Go. Deploy to Firebase Functions or Cloud Run with AI monitoring. Use when asked to "create genkit flow" or "implement RAG".

funnel-analysis-builder

Auto-activating skill for Data Analytics. Triggers on: "funnel analysis builder". Part of the Data Analytics skill category.

cohort-analysis-creator

Auto-activating skill for Data Analytics. Triggers on: "cohort analysis creator". Part of the Data Analytics skill category.

churn-analysis-helper

Auto-activating skill for Data Analytics. Triggers on: "churn analysis helper". Part of the Data Analytics skill category.

project-workflow-analysis-blueprint-generator

Comprehensive technology-agnostic prompt generator for documenting end-to-end application workflows. Automatically detects project architecture patterns, technology stacks, and data flow patterns to generate detailed implementation blueprints covering entry points, service layers, data access, error handling, and testing approaches across multiple technologies including .NET, Java/Spring, React, and microservices architectures.

gtm-technical-product-pricing

Pricing strategy for technical products. Use when choosing usage-based vs seat-based, designing freemium thresholds, structuring enterprise pricing conversations, deciding when to raise prices, or using price as a positioning signal.