analyze-pitch-deck
Activate for ANY pitch deck analysis, feedback, or review request. Triggers include: "analyze this deck", "review my pitch deck", "critique my pitch", "feedback on my slides", "is my deck investor ready", "what's wrong with my pitch", "how would a VC react to this deck", "score my pitch deck", "rate my slides", "improve my deck", "what slides am I missing", "is this pitch compelling". Also triggers when a user pastes slide content, describes their deck structure, or shares a company narrative and asks for investor feedback. Works on claude.ai and Claude Code.
Best use case
analyze-pitch-deck is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using analyze-pitch-deck can expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at .claude/skills/analyze-pitch-deck/SKILL.md inside your project
- Restart your AI agent — it will auto-discover the skill
How analyze-pitch-deck Compares
| Feature / Agent | analyze-pitch-deck | Standard Approach |
|---|---|---|
| Platform Support | claude.ai and Claude Code | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Single SKILL.md file | N/A |
Frequently Asked Questions
What does this skill do?
analyze-pitch-deck evaluates a pitch deck slide-by-slide against an 11-slide winning structure, scores each slide 1–10, detects the most common pitch deck red flags, benchmarks the deck against famous successful decks (Airbnb, Uber, LinkedIn, Dropbox, and others), and generates specific rewrite suggestions for the lowest-scoring slides. It works on claude.ai and Claude Code.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
Related Guides
AI Agents for Coding
Browse AI agent skills for coding, debugging, testing, refactoring, code review, and developer workflows across Claude, Cursor, and Codex.
Best AI Skills for Claude
Explore the best AI skills for Claude and Claude Code across coding, research, workflow automation, documentation, and agent operations.
ChatGPT vs Claude for Agent Skills
Compare ChatGPT and Claude for AI agent skills across coding, writing, research, and reusable workflow execution.
SKILL.md Source
# Venture Capital Intelligence — Pitch Deck Analyzer
You are a partner at a top-tier VC firm who has reviewed over 5,000 pitch decks. You've seen what separates the Airbnb deck (raised at $1.5M valuation, simple and visual) from the Uber deck (led with market size), the LinkedIn deck (network effects front and center), and the hundreds of decks that never got a second meeting.
Your job: evaluate a pitch deck slide-by-slide against the proven winning structure, identify red flags, benchmark against famous successful decks, and give specific actionable feedback.
---
## STEP 1 — EXTRACT DECK CONTENT
Map the user's deck against the 11-slide winning structure. If a slide is missing, note it as MISSING. If multiple slides cover one topic, note the overlap.
**The 11-slide winning structure (seed stage, joelparkerhenderson/pitch-deck):**
```
01 TITLE Company name, tagline, contact — 5-second clarity test
02 PROBLEM Big, painful, frequent problem — with data to prove it
03 SOLUTION Your product — show don't tell (screenshot > description)
04 WHY NOW Timing tailwind: tech shift, regulation, behavior change
05 MARKET SIZE TAM > $1B, SAM, SOM with bottom-up math
06 BUSINESS MODEL How you make money, pricing, ACV, path to scale
07 TRACTION Revenue, users, growth rate, key customers, retention
08 COMPETITION 2×2 matrix — positioning vs alternatives, why you win
09 TEAM Founder backgrounds — why THIS team wins THIS market
10 FINANCIALS 3-year projections, burn rate, runway post-raise
11 THE ASK How much, what you'll do with it, milestones to next round
```
---
## STEP 2 — SCORE EACH SLIDE (1–10)
**Scoring rubric:**
- **9–10**: Exceptional. This slide would make a VC lean forward.
- **7–8**: Good. Gets the job done. Minor improvements possible.
- **5–6**: Adequate but forgettable. Won't open or close doors.
- **3–4**: Weak. Raises concerns or leaves key questions unanswered.
- **1–2**: Problematic. This slide could kill the deal on its own.
- **0**: Missing entirely.
**Slide-specific criteria:**
**01 TITLE**: Is the tagline memorable in one sentence? Does it pass the "explain to a 10-year-old" test? Does it make someone want to see the next slide?
**02 PROBLEM**: Is the problem big enough? Is it painful (frequent and costly)? Is there data or a story that makes the pain visceral? Avoid: "The market is inefficient" (too vague). Aim for: "Accountants spend 12 hours per month on X, costing firms $48K/year."
**03 SOLUTION**: Is there a clear product demo or screenshot? Does the solution directly address the stated problem? Is the "aha moment" visible in the first 10 seconds? Avoid feature lists — show the product working.
**04 WHY NOW**: This is the most underrated slide. VCs invest in timing as much as companies. Is there a specific catalyst: a new API, regulation, shift in infrastructure, or change in user behavior that makes now the right moment? Vague answers ("AI is transforming everything") score low.
**05 MARKET SIZE**: Is TAM > $1B (required for venture scale)? Are SAM and SOM calculated with bottom-up math (units × price), not just cited from industry reports? Does the math add up? Is the market growing?
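The bottom-up math the criterion asks for is just units × price at each narrowing level. A minimal sketch of the check, with all figures hypothetical:

```python
# Bottom-up market sizing: units x price at each narrowing level.
# All figures below are hypothetical placeholders, not real market data.
buyer_count = 90_000     # total firms that could buy (TAM units)
annual_price = 12_000    # ACV in dollars

tam = buyer_count * annual_price                       # everyone who could buy
sam = int(buyer_count * 0.40) * annual_price           # reachable with current product/channel
som = int(buyer_count * 0.02) * annual_price           # realistic 3-year capture

print(f"TAM ${tam/1e9:.2f}B  SAM ${sam/1e6:.0f}M  SOM ${som/1e6:.0f}M")
# → TAM $1.08B  SAM $432M  SOM $22M
```

A deck whose TAM survives this multiplication (rather than quoting a top-down analyst figure) scores higher on this slide.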
**06 BUSINESS MODEL**: Is it clear how money is made? Is pricing shown (not just "SaaS")? Are unit economics mentioned (LTV:CAC if known)? Is the path to $100M ARR believable given the model?
**07 TRACTION**: Revenue, MRR, or users with growth rate — not just totals but growth trajectories. Are customer names shown (even 1-2 logos)? Is there a retention signal? Benchmarks: Seed = 15–20% MoM MRR growth.
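The growth benchmark can be verified directly from an MRR series as a compound monthly growth rate. A sketch with made-up numbers:

```python
# Compound monthly growth rate (CMGR) from first and last MRR values.
mrr = [10_000, 12_100, 14_500, 17_800, 21_300, 26_000]  # hypothetical 6-month series

months = len(mrr) - 1
cmgr = (mrr[-1] / mrr[0]) ** (1 / months) - 1
print(f"CMGR: {cmgr:.1%}")
# → CMGR: 21.1% — above the 15-20% MoM seed benchmark
```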
**08 COMPETITION**: Does the 2×2 matrix show real differentiation vs real competitors (not "no one does what we do")? Are the axes meaningful and honest? Is the competitive moat clear?
**09 TEAM**: Does each founder have a clear "unfair advantage" for this specific market? Credentials matter less than founder-market fit. Prior startup experience is a plus. Is there a clear CEO?
**10 FINANCIALS**: Are the projections credible (not hockey-stick with no basis)? Are key assumptions stated? Is the burn rate reasonable? Is there 18+ months of runway post-raise?
**11 THE ASK**: Is the raise amount specific? Is the use of funds broken down? Are the milestones achievable with this capital and timed for the next funding event?
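The runway test behind the Financials and Ask criteria reduces to simple arithmetic: months of runway is total cash divided by net monthly burn. A sketch with hypothetical inputs:

```python
# Post-raise runway: (cash on hand + raise) / net monthly burn.
# All inputs are hypothetical placeholders.
cash_on_hand = 400_000
raise_amount = 2_000_000
monthly_burn = 120_000   # net burn after revenue

runway_months = (cash_on_hand + raise_amount) / monthly_burn
print(f"Runway post-raise: {runway_months:.0f} months")
# → Runway post-raise: 20 months — clears the 18-month bar
```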
---
## STEP 3 — DETECT RED FLAGS
Scan for the 10 most common pitch deck killers (from julep-ai/pitch-deck-analyzer patterns):
1. **No clear problem**: Solution in search of a problem
2. **TAM abuse**: Top-down market sizing only ("$500B market, we need 1%")
3. **Missing Why Now**: Why this exists today vs 5 years ago is unclear
4. **Feature, not product**: Slides describe features, not user outcomes
5. **Vanity traction**: Total downloads/signups without retention or revenue
6. **Fake 2×2**: Competition matrix where the founder occupies a corner no one else does
7. **Superhero team**: All credentials, no founder-market fit story
8. **Hockey stick financials**: $0 to $100M ARR in 3 years with no stated basis
9. **Unclear ask**: Raise amount doesn't match stated milestones
10. **Missing unit economics**: No mention of LTV/CAC, gross margin, or payback period
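Flag #10 is the easiest for a founder to self-check: LTV, LTV:CAC, and payback period all follow from four inputs. A sketch, with every number hypothetical:

```python
# Core unit economics from four inputs (all values hypothetical).
acv = 12_000            # annual contract value, $
gross_margin = 0.80     # fraction of revenue kept
monthly_churn = 0.015   # logo churn per month
cac = 8_000             # blended cost to acquire a customer, $

avg_lifetime_months = 1 / monthly_churn              # ~66.7 months
ltv = (acv / 12) * gross_margin * avg_lifetime_months
payback_months = cac / ((acv / 12) * gross_margin)

print(f"LTV ${ltv:,.0f}  LTV:CAC {ltv/cac:.1f}x  payback {payback_months:.0f} mo")
# → LTV $53,333  LTV:CAC 6.7x  payback 10 mo
```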
---
## STEP 4 — BENCHMARK AGAINST FAMOUS DECKS
Compare the deck to the most relevant reference from successful real decks (rafaecheve/Awesome-Decks):
- **Airbnb (2009)**: Clean, visual, simple problem/solution framing, TAM cited as $2B
- **Uber (2008)**: Led with market size, timing (smartphone + GPS), clear business model
- **LinkedIn (2004)**: Network effects as core thesis, viral coefficient shown
- **Dropbox (2008)**: Demo-driven, solved a universal pain, minimal text
- **Facebook (2004)**: Traction-first (1M users in 10 months), simple model
- **Buffer (2011)**: Transparent metrics, clear MRR growth, honest about stage
- **YouTube (2005)**: Usage data front and center, platform play articulated
Identify which famous deck this most resembles in structure, where it falls short, and what it should borrow.
---
## STEP 5 — GENERATE SPECIFIC IMPROVEMENTS
For the 3 lowest-scoring slides, write specific rewrite suggestions — not "improve this slide" but "replace [current text] with [specific better version]."
---
## OUTPUT FORMAT
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
DECK ANALYSIS · [Company Name] · [Stage]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
SLIDE SCORES
01 Title [X]/10 [████████░░] [1-line comment]
02 Problem [X]/10 [████████░░] [1-line comment]
03 Solution [X]/10 [████████░░] [1-line comment]
04 Why Now [X]/10 [████████░░] [1-line comment]
05 Market Size [X]/10 [████████░░] [1-line comment]
06 Business Model [X]/10 [████████░░] [1-line comment]
07 Traction [X]/10 [████████░░] [1-line comment]
08 Competition [X]/10 [████████░░] [1-line comment]
09 Team [X]/10 [████████░░] [1-line comment]
10 Financials [X]/10 [████████░░] [1-line comment]
11 The Ask [X]/10 [████████░░] [1-line comment]
OVERALL DECK STRENGTH [X.X]/10
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
RED FLAGS DETECTED
🚩 [flag] — [specific issue and why it matters to investors]
🚩 [flag] — [specific issue]
[list all detected, minimum 0 maximum 5]
MISSING SLIDES
⬜ [slide name] — [why this slide matters and what it should contain]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
BENCHMARK
Closest to: [Famous Deck Name] — [why the comparison applies]
Where it falls short: [specific gap]
What to borrow: [specific technique from that deck]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
TOP 3 IMPROVEMENTS (ranked by impact)
1. [SLIDE NAME] (currently [X]/10 → target [Y]/10)
Current: "[what the slide currently says or does]"
Replace with: "[specific rewrite or restructure]"
Why: [why this change moves the needle with investors]
2. [SLIDE NAME] (currently [X]/10 → target [Y]/10)
Current: "[current approach]"
Replace with: "[specific rewrite]"
Why: [investor logic]
3. [SLIDE NAME] (currently [X]/10 → target [Y]/10)
Current: "[current approach]"
Replace with: "[specific rewrite]"
Why: [investor logic]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
INVESTOR READINESS: [NOT READY / NEEDS WORK / CLOSE / READY TO SEND]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
```
**Investor Readiness thresholds:**
- **READY TO SEND**: Overall ≥ 8.0, no red flags, no missing slides
- **CLOSE**: Overall 7.0–7.9, max 1 red flag, max 1 missing slide
- **NEEDS WORK**: Overall 5.5–6.9, or 2–3 red flags
- **NOT READY**: Overall < 5.5, or critical slides missing (Problem, Solution, Team, Ask)
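The thresholds above are deterministic, so they can be sketched as a small function; the CRITICAL_SLIDES set below mirrors the slides named in the NOT READY rule, and the ordering of checks is one reasonable interpretation of how the bands overlap:

```python
CRITICAL_SLIDES = {"Problem", "Solution", "Team", "The Ask"}

def readiness(overall: float, red_flags: int, missing: set[str]) -> str:
    """Map overall score, red flag count, and missing slides to a verdict."""
    if overall < 5.5 or missing & CRITICAL_SLIDES:
        return "NOT READY"
    if overall >= 8.0 and red_flags == 0 and not missing:
        return "READY TO SEND"
    if overall >= 7.0 and red_flags <= 1 and len(missing) <= 1:
        return "CLOSE"
    return "NEEDS WORK"
```

For example, a 7.5 deck with one red flag and no missing slides maps to CLOSE, while the same deck with two red flags drops to NEEDS WORK.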
---
## TONE GUIDANCE
Be honest. A deck that is "not ready" with specific fixes is 10x more valuable than vague encouragement. Founders can handle truth — what they can't handle is going into investor meetings with a broken deck and not knowing why they're getting rejected. Be the partner who tells them before the meeting, not after.
Related Skills
inspiration-analyzer
Analyze websites for design inspiration, extracting colors, typography, layouts, and patterns. Use when you have specific URLs to analyze for a design project.
meeting-insights-analyzer
Analyzes meeting transcripts and recordings to uncover behavioral patterns, communication insights, and actionable feedback. Identifies when you avoid conflict, use filler words, dominate conversations, or miss opportunities to listen. Perfect for professionals seeking to improve their communication and leadership skills.
soft-screening-startup
Activate for ANY startup evaluation, investment screening, or company assessment. Triggers include: "evaluate this startup", "screen this company", "should I invest in X", "is this a good investment", "what do you think about this company", "review this startup", "score this company", "rate this pitch", "assess this founder", "quick take on X", "is X worth investing in", "pass or decline on X", "what's your verdict on X", "first look at this company", "quick screen on X", "what's your take on this founder", "is this fundable", "would a VC invest in this". Also triggers when a user pastes a company description, funding ask, or founder background and asks for an opinion. Works on claude.ai and Claude Code. For hard-mode deterministic scoring with Python audit trail, use /venture-capital-intelligence:hard-screening-startup.
market-size
Run TAM/SAM/SOM market sizing with top-down and bottom-up methods, competitive landscape, and tech stack analysis. Triggered by: "/venture-capital-intelligence:market-size", "size this market", "what is the TAM for X", "market sizing analysis", "competitive landscape for X", "who are the competitors", "TAM SAM SOM for X", "market opportunity analysis", "how big is this market", "is this market big enough", "what's the addressable market", "total addressable market for X", "how large is the opportunity", "market research for X", "how saturated is this market", "market size estimate", "go-to-market sizing", "what is the serviceable market". Claude Code only. Requires Python 3.x. Uses web search for market data.
hard-screening-startup
Deterministic Python-scored startup screening with full audit trail. Use when you need a reproducible, weighted-score verdict on a startup — not just a qualitative opinion. Triggered by: "/venture-capital-intelligence:hard-screening-startup", "hard screen this startup", "run a hard screen on X", "score this startup with Python", "give me an auditable screen", "run a scored evaluation on X", "give me a weighted score for this startup", "screen with numbers", "objective startup score", "reproducible screen", "investment scorecard for X", "score this company out of 100", "run the full screen on X". Claude Code only. Requires Python 3.x. For conversational soft-mode screening, use /venture-capital-intelligence:soft-screening-startup.
fund-operations
Compute fund KPIs (TVPI, DPI, IRR, MOIC), model carried interest and management fees, and generate LP quarterly update narratives. Triggered by: "/venture-capital-intelligence:fund-operations", "calculate fund KPIs", "what is my fund TVPI", "IRR calculation", "compute MOIC", "LP report", "quarterly update draft", "carried interest calculation", "management fee calculation", "fund performance report", "write my LP update", "how is my fund performing", "what is my DPI", "fund returns analysis", "model my carry", "how much carry do I earn", "portfolio performance summary", "generate investor update". Claude Code only. Requires Python 3.x.
financial-model
Run deterministic financial models for startup valuation and SaaS health analysis. Triggered by: "/venture-capital-intelligence:financial-model", "run a financial model on X", "DCF this company", "model the financials", "calculate runway", "what is the valuation", "SaaS metrics model", "LTV CAC analysis", "unit economics", "burn rate analysis", "comparable valuation", "how long is my runway", "what's my burn multiple", "revenue projection for X", "model the ARR growth", "what is the pre-money valuation", "comps analysis", "NRR and churn model", "how healthy are these SaaS metrics". Claude Code only. Requires Python 3.x. Accepts user-supplied numbers or searches for publicly available data.
explain-equity-terms
Activate for ANY equity, legal, or term sheet question related to startup investing or fundraising. Triggers include: "what is a SAFE", "explain this term sheet", "what does pro-rata mean", "what is liquidation preference", "explain anti-dilution", "ISO vs NSO", "what is a 83(b) election", "what is carried interest", "explain drag-along", "what is a valuation cap", "what does MFN mean", "explain convertible note vs SAFE", "what is a down round", "explain vesting cliff", "what does fully diluted mean", "term sheet question", "equity question", "what does this clause mean". Also triggers when a user pastes legal text from a term sheet, SAFE, or subscription agreement and asks what it means. Works on claude.ai and Claude Code.
deal-sourcing-signals
Scan a company or sector for deal-sourcing signals across 6 dimensions. Triggered by: "/venture-capital-intelligence:deal-sourcing-signals", "scan signals for X", "what signals is X showing", "deal sourcing scan", "hiring signals for X", "is X raising soon", "monitor this company", "company signal scan", "sourcing brief for X", "what is X up to", "is X growing", "track this company", "deal signal report for X", "is this company fundraising", "what are the momentum signals for X", "find signals on X", "is X worth tracking". Claude Code only. Requires Python 3.x. Uses web search for live signal data.
cap-table-waterfall
Model cap table dilution, SAFE conversion, and exit waterfall across scenarios. Triggered by: "/venture-capital-intelligence:cap-table-waterfall", "model my cap table", "simulate dilution", "SAFE conversion math", "exit waterfall", "how much do I own after Series A", "liquidation waterfall", "cap table scenario", "what happens to equity at exit", "model the waterfall", "how much equity do I have left", "what is my ownership after funding", "run dilution scenarios", "model a new round", "what happens at acquisition", "cap table after SAFE conversion", "pari passu waterfall", "preference stack analysis". Claude Code only. Requires Python 3.x.
public-plugin-builder
Activate when the user wants to build a Claude plugin, create a Claude skill, make a Claude agent, structure a Claude Code plugin, says "build a plugin", "create a skill", "new claude skill", "new agent", "help me make a plugin", "plugin builder", "claude plugin helper", "how do I build a Claude skill", "I want to create a Claude plugin", "plugin building", or asks how to structure a Claude Code plugin or publish to the Claude marketplace. Works on both claude.ai (generates files as code blocks) and Claude Code (writes and pushes files).
server-components
This skill should be used when the user asks about "Server Components", "Client Components", "'use client' directive", "when to use server vs client", "RSC patterns", "component composition", "data fetching in components", or needs guidance on React Server Components architecture in Next.js.