star-story-extraction
Auto-invoke after task completion to extract interview-ready STAR stories from completed work.
Best use case
star-story-extraction is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using star-story-extraction can expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/star-story-extraction/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
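The manual placement steps above can also be scripted. A minimal sketch in Python, assuming you have already downloaded the SKILL.md text yourself (the function and its name are illustrative, not part of the skill):

```python
from pathlib import Path

def install_skill(project_root: str, skill_md_text: str) -> Path:
    """Write SKILL.md to the path the agent scans on restart.

    skill_md_text is the content of SKILL.md, fetched separately
    (e.g. downloaded from the GitHub link at the top of the page).
    """
    dest = (Path(project_root) / ".claude" / "skills"
            / "star-story-extraction" / "SKILL.md")
    # Create the .claude/skills/star-story-extraction/ tree if missing.
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_text(skill_md_text, encoding="utf-8")
    return dest
```

After running this against your project root, restarting the agent should pick the skill up from the standard discovery path.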
How star-story-extraction Compares
| Feature / Agent | star-story-extraction | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Auto-invoke after task completion to extract interview-ready STAR stories from completed work.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# STAR Story Extraction

> "Every feature you build is an interview answer waiting to be told."

## Purpose

Transform completed work into compelling interview stories using the STAR method. These stories demonstrate real problem-solving ability.

---

## The STAR Method

| Component | Question | Focus |
|-----------|----------|-------|
| **S**ituation | "What was the context?" | Set the scene, explain the problem |
| **T**ask | "What were YOU responsible for?" | YOUR specific role and responsibility |
| **A**ction | "What did YOU do?" | Specific technical actions YOU took |
| **R**esult | "What was the outcome?" | Impact, metrics, improvements |

---

## Extraction Flow

### Step 1: Identify the Story Type

What kind of problem did you solve?

| Story Type | Good For Questions Like |
|------------|------------------------|
| Technical challenge | "Tell me about a difficult bug you solved" |
| Feature implementation | "Describe a feature you're proud of" |
| Performance optimization | "How did you improve system performance?" |
| Security fix | "Tell me about a security issue you addressed" |
| Refactoring | "Describe a time you improved code quality" |
| Learning curve | "Tell me about a time you learned something quickly" |

### Step 2: Guide Through STAR

#### Situation (2-3 sentences)

> "What was the context? What problem or challenge existed before you started?"

**Good elements:**
- Business context (why it mattered)
- Technical constraints
- Scale/impact of the problem

**Avoid:**
- Too much background
- Irrelevant details
- Blaming others

#### Task (1-2 sentences)

> "What were YOU specifically responsible for? What was your role?"

**Good elements:**
- Clear ownership
- Specific scope
- Why you were the one to do it

**Avoid:**
- "We did this" (use "I")
- Vague responsibilities

#### Action (The meat - 3-5 sentences)

> "Walk me through the specific steps YOU took. Be technical."

**Good elements:**
- Specific technologies used
- Problem-solving approach
- Trade-offs considered
- Technical decisions made

**Avoid:**
- Glossing over the how
- Buzzword soup
- "I just implemented it"

#### Result (1-2 sentences)

> "What was the outcome? Can you quantify the impact?"

**Good elements:**
- Metrics where possible (50% faster, 0 bugs in production)
- Business impact
- What you learned

**Avoid:**
- "It worked" (too vague)
- No mention of impact

---

## Story Quality Checklist

- [ ] Uses "I" not "we" (shows ownership)
- [ ] Includes specific technologies
- [ ] Demonstrates problem-solving
- [ ] Shows technical depth
- [ ] Has measurable result if possible
- [ ] Is 2-3 minutes when spoken
- [ ] Answers the implied "why hire you?"

---

## Story Template

```markdown
# STAR Story: [Feature/Problem Name]

**Date:** [When completed]
**Type:** [Technical Challenge / Feature / Performance / Security / Refactor]

## Situation
[The context. What problem existed? Why did it matter?]

## Task
[YOUR specific responsibility. What were YOU asked to do?]

## Action
[The specific steps YOU took. Be technical. Show your thought process.]

## Result
[The outcome. Metrics if possible. What impact did it have?]

---

## Interview Variations

This story can answer:
- "Tell me about a time you [X]"
- "Describe a challenging [Y] you worked on"
- "How did you approach [Z]?"

## Key Technical Points to Mention
- [Technology/pattern 1]
- [Technology/pattern 2]
- [Decision/trade-off made]
```

---

## Example: Good vs Bad STAR

### Bad Story

> "I built a login form. It had validation. It worked."

Problems: No context, no challenge, no depth, no impact.

### Good Story

> **Situation:** Our SaaS application was experiencing a 40% drop-off during signup because the existing form had poor UX and no real-time validation, frustrating users.
>
> **Task:** I was responsible for rebuilding the entire authentication flow, focusing on reducing friction while maintaining security.
>
> **Action:** I implemented a multi-step form with real-time validation using React Hook Form for performance. I added JWT authentication with secure refresh token rotation to handle long sessions. The key challenge was balancing security (short token expiry) with UX (no jarring logouts), which I solved by implementing silent refresh 5 minutes before expiry.
>
> **Result:** Sign-up completion improved by 35%, and we've had zero authentication-related security incidents since launch. The pattern I built is now used across our other products.

---

## Socratic Story Questions

Guide the junior with these:

1. **Finding the story:** "What was the hardest part of this feature?"
2. **Adding depth:** "Walk me through your debugging process when X happened."
3. **Showing ownership:** "What decision did YOU make that shaped this?"
4. **Quantifying results:** "How would you measure the impact of this work?"
5. **Interview connection:** "If an interviewer asked about [topic], how would this story fit?"

---

## Common Story Mistakes

| Mistake | Fix |
|---------|-----|
| "We built..." | Use "I implemented..." |
| Too long (10+ minutes) | Cut to 2-3 minutes |
| No technical depth | Add specific technologies and decisions |
| No result | Always end with impact |
| Only happy path | Include challenges overcome |

---

## Save Location

Stories are saved to:

```
mentorspec/career/stories/[date]-[feature-name].md
```

Example: `mentorspec/career/stories/2026-01-15-jwt-auth.md`
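The SKILL.md source's save-location convention can be sketched as a small helper. This is a non-authoritative sketch, assuming the skill simply writes a simplified version of the story template into `mentorspec/career/stories/`; the function name and slug rule are illustrative:

```python
from datetime import date
from pathlib import Path

# Simplified version of the story template from SKILL.md (illustrative).
TEMPLATE = """# STAR Story: {name}

**Date:** {date}
**Type:** {story_type}

## Situation

## Task

## Action

## Result
"""

def new_story(root: str, d: date, feature: str,
              story_type: str = "Feature") -> Path:
    """Create a skeleton story at mentorspec/career/stories/[date]-[slug].md."""
    slug = feature.lower().replace(" ", "-")  # "JWT Auth" -> "jwt-auth"
    path = (Path(root) / "mentorspec" / "career" / "stories"
            / f"{d.isoformat()}-{slug}.md")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(
        TEMPLATE.format(name=feature, date=d.isoformat(),
                        story_type=story_type),
        encoding="utf-8",
    )
    return path
```

With `date(2026, 1, 15)` and the feature name "JWT Auth", this yields the example path from SKILL.md, `mentorspec/career/stories/2026-01-15-jwt-auth.md`.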
Related Skills
user-story-generator
User Story Generator - Auto-activating skill for Enterprise Workflows. Triggers on: user story generator. Part of the Enterprise Workflows skill category.
storybrand-messaging
Build clear brand messaging using narrative structure that positions the customer as hero. Use when the user mentions "brand message", "website copy", "elevator pitch", "one-liner", "messaging isn't resonating", or "brand script". Covers landing page copy, marketing collateral, and consistent communication. For memorable messaging, see made-to-stick. For product positioning, see obviously-awesome. Trigger with 'storybrand', 'messaging'.
quickstart-guide-generator
Quickstart Guide Generator - Auto-activating skill for Technical Documentation. Triggers on: quickstart guide generator. Part of the Technical Documentation skill category.
lean-startup
Design MVPs, validated learning experiments, and pivot-or-persevere decisions using Build-Measure-Learn. Use when the user mentions "MVP scope", "validated learning", "pivot or persevere", "vanity metrics", or "test assumptions". Covers innovation accounting and actionable metrics. For 5-day prototype testing, see design-sprint. For customer motivation analysis, see jobs-to-be-done. Trigger with 'lean', 'startup'.
github-actions-starter
GitHub Actions Starter - Auto-activating skill for DevOps Basics. Triggers on: github actions starter. Part of the DevOps Basics skill category.
data-story-outliner
Data Story Outliner - Auto-activating skill for Data Analytics. Triggers on: data story outliner. Part of the Data Analytics skill category.
extraction-proposer
Scan ICE-Crawler extraction logs, pick promising algorithms/tools, and emit skill creation proposals (name, scope, source files, next steps).
repo-story-time
Generate a comprehensive repository summary and narrative story from commit history
github-copilot-starter
Set up complete GitHub Copilot configuration for a new project based on technology stack
dataverse-python-quickstart
Generate Python SDK setup + CRUD + bulk + paging snippets using official patterns.
copilot-cli-quickstart
Use this skill when someone wants to learn GitHub Copilot CLI from scratch. Offers interactive step-by-step tutorials with separate Developer and Non-Developer tracks, plus on-demand Q&A. Just say "start tutorial" or ask a question! Note: This skill targets GitHub Copilot CLI specifically and uses CLI-specific tools (ask_user, sql, fetch_copilot_cli_documentation).
claude-code-history-files-finder
Finds and recovers content from Claude Code session history files. This skill should be used when searching for deleted files, tracking changes across sessions, analyzing conversation history, or recovering code from previous Claude interactions. Triggers include mentions of "session history", "recover deleted", "find in history", "previous conversation", or ".claude/projects".