ask-questions-if-underspecified

Ask clarifying questions when requirements are underspecified

108 stars

Best use case

ask-questions-if-underspecified is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using ask-questions-if-underspecified should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/ask-questions-if-underspecified/SKILL.md --create-dirs "https://raw.githubusercontent.com/alfredolopez80/multi-agent-ralph-loop/main/.claude/skills/ask-questions-if-underspecified/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/ask-questions-if-underspecified/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How ask-questions-if-underspecified Compares

| Feature / Agent         | ask-questions-if-underspecified | Standard Approach |
|-------------------------|---------------------------------|-------------------|
| Platform Support        | Not specified                   | Limited / Varies  |
| Context Awareness       | High                            | Baseline          |
| Installation Complexity | Unknown                         | N/A               |

Frequently Asked Questions

What does this skill do?

Ask clarifying questions when requirements are underspecified

Where can I find the source code?

The source code lives in the alfredolopez80/multi-agent-ralph-loop repository on GitHub, under .claude/skills/ask-questions-if-underspecified/.

SKILL.md Source

# Skill: Ask Questions If Underspecified

## v2.88 Key Changes (MODEL-AGNOSTIC)

- **Model-agnostic**: Uses model configured in `~/.claude/settings.json` or CLI/env vars
- **No flags required**: Works with the configured default model
- **Flexible**: Works with GLM-5, Claude, Minimax, or any configured model
- **Settings-driven**: Model selection via `ANTHROPIC_DEFAULT_*_MODEL` env vars

**ultrathink** - Take a deep breath. We're not here to write code. We're here to make a dent in the universe.

## The Vision
Clarity is the foundation of inevitable solutions. Every question should narrow the path to truth.

## Your Work, Step by Step
1. **Analyze requirements**: Identify missing inputs and ambiguities.
2. **Separate MUST vs NICE**: Block on essentials, assume the rest.
3. **Ask precisely**: Short, concrete questions with defaults.
4. **Confirm understanding**: Summarize before proceeding.

## Ultrathink Principles in Practice
- **Think Different**: Question hidden assumptions.
- **Obsess Over Details**: Align questions with real constraints.
- **Plan Like Da Vinci**: Build the question set before asking.
- **Craft, Don't Code**: Precision over volume.
- **Iterate Relentlessly**: Refine questions as context evolves.
- **Simplify Ruthlessly**: Ask only what matters.

## Purpose
Ensure task clarity BEFORE implementation by identifying ambiguities.

## When to Use
- ANY new task or feature request
- Complex modifications
- Unclear requirements

## Process

### 1. Analyze Requirements
Identify:
- Missing technical details
- Unclear scope boundaries
- Ambiguous terminology
- Unstated assumptions

### 2. Categorize Questions

#### MUST_HAVE (Blocking)
Questions that BLOCK implementation until answered:
- Critical architecture decisions
- Security requirements
- Data model choices
- Integration points

#### NICE_TO_HAVE (Assumptions)
Questions where you can make reasonable assumptions:
- UI/UX preferences
- Performance targets
- Edge case handling

### 3. Output Format

```markdown
## 🔍 Clarification Needed

### MUST_HAVE (Please answer before I proceed):
1. [Critical question 1]
2. [Critical question 2]

### NICE_TO_HAVE (I'll assume these if not specified):
- [Optional question] → I'll assume: [default value]
- [Optional question] → I'll assume: [default value]

### My Understanding:
[Summarize what you understand so far]
```
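
Producing that template mechanically can be sketched as follows, from two plain lists: blocking question strings and (question, assumed default) pairs. Function and parameter names here are illustrative, not part of the skill.

```python
# Render the clarification template from categorized questions.
def render_clarification(must_have, nice_to_have, understanding):
    lines = ["## 🔍 Clarification Needed", ""]
    lines.append("### MUST_HAVE (Please answer before I proceed):")
    lines.extend(f"{i}. {q}" for i, q in enumerate(must_have, 1))
    lines.append("")
    lines.append("### NICE_TO_HAVE (I'll assume these if not specified):")
    lines.extend(f"- {q} → I'll assume: {d}" for q, d in nice_to_have)
    lines += ["", "### My Understanding:", understanding]
    return "\n".join(lines)
```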

### 4. Wait for Answers
DO NOT proceed with implementation until MUST_HAVE questions are answered.
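
The gate above reduces to a single check, sketched here with illustrative names: implementation may begin only once every MUST_HAVE question has a non-empty answer.

```python
# True only when every blocking question has a non-blank answer.
def ready_to_proceed(must_have, answers):
    return all(answers.get(q, "").strip() for q in must_have)
```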

## Examples

### Good Clarification
```
MUST_HAVE:
1. Should auth support both email/password AND OAuth providers?
2. What's the session timeout requirement?

NICE_TO_HAVE:
- Rate limiting? → I'll assume: 100 req/min
- Password complexity? → I'll assume: min 8 chars, 1 number, 1 special
```

### Bad (Too Vague)
```
What do you want?
Can you give more details?
```

Related Skills

worktree-pr

108
from alfredolopez80/multi-agent-ralph-loop

Manage git worktrees with PR workflow and multi-agent review (Claude + Codex). Use when developing features in isolation with easy rollback.

vercel-react-best-practices

108
from alfredolopez80/multi-agent-ralph-loop

React and Next.js performance optimization guidelines from Vercel Engineering. Use when writing, reviewing, or refactoring React/Next.js code. Triggers on tasks involving React components, Next.js pages, data fetching, bundle optimization, or performance improvements.

vault

108
from alfredolopez80/multi-agent-ralph-loop

Living knowledge base management. Actions: search (query vault), save (store learning), index (update indices), compile (raw->wiki->rules graduation), init (create vault structure). Follows Karpathy pipeline: ingest->compile->query. Use when: (1) searching accumulated knowledge, (2) saving learnings, (3) compiling raw notes into wiki, (4) initializing a new vault. Triggers: /vault, 'vault search', 'knowledge base', 'save learning'.

testing-anti-patterns

108
from alfredolopez80/multi-agent-ralph-loop

Custom skill for testing-anti-patterns

task-visualizer

108
from alfredolopez80/multi-agent-ralph-loop

Visualize task dependencies and progress (Gastown-style)

task-classifier

108
from alfredolopez80/multi-agent-ralph-loop

Classifies task complexity (1-10) for model and agent routing

task-batch

108
from alfredolopez80/multi-agent-ralph-loop

Autonomous batch task execution with PRD parsing, task decomposition, and continuous execution until all tasks complete. Uses /orchestrator internally. Stops only for major failures (no internet, token limit, system crash). Use when: (1) processing task lists autonomously, (2) PRD-driven development, (3) batch feature implementation. Triggers: /task-batch, 'batch tasks', 'process PRD', 'run task queue'.

tap-explorer

108
from alfredolopez80/multi-agent-ralph-loop

Tree of Attacks with Pruning for systematic code analysis

stop-slop

108
from alfredolopez80/multi-agent-ralph-loop

A skill for removing AI-generated writing patterns ('slop') from prose. Eliminates telltale signs of AI writing like filler phrases, excessive hedging, overly formal language, and mechanical sentence structures. Use when: writing content that should sound human and natural, editing AI-generated drafts, cleaning up prose for publication, or any content that needs to sound authentic rather than AI-generated. Triggers: 'stop-slop', 'remove AI tells', 'clean up prose', 'make it sound human', 'edit AI writing'.

spec

108
from alfredolopez80/multi-agent-ralph-loop

Produce a verifiable technical specification before coding. 6 mandatory sections: Interfaces, Behaviors, Invariants (from Aristotle Phase 2), File Plan, Test Plan, Exit Criteria (executable bash commands + expected results). Use when: (1) before implementing features with complexity > 4, (2) as Step 1.5 in orchestrator workflow, (3) when requirements need formalization. Triggers: /spec, 'create spec', 'write specification', 'technical spec'.

smart-fork

108
from alfredolopez80/multi-agent-ralph-loop

Smart Forking - Find and fork from relevant historical sessions using parallel memory search across vault, memvid, handoffs, and ledgers

ship

108
from alfredolopez80/multi-agent-ralph-loop

Pre-launch shipping checklist orchestrating /gates, /security, /browser-test, /perf. Ensures nothing ships without passing all quality checks. Use when: (1) before deploying, (2) before merging to main, (3) before release. Triggers: /ship, 'ship it', 'ready to deploy', 'pre-launch check'.