prompt-improver

This skill enriches vague prompts with targeted research and clarification before execution. It should be used when a prompt is determined to be vague and requires systematic research, question generation, and execution guidance.

153 stars

Best use case

prompt-improver is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using prompt-improver should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/prompt-improver/SKILL.md --create-dirs "https://raw.githubusercontent.com/Microck/ordinary-claude-skills/main/skills_all/prompt-improver/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/prompt-improver/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How prompt-improver Compares

| Feature / Agent         | prompt-improver | Standard Approach |
| ----------------------- | --------------- | ----------------- |
| Platform Support        | Not specified   | Limited / Varies  |
| Context Awareness       | High            | Baseline          |
| Installation Complexity | Unknown         | N/A               |

Frequently Asked Questions

What does this skill do?

This skill enriches vague prompts with targeted research and clarification before execution. It should be used when a prompt is determined to be vague and requires systematic research, question generation, and execution guidance.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Prompt Improver Skill

## Purpose

Transform vague, ambiguous prompts into actionable, well-defined requests through systematic research and targeted clarification. This skill is invoked when the hook has already determined a prompt needs enrichment.

## When This Skill is Invoked

**Automatic invocation:**
- UserPromptSubmit hook evaluates prompt
- Hook determines prompt is vague (missing specifics, context, or clear target)
- Hook invokes this skill to guide research and questioning

**Manual invocation:**
- To enrich a vague prompt with research-based questions
- When building or testing prompt evaluation systems
- When prompt lacks sufficient context even with conversation history

**Assumptions:**
- Prompt has already been identified as vague
- Evaluation phase is complete (done by hook)
- Proceed directly to research and clarification

## Core Workflow

This skill follows a 4-phase approach to prompt enrichment:

### Phase 1: Research

Create a dynamic research plan using TodoWrite before asking questions.

**Research Plan Template:**
1. **Check conversation history first** - Avoid redundant exploration if context already exists
2. **Review codebase** if needed:
   - Task/Explore for architecture and project structure
   - Grep/Glob for specific patterns, related files
   - Check git log for recent changes
   - Search for errors, failing tests, TODO/FIXME comments
3. **Gather additional context** as needed:
   - Read local documentation files
   - WebFetch for online documentation
   - WebSearch for best practices, common approaches, current information
4. **Document findings** to ground questions in actual project context

**Critical Rules:**
- NEVER skip research
- Check conversation history before exploring codebase
- Questions must be grounded in actual findings, not assumptions or base knowledge

For detailed research strategies, patterns, and examples, see [references/research-strategies.md](references/research-strategies.md).
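The research plan above can be sketched as structured todo items. A minimal Python illustration — the item fields (`content`, `status`) are assumptions for illustration, not TodoWrite's documented schema:

```python
# Hypothetical TodoWrite-style research plan for a vague prompt.
# Field names ("content", "status") are illustrative assumptions.
research_plan = [
    {"content": "Check conversation history for existing context", "status": "pending"},
    {"content": "Explore codebase architecture (Task/Explore)", "status": "pending"},
    {"content": "Grep for related patterns, TODO/FIXME, failing tests", "status": "pending"},
    {"content": "Check git log for recent changes", "status": "pending"},
    {"content": "Document findings to ground the questions", "status": "pending"},
]

# History comes first, so redundant codebase exploration can be skipped.
assert research_plan[0]["content"].startswith("Check conversation history")
```

The ordering encodes the critical rule: conversation history is checked before any codebase exploration begins.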

### Phase 2: Generate Targeted Questions

Based on research findings, formulate 1-6 questions that will clarify the ambiguity.

**Question Guidelines:**
- **Grounded**: Every option comes from research (codebase findings, documentation, common patterns)
- **Specific**: Avoid vague options like "Other approach"
- **Multiple choice**: Provide 2-4 concrete options per question
- **Focused**: Each question addresses one decision point
- **Contextual**: Include brief explanations of trade-offs

**Number of Questions:**
- **1-2 questions**: Simple ambiguity (which file? which approach?)
- **3-4 questions**: Moderate complexity (scope + approach + validation)
- **5-6 questions**: Complex scenarios (major feature with multiple decision points)

For question templates, effective patterns, and examples, see [references/question-patterns.md](references/question-patterns.md).
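The question-count guidance above amounts to roughly one question per open decision point, clamped to the 1-6 range. A small sketch of that rule:

```python
def question_count(decision_points: int) -> int:
    """Clamp clarification questions to the skill's 1-6 range,
    assuming roughly one question per open decision point."""
    return max(1, min(decision_points, 6))

print(question_count(2))   # simple ambiguity -> 2
print(question_count(10))  # complex scenario, capped -> 6
```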

### Phase 3: Get Clarification

Use the AskUserQuestion tool to present your research-grounded questions.

**AskUserQuestion Format:**
```
- question: Clear, specific question ending with ?
- header: Short label (max 12 chars) for UI display
- multiSelect: false (unless choices aren't mutually exclusive)
- options: Array of 2-4 specific choices from research
  - label: Concise choice text (1-5 words)
  - description: Context about this option (trade-offs, implications)
```

**Important:** Always include multiSelect field (true/false). User can always select "Other" for custom input.
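Filled in, a payload following this format might look like the sketch below. The field names mirror the format spec above; the question contents themselves are hypothetical:

```python
# Hypothetical AskUserQuestion payload; the structure follows the format
# spec, but the bug options are invented for illustration.
ask = {
    "question": "Which bug are you referring to?",
    "header": "Bug target",   # short label, max 12 chars
    "multiSelect": False,     # options are mutually exclusive
    "options": [
        {"label": "Login failure",
         "description": "Error handling in the auth module swallows exceptions"},
        {"label": "Session timeout",
         "description": "Sessions expire earlier than the configured TTL"},
    ],
}

# Sanity checks against the format's constraints.
assert ask["question"].endswith("?")
assert len(ask["header"]) <= 12
assert 2 <= len(ask["options"]) <= 4
```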

### Phase 4: Execute with Context

Proceed with the original user request using:
- Original prompt intent
- Clarification answers from user
- Research findings and context
- Conversation history

Execute the request as if it had been clear from the start.
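The four context sources listed above can be pictured as one merged request. A hypothetical sketch — the skill does this implicitly in conversation, not through an API:

```python
def build_enriched_request(prompt: str, answers: dict,
                           findings: list, history: list) -> dict:
    """Merge the four Phase 4 context sources into a single request.
    Illustrative only; names and shapes are assumptions."""
    return {
        "intent": prompt,
        "clarifications": answers,
        "research": findings,
        "history": history,
    }

request = build_enriched_request(
    "fix the bug",
    {"Bug target": "Login authentication failure"},
    ["auth.py:145 swallows errors"],
    ["user reported login failures"],
)
print(sorted(request))  # → ['clarifications', 'history', 'intent', 'research']
```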

## Examples

### Example 1: Skill Invocation → Research → Questions → Execution

**Hook evaluation:** Determined prompt is vague
**Original prompt:** "fix the bug"
**Skill invoked:** Yes (prompt lacks target and context)

**Research plan:**
1. Check conversation history for recent errors
2. Explore codebase for failing tests
3. Grep for TODO/FIXME comments
4. Check git log for recent problem areas

**Research findings:**
- Recent conversation mentions login failures
- auth.py:145 has a try/except block swallowing errors
- Tests failing in test_auth.py

**Questions generated:**
1. Which bug are you referring to?
   - Login authentication failure (auth.py:145)
   - Session timeout issues (session.py:89)
   - Other

**User answer:** Login authentication failure

**Execution:** Fix the error handling in auth.py:145 that's causing login failures

### Example 2: Clear Prompt (Skill Not Invoked)

**Original prompt:** "Refactor the getUserById function in src/api/users.ts to use async/await instead of promises"

**Hook evaluation:** Passes all checks
- Specific target: getUserById in src/api/users.ts
- Clear action: refactor to async/await
- Success criteria: use async/await instead of promises

**Skill invoked:** No (prompt is clear, proceeds immediately without skill invocation)

For comprehensive examples showing various prompt types and transformations, see [references/examples.md](references/examples.md).

## Key Principles

1. **Assume Vagueness**: Skill is only invoked for vague prompts (evaluation done by hook)
2. **Research First**: Always gather context before formulating questions
3. **Ground Questions**: Use research findings, not assumptions or base knowledge
4. **Be Specific**: Provide concrete options from actual codebase/context
5. **Stay Focused**: Max 1-6 questions, each addressing one decision point
6. **Systematic Approach**: Follow 4-phase workflow (Research → Questions → Clarify → Execute)

## Progressive Disclosure

This SKILL.md contains the core workflow and essentials. For deeper guidance:

- **Research strategies**: [references/research-strategies.md](references/research-strategies.md)
- **Question patterns**: [references/question-patterns.md](references/question-patterns.md)
- **Comprehensive examples**: [references/examples.md](references/examples.md)

Load these references only when detailed guidance is needed on specific aspects of prompt improvement.

Related Skills

prompt-engineering-patterns

from Microck/ordinary-claude-skills

Master advanced prompt engineering techniques to maximize LLM performance, reliability, and controllability in production. Use when optimizing prompts, improving LLM outputs, or designing production prompt templates.

zapier-workflows

Manage and trigger pre-built Zapier workflows and MCP tool orchestration. Use when user mentions workflows, Zaps, automations, daily digest, research, search, lead tracking, expenses, or asks to "run" any process. Also handles Perplexity-based research and Google Sheets data tracking.

writing-skills

Create and manage Claude Code skills in HASH repository following Anthropic best practices. Use when creating new skills, modifying skill-rules.json, understanding trigger patterns, working with hooks, debugging skill activation, or implementing progressive disclosure. Covers skill structure, YAML frontmatter, trigger types (keywords, intent patterns), UserPromptSubmit hook, and the 500-line rule. Includes validation and debugging with SKILL_DEBUG. Examples include rust-error-stack, cargo-dependencies, and rust-documentation skills.

writing-plans

Use when design is complete and you need detailed implementation tasks for engineers with zero codebase context - creates comprehensive implementation plans with exact file paths, complete code examples, and verification steps assuming engineer has minimal domain knowledge

workflow-orchestration-patterns

Design durable workflows with Temporal for distributed systems. Covers workflow vs activity separation, saga patterns, state management, and determinism constraints. Use when building long-running processes, distributed transactions, or microservice orchestration.

workflow-management

Create, debug, or modify QStash workflows for data updates and social media posting in the API service. Use when adding new automated jobs, fixing workflow errors, or updating scheduling logic.

workflow-interactive-dev

Used for developing interactive responses in FastGPT workflows. Explains in detail the architecture of interactive nodes, the development process, and the files that need to be modified.

woocommerce-dev-cycle

Run tests, linting, and quality checks for WooCommerce development. Use when running tests, fixing code style, or following the development workflow.

woocommerce-code-review

Review WooCommerce code changes for coding standards compliance. Use when reviewing code locally, performing automated PR reviews, or checking code quality.

Wheels Migration Generator

Generate database-agnostic Wheels migrations for creating tables, altering schemas, and managing database changes. Use when creating or modifying database schema, adding tables, columns, indexes, or foreign keys. Prevents database-specific SQL and ensures cross-database compatibility.

webapp-testing

Toolkit for interacting with and testing local web applications using Playwright. Supports verifying frontend functionality, debugging UI behavior, capturing browser screenshots, and viewing browser logs.

web3-testing

Test smart contracts comprehensively using Hardhat and Foundry with unit tests, integration tests, and mainnet forking. Use when testing Solidity contracts, setting up blockchain test suites, or validating DeFi protocols.