prompt-improver
Optimize prompts for better AI responses. Use when user asks to improve a prompt, refine a prompt, make a prompt better, optimize prompting, review their prompt, or says "/improve-prompt". Transforms vague requests into clear, specific, actionable prompts.
Best use case
prompt-improver is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using prompt-improver can expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in .claude/skills/prompt-improver/SKILL.md inside your project
- Restart your AI agent — it will auto-discover the skill
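The manual steps above can be sketched in the shell. The raw-file URL below is a placeholder, not the skill's actual repository address; substitute the real link from the GitHub page before running it:

```shell
# Create the skill directory inside your project
mkdir -p .claude/skills/prompt-improver

# Download SKILL.md into place (placeholder URL -- substitute the real
# raw-file link from the project's GitHub repository)
# curl -fsSL "https://raw.githubusercontent.com/OWNER/REPO/main/SKILL.md" \
#      -o .claude/skills/prompt-improver/SKILL.md
```

After restarting, the agent discovers any SKILL.md placed under .claude/skills/.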
How prompt-improver Compares
| Feature / Agent | prompt-improver | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
It optimizes prompts for better AI responses by transforming vague requests into clear, specific, actionable prompts. Trigger it by asking to improve, refine, or review a prompt, or by typing "/improve-prompt".
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Prompt Improver
Transform vague prompts into clear, specific, actionable ones for better AI responses.
## Workflow
1. **Gather context** - Use AskUserQuestion to clarify:
- Target platform (Claude Code, ChatGPT, API, image gen)
- Priority (accuracy, speed, depth, creativity)
- Missing context (technical stack, constraints, examples)
2. **Analyze** - Identify what's unclear, missing, or ambiguous
3. **Improve** - Apply the framework (see references/framework.md)
4. **Present** - Show improved prompt with key changes explained
5. **Refine** - Ask if user wants adjustments
## AskUserQuestion Templates
**Initial clarification:**
```
questions:
- header: "Platform"
question: "What will you use this prompt for?"
options:
- label: "Claude Code"
description: "Coding, file ops, terminal"
- label: "ChatGPT/Claude.ai"
description: "General conversation"
- label: "API/Automation"
description: "Programmatic use"
- label: "Image gen"
description: "DALL-E, Midjourney, etc."
- header: "Priority"
question: "What matters most?"
options:
- label: "Accuracy"
description: "Correctness is critical"
- label: "Speed"
description: "Quick, concise"
- label: "Depth"
description: "Comprehensive"
- label: "Creativity"
description: "Novel approaches"
```
**Post-improvement:**
```
header: "Refine"
question: "Adjust the improved prompt?"
options:
- label: "Looks good"
description: "Use as-is"
- label: "More specific"
description: "Add constraints"
- label: "More concise"
description: "Shorten"
- label: "Different focus"
description: "Change emphasis"
```
## Output Format
```markdown
## Analysis
[Brief issues/opportunities]
## Improved Prompt
[Ready-to-use prompt]
## Key Changes
- [Change]: [Why]
```
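A hypothetical filled-in example of this format (the prompt, file path, and changes are illustrative, not taken from the skill itself):

```markdown
## Analysis
The original prompt ("make my code faster") names no language, codebase, or performance target.

## Improved Prompt
Profile the Python ETL script in scripts/load.py, identify the two slowest functions, and propose optimizations that preserve the current output format. Target: under 30 seconds on a 1M-row CSV.

## Key Changes
- Added language and file: tells the model where to look
- Added a measurable target: makes "faster" verifiable
```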
## Quick Mode
If user says "quick improve", skip questions and make reasonable assumptions. Note assumptions made.
## Aristotelian Mode (First Principles)
Activated when user says "Aristotelian", "first principles", or "proof-based". Instead of the standard framework, produce a prompt that **instructs the receiving LLM to reason from first principles** when executing the task.
The prompt-improver does NOT do the Aristotelian reasoning itself. It crafts a prompt that tells the LLM to:
1. **Gather context from user** - Ask what system capabilities, tools, and constraints exist. Bake known context (root access, AI model, available tools, domain) directly into the prompt as given axioms.
2. **Embed the reasoning directive** - The improved prompt tells the LLM to:
- Identify the atomic, irreducible truths of the task before acting
- Interrogate each truth: "Can this be decomposed further? If removed, does the task break? Does it contradict anything?"
- Discard anything that is not strictly necessary
- Build the solution deductively, where every action traces to a stated axiom
- Verify the result against the axioms at the end
3. **Structure the output prompt** with these sections:
```
REASONING DIRECTIVE: [Instruct the LLM to use first-principles reasoning]
GIVEN AXIOMS: [Known truths about system, capabilities, domain -- baked in]
TASK: [What to accomplish]
METHOD: [Tell LLM to discover task-specific axioms, interrogate them, then build deductively]
VERIFICATION: [Tell LLM to check its result against its axioms]
```
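A hypothetical filled-in instance of this structure (the system details and task are illustrative only):

```
REASONING DIRECTIVE: Reason from first principles; do not act until the task's axioms are stated.
GIVEN AXIOMS: You have root access on a Linux host; Python 3 and systemd are available.
TASK: Schedule a nightly backup of /var/www to /backups.
METHOD: Identify the irreducible requirements (what, when, where), interrogate each one, discard anything unnecessary, then build the solution deductively from what remains.
VERIFICATION: Confirm the final schedule and paths satisfy every stated axiom.
```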
**Output format for Aristotelian mode:**
```markdown
## Analysis
[What context was embedded and why]
## Improved Prompt (Aristotelian)
[The complete prompt with reasoning directive, given axioms, task, method, and verification]
## What This Prompt Does
- Tells the LLM to [specific reasoning behavior]
- Bakes in [specific context] so the LLM does not hallucinate it
```
See references/aristotelian.md for the full methodology and prompt structure.
## References
- **Framework details**: See references/framework.md for the 6-principle improvement framework
- **Aristotelian mode**: See references/aristotelian.md for the proof-based first principles methodology
- **Examples**: See references/examples.md for before/after transformations
- **Anti-patterns**: See references/anti-patterns.md for common issues to fix
Related Skills
optimizing-prompts
This skill optimizes prompts for large language models (LLMs) to reduce token usage, lower costs, and improve performance. It analyzes the prompt, identifies areas for simplification and redundancy removal, and rewrites the prompt to be more conci... Use when optimizing performance. Trigger with phrases like 'optimize', 'performance', or 'speed up'.
cursor-custom-prompts
Create effective custom prompts for Cursor AI using project rules, prompt engineering patterns, and reusable templates. Triggers on "cursor prompts", "prompt engineering cursor", "better cursor prompts", "cursor instructions", "cursor prompt templates".
promptify
Transform user requests into detailed, precise prompts for AI models. Use when users say "promptify", "promptify this", or explicitly request prompt engineering or improvement of their request for better AI responses.
gws-modelarmor-sanitize-prompt
Google Model Armor: Sanitize a user prompt through a Model Armor template.
tldr-prompt
Create tldr summaries for GitHub Copilot files (prompts, agents, instructions, collections), MCP servers, or documentation from URLs and queries.
prompt-builder
Guide users through creating high-quality GitHub Copilot prompts with proper structure, tools, and best practices.
promptfoo-evaluation
Configures and runs LLM evaluation using Promptfoo framework. Use when setting up prompt testing, creating evaluation configs (promptfooconfig.yaml), writing Python custom assertions, implementing llm-rubric for LLM-as-judge, or managing few-shot examples in prompts. Triggers on keywords like "promptfoo", "eval", "LLM evaluation", "prompt testing", or "model comparison".
prompt-injection-test
A test skill with prompt injection patterns
prompt-factory
World-class prompt powerhouse that generates production-ready mega-prompts for any role, industry, and task through intelligent 7-question flow, 69 comprehensive presets across 15 professional domains (technical, business, creative, legal, finance, HR, design, customer, executive, manufacturing, R&D, regulatory, specialized-technical, research, creative-media), multiple output formats (XML/Claude/ChatGPT/Gemini), quality validation gates, and contextual best practices from OpenAI/Anthropic/Google. Supports both core and advanced modes with testing scenarios and prompt variations.
prompt-optimize
Expert prompt engineering skill that transforms Claude into "Alpha-Prompt" - a master prompt engineer who collaboratively crafts high-quality prompts through flexible dialogue. Activates when user asks to "optimize prompt", "improve system instruction", "enhance AI instruction", or mentions prompt engineering tasks.
prompt-repetition
A prompt repetition technique for improving LLM accuracy. Achieves significant performance gains in 67% (47/70) of 70 benchmarks. Automatically applied on lightweight models (haiku, flash, mini).