write-query
Write optimized SQL for your dialect with best practices. Use when translating a natural-language data need into SQL, building a multi-CTE query with joins and aggregations, optimizing a query against a large partitioned table, or getting dialect-specific syntax for Snowflake, BigQuery, Postgres, etc.
Best use case
write-query is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using write-query should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/write-query/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How write-query Compares
| Feature / Agent | write-query | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
It writes optimized SQL for your dialect following best practices: translating a natural-language data need into SQL, building multi-CTE queries with joins and aggregations, optimizing queries against large partitioned tables, and providing dialect-specific syntax for Snowflake, BigQuery, Postgres, and more.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
Related Guides
AI Agents for Coding
Browse AI agent skills for coding, debugging, testing, refactoring, code review, and developer workflows across Claude, Cursor, and Codex.
AI Agents for Marketing
Discover AI agents for marketing workflows, from SEO and content production to campaign research, outreach, and analytics.
AI Agents for Startups
Explore AI agent skills for startup validation, product research, growth experiments, documentation, and fast execution with small teams.
SKILL.md Source
# /write-query - Write Optimized SQL

> If you see unfamiliar placeholders or need to check which tools are connected, see [CONNECTORS.md](../../CONNECTORS.md).

Write a SQL query from a natural language description, optimized for your specific SQL dialect and following best practices.

## Usage

```
/write-query <description of what data you need>
```

## Workflow

### 1. Understand the Request

Parse the user's description to identify:

- **Output columns**: What fields should the result include?
- **Filters**: What conditions limit the data (time ranges, segments, statuses)?
- **Aggregations**: Are there GROUP BY operations, counts, sums, averages?
- **Joins**: Does this require combining multiple tables?
- **Ordering**: How should results be sorted?
- **Limits**: Is there a top-N or sample requirement?

### 2. Determine SQL Dialect

If the user's SQL dialect is not already known, ask which they use:

- **PostgreSQL** (including Aurora, RDS, Supabase, Neon)
- **Snowflake**
- **BigQuery** (Google Cloud)
- **Redshift** (Amazon)
- **Databricks SQL**
- **MySQL** (including Aurora MySQL, PlanetScale)
- **SQL Server** (Microsoft)
- **DuckDB**
- **SQLite**
- **Other** (ask for specifics)

Remember the dialect for future queries in the same session.

### 3. Discover Schema (If Warehouse Connected)

If a data warehouse MCP server is connected:

1. Search for relevant tables based on the user's description
2. Inspect column names, types, and relationships
3. Check for partitioning or clustering keys that affect performance
4. Look for pre-built views or materialized views that might simplify the query

### 4. Write the Query

Follow these best practices:

**Structure:**
- Use CTEs (WITH clauses) for readability when queries have multiple logical steps
- One CTE per logical transformation or data source
- Name CTEs descriptively (e.g., `daily_signups`, `active_users`, `revenue_by_product`)

**Performance:**
- Never use `SELECT *` in production queries -- specify only needed columns
- Filter early (push WHERE clauses as close to the base tables as possible)
- Use partition filters when available (especially date partitions)
- Prefer `EXISTS` over `IN` for subqueries with large result sets
- Use appropriate JOIN types (don't use LEFT JOIN when INNER JOIN is correct)
- Avoid correlated subqueries when a JOIN or window function works
- Be mindful of exploding joins (many-to-many)

**Readability:**
- Add comments explaining the "why" for non-obvious logic
- Use consistent indentation and formatting
- Alias tables with meaningful short names (not just `a`, `b`, `c`)
- Put each major clause on its own line

**Dialect-specific optimizations:**
- Apply dialect-specific syntax and functions (see `sql-queries` skill for details)
- Use dialect-appropriate date functions, string functions, and window syntax
- Note any dialect-specific performance features (e.g., Snowflake clustering, BigQuery partitioning)

### 5. Present the Query

Provide:

1. **The complete query** in a SQL code block with syntax highlighting
2. **Brief explanation** of what each CTE or section does
3. **Performance notes** if relevant (expected cost, partition usage, potential bottlenecks)
4. **Modification suggestions** -- how to adjust for common variations (different time range, different granularity, additional filters)

### 6. Offer to Execute

If a data warehouse is connected, offer to run the query and analyze the results. If the user wants to run it themselves, the query is ready to copy-paste.
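The structure and performance practices above (a descriptively named CTE, explicit columns instead of `SELECT *`, the filter pushed down to the base table) can be sketched end to end. This is a minimal illustration using SQLite via Python's `sqlite3` module; the `orders` table, its columns, and the data are hypothetical, not part of the skill.

```python
import sqlite3

# In-memory SQLite database with a hypothetical orders table
# (table and column names are illustrative only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        status TEXT,
        amount REAL,
        created_at TEXT
    );
    INSERT INTO orders (status, amount, created_at) VALUES
        ('shipped',   20.0, '2024-01-05'),
        ('shipped',   35.0, '2024-01-06'),
        ('cancelled', 10.0, '2024-01-06'),
        ('shipped',   15.0, '2023-12-01');
""")

# One CTE per logical step, explicit columns, filter pushed down
# to the base table, descriptive CTE name.
query = """
WITH recent_orders AS (
    SELECT status, amount          -- no SELECT *
    FROM orders
    WHERE created_at >= '2024-01-01'   -- filter early
)
SELECT status,
       COUNT(*)    AS order_count,
       SUM(amount) AS total_amount
FROM recent_orders
GROUP BY status
ORDER BY order_count DESC;
"""
rows = conn.execute(query).fetchall()
print(rows)  # [('shipped', 2, 55.0), ('cancelled', 1, 10.0)]
```

The same query shape ports to the other dialects listed in step 2; only the date functions and quoting conventions change.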
## Examples

**Simple aggregation:**
```
/write-query Count of orders by status for the last 30 days
```

**Complex analysis:**
```
/write-query Cohort retention analysis -- group users by their signup month, then show what percentage are still active (had at least one event) at 1, 3, 6, and 12 months after signup
```

**Performance-critical:**
```
/write-query We have a 500M row events table partitioned by date. Find the top 100 users by event count in the last 7 days with their most recent event type.
```

## Tips

- Mention your SQL dialect upfront to get the right syntax immediately
- If you know the table names, include them -- otherwise Claude will help you find them
- Specify if you need the query to be idempotent (safe to re-run) or one-time
- For recurring queries, mention if it should be parameterized for date ranges
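One common shape for the performance-critical example above is a date-filtered CTE plus a window function to rank each user's events by recency, avoiding a correlated subquery. A minimal SQLite sketch via Python's `sqlite3` module (the `events` table and data are invented for illustration; on a real warehouse the date predicate would hit the partition key):

```python
import sqlite3

# Hypothetical events table mirroring the performance-critical example.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (
        user_id    INTEGER,
        event_type TEXT,
        event_date TEXT
    );
    INSERT INTO events VALUES
        (1, 'click',    '2024-01-08'),
        (1, 'purchase', '2024-01-09'),
        (2, 'click',    '2024-01-09'),
        (3, 'click',    '2023-12-01');  -- outside the 7-day window
""")

# Date filter first (stands in for a partition filter), then window
# functions compute per-user counts and pick the most recent event.
query = """
WITH recent_events AS (
    SELECT user_id, event_type, event_date
    FROM events
    WHERE event_date >= '2024-01-03'
),
ranked AS (
    SELECT user_id,
           event_type,
           COUNT(*) OVER (PARTITION BY user_id) AS event_count,
           ROW_NUMBER() OVER (
               PARTITION BY user_id
               ORDER BY event_date DESC
           ) AS rn
    FROM recent_events
)
SELECT user_id, event_count, event_type AS most_recent_event_type
FROM ranked
WHERE rn = 1
ORDER BY event_count DESC
LIMIT 100;
"""
rows = conn.execute(query).fetchall()
print(rows)  # [(1, 2, 'purchase'), (2, 1, 'click')]
```

On a 500M-row table the key property is that the date filter prunes partitions before the window functions run over the surviving rows.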
Related Skills
write-spec
Write a feature spec or PRD from a problem statement or feature idea. Use when turning a vague idea or user request into a structured document, scoping a feature with goals and non-goals, defining success metrics and acceptance criteria, or breaking a big ask into a phased spec.
pipeline-review
Analyze pipeline health — prioritize deals, flag risks, get a weekly action plan. Use when running a weekly pipeline review, deciding which deals to focus on this week, spotting stale or stuck opportunities, auditing for hygiene issues like bad close dates, or identifying single-threaded deals.
forecast
Generate a weighted sales forecast with best/likely/worst scenarios, commit vs. upside breakdown, and gap analysis. Use when preparing a quarterly forecast call, assessing gap-to-quota from a pipeline CSV, deciding which deals to commit vs. call upside, or checking pipeline coverage against your number.
draft-outreach
Research a prospect, then draft personalized outreach. Uses web research by default, supercharged with enrichment and CRM. Trigger with "draft outreach to [person/company]", "write cold email to [prospect]", "reach out to [name]".
daily-briefing
Start your day with a prioritized sales briefing. Works standalone when you tell me your meetings and priorities, supercharged when you connect your calendar, CRM, and email. Trigger with "morning briefing", "daily brief", "what's on my plate today", "prep my day", or "start my day".
create-an-asset
Generate tailored sales assets (landing pages, decks, one-pagers, workflow demos) from your deal context. Describe your prospect, audience, and goal — get a polished, branded asset ready to share with customers.
competitive-intelligence
Research your competitors and build an interactive battlecard. Outputs an HTML artifact with clickable competitor cards and a comparison matrix. Trigger with "competitive intel", "research competitors", "how do we compare to [competitor]", "battlecard for [competitor]", or "what's new with [competitor]".
call-summary
Process call notes or a transcript — extract action items, draft follow-up email, generate internal summary. Use when pasting rough notes or a transcript after a discovery, demo, or negotiation call, drafting a customer follow-up, logging the activity for your CRM, or capturing objections and next steps for your team.
update
Sync tasks and refresh memory from your current activity. Use when pulling new assignments from your project tracker into TASKS.md, triaging stale or overdue tasks, filling memory gaps for unknown people or projects, or running a comprehensive scan to catch todos buried in chat and email.
task-management
Simple task management using a shared TASKS.md file. Reference this when the user asks about their tasks, wants to add/complete tasks, or needs help tracking commitments.
memory-management
Two-tier memory system that makes Claude a true workplace collaborator. Decodes shorthand, acronyms, nicknames, and internal language so Claude understands requests like a colleague would. CLAUDE.md for working memory, memory/ directory for the full knowledge base.
synthesize-research
Synthesize user research from interviews, surveys, and feedback into structured insights. Use when you have a pile of interview notes, survey responses, or support tickets to make sense of, need to extract themes and rank findings by frequency and impact, or want to turn raw feedback into roadmap recommendations.