system-prompt
Edit or improve the AI system prompt used in DBX Studio's AI chat. Invoke when the user wants to change how the AI responds, its tone, tool usage order, or response format.
Best use case
system-prompt is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using system-prompt should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/system-prompt/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
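The manual steps can be scripted as follows (a sketch; the skill's raw GitHub URL is not given on this page, so the download line is left as a commented placeholder rather than guessed):

```shell
# Create the skill directory inside your project.
mkdir -p .claude/skills/system-prompt

# Download SKILL.md into place (replace <raw-github-url> with the
# actual raw URL from the skill's GitHub repository page).
# curl -fsSL "<raw-github-url>" -o .claude/skills/system-prompt/SKILL.md
```

After restarting, the agent scans `.claude/skills/` and picks up the new skill automatically.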
How system-prompt Compares
| Feature / Agent | system-prompt | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Edit or improve the AI system prompt used in DBX Studio's AI chat. Invoke when the user wants to change how the AI responds, its tone, tool usage order, or response format.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# System Prompt Editor — DBX Studio
## Prompt Locations
There are **two** system prompts in this project:
### 1. Streaming Prompt (main, used in production)
**File**: [apps/api/src/routes/ai-stream.ts](../../../apps/api/src/routes/ai-stream.ts)
**Lines**: ~132–172 (with schema) and ~176–202 (without schema)
**Variable**: `contextPrompt` (built inline, not a constant)
### 2. oRPC Provider Prompt (used in `callAnthropicWithTools`, `callOpenAIWithTools`)
**File**: [apps/api/src/orpc/routers/ai/providersWithTools.ts](../../../apps/api/src/orpc/routers/ai/providersWithTools.ts)
**Variable**: `SYSTEM_PROMPT_WITH_TOOLS` (top of file)
## Current Prompt Structure (Streaming)
```
You are a SQL assistant...
## Tools Available ← list 5 tools
## Response Style ← 5 rules: be direct, show results, use tools, minimal explanation, SQL format
## Examples ← 2-3 concrete input/output examples
## Context ← dynamic schema from generateSQLPrompt()
Schema: "<schema>"
## User Query ← the actual user message
```
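Since `contextPrompt` is built inline rather than stored as a constant, its assembly can be sketched roughly like this (a hypothetical reconstruction: only `contextPrompt`, `enhancedPrompt`, the `Schema:` line with its `'public'` fallback, and the section headings are taken from this document; everything else is illustrative):

```typescript
// Hypothetical sketch of the streaming contextPrompt assembly in
// ai-stream.ts. Section bodies are elided; headings mirror the
// structure shown above.
function buildContextPrompt(
  enhancedPrompt: string,
  schema: string | undefined,
  query: string,
): string {
  return [
    "You are a SQL assistant...",
    "## Tools Available",
    "## Response Style",
    "## Examples",
    "## Context",
    enhancedPrompt, // live schema context from generateSQLPrompt()
    `Schema: "${schema || "public"}"`, // scopes queries to one schema
    "## User Query",
    query,
  ].join("\n");
}
```

Passing `undefined` for the schema exercises the `'public'` fallback, which is why the checklist below insists the `Schema:` line stay in place.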
## Prompt Design Rules for DBX Studio
1. **Results first** — answer the question before showing SQL
2. **Use tools always** — never guess schema or data
3. **Be concise** — this is a data tool, not a chatbot
4. **Show SQL only when asked** — use ```sql blocks with uppercase keywords
5. **Format numbers clearly** — "**1,247 orders**" not "1247"
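Rule 5 can lean on the standard `toLocaleString` API when post-processing results for chat output (a sketch; `formatCount` is an invented helper name, not part of DBX Studio):

```typescript
// Format a count per rule 5: bold, thousands-separated.
// `formatCount` is a hypothetical helper for illustration only.
function formatCount(n: number, noun: string): string {
  return `**${n.toLocaleString("en-US")} ${noun}**`;
}
```

For example, `formatCount(1247, "orders")` yields `**1,247 orders**` rather than the bare `1247`.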
## When Editing the Prompt
- Keep the `## Tools Available` section in sync with actual tools in `tools.ts`
- Keep `## Examples` realistic to real user queries
- The `${enhancedPrompt}` injection must stay — it contains live schema context
- Do not remove `Schema: "${schema || 'public'}"` line — it scopes queries
- Both prompts (streaming + oRPC) should stay consistent in style
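The first checklist item — keeping `## Tools Available` in sync with `tools.ts` — can be enforced by a small check that diffs registered tool names against the prompt text (a sketch; the `TOOL_NAMES` list is a stand-in, since this document does not show how `tools.ts` exports its tools):

```typescript
// Hypothetical sync check: every tool registered in tools.ts should
// appear in the prompt's "## Tools Available" section.
const TOOL_NAMES = ["read_schema", "execute_query", "get_table_stats"]; // stand-in for tools.ts exports

// Returns the tool names that the prompt fails to mention.
function missingTools(prompt: string, toolNames: string[]): string[] {
  return toolNames.filter((name) => !prompt.includes(name));
}
```

Run against both the streaming and oRPC prompts, an empty result means the two stay consistent with the actual tool registry.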
## Current Prompt Structure (as of last update)
Both prompts now follow this unified structure:
```
You are DBX Studio's AI assistant — expert SQL analyst and data explorer.
## Tools Available (ordered by when to use)
1. read_schema / get_table_schema — FIRST, when schema is unknown
2. execute_query / execute_sql_query — run SELECT/WITH queries
3. get_table_data / select_data — preview or filter rows
4. get_table_stats — distributions and row counts
5. generate_chart / generate_bar_graph — visualization
6. describe_table / get_enums — column details, enum values
## Response Rules
1. Results first — answer before explaining
2. Always use tools — never guess schema or data
3. Tool order matters (schema → query → chart)
4. Show SQL only when asked — use ```sql with UPPERCASE
5. Format numbers clearly — **bold** key values
6. No filler words
## Chart Selection Guide
[line / bar / pie / scatter / histogram guidance]
## Query Safety
[SELECT/WITH only, always LIMIT, quote identifiers]
## Context / Schema (streaming only)
{enhancedPrompt}
Schema: "{schema}"
## User Query
{query}
```
Related Skills
cursor-custom-prompts
Create effective custom prompts for Cursor AI using project rules, prompt engineering patterns, and reusable templates. Triggers on "cursor prompts", "prompt engineering cursor", "better cursor prompts", "cursor instructions", "cursor prompt templates".
building-recommendation-systems
This skill empowers the AI assistant to construct recommendation systems using collaborative filtering, content-based filtering, or hybrid approaches. It analyzes user preferences, item features, and interaction data to generate personalized recommendations... Use when the context calls for building a recommendation system; trigger with phrases relevant to that purpose.
box-cloud-filesystem
Cloud filesystem operations via Box CLI. Use when the user mentions Box, cloud files, cloud storage, uploading to the cloud, sharing files, document management, or syncing project files offsite. Trigger with "upload to box", "save to cloud", "pull from box", "search my box files", "share this file", "box sync", "cloud backup", or "box filesystem".
analyzing-system-throughput
Analyze and optimize system throughput including request handling, data processing, and resource utilization. Use when identifying capacity limits or evaluating scaling strategies. Trigger with phrases like "analyze throughput", "optimize capacity", or "identify bottlenecks".
optimizing-prompts
This skill optimizes prompts for Large Language Models (LLMs) to reduce token usage, lower costs, and improve performance. It analyzes the prompt, identifies areas for simplification and redundancy removal, and rewrites the prompt to be more concise and effective. It is used when the user wants to reduce LLM costs, improve response speed, or enhance the quality of LLM outputs by optimizing the prompt. Trigger terms include "optimize prompt", "reduce LLM cost", "improve prompt performance", "rewrite prompt", "prompt optimization".
promptify
Transform user requests into detailed, precise prompts for AI models. Use when users say "promptify", "promptify this", or explicitly request prompt engineering or improvement of their request for better AI responses.
prompt-improver
Optimize prompts for better AI responses. Use when user asks to improve a prompt, refine a prompt, make a prompt better, optimize prompting, review their prompt, or says "/improve-prompt". Transforms vague requests into clear, specific, actionable prompts.
create-design-system-rules
Generates custom design system rules for the user's codebase. Use when user says "create design system rules", "generate rules for my project", "set up design rules", "customize design system guidelines", or wants to establish project-specific conventions for Figma-to-code workflows. Requires Figma MCP server connection.
filesystem-context
This skill should be used when the user asks to "offload context to files", "implement dynamic context discovery", "use filesystem for agent memory", "reduce context window bloat", or mentions file-based context management, tool output persistence, agent scratch pads, or just-in-time context loading.
gws-modelarmor-sanitize-prompt
Google Model Armor: Sanitize a user prompt through a Model Armor template.
tldr-prompt
Create tldr summaries for GitHub Copilot files (prompts, agents, instructions, collections), MCP servers, or documentation from URLs and queries.
prompt-builder
Guide users through creating high-quality GitHub Copilot prompts with proper structure, tools, and best practices.