deploy-model
Unified Azure OpenAI model deployment skill with intelligent intent-based routing. Handles quick preset deployments, fully customized deployments (version/SKU/capacity/RAI policy), and capacity discovery across regions and projects. USE FOR: deploy model, deploy gpt, create deployment, model deployment, deploy openai model, set up model, provision model, find capacity, check model availability, where can I deploy, best region for model, capacity analysis. DO NOT USE FOR: listing existing deployments (use foundry_models_deployments_list MCP tool), deleting deployments, agent creation (use agent/create), project creation (use project/create).
Best use case
deploy-model is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using deploy-model should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/deploy-model/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
deploy-model is a single entry point for Azure OpenAI model deployment. It analyzes your intent and routes to a quick preset deployment, a fully customized deployment (version, SKU, capacity, RAI policy), or capacity discovery across regions and projects. It does not list or delete existing deployments, create agents, or create projects; use the dedicated tools for those tasks.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Deploy Model
Unified entry point for all Azure OpenAI model deployment workflows. Analyzes user intent and routes to the appropriate deployment mode.
## Quick Reference
| Mode | When to Use | Sub-Skill |
|------|-------------|-----------|
| **Preset** | Quick deployment, no customization needed | [preset/SKILL.md](preset/SKILL.md) |
| **Customize** | Full control: version, SKU, capacity, RAI policy | [customize/SKILL.md](customize/SKILL.md) |
| **Capacity Discovery** | Find where you can deploy with specific capacity | [capacity/SKILL.md](capacity/SKILL.md) |
## Intent Detection
Analyze the user's prompt and route to the correct mode:
```
User Prompt
│
├─ Simple deployment (no modifiers)
│ "deploy gpt-4o", "set up a model"
│ └─> PRESET mode
│
├─ Customization keywords present
│ "custom settings", "choose version", "select SKU",
│ "set capacity to X", "configure content filter",
│ "PTU deployment", "with specific quota"
│ └─> CUSTOMIZE mode
│
├─ Capacity/availability query
│ "find where I can deploy", "check capacity",
│ "which region has X capacity", "best region for 10K TPM",
│ "where is this model available"
│ └─> CAPACITY DISCOVERY mode
│
└─ Ambiguous (has capacity target + deploy intent)
"deploy gpt-4o with 10K capacity to best region"
└─> CAPACITY DISCOVERY first → then PRESET or CUSTOMIZE
```
### Routing Rules
| Signal in Prompt | Route To | Reason |
|------------------|----------|--------|
| Just model name, no options | **Preset** | User wants quick deployment |
| "custom", "configure", "choose", "select" | **Customize** | User wants control |
| "find", "check", "where", "which region", "available" | **Capacity** | User wants discovery |
| Specific capacity number + "best region" | **Capacity → Preset** | Discover then deploy quickly |
| Specific capacity number + "custom" keywords | **Capacity → Customize** | Discover then deploy with options |
| "PTU", "provisioned throughput" | **Customize** | PTU requires SKU selection |
| "optimal region", "best region" (no capacity target) | **Preset** | Region optimization is preset's specialty |
### Multi-Mode Chaining
Some prompts require two modes in sequence:
**Pattern: Capacity → Deploy**
When a user specifies a capacity requirement AND wants deployment:
1. Run **Capacity Discovery** to find regions/projects with sufficient quota
2. Present findings to user
3. Ask: "Would you like to deploy with **quick defaults** or **customize settings**?"
4. Route to **Preset** or **Customize** based on answer
> 💡 **Tip:** If unsure which mode the user wants, default to **Preset** (quick deployment). Users who want customization will typically use explicit keywords like "custom", "configure", or "with specific settings".
## Project Selection (All Modes)
Before any deployment, resolve which project to deploy to. This applies to **all** modes (preset, customize, and after capacity discovery).
### Resolution Order
1. **Check `PROJECT_RESOURCE_ID` env var** — if set, use it as the default
2. **Check user prompt** — if user named a specific project or region, use that
3. **If neither** — query the user's projects and suggest the current one
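A minimal shell sketch of this resolution order, assuming AI Foundry projects surface as Cognitive Services accounts (step 2, reading the user's prompt, is the agent's job and is omitted):

```bash
# Step 1: an explicit PROJECT_RESOURCE_ID wins as the default target
if [ -n "$PROJECT_RESOURCE_ID" ]; then
  echo "Default target: $PROJECT_RESOURCE_ID"
else
  # Step 3: nothing set and nothing named in the prompt,
  # so list candidate accounts for the user to pick from
  az cognitiveservices account list \
    --query "[].{name:name, rg:resourceGroup, region:location}" -o table
fi
```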
### Confirmation Step (Required)
**Always confirm the target before deploying.** Show the user what will be used and give them a chance to change it:
```
Deploying to:
Project: <project-name>
Region: <region>
Resource: <resource-group>
Is this correct? Or choose a different project:
1. ✅ Yes, deploy here (default)
2. 📋 Show me other projects in this region
3. 🌍 Choose a different region
```
If user picks option 2, show top 5 projects in that region:
```
Projects in <region>:
1. project-alpha (rg-alpha)
2. project-beta (rg-beta)
3. project-gamma (rg-gamma)
...
```
> ⚠️ **Never deploy without showing the user which project will be used.** This prevents accidental deployments to the wrong resource.
## Pre-Deployment Validation (All Modes)
Before presenting any deployment options (SKU, capacity), always validate both of these:
1. **Model supports the SKU** — query the model catalog to confirm the selected model+version supports the target SKU:
```bash
az cognitiveservices model list --location <region> --subscription <sub-id> -o json
```
Filter for the model, extract `.model.skus[].name` to get supported SKUs.
2. **Subscription has available quota** — check that the user's subscription has unallocated quota for the SKU+model combination:
```bash
az cognitiveservices usage list --location <region> --subscription <sub-id> -o json
```
Match by usage name pattern `OpenAI.<SKU>.<model-name>` (e.g., `OpenAI.GlobalStandard.gpt-4o`). Compute `available = limit - currentValue`.
> ⚠️ **Warning:** Only present options that pass both checks. Do NOT show hardcoded SKU lists — always query dynamically. SKUs with 0 available quota should be shown as ❌ informational items, not selectable options.
> 💡 **Quota management:** For quota increase requests, usage monitoring, and troubleshooting quota errors, defer to the [quota skill](../../quota/quota.md) instead of duplicating that guidance inline.
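Putting both checks together, a sketch of the validation queries might look like the following. The placeholder values and any JSON paths not cited above (`.model.name`, `.name.value`) are assumptions to verify against your CLI version:

```bash
LOCATION="eastus"; SUB="<sub-id>"; MODEL="gpt-4o"; SKU="GlobalStandard"

# Check 1: supported SKUs for the model (assumes catalog entries expose .model.name)
az cognitiveservices model list --location "$LOCATION" --subscription "$SUB" -o json |
  jq -r --arg m "$MODEL" '.[] | select(.model.name == $m) | .model.skus[].name'

# Check 2: remaining quota for OpenAI.<SKU>.<model> (assumes .name.value holds the usage name)
az cognitiveservices usage list --location "$LOCATION" --subscription "$SUB" -o json |
  jq --arg n "OpenAI.$SKU.$MODEL" \
     '.[] | select(.name.value == $n) | {limit, currentValue, available: (.limit - .currentValue)}'
```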
## Prerequisites
All deployment modes require:
- Azure CLI installed and authenticated (`az login`)
- Active Azure subscription with deployment permissions
- Azure AI Foundry project resource ID (or agent will help discover it via `PROJECT_RESOURCE_ID` env var)
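A quick preflight sketch covering these prerequisites (messages and exit behavior are illustrative):

```bash
# Fail fast if the Azure CLI session is not authenticated
az account show -o none 2>/dev/null || { echo "Not authenticated; run 'az login' first"; exit 1; }

# Warn (but do not fail) if no project resource ID is preset
[ -n "$PROJECT_RESOURCE_ID" ] || echo "PROJECT_RESOURCE_ID is unset; the agent will help discover a project"
```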
## Sub-Skills
- **[preset/SKILL.md](preset/SKILL.md)** — Quick deployment to optimal region with sensible defaults
- **[customize/SKILL.md](customize/SKILL.md)** — Interactive guided flow with full configuration control
- **[capacity/SKILL.md](capacity/SKILL.md)** — Discover available capacity across regions and projects
Related Skills
vertex-ai-deployer
Vertex AI Deployer - Auto-activating skill for ML Deployment. Triggers on: "vertex ai deployer". Part of the ML Deployment skill category.
adapting-transfer-learning-models
This skill automates the adaptation of pre-trained machine learning models using transfer learning techniques. It is triggered when the user requests assistance with fine-tuning a model, adapting a pre-trained model to a new dataset, or performing transfer learning. It analyzes the user's requirements, generates code for adapting the model, includes data validation and error handling, provides performance metrics, and saves artifacts with documentation. Use this skill when you need to leverage existing models for new tasks or datasets, optimizing for performance and efficiency.
training-machine-learning-models
Train machine learning models with automated workflows. Analyzes datasets, selects model types (classification, regression), configures parameters, trains with cross-validation, and saves model artifacts. Use when asked to "train model" or "evalua...".
tracking-model-versions
Enables the AI assistant to track and manage AI/ML model versions using the model-versioning-tracker plugin. Use when the user asks to manage model versions, track model lineage, log model performance, or implement version control f...
threat-model-creator
Threat Model Creator - Auto-activating skill for Security Advanced. Triggers on: "threat model creator". Part of the Security Advanced skill category.
tensorflow-savedmodel-creator
TensorFlow SavedModel Creator - Auto-activating skill for ML Deployment. Triggers on: "tensorflow savedmodel creator". Part of the ML Deployment skill category.
tensorflow-model-trainer
TensorFlow Model Trainer - Auto-activating skill for ML Training. Triggers on: "tensorflow model trainer". Part of the ML Training skill category.
sequelize-model-creator
Sequelize Model Creator - Auto-activating skill for Backend Development. Triggers on: "sequelize model creator". Part of the Backend Development skill category.
sagemaker-endpoint-deployer
SageMaker Endpoint Deployer - Auto-activating skill for ML Deployment. Triggers on: "sagemaker endpoint deployer". Part of the ML Deployment skill category.
pytorch-model-trainer
PyTorch Model Trainer - Auto-activating skill for ML Training. Triggers on: "pytorch model trainer". Part of the ML Training skill category.
orchestrating-deployment-pipelines
Use when you need to work with deployment and CI/CD. Provides deployment automation and orchestration with comprehensive guidance. Trigger with phrases like "deploy application", "create pipeline", or "automate deployment".
modeling-nosql-data
This skill enables Claude to design NoSQL data models. It activates when the user requests assistance with NoSQL database design, including schema creation, data modeling for MongoDB or DynamoDB, or defining document structures. Use this skill when the user mentions "NoSQL data model", "design MongoDB schema", "create DynamoDB table", or similar phrases related to NoSQL database architecture. It assists in understanding NoSQL modeling principles like embedding vs. referencing, access pattern optimization, and sharding key selection.