azure-cosmos-db
Build globally distributed apps with Azure Cosmos DB. Work with multiple data models (document, key-value, graph), configure global replication with tunable consistency levels, manage throughput with RU/s, and query with SQL API.
Best use case
azure-cosmos-db is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using azure-cosmos-db should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/azure-cosmos-db/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How azure-cosmos-db Compares
| Feature / Agent | azure-cosmos-db | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Build globally distributed apps with Azure Cosmos DB. Work with multiple data models (document, key-value, graph), configure global replication with tunable consistency levels, manage throughput with RU/s, and query with SQL API.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Azure Cosmos DB
Azure Cosmos DB is a globally distributed, multi-model database with guaranteed single-digit millisecond latency at the 99th percentile. It supports document (NoSQL), key-value, graph, and column-family data models with five tunable consistency levels.
## Core Concepts
- **Account** — top-level resource, defines global regions and consistency
- **Database** — a namespace for containers
- **Container** — equivalent to a collection/table, holds items
- **Partition Key** — determines data distribution; critical for performance
- **Request Unit (RU)** — normalized cost of database operations
- **Consistency Level** — Strong, Bounded Staleness, Session, Consistent Prefix, Eventual
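The hierarchy above (account → database → container) maps directly onto the Python SDK. A minimal sketch, assuming the `azure-cosmos` package and the account/database names used later in this document; `container_definition` is a hypothetical helper mirroring the REST resource shape:

```python
def container_definition(name, pk_path):
    """Container resource dict as the REST API shapes it (hypothetical helper)."""
    return {"id": name, "partitionKey": {"paths": [pk_path], "kind": "Hash"}}

def main():
    # Requires: pip install azure-cosmos
    from azure.cosmos import CosmosClient, PartitionKey

    client = CosmosClient(
        "https://my-app-cosmos.documents.azure.com:443/", credential="your-key-here"
    )
    db = client.create_database_if_not_exists("app-db")
    db.create_container_if_not_exists(
        id="orders",
        partition_key=PartitionKey(path="/customerId"),
        offer_throughput=400,
    )
```

Run `main()` with real credentials; the `*_if_not_exists` calls make the setup idempotent.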
## Account and Database Setup
```bash
# Create a Cosmos DB account with global replication
az cosmosdb create \
  --name my-app-cosmos \
  --resource-group my-app-rg \
  --kind GlobalDocumentDB \
  --default-consistency-level Session \
  --locations regionName=eastus failoverPriority=0 \
  --locations regionName=westeurope failoverPriority=1 \
  --enable-automatic-failover true
```
```bash
# Create a database with shared throughput
az cosmosdb sql database create \
  --account-name my-app-cosmos \
  --resource-group my-app-rg \
  --name app-db \
  --throughput 400
```
```bash
# Create a container with partition key and autoscale
az cosmosdb sql container create \
  --account-name my-app-cosmos \
  --resource-group my-app-rg \
  --database-name app-db \
  --name orders \
  --partition-key-path /customerId \
  --max-throughput 4000 \
  --idx '{"indexingMode":"consistent","automatic":true,"includedPaths":[{"path":"/*"}],"excludedPaths":[{"path":"/payload/*"}]}'
```
## CRUD Operations
```python
# Initialize client and perform CRUD
from azure.cosmos import CosmosClient, PartitionKey
client = CosmosClient(
    url="https://my-app-cosmos.documents.azure.com:443/",
    credential="your-key-here"
)
database = client.get_database_client("app-db")
container = database.get_container_client("orders")
# Create an item
order = {
    "id": "order-001",
    "customerId": "customer-123",
    "items": [
        {"name": "Widget", "qty": 2, "price": 29.99},
        {"name": "Gadget", "qty": 1, "price": 49.99}
    ],
    "total": 109.97,
    "status": "pending",
    "createdAt": "2024-01-15T10:30:00Z"
}
container.create_item(body=order)
```
```python
# Read an item (requires partition key)
item = container.read_item(item="order-001", partition_key="customer-123")
print(f"Order: {item['status']}, Total: ${item['total']}")
```
```python
# Replace (full update)
item['status'] = 'shipped'
item['shippedAt'] = '2024-01-16T14:00:00Z'
container.replace_item(item=item['id'], body=item)
```
```python
# Partial update with patch operations
container.patch_item(
    item="order-001",
    partition_key="customer-123",
    patch_operations=[
        {"op": "set", "path": "/status", "value": "delivered"},
        {"op": "add", "path": "/deliveredAt", "value": "2024-01-17T09:00:00Z"},
        {"op": "incr", "path": "/updateCount", "value": 1}
    ]
)
```
```python
# Delete an item
container.delete_item(item="order-001", partition_key="customer-123")
```
## Querying
```python
# SQL queries on Cosmos DB
# Query orders for a customer
orders = container.query_items(
    query="SELECT * FROM c WHERE c.customerId = @customerId AND c.status = @status",
    parameters=[
        {"name": "@customerId", "value": "customer-123"},
        {"name": "@status", "value": "pending"}
    ],
    partition_key="customer-123"
)
for order in orders:
    print(f"{order['id']}: ${order['total']}")
```
```python
# Cross-partition query (more expensive, use sparingly)
all_pending = container.query_items(
    query="SELECT c.id, c.customerId, c.total FROM c WHERE c.status = 'pending' ORDER BY c.total DESC",
    enable_cross_partition_query=True,
    max_item_count=50
)
```
```python
# Aggregation query
result = container.query_items(
    query="SELECT VALUE COUNT(1) FROM c WHERE c.status = 'shipped'",
    enable_cross_partition_query=True
)
count = list(result)[0]
```
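Cross-partition and aggregation queries are where RU charges add up, and the service reports each operation's cost in the `x-ms-request-charge` response header. A sketch of reading it with the sync `azure-cosmos` client; `request_charge` is a hypothetical helper:

```python
def request_charge(headers):
    """Parse the RU cost from Cosmos DB response headers (hypothetical helper)."""
    return float(headers.get("x-ms-request-charge", "0"))

def main():
    from azure.cosmos import CosmosClient

    container = (
        CosmosClient("https://my-app-cosmos.documents.azure.com:443/", credential="your-key-here")
        .get_database_client("app-db")
        .get_container_client("orders")
    )
    # Drain the iterator so the final page's headers are populated
    list(container.query_items(
        query="SELECT VALUE COUNT(1) FROM c WHERE c.status = 'shipped'",
        enable_cross_partition_query=True,
    ))
    charge = request_charge(container.client_connection.last_response_headers)
    print(f"Query cost: {charge} RU")
```

Comparing charges like this is the quickest way to see whether a query is hitting the index or scanning.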
## Consistency Levels
```bash
# Update default consistency level
az cosmosdb update \
  --name my-app-cosmos \
  --resource-group my-app-rg \
  --default-consistency-level BoundedStaleness \
  --max-staleness-prefix 100 \
  --max-interval 5
```
| Level | Guarantee | RU Cost | Use Case |
|-------|-----------|---------|----------|
| Strong | Linearizable reads | Highest | Financial transactions |
| Bounded Staleness | Reads lag by ≤K versions or T time | High | Leaderboards, counters |
| Session | Read-your-writes per session | Medium | **Default — most apps** |
| Consistent Prefix | Reads never see out-of-order writes | Low | Social feeds |
| Eventual | No ordering guarantee | Lowest | Non-critical analytics |
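A client may request a level at most as strong as the account default: it can relax consistency per client, never strengthen it. A sketch of that rule plus the per-client override; `can_relax_to` is a hypothetical helper, and the `consistency_level` keyword is assumed from the v4 Python SDK:

```python
# Strongest to weakest, matching the table above
LEVELS = ["Strong", "BoundedStaleness", "Session", "ConsistentPrefix", "Eventual"]

def can_relax_to(account_default, requested):
    """True if `requested` equals the account default or is weaker (hypothetical helper)."""
    return LEVELS.index(requested) >= LEVELS.index(account_default)

def main():
    from azure.cosmos import CosmosClient

    # Read-heavy analytics client: accept eventual consistency for cheaper reads
    client = CosmosClient(
        "https://my-app-cosmos.documents.azure.com:443/",
        credential="your-key-here",
        consistency_level="Eventual",
    )
```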
## Change Feed
```python
# Process change feed for event-driven architecture
from azure.cosmos import CosmosClient
url = "https://my-app-cosmos.documents.azure.com:443/"
credential = "your-key-here"
container = CosmosClient(url, credential).get_database_client("app-db").get_container_client("orders")
# Read changes from the beginning of the feed
change_feed = container.query_items_change_feed(
    is_start_from_beginning=True,
    partition_key_range_id="0"
)
for change in change_feed:
    print(f"Changed item: {change['id']}, status: {change.get('status')}")
```
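In production you usually persist a continuation token so processing resumes where it left off after a restart. A minimal file-based checkpoint sketch: `save_checkpoint`/`load_checkpoint` are hypothetical helpers, `main` assumes a `container` client and a `handle` callback from your own code, and the `continuation` keyword plus the `etag` header as a token are assumptions about the sync SDK (an Azure Functions Cosmos DB trigger manages checkpoints for you):

```python
import json
import os

def save_checkpoint(path, token):
    """Persist the continuation token between runs (hypothetical helper)."""
    with open(path, "w") as f:
        json.dump({"continuation": token}, f)

def load_checkpoint(path):
    """Return the last saved token, or None on first run (hypothetical helper)."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f).get("continuation")

def main(container, handle):
    token = load_checkpoint("orders-feed.json")
    feed = container.query_items_change_feed(
        is_start_from_beginning=(token is None),
        continuation=token,
    )
    for change in feed:
        handle(change)
    # Assumption: the next continuation token surfaces as the etag response header
    save_checkpoint(
        "orders-feed.json",
        container.client_connection.last_response_headers.get("etag"),
    )
```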
## Global Distribution
```bash
# Add a read region
az cosmosdb update \
  --name my-app-cosmos \
  --resource-group my-app-rg \
  --locations regionName=eastus failoverPriority=0 \
  --locations regionName=westeurope failoverPriority=1 \
  --locations regionName=southeastasia failoverPriority=2
```
```bash
# Enable multi-region writes
az cosmosdb update \
  --name my-app-cosmos \
  --resource-group my-app-rg \
  --enable-multiple-write-locations true
```
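On the client side, multi-region accounts pair with a read-region preference list so reads go to the nearest replica. `effective_read_order` is a hypothetical helper sketching the routing rule; the `preferred_locations` keyword is assumed from the v4 Python SDK:

```python
def effective_read_order(preferred, available):
    """Regions tried in preference order, then any remaining account regions
    (hypothetical sketch of the SDK's routing rule)."""
    rest = [r for r in available if r not in preferred]
    return [r for r in preferred if r in available] + rest

def main():
    from azure.cosmos import CosmosClient

    client = CosmosClient(
        "https://my-app-cosmos.documents.azure.com:443/",
        credential="your-key-here",
        preferred_locations=["West Europe", "East US"],  # nearest first
    )
```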
## Throughput Management
```bash
# Enable autoscale on a container
az cosmosdb sql container throughput migrate \
  --account-name my-app-cosmos \
  --resource-group my-app-rg \
  --database-name app-db \
  --name orders \
  --throughput-type autoscale
```
```bash
# Check current throughput and usage
az cosmosdb sql container throughput show \
  --account-name my-app-cosmos \
  --resource-group my-app-rg \
  --database-name app-db \
  --name orders
```
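When provisioned RU/s are exhausted, requests fail with HTTP 429 and a retry-after hint (the SDK retries these automatically by default, so explicit loops are only for tuning). A sketch of the backoff rule; `backoff_delay` is a hypothetical helper and the exception attributes are assumed from the v4 Python SDK:

```python
def backoff_delay(attempt, retry_after_ms=None, base_ms=100):
    """Seconds to wait before retrying a throttled request: honor the server's
    hint when present, otherwise back off exponentially (hypothetical helper)."""
    ms = retry_after_ms if retry_after_ms is not None else base_ms * (2 ** attempt)
    return ms / 1000.0

def main():
    import time
    from azure.cosmos import CosmosClient, exceptions

    container = (
        CosmosClient("https://my-app-cosmos.documents.azure.com:443/", credential="your-key-here")
        .get_database_client("app-db")
        .get_container_client("orders")
    )
    for attempt in range(5):
        try:
            container.read_item(item="order-001", partition_key="customer-123")
            break
        except exceptions.CosmosHttpResponseError as e:
            if e.status_code != 429:
                raise
            time.sleep(backoff_delay(attempt))
```

Sustained 429s are a signal to raise the autoscale ceiling or revisit the partition key, not just to retry harder.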
## Best Practices
- Choose partition key carefully — it determines scalability and query performance
- Use Session consistency for most applications (best balance of performance and guarantees)
- Use autoscale throughput for variable workloads to avoid over-provisioning
- Query within a single partition whenever possible to minimize RU consumption
- Use the change feed for event-driven patterns instead of polling
- Enable automatic failover for production accounts
- Exclude large payload paths from indexing to save RUs on writes
- Use point reads (by id + partition key) instead of queries when possible — 1 RU
Related Skills
azure-openai
Azure OpenAI Service — OpenAI models (GPT-4o, DALL-E 3, Whisper) on Azure infrastructure. Use when deploying OpenAI models with enterprise compliance (GDPR, HIPAA, SOC2), Azure-native auth via Managed Identity, content filtering, or VNET-isolated deployments. Same OpenAI API, hosted on Azure.
azure-functions
Build serverless applications with Azure Functions. Create HTTP and event-driven functions with input/output bindings, configure triggers for queues, timers, and blob storage. Use Durable Functions for stateful orchestration workflows.
azure-cli
Azure Command Line Interface for managing Microsoft Azure resources. Use when the user needs to create VMs, manage storage accounts, deploy functions, configure resource groups, and automate Azure operations from the terminal.
azure-blob-storage
Store and manage unstructured data with Azure Blob Storage. Create containers, upload and organize blobs, configure access tiers (Hot, Cool, Archive) for cost optimization, generate SAS tokens for secure temporary access, and set lifecycle management policies.