modal-deployment

Run Python code in the cloud with serverless containers, GPUs, and autoscaling using Modal. This skill enables agents to generate code for deploying ML models, running batch jobs, serving APIs, and scaling compute-intensive workloads.

159 stars
Complexity: easy

About this skill

The `modal-deployment` skill provides AI agents with the knowledge and syntax to interact with Modal, a serverless platform for running Python code in the cloud with zero configuration. Modal eliminates manual YAML, Docker, and Kubernetes setup, letting developers define their cloud infrastructure directly in Python code.

This skill teaches agents how to construct Modal applications, define remote functions, configure container environments with specific Python and system packages, and attach powerful GPUs. Agents leveraging it can assist users in deploying a wide range of Python-based applications, from machine learning model training and inference (including LLMs) to parallel data processing, scheduled cron jobs, and high-performance API services.

It is particularly valuable for tasks requiring substantial compute, such as GPU acceleration, where Modal simplifies resource allocation and autoscaling. Developers use this skill to quickly translate Python code into a scalable, cloud-native application without infrastructure overhead: the agent generates the necessary Modal boilerplate, function decorators, and image configurations, streamlining deployment and allowing users to focus purely on their application logic.

Best use case

This skill is primarily for developers, data scientists, and MLOps engineers who need to deploy and scale Python applications, especially those involving machine learning, data processing, or GPU-accelerated tasks. It allows them to quickly leverage cloud resources for compute-intensive workloads without deep DevOps expertise, benefiting from serverless execution and automatic scaling. An AI agent assists by generating the specific Modal Python code needed for their deployment requirements.

Users can expect the agent to produce Python code snippets and instructions that define and deploy their application as a serverless function on the Modal platform, often with specified compute resources such as GPUs.

Practical example

Example input

Help me deploy a Python function called `process_data` to Modal. It needs an A100-80GB GPU, 64GB of memory, and should have `pandas` and `scikit-learn` installed. The function takes a single argument, `data_path`.

Example output

```python
import modal

app = modal.App("my-data-app")

image = (
    modal.Image.debian_slim(python_version="3.10")
    .uv_pip_install("pandas", "scikit-learn")
)

@app.function(image=image, gpu="A100-80GB", memory=65536, timeout=1800)  # 64 GiB memory (in MiB), 30-minute timeout
def process_data(data_path):
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    # Your data processing logic here
    df = pd.read_csv(data_path)
    print(f"Processing data from: {data_path}")
    # ... model training/inference ...
    return "Data processed successfully!"

@app.local_entrypoint()
def main():
    # Example local call to remote function
    result = process_data.remote("s3://my-bucket/input.csv")
    print(result)
```
To run it once, save this as `my_app.py` and execute `modal run my_app.py`; to deploy it persistently, use `modal deploy my_app.py`.

When to use this skill

  • When deploying ML models or running LLM inference with GPU acceleration.
  • For executing long-running batch jobs or parallel data processing tasks in the cloud.
  • To schedule Python-based cron jobs or build scalable API endpoints.
  • When needing serverless compute for Python applications without managing infrastructure.

When not to use this skill

  • When the application is not written in Python.
  • For deploying static websites or purely frontend applications.
  • When fine-grained control over the server operating system or custom network configuration is required.
  • For very simple scripts that can run locally without significant resource needs.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/modal-deployment/SKILL.md --create-dirs "https://raw.githubusercontent.com/majiayu000/claude-skill-registry/main/skills/data/modal-deployment/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/modal-deployment/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How modal-deployment Compares

| Feature / Agent | modal-deployment | Standard Approach |
| --- | --- | --- |
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Easy | N/A |

Frequently Asked Questions

What does this skill do?

Run Python code in the cloud with serverless containers, GPUs, and autoscaling using Modal. This skill enables agents to generate code for deploying ML models, running batch jobs, serving APIs, and scaling compute-intensive workloads.

How difficult is it to install?

The installation complexity is rated as easy. You can find the installation instructions above.

Where can I find the source code?

You can find the source code on GitHub in the majiayu000/claude-skill-registry repository, at the path used in the installation command above.

SKILL.md Source

# Modal

Modal is a serverless platform for running Python in the cloud with zero configuration. Define everything in code—no YAML, Docker, or Kubernetes required.

## Quick Start

```python
import modal

app = modal.App("my-app")

@app.function()
def hello():
    return "Hello from Modal!"

@app.local_entrypoint()
def main():
    print(hello.remote())
```

Run: `modal run app.py`

## Core Concepts

### Functions
Decorate Python functions to run remotely:

```python
@app.function(gpu="H100", memory=32768, timeout=600)
def train_model(data):
    # Runs on an H100 GPU with 32 GiB memory and a 10 min timeout
    return model.fit(data)
```

### Images
Define container environments via method chaining:

```python
image = (
    modal.Image.debian_slim(python_version="3.12")
    .apt_install("ffmpeg", "libsndfile1")
    .uv_pip_install("torch", "transformers", "numpy")
    .env({"CUDA_VISIBLE_DEVICES": "0"})
)

app = modal.App("ml-app", image=image)
```

Key image methods (combined in a sketch after this list):
- `.debian_slim()` / `.micromamba()` - Base images
- `.uv_pip_install()` / `.pip_install()` - Python packages
- `.apt_install()` - System packages
- `.run_commands()` - Shell commands
- `.add_local_python_source()` - Local modules
- `.env()` - Environment variables
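
A minimal sketch combining these methods in build order; the package names and local module are illustrative, not part of the original example:

```python
import modal

image = (
    modal.Image.debian_slim(python_version="3.12")
    .apt_install("git", "curl")              # system packages via apt
    .uv_pip_install("requests==2.32.3")      # pinned Python packages
    .run_commands("mkdir -p /assets")        # arbitrary shell build steps
    .env({"MY_FLAG": "1"})                   # baked-in environment variables
    .add_local_python_source("my_module")    # hypothetical local module, added last
)

app = modal.App("image-demo", image=image)
```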

### GPUs
Attach GPUs with a single parameter:

```python
@app.function(gpu="H100")      # Single H100
@app.function(gpu="A100-80GB") # 80GB A100
@app.function(gpu="H100:4")    # 4x H100
@app.function(gpu=["H100", "A100-40GB:2"])  # Fallback options
```

Available: B200, H200, H100, A100-80GB, A100-40GB, L40S, L4, A10G, T4

### Classes with Lifecycle Hooks
Load models once at container startup:

```python
@app.cls(gpu="L40S")
class Model:
    @modal.enter()
    def load(self):
        self.model = load_pretrained("model-name")
    
    @modal.method()
    def predict(self, x):
        return self.model(x)

# Usage
Model().predict.remote(data)
```

### Web Endpoints
Deploy APIs instantly:

```python
@app.function()
@modal.fastapi_endpoint()
def api(text: str):
    return {"result": process(text)}

# For complex apps
@app.function()
@modal.asgi_app()
def fastapi_app():
    from fastapi import FastAPI
    web = FastAPI()
    
    @web.get("/health")
    def health():
        return {"status": "ok"}
    
    return web
```

### Volumes (Persistent Storage)

```python
volume = modal.Volume.from_name("my-data", create_if_missing=True)

@app.function(volumes={"/data": volume})
def save_file(content: str):
    with open("/data/output.txt", "w") as f:
        f.write(content)
    volume.commit()  # Persist changes
```
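
A companion sketch for reading the data back from a different function; assuming the same volume, `volume.reload()` picks up commits made by other containers:

```python
@app.function(volumes={"/data": volume})
def read_file() -> str:
    volume.reload()  # fetch the latest committed state before reading
    with open("/data/output.txt") as f:
        return f.read()
```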

### Secrets

```python
@app.function(secrets=[modal.Secret.from_name("my-api-key")])
def call_api():
    import os
    key = os.environ["API_KEY"]
```

Create secrets: Dashboard or `modal secret create my-secret KEY=value`

### Dicts (Distributed Key-Value Store)

```python
cache = modal.Dict.from_name("my-cache", create_if_missing=True)

@app.function()
def cached_compute(key: str):
    if key in cache:
        return cache[key]
    result = expensive_computation(key)
    cache[key] = result
    return result
```

### Queues (Distributed FIFO)

```python
queue = modal.Queue.from_name("task-queue", create_if_missing=True)

@app.function()
def producer():
    queue.put_many([{"task": i} for i in range(10)])

@app.function()
def consumer():
    while task := queue.get(timeout=60):
        process(task)
```

### Parallel Processing

```python
# Map over inputs (auto-parallelized)
results = list(process.map(items))

# Spawn async jobs
calls = [process.spawn(item) for item in items]
results = [call.get() for call in calls]

# Batch processing (up to 1M inputs)
process.spawn_map(range(100_000))
```
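
Since the snippet above elides the function definition, here is a self-contained sketch of `.map()`; the app and function names are illustrative:

```python
import modal

app = modal.App("parallel-demo")

@app.function()
def square(x: int) -> int:
    return x * x

@app.local_entrypoint()
def main():
    # Each input may run in its own container; results stream back in input order.
    results = list(square.map(range(10)))
    print(results)  # [0, 1, 4, ..., 81]
```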

### Scheduling

```python
@app.function(schedule=modal.Period(hours=1))
def hourly_job():
    pass

@app.function(schedule=modal.Cron("0 9 * * 1-5"))  # 9am weekdays
def daily_report():
    pass
```

## CLI Commands

```bash
modal run app.py          # Run locally-triggered function
modal serve app.py        # Hot-reload web endpoints
modal deploy app.py       # Deploy persistently
modal shell app.py        # Interactive shell in container
modal app list            # List deployed apps
modal app logs <name>     # Stream logs
modal volume list         # List volumes
modal secret list         # List secrets
```
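
After `modal deploy`, functions on the deployed app can also be invoked from any other Python process. A sketch using `modal.Function.from_name` (the app and function names are illustrative; older client versions used `modal.Function.lookup` for the same purpose):

```python
import modal

# Look up a function on an already-deployed app and call it remotely.
fn = modal.Function.from_name("my-data-app", "process_data")
result = fn.remote("s3://my-bucket/input.csv")
print(result)
```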

## Common Patterns

### LLM Inference

```python
@app.cls(gpu="H100", image=image)
class LLM:
    @modal.enter()
    def load(self):
        from vllm import LLM
        self.llm = LLM("meta-llama/Llama-3-8B")
    
    @modal.method()
    def generate(self, prompt: str):
        return self.llm.generate(prompt)
```

### Download Models at Build Time

```python
def download_model():
    from huggingface_hub import snapshot_download
    snapshot_download("model-id", local_dir="/models")

image = (
    modal.Image.debian_slim()
    .pip_install("huggingface-hub")
    .run_function(download_model)
)
```

### Concurrency for I/O-bound Work

```python
@app.function()
@modal.concurrent(max_inputs=100)
async def fetch_urls(url: str):
    import aiohttp  # imported in-function: the package only needs to exist remotely
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            return await resp.text()  # read the body before the session closes
```

### Memory Snapshots (Faster Cold Starts)

```python
@app.cls(enable_memory_snapshot=True, gpu="A10G")
class FastModel:
    @modal.enter(snap=True)
    def load(self):
        self.model = load_model()  # Snapshot this state
```

## Autoscaling

```python
@app.function(
    min_containers=2,       # Always keep 2 warm
    max_containers=100,     # Scale up to 100
    buffer_containers=5,    # Extra buffer for bursts
    scaledown_window=300,   # Keep idle for 5 min
)
def serve():
    pass
```

## Best Practices

1. **Put imports inside functions** when packages aren't installed locally
2. **Use `@modal.enter()`** for expensive initialization (model loading)
3. **Pin dependency versions** for reproducible builds
4. **Use Volumes** for model weights and persistent data
5. **Use memory snapshots** for sub-second cold starts in production
6. **Set appropriate timeouts** for long-running tasks
7. **Use `min_containers=1`** for production APIs to keep containers warm
8. **Use absolute imports** with full package paths (not relative imports)
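
A minimal sketch combining several of these practices (pinned versions, a volume for weights, an explicit timeout, a warm container); all names and version numbers here are illustrative:

```python
import modal

image = modal.Image.debian_slim(python_version="3.12").uv_pip_install(
    "torch==2.4.0", "transformers==4.44.0"  # pinned for reproducible builds
)
weights = modal.Volume.from_name("model-weights", create_if_missing=True)
app = modal.App("best-practices-demo", image=image)

@app.function(
    volumes={"/weights": weights},  # persistent storage for model weights
    timeout=3600,                   # explicit ceiling for long-running work
    min_containers=1,               # keep one container warm for production
)
def infer(prompt: str) -> str:
    # Import inside the function so the local client doesn't need torch installed.
    import torch
    return f"processed {prompt!r} with torch {torch.__version__}"
```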

### Fast Image Builds with uv_sync

Use `.uv_sync()` instead of `.pip_install()` for faster dependency installation:

```python
# In pyproject.toml, define dependency groups:
# [dependency-groups]
# modal = ["fastapi", "pydantic-ai>=1.0.0", "logfire"]

image = (
    modal.Image.debian_slim(python_version="3.12")
    .uv_sync("agent", groups=["modal"], frozen=False)
    .add_local_python_source("agent.src")  # Use dot notation for packages
)
```

**Key points:**
- Deploy from project root: `modal deploy agent/src/api.py`
- Use dot notation in `.add_local_python_source("package.subpackage")`
- Imports must match: `from agent.src.config import ...` (not relative `from .config`)

### Logfire Observability

Add observability with Logfire (especially for pydantic-ai):

```python
@app.cls(image=image, secrets=[..., modal.Secret.from_name("logfire")], min_containers=1)
class Web:
    @modal.enter()
    def startup(self):
        import logfire
        logfire.configure(send_to_logfire="if-token-present", environment="production", service_name="my-agent")
        logfire.instrument_pydantic_ai()
        self.agent = create_agent()
```

## Reference Documentation

See `references/` for detailed guides on images, functions, GPUs, scaling, web endpoints, storage, dicts, queues, sandboxes, and networking.

Official docs: https://modal.com/docs

Related Skills

  • grail-miner (159 stars, from majiayu000/claude-skill-registry; DevOps & Infrastructure): This skill assists in setting up, managing, and optimizing Grail miners on Bittensor Subnet 81, handling tasks like environment configuration, R2 storage, model checkpoint management, and performance tuning.
  • deployment-procedures (31392 stars, from sickn33/antigravity-awesome-skills; DevOps & Infrastructure, Claude): Production deployment principles and decision-making. Safe deployment workflows, rollback strategies, and verification. Teaches thinking, not scripts.
  • deployment-pipeline-design (31392 stars, from sickn33/antigravity-awesome-skills; DevOps & Infrastructure, Claude): Architecture patterns for multi-stage CI/CD pipelines with approval gates and deployment strategies.
  • kubernetes-deployment (31355 stars, from sickn33/antigravity-awesome-skills; DevOps & Infrastructure, Claude): Kubernetes deployment workflow for container orchestration, Helm charts, service mesh, and production-ready K8s configurations.
  • linux-shell-scripting (31392 stars, from sickn33/antigravity-awesome-skills; DevOps & Infrastructure, Claude): Production-ready shell script templates for common Linux system administration tasks including backups, monitoring, user management, log analysis, and automation. These scripts serve as building blocks for security operations and penetration testing environments.
  • iterate-pr (31392 stars, from sickn33/antigravity-awesome-skills; DevOps & Infrastructure, Claude): Iterate on a PR until CI passes. Use when you need to fix CI failures, address review feedback, or continuously push fixes until all checks are green. Automates the feedback-fix-push-wait cycle.
  • istio-traffic-management (31392 stars, from sickn33/antigravity-awesome-skills; DevOps & Infrastructure, Claude): Comprehensive guide to Istio traffic management for production service mesh deployments.
  • incident-runbook-templates (31392 stars, from sickn33/antigravity-awesome-skills; DevOps & Infrastructure, Claude): Production-ready templates for incident response runbooks covering detection, triage, mitigation, resolution, and communication.
  • incident-response-smart-fix (31392 stars, from sickn33/antigravity-awesome-skills; DevOps & Infrastructure, Claude, GitHub Copilot): [Extended thinking: This workflow implements a sophisticated debugging and resolution pipeline that leverages AI-assisted debugging tools and observability platforms to systematically diagnose and res…
  • incident-responder (31392 stars, from sickn33/antigravity-awesome-skills; DevOps & Infrastructure, Claude): Expert SRE incident responder specializing in rapid problem resolution, modern observability, and comprehensive incident management.
  • expo-cicd-workflows (31392 stars, from sickn33/antigravity-awesome-skills; DevOps & Infrastructure, Claude): Helps understand and write EAS workflow YAML files for Expo projects. Use this skill when the user asks about CI/CD or workflows in an Expo or EAS context, mentions .eas/workflows/, or wants help with EAS build pipelines or deployment automation.
  • error-diagnostics-error-trace (31392 stars, from sickn33/antigravity-awesome-skills; DevOps & Infrastructure, Claude): You are an error tracking and observability expert specializing in implementing comprehensive error monitoring solutions. Set up error tracking systems, configure alerts, implement structured logging…