cloudflare-ai
You are an expert in Cloudflare Workers AI, the serverless AI inference platform running on Cloudflare's global network. You help developers run LLMs, embedding models, image generation, speech-to-text, and translation models at the edge with zero cold starts, pay-per-use pricing, and integration with Workers, Pages, and Vectorize — enabling AI features without managing GPU infrastructure.
Best use case
cloudflare-ai is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using cloudflare-ai should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/cloudflare-ai/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How cloudflare-ai Compares
| Feature / Agent | cloudflare-ai | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
It makes your agent an expert in Cloudflare Workers AI, the serverless AI inference platform on Cloudflare's global network: running LLMs, embedding models, image generation, speech-to-text, and translation models at the edge with zero cold starts and pay-per-use pricing, and integrating them with Workers, Pages, and Vectorize, so you can ship AI features without managing GPU infrastructure.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Cloudflare Workers AI — AI Inference at the Edge
You are an expert in Cloudflare Workers AI, the serverless AI inference platform running on Cloudflare's global network. You help developers run LLMs, embedding models, image generation, speech-to-text, and translation models at the edge with zero cold starts, pay-per-use pricing, and integration with Workers, Pages, and Vectorize — enabling AI features without managing GPU infrastructure.
## Core Capabilities
### AI Inference in Workers
```typescript
// src/worker.ts — AI-powered API at the edge
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);

    // Text generation (LLM)
    if (url.pathname === "/api/chat") {
      const { messages } = await request.json();
      const response = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
        messages,
        max_tokens: 1024,
        temperature: 0.7,
        stream: true,
      });
      return new Response(response, {
        headers: { "Content-Type": "text/event-stream" },
      });
    }

    // Text embeddings (for RAG)
    if (url.pathname === "/api/embed") {
      const { text } = await request.json();
      const embeddings = await env.AI.run("@cf/baai/bge-base-en-v1.5", {
        text: Array.isArray(text) ? text : [text],
      });
      return Response.json({ embeddings: embeddings.data });
    }

    // Image generation
    if (url.pathname === "/api/generate-image") {
      const { prompt } = await request.json();
      const image = await env.AI.run("@cf/stabilityai/stable-diffusion-xl-base-1.0", {
        prompt,
        num_steps: 20,
      });
      return new Response(image, {
        headers: { "Content-Type": "image/png" },
      });
    }

    // Speech to text
    if (url.pathname === "/api/transcribe") {
      const audioData = await request.arrayBuffer();
      const result = await env.AI.run("@cf/openai/whisper", {
        audio: [...new Uint8Array(audioData)],
      });
      return Response.json({ text: result.text });
    }

    // Translation
    if (url.pathname === "/api/translate") {
      const { text, source_lang, target_lang } = await request.json();
      const result = await env.AI.run("@cf/meta/m2m100-1.2b", {
        text,
        source_lang,
        target_lang,
      });
      return Response.json({ translated: result.translated_text });
    }

    return new Response("Not Found", { status: 404 });
  },
};
```
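With `stream: true`, the `/api/chat` route above returns a server-sent-event stream. A minimal client-side sketch for consuming it, assuming the Worker is deployed at a hypothetical `https://my-ai-app.example.workers.dev` (the URL and prompt are illustrative):

```typescript
// Hypothetical client module: stream tokens from the /api/chat endpoint
const res = await fetch("https://my-ai-app.example.workers.dev/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", content: "Explain edge inference in one sentence." }],
  }),
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // Each chunk contains SSE lines such as: data: {"response":"..."}
  console.log(decoder.decode(value, { stream: true }));
}
```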
### RAG with Vectorize
```typescript
// RAG pipeline: Embed → Store in Vectorize → Query → Generate
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { question } = await request.json();

    // Step 1: Embed the question
    const queryEmbedding = await env.AI.run("@cf/baai/bge-base-en-v1.5", {
      text: [question],
    });

    // Step 2: Search Vectorize
    const matches = await env.VECTORIZE.query(queryEmbedding.data[0], {
      topK: 5,
      returnMetadata: "all",
    });

    // Step 3: Generate answer with context
    const context = matches.matches.map(m => m.metadata?.text).join("\n\n");
    const answer = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
      messages: [
        { role: "system", content: `Answer based on this context:\n${context}` },
        { role: "user", content: question },
      ],
    });

    return Response.json({
      answer: answer.response,
      sources: matches.matches.map(m => ({ text: m.metadata?.text, score: m.score })),
    });
  },
};
```
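The pipeline above covers the query and generation steps; the "Store in Vectorize" step uses the same embedding model plus an upsert. A minimal sketch, assuming documents arrive as an array of strings (the `ingest` helper and the `doc-${i}` id scheme are illustrative, not part of the original):

```typescript
// Hypothetical ingestion helper: embed documents and upsert into Vectorize
async function ingest(docs: string[], env: Env): Promise<void> {
  // Embed all documents in one call
  const embeddings = await env.AI.run("@cf/baai/bge-base-en-v1.5", { text: docs });

  // Store each vector with its source text as metadata so RAG can quote it back
  await env.VECTORIZE.upsert(
    docs.map((text, i) => ({
      id: `doc-${i}`, // illustrative id scheme
      values: embeddings.data[i],
      metadata: { text },
    })),
  );
}
```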
## Installation
```bash
# Create Workers project
npm create cloudflare@latest my-ai-app
```
```toml
# wrangler.toml
[ai]
binding = "AI"

[[vectorize]]
binding = "VECTORIZE"
index_name = "my-index"
```
```bash
# Deploy
npx wrangler deploy
```
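The `AI` and `VECTORIZE` bindings declared in wrangler.toml surface on the Worker's `Env` type used in the snippets above. A minimal sketch of that interface, assuming `@cloudflare/workers-types` is installed (exact type names can vary between versions; `npx wrangler types` can also generate this for you):

```typescript
// src/env.d.ts (hypothetical file) — binding types assumed by the snippets above
interface Env {
  AI: Ai;                    // [ai] binding from wrangler.toml
  VECTORIZE: VectorizeIndex; // [[vectorize]] binding from wrangler.toml
}
```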
## Best Practices
1. **Edge inference** — Models run on Cloudflare's network; <50ms latency worldwide, zero cold starts
2. **Streaming** — Use `stream: true` for LLM responses; first token in ~200ms at the edge
3. **Vectorize for RAG** — Use Cloudflare Vectorize for embedding storage; integrated with Workers AI
4. **Free tier** — 10K neurons/day free; enough for prototyping and low-volume production
5. **Model catalog** — Browse `@cf/` models; Llama 3.1, Mistral, Stable Diffusion, Whisper, BGE all available
6. **Gateway for routing** — Use AI Gateway for caching, rate limiting, analytics, and fallback to OpenAI/Anthropic
7. **R2 for storage** — Store generated images, audio in R2 (S3-compatible); zero egress fees
8. **No GPU management** — Cloudflare manages the GPU fleet; you pay per inference, not per GPU-hour
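Point 6 mentions routing requests through AI Gateway. A minimal sketch of what that looks like in a Worker, assuming a gateway named `my-gateway` has already been created in the dashboard (the gateway id and cache TTL are illustrative):

```typescript
// Route an inference call through AI Gateway for caching, rate limiting, and analytics
export default {
  async fetch(_req: Request, env: Env): Promise<Response> {
    const answer = await env.AI.run(
      "@cf/meta/llama-3.1-8b-instruct",
      { messages: [{ role: "user", content: "Hello" }] },
      {
        gateway: {
          id: "my-gateway", // illustrative gateway name
          skipCache: false,
          cacheTtl: 3600,
        },
      },
    );
    return Response.json({ answer: answer.response });
  },
};
```

Related Skills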
cloudflare-workers
Assists with building and deploying applications on Cloudflare Workers edge computing platform. Use when working with Workers runtime, Wrangler CLI, KV, D1, R2, Durable Objects, Queues, or Hyperdrive. Trigger words: cloudflare, workers, edge functions, wrangler, KV, D1, R2, durable objects, edge computing.
cloudflare-vectorize
Serverless vector database at the edge with Cloudflare Vectorize. Use when: building semantic search on Cloudflare Workers, RAG pipelines at the edge, low-latency vector similarity search, or storing and querying embeddings without managing a separate vector database.