## Best use case
Cerebras — Wafer-Scale LLM Inference is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
## Overview
Teams using Cerebras — Wafer-Scale LLM Inference should expect more consistent output, faster repeated execution, and less prompt rewriting.
## When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
## When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
## Installation
### Claude Code / Cursor / Codex
### Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/cerebras/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
## How Cerebras — Wafer-Scale LLM Inference Compares
| Feature / Agent | Cerebras — Wafer-Scale LLM Inference | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
## Frequently Asked Questions
### What does this skill do?
It helps developers integrate Cerebras Inference, the ultra-fast LLM inference service powered by the Wafer-Scale Engine, into applications that need the fastest possible token generation, such as real-time chat and code completion.
### Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
## SKILL.md Source
# Cerebras — Wafer-Scale LLM Inference
## Overview
Cerebras Inference is an ultra-fast LLM inference service powered by the world's largest chip, the Wafer-Scale Engine. This skill helps developers integrate the Cerebras API into applications requiring the fastest possible token generation — real-time chat, code completion, and interactive AI experiences.
## Instructions
### Chat Completions
```typescript
// src/llm/cerebras.ts — Cerebras API (OpenAI-compatible)
import OpenAI from "openai";
const cerebras = new OpenAI({
apiKey: process.env.CEREBRAS_API_KEY!,
baseURL: "https://api.cerebras.ai/v1",
});
// Basic completion — up to 2000+ tokens/second
async function chat(prompt: string) {
const response = await cerebras.chat.completions.create({
model: "llama3.3-70b", // Llama 3.3 70B on Cerebras hardware
messages: [
{ role: "system", content: "You are a helpful coding assistant." },
{ role: "user", content: prompt },
],
temperature: 0.7,
max_tokens: 1024,
top_p: 1,
});
  // Cerebras returns per-request timing in a top-level time_info object
  const completionTime = (response as any).time_info?.completion_time;
  if (response.usage && completionTime) {
    console.log(`Tokens/sec: ${response.usage.completion_tokens / completionTime}`);
  }
return response.choices[0].message.content;
}
// Streaming — first token in <200ms
async function streamChat(prompt: string, onChunk: (text: string) => void) {
const stream = await cerebras.chat.completions.create({
model: "llama3.3-70b",
messages: [{ role: "user", content: prompt }],
stream: true,
});
let full = "";
for await (const chunk of stream) {
const text = chunk.choices[0]?.delta?.content ?? "";
full += text;
onChunk(text);
}
return full;
}
// JSON mode
async function structuredOutput(prompt: string) {
const response = await cerebras.chat.completions.create({
model: "llama3.3-70b",
messages: [{ role: "user", content: prompt }],
response_format: { type: "json_object" },
temperature: 0,
});
return JSON.parse(response.choices[0].message.content!);
}
```
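A minimal usage sketch for the three helpers above; the prompt strings are illustrative, not part of the skill:

```typescript
// Example usage of chat, streamChat, and structuredOutput
async function main() {
  const answer = await chat("Explain the event loop in one paragraph.");
  console.log(answer);

  // Stream tokens to stdout as they arrive
  await streamChat("Summarize transformers in two sentences.", (text) =>
    process.stdout.write(text)
  );

  // JSON mode: tell the model what shape to emit, then parse
  const contact = await structuredOutput(
    'Return JSON {"name": string, "email": string} for: "Reach Jo at jo@example.com"'
  );
  console.log(contact);
}

main();
```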
### Tool Use / Function Calling
```typescript
async function chatWithTools(prompt: string) {
const response = await cerebras.chat.completions.create({
model: "llama3.3-70b",
messages: [{ role: "user", content: prompt }],
tools: [
{
type: "function",
function: {
name: "get_stock_price",
description: "Get the current stock price for a ticker symbol",
parameters: {
type: "object",
properties: {
ticker: { type: "string", description: "Stock ticker (e.g., AAPL)" },
},
required: ["ticker"],
},
},
},
],
tool_choice: "auto",
});
const msg = response.choices[0].message;
if (msg.tool_calls) {
// Execute the tool and send results back
const toolResults = await Promise.all(
msg.tool_calls.map(async (call) => {
const args = JSON.parse(call.function.arguments);
        // executeFunction is a user-supplied dispatcher (see the sketch after this block)
        const result = await executeFunction(call.function.name, args);
return {
role: "tool" as const,
tool_call_id: call.id,
content: JSON.stringify(result),
};
})
);
// Get final response with tool results
const final = await cerebras.chat.completions.create({
model: "llama3.3-70b",
messages: [
{ role: "user", content: prompt },
msg,
...toolResults,
],
});
return final.choices[0].message.content;
}
return msg.content;
}
```
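The snippet above calls `executeFunction`, which the skill leaves undefined. A minimal sketch of such a dispatcher, matching the `get_stock_price` tool declared above; the returned price data is a placeholder, not a real lookup:

```typescript
// Hypothetical dispatcher mapping tool names to local implementations
async function executeFunction(name: string, args: Record<string, unknown>) {
  switch (name) {
    case "get_stock_price":
      // Replace with a real market-data lookup
      return { ticker: args.ticker, price: 123.45, currency: "USD" };
    default:
      throw new Error(`Unknown tool: ${name}`);
  }
}
```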
### Python Integration
```python
# src/cerebras_client.py — Cerebras with Python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["CEREBRAS_API_KEY"],
    base_url="https://api.cerebras.ai/v1",
)
# Chat completion
response = client.chat.completions.create(
model="llama3.3-70b",
messages=[{"role": "user", "content": "Write a Python quicksort implementation"}],
temperature=0.3,
max_tokens=500,
)
print(response.choices[0].message.content)
# Streaming
stream = client.chat.completions.create(
model="llama3.3-70b",
messages=[{"role": "user", "content": "Explain transformers in 5 sentences"}],
stream=True,
)
for chunk in stream:
print(chunk.choices[0].delta.content or "", end="", flush=True)
```
### Available Models
```markdown
## Cerebras Models
- **llama3.3-70b** — Llama 3.3 70B, best quality, ~2000 tok/s output
- **llama3.1-8b** — Llama 3.1 8B, fastest option, ~2500+ tok/s output
- **llama3.1-70b** — Llama 3.1 70B, large context (128K tokens)
## Speed Comparison (approximate)
- Cerebras: 2000+ tok/s (70B model)
- Groq: 300-400 tok/s (70B model)
- Cloud GPU (A100): 50-80 tok/s (70B model)
- Local (M3 Max): 20-40 tok/s (70B quantized)
```
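Routing between these models can be as simple as picking by task type, as guideline 3 below suggests. A sketch; the task labels are illustrative, not part of the Cerebras API:

```typescript
// Route simple tasks to the fast 8B model, complex reasoning to 70B
type Task = "classification" | "extraction" | "simple_qa" | "reasoning" | "codegen";

function pickModel(task: Task): string {
  const simple: Task[] = ["classification", "extraction", "simple_qa"];
  return simple.includes(task) ? "llama3.1-8b" : "llama3.3-70b";
}
```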
## Installation
```bash
# Use any OpenAI-compatible SDK
npm install openai
pip install openai

# Point the client at Cerebras (base_url: https://api.cerebras.ai/v1)
# and export the API key the snippets above read from the environment
export CEREBRAS_API_KEY="your-api-key"
```
## Examples
### Example 1: Migrating a RAG chatbot to Cerebras for faster responses
**User request:**
```
I have a RAG chatbot that answers questions from our docs. Move its generation step to Cerebras so responses feel instant.
```
The agent points the existing OpenAI-compatible client at the Cerebras base URL, selects llama3.3-70b for answer quality, enables streaming so users see the first tokens in under 200ms, and adds a fallback provider in case of capacity constraints.
### Example 2: Comparing Cerebras models across prompts
**User request:**
```
We're testing llama3.1-8b vs llama3.3-70b on our customer support prompts. Set up a comparison on Cerebras.
```
The agent runs the existing prompt set against both models, records latency and throughput alongside support-specific quality criteria (accuracy, tone, completeness), and generates a summary report recommending a model per task type.
## Guidelines
1. **Use for latency-critical applications** — Cerebras is the fastest inference available; ideal for real-time chat and autocomplete
2. **OpenAI SDK drop-in** — Change base URL from OpenAI to Cerebras; your code works unchanged
3. **8B for simple tasks** — Use llama3.1-8b for classification, extraction, and simple Q&A; save 70B for complex reasoning
4. **Stream everything** — First-token latency is <200ms; streaming gives users instant feedback
5. **JSON mode for structured output** — Use `response_format: { type: "json_object" }` for reliable parsing
6. **Batch simple requests** — For bulk processing, send multiple independent prompts in parallel
7. **Monitor rate limits** — Free tier has request limits; check headers for remaining quota
8. **Fallback strategy** — Have a fallback to Groq or OpenAI; Cerebras can have capacity constraints during high demand (see the sketch after this list)
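A sketch combining guidelines 6 and 8: fan independent prompts out in parallel and retry on another OpenAI-compatible provider when a request fails. It reuses the `cerebras` client from the first snippet; the fallback client and model name are assumptions, not part of the skill:

```typescript
import OpenAI from "openai";

// Assumption: OpenAI as the fallback provider
const fallback = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });

async function completeWithFallback(prompt: string): Promise<string> {
  try {
    const res = await cerebras.chat.completions.create({
      model: "llama3.1-8b", // fast model for bulk work
      messages: [{ role: "user", content: prompt }],
    });
    return res.choices[0].message.content ?? "";
  } catch {
    // Capacity or rate-limit failure: retry on the fallback provider
    const res = await fallback.chat.completions.create({
      model: "gpt-4o-mini", // assumption: any chat model on the fallback provider
      messages: [{ role: "user", content: prompt }],
    });
    return res.choices[0].message.content ?? "";
  }
}

// Guideline 6: send independent prompts in parallel
async function classifyBatch(tickets: string[]): Promise<string[]> {
  return Promise.all(tickets.map((t) => completeWithFallback(`Classify this ticket: ${t}`)));
}
```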
## Related Skills

**fal-upscale**
Upscale and enhance image and video resolution using AI

**inference-sh**
Run 150+ AI apps via inference.sh CLI - image generation, video creation, LLMs, search, 3D, Twitter automation. Models: FLUX, Veo, Gemini, Grok, Claude, Seedance, OmniHuman, Tavily, Exa, OpenRouter, and many more. Use when running AI apps, generating images/videos, calling LLMs, web search, or automating Twitter. Triggers: inference.sh, infsh, ai model, run ai, serverless ai, ai api, flux, veo, claude api, image generation, video generation, openrouter, tavily, exa search, twitter api, grok

**fiftyone-dataset-inference**
Create a FiftyOne dataset from a directory of media files (images, videos, point clouds), optionally import labels in common formats (COCO, YOLO, VOC), run model inference, and store predictions. Use when users want to load local files into FiftyOne, apply ML models for detection, classification, or segmentation, or build end-to-end inference pipelines.

**timescaledb**
TimescaleDB - PostgreSQL extension for high-performance time-series and event data analytics, hypertables, continuous aggregates, compression, and real-time analytics

**vLLM — High-Throughput LLM Inference Engine**
You are an expert in vLLM, the high-throughput LLM serving engine. You help developers deploy open-source models (Llama, Mistral, Qwen, Phi, Gemma) with PagedAttention for efficient memory management, continuous batching, tensor parallelism for multi-GPU, OpenAI-compatible API, and quantization support — achieving 2-24x higher throughput than HuggingFace Transformers for production LLM serving.

**NVIDIA Triton Inference Server**

**Tailscale**
Build zero-trust mesh networks that connect servers, laptops, and cloud instances as if they're on the same LAN. Built on WireGuard, managed through a control plane.

**PlanetScale — Serverless MySQL Platform**

**Groq — Ultra-Fast LLM Inference**

**Fireworks AI — Fast Open-Source Model Inference**

**Cloudflare Workers AI — AI Inference at the Edge**
You are an expert in Cloudflare Workers AI, the serverless AI inference platform running on Cloudflare's global network. You help developers run LLMs, embedding models, image generation, speech-to-text, and translation models at the edge with zero cold starts, pay-per-use pricing, and integration with Workers, Pages, and Vectorize — enabling AI features without managing GPU infrastructure.

**Speculative Decoding: Accelerating LLM Inference**