clade-model-inference
Stream Claude responses, use system prompts, handle multi-turn conversations, and process structured output with the Messages API. Use when working with model-inference patterns. Trigger with "anthropic streaming", "claude messages api", "claude inference", or "stream claude response".
Best use case
clade-model-inference is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using clade-model-inference should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/clade-model-inference/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How clade-model-inference Compares
| Feature / Agent | clade-model-inference | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Stream Claude responses, use system prompts, handle multi-turn conversations, and process structured output with the Messages API. Use when working with model-inference patterns. Trigger with "anthropic streaming", "claude messages api", "claude inference", or "stream claude response".
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Anthropic Messages API — Streaming & Advanced Patterns
## Overview
The Messages API is the only inference endpoint. Every Claude interaction goes through `client.messages.create()`. This skill covers streaming, system prompts, vision, and structured output.
## Prerequisites
- Completed `clade-install-auth`
- Familiarity with `clade-hello-world`
## Instructions
### Step 1: Streaming Responses
```typescript
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic();

const stream = client.messages.stream({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Write a haiku about TypeScript.' }],
});

for await (const event of stream) {
  if (event.type === 'content_block_delta' && event.delta.type === 'text_delta') {
    process.stdout.write(event.delta.text);
  }
}

const finalMessage = await stream.finalMessage();
console.log('\n\nTokens:', finalMessage.usage);
```
### Step 2: Vision — Sending Images
```typescript
import fs from 'node:fs';

const message = await client.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [{
    role: 'user',
    content: [
      {
        type: 'image',
        source: {
          type: 'base64',
          media_type: 'image/png',
          data: fs.readFileSync('screenshot.png').toString('base64'),
        },
      },
      { type: 'text', text: 'Describe what you see in this image.' },
    ],
  }],
});
```
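Only a few image formats are accepted, so hard-coding `media_type` is fragile. A small helper can derive it from the file extension; this is an illustrative sketch (the `mediaTypeFor` function is not an SDK utility), assuming the supported formats listed under Error Handling.

```typescript
// Illustrative helper: map a filename to a supported Messages API media type.
const MEDIA_TYPES: Record<string, string> = {
  png: 'image/png',
  jpg: 'image/jpeg',
  jpeg: 'image/jpeg',
  gif: 'image/gif',
  webp: 'image/webp',
};

function mediaTypeFor(filename: string): string {
  const ext = filename.split('.').pop()?.toLowerCase() ?? '';
  const mediaType = MEDIA_TYPES[ext];
  if (!mediaType) throw new Error(`Unsupported image format: ${filename}`);
  return mediaType;
}
```

With this in place, the `media_type` field can be filled from the same path passed to `fs.readFileSync`.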
### Step 3: JSON / Structured Output
```typescript
const message = await client.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  system: `Respond with valid JSON only. Schema: { "summary": string, "sentiment": "positive"|"negative"|"neutral", "confidence": number }`,
  messages: [{ role: 'user', content: 'Analyze: "This product exceeded my expectations!"' }],
});

const result = JSON.parse(message.content[0].text);
// { summary: "Very positive review", sentiment: "positive", confidence: 0.95 }
```
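The model's output is not guaranteed to match the requested schema, so a runtime guard is worth adding before the result propagates. This is a minimal sketch; the `AnalysisResult` type and `parseAnalysis` helper are illustrative, not part of the SDK.

```typescript
// Hypothetical runtime guard for the schema requested in the system prompt.
interface AnalysisResult {
  summary: string;
  sentiment: 'positive' | 'negative' | 'neutral';
  confidence: number;
}

function parseAnalysis(raw: string): AnalysisResult {
  const data = JSON.parse(raw);
  const sentiments = ['positive', 'negative', 'neutral'];
  if (
    typeof data.summary !== 'string' ||
    !sentiments.includes(data.sentiment) ||
    typeof data.confidence !== 'number' ||
    data.confidence < 0 ||
    data.confidence > 1
  ) {
    throw new Error(`Response does not match schema: ${raw}`);
  }
  return data as AnalysisResult;
}
```

Calling `parseAnalysis(message.content[0].text)` instead of a bare `JSON.parse` turns a malformed response into a clear error at the boundary.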
## Python Streaming
```python
import anthropic

client = anthropic.Anthropic()

with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a haiku about Python."}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)

print(f"\nTokens: {stream.get_final_message().usage}")
```
## Output
- **Non-streaming:** Full `Message` object with `content`, `usage`, `stop_reason`
- **Streaming events:**
- `message_start` — message metadata
- `content_block_start` — new content block beginning
- `content_block_delta` — incremental text (`text_delta`) or tool input (`input_json_delta`)
- `message_delta` — final `stop_reason` and usage
- `message_stop` — stream complete
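The event sequence above can be handled with a switch. This sketch accumulates text deltas from a list of events; the event shapes are simplified to the fields used here, not the SDK's full types.

```typescript
// Simplified stream event shapes (the real SDK types carry more fields).
type StreamEvent =
  | { type: 'message_start' }
  | { type: 'content_block_start' }
  | { type: 'content_block_delta'; delta: { type: 'text_delta'; text: string } }
  | { type: 'message_delta'; stop_reason: string }
  | { type: 'message_stop' };

function collectText(events: StreamEvent[]): string {
  let text = '';
  for (const event of events) {
    switch (event.type) {
      case 'content_block_delta':
        if (event.delta.type === 'text_delta') text += event.delta.text;
        break;
      case 'message_stop':
        return text; // stream complete
    }
  }
  return text;
}
```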
## Error Handling
| Error | Cause | Solution |
|-------|-------|----------|
| `overloaded_error` (529) | Anthropic API temporarily overloaded | Retry with exponential backoff; use `client.messages.create` with built-in retries |
| `rate_limit_error` (429) | Exceeded RPM or TPM | Check `retry-after` header. See `clade-rate-limits` |
| `invalid_request_error` | Image too large or bad format | Max 20 images per request. Supported: PNG, JPEG, GIF, WebP. Max 5MB each |
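One way to implement the backoff recommended for 529 and 429 errors is a small retry wrapper. This is a minimal sketch, assuming the error object exposes a numeric `status` field; the delay schedule has no jitter, and the SDK's own built-in retries may already cover simple cases.

```typescript
// Minimal exponential backoff sketch (no jitter). Delays: 1s, 2s, 4s, ... capped.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30_000): number {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 4,
  retryableStatuses = new Set([429, 529]),
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      // Give up on non-retryable errors or when attempts are exhausted.
      if (attempt + 1 >= maxAttempts || !retryableStatuses.has(err?.status)) throw err;
      await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
}
```

A call site would then wrap the request, e.g. `withRetries(() => client.messages.create(...))`.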
## Key Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `model` | string | Required. Model ID (e.g. `claude-sonnet-4-20250514`) |
| `max_tokens` | int | Required. Maximum output tokens (1–8192 typical) |
| `messages` | array | Required. Alternating user/assistant messages |
| `system` | string | Optional. System prompt for behavior/persona |
| `temperature` | float | Optional. 0.0–1.0, default 1.0 |
| `top_p` | float | Optional. Nucleus sampling threshold |
| `stop_sequences` | string[] | Optional. Custom stop strings |
| `stream` | boolean | Optional. Enable SSE streaming |
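Putting several of the optional parameters together, a request options object might look like the following. The specific values (persona, temperature, stop string) are illustrative only.

```typescript
// Illustrative request options combining the parameters from the table above.
const requestOptions = {
  model: 'claude-sonnet-4-20250514', // required
  max_tokens: 512, // required
  system: 'You are a terse release-notes writer.', // optional persona
  temperature: 0.3, // 0.0-1.0; lower is more deterministic
  stop_sequences: ['---'], // stop generation at a custom string
  messages: [{ role: 'user' as const, content: 'Summarize the changelog.' }],
};
```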
## Examples
See Step 1 (streaming), Step 2 (vision with base64 images), and Step 3 (structured JSON output) above. Python streaming example included.
## Resources
- [Messages API](https://docs.anthropic.com/en/api/messages)
- [Streaming](https://docs.anthropic.com/en/api/messages-streaming)
- [Vision](https://docs.anthropic.com/en/docs/build-with-claude/vision)
## Next Steps
See `clade-embeddings-search` for tool use and function calling patterns.
Related Skills
triton-inference-config
Triton Inference Config - Auto-activating skill for ML Deployment. Triggers on: triton inference config, triton inference config Part of the ML Deployment skill category.
adapting-transfer-learning-models
This skill automates the adaptation of pre-trained machine learning models using transfer learning techniques. It is triggered when the user requests assistance with fine-tuning a model, adapting a pre-trained model to a new dataset, or performing transfer learning. It analyzes the user's requirements, generates code for adapting the model, includes data validation and error handling, provides performance metrics, and saves artifacts with documentation. Use this skill when you need to leverage existing models for new tasks or datasets, optimizing for performance and efficiency.
training-machine-learning-models
Train machine learning models with automated workflows. Analyzes datasets, selects model types (classification, regression), configures parameters, trains with cross-validation, and saves model artifacts. Use when asked to "train model" or "evalua... Trigger with relevant phrases based on skill purpose.
tracking-model-versions
This skill enables the AI assistant to track and manage AI/ML model versions using the model-versioning-tracker plugin. It should be used when the user asks to manage model versions, track model lineage, log model performance, or implement version control f... Use when appropriate context detected. Trigger with relevant phrases based on skill purpose.
threat-model-creator
Threat Model Creator - Auto-activating skill for Security Advanced. Triggers on: threat model creator, threat model creator Part of the Security Advanced skill category.
tensorflow-savedmodel-creator
Tensorflow Savedmodel Creator - Auto-activating skill for ML Deployment. Triggers on: tensorflow savedmodel creator, tensorflow savedmodel creator Part of the ML Deployment skill category.
tensorflow-model-trainer
Tensorflow Model Trainer - Auto-activating skill for ML Training. Triggers on: tensorflow model trainer, tensorflow model trainer Part of the ML Training skill category.
sequelize-model-creator
Sequelize Model Creator - Auto-activating skill for Backend Development. Triggers on: sequelize model creator, sequelize model creator Part of the Backend Development skill category.
pytorch-model-trainer
Pytorch Model Trainer - Auto-activating skill for ML Training. Triggers on: pytorch model trainer, pytorch model trainer Part of the ML Training skill category.
modeling-nosql-data
This skill enables Claude to design NoSQL data models. It activates when the user requests assistance with NoSQL database design, including schema creation, data modeling for MongoDB or DynamoDB, or defining document structures. Use this skill when the user mentions "NoSQL data model", "design MongoDB schema", "create DynamoDB table", or similar phrases related to NoSQL database architecture. It assists in understanding NoSQL modeling principles like embedding vs. referencing, access pattern optimization, and sharding key selection.
model-versioning-manager
Model Versioning Manager - Auto-activating skill for ML Deployment. Triggers on: model versioning manager, model versioning manager Part of the ML Deployment skill category.
model-registry-manager
Model Registry Manager - Auto-activating skill for ML Deployment. Triggers on: model registry manager, model registry manager Part of the ML Deployment skill category.