elevenlabs-hello-world
Generate your first ElevenLabs text-to-speech audio file. Use when starting a new ElevenLabs integration, testing your setup, or learning basic TTS API patterns. Trigger: "elevenlabs hello world", "elevenlabs example", "elevenlabs quick start", "first elevenlabs TTS", "text to speech demo".
Best use case
elevenlabs-hello-world is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using elevenlabs-hello-world can expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/elevenlabs-hello-world/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How elevenlabs-hello-world Compares
| Feature / Agent | elevenlabs-hello-world | Standard Approach |
|---|---|---|
| Platform Support | Claude Code, Cursor, Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Low — a single SKILL.md file | N/A |
Frequently Asked Questions
What does this skill do?
It generates your first ElevenLabs text-to-speech audio file using the core TTS API. Use it when starting a new ElevenLabs integration, testing your setup, or learning basic TTS API patterns.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# ElevenLabs Hello World
## Overview
Generate speech from text using the ElevenLabs TTS API. This skill covers the core `POST /v1/text-to-speech/{voice_id}` endpoint with real voice IDs, model selection, and audio output.
## Prerequisites
- Completed `elevenlabs-install-auth` setup
- Valid API key in `ELEVENLABS_API_KEY`
## Instructions
### Step 1: Text-to-Speech with the SDK
**TypeScript (recommended):**
```typescript
import { ElevenLabsClient } from "@elevenlabs/elevenlabs-js";
import { createWriteStream } from "fs";
import { Readable } from "stream";
import { pipeline } from "stream/promises";
const client = new ElevenLabsClient(); // reads ELEVENLABS_API_KEY from the environment

async function generateSpeech() {
  // Use a pre-made voice — "Rachel" is a default voice available on all accounts
  // Find voice IDs via: GET /v1/voices
  const audio = await client.textToSpeech.convert("21m00Tcm4TlvDq8ikWAM", {
    text: "Hello! This is your first ElevenLabs text-to-speech generation.",
    modelId: "eleven_multilingual_v2", // Best quality, 29 languages
    voiceSettings: {
      stability: 0.5, // 0-1: lower = more expressive
      similarityBoost: 0.75, // 0-1: higher = closer to original voice
      style: 0.0, // 0-1: higher = more dramatic (adds latency)
      speed: 1.0, // 0.7-1.2: speech speed multiplier
    },
  });

  // audio is a web ReadableStream — pipe it to a file
  await pipeline(
    Readable.fromWeb(audio as any),
    createWriteStream("output.mp3")
  );
  console.log("Audio saved to output.mp3");
}

generateSpeech().catch(console.error);
```
**Python:**
```python
from elevenlabs.client import ElevenLabs

client = ElevenLabs()  # reads ELEVENLABS_API_KEY from the environment

audio = client.text_to_speech.convert(
    voice_id="21m00Tcm4TlvDq8ikWAM",  # Rachel
    text="Hello! This is your first ElevenLabs text-to-speech generation.",
    model_id="eleven_multilingual_v2",
    voice_settings={
        "stability": 0.5,
        "similarity_boost": 0.75,
        "style": 0.0,
    },
)

# convert() returns a generator of audio byte chunks
with open("output.mp3", "wb") as f:
    for chunk in audio:
        f.write(chunk)

print("Audio saved to output.mp3")
```
### Step 2: Using cURL (Raw REST API)
```bash
curl -X POST "https://api.elevenlabs.io/v1/text-to-speech/21m00Tcm4TlvDq8ikWAM" \
  -H "xi-api-key: ${ELEVENLABS_API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Hello from the ElevenLabs API!",
    "model_id": "eleven_multilingual_v2",
    "voice_settings": {
      "stability": 0.5,
      "similarity_boost": 0.75
    }
  }' \
  --output output.mp3
```
### Step 3: Streaming TTS (Low Latency)
For real-time playback, use the streaming endpoint:
```typescript
import { ElevenLabsClient } from "@elevenlabs/elevenlabs-js";
import { createWriteStream } from "fs";

const client = new ElevenLabsClient();

async function streamSpeech() {
  const audioStream = await client.textToSpeech.stream("21m00Tcm4TlvDq8ikWAM", {
    text: "This audio is streamed in real-time for low-latency playback.",
    modelId: "eleven_flash_v2_5", // Optimized for streaming (~75ms latency)
    outputFormat: "mp3_22050_32", // codec_sampleRate_bitrate
  });

  // Stream chunks arrive as they're generated
  const writer = createWriteStream("streamed.mp3");
  for await (const chunk of audioStream) {
    writer.write(chunk);
  }
  writer.end();
  console.log("Streamed audio saved to streamed.mp3");
}

streamSpeech().catch(console.error);
```
## Available Models
| Model ID | Quality | Latency | Languages | Cost (credits/char) |
|----------|---------|---------|-----------|---------------------|
| `eleven_v3` | Highest expressiveness | Medium | 70+ | 1.0 |
| `eleven_multilingual_v2` | High quality, emotional | Medium | 29 | 1.0 |
| `eleven_flash_v2_5` | Good, ultra-fast | ~75ms | 32 | 0.5 |
| `eleven_turbo_v2_5` | Good, fast | Low | 32 | 0.5 |
| `eleven_monolingual_v1` | English only | Low | 1 | 0.5 |
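The credits/char column makes it easy to estimate the cost of a request before sending it. A minimal sketch, using the per-character rates from the table above (the `CREDITS_PER_CHAR` mapping is illustrative — verify the figures against current ElevenLabs pricing):

```python
# Illustrative cost estimate based on the credits-per-character rates listed above.
# These rates are assumptions from this document's table, not fetched from the API.
CREDITS_PER_CHAR = {
    "eleven_v3": 1.0,
    "eleven_multilingual_v2": 1.0,
    "eleven_flash_v2_5": 0.5,
    "eleven_turbo_v2_5": 0.5,
    "eleven_monolingual_v1": 0.5,
}

def estimate_credits(text: str, model_id: str) -> float:
    """Rough credit estimate: character count times the per-character rate."""
    return len(text) * CREDITS_PER_CHAR[model_id]

# "Hello, world!" is 13 characters, so flash costs half of multilingual:
print(estimate_credits("Hello, world!", "eleven_flash_v2_5"))        # 6.5
print(estimate_credits("Hello, world!", "eleven_multilingual_v2"))   # 13.0
```

This is handy for batch jobs: sum the estimate over all inputs before deciding between a 1.0-credit quality model and a 0.5-credit fast model.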
## Common Default Voice IDs
| Voice | ID | Style |
|-------|----|-------|
| Rachel | `21m00Tcm4TlvDq8ikWAM` | Calm, narration |
| Domi | `AZnzlk1XvdvUeBnXmlld` | Strong, assertive |
| Bella | `EXAVITQu4vr4xnSDxMaL` | Soft, warm |
| Antoni | `ErXwobaYiN019PkySvjV` | Well-rounded, male |
| Josh | `TxGEqnHWrfWFTfGW9XjX` | Deep, narrative |
## Output Formats
Specified as `codec_sampleRate_bitrate`:
- `mp3_44100_128` (default, high quality)
- `mp3_22050_32` (smaller file, streaming)
- `pcm_16000` (raw PCM for processing)
- `pcm_44100` (high-quality raw)
- `ulaw_8000` (telephony)
## Error Handling
| Error | HTTP | Cause | Solution |
|-------|------|-------|----------|
| `voice_not_found` | 404 | Invalid voice_id | Use `GET /v1/voices` to list valid IDs |
| `invalid_api_key` | 401 | Bad or missing key | Check `ELEVENLABS_API_KEY` env var |
| `model_not_found` | 400 | Wrong model_id string | Use exact IDs from models table |
| `text_too_long` | 400 | Exceeds 5,000 chars | Split into chunks; use streaming for long text |
| `quota_exceeded` | 401 | Monthly character limit hit | Check usage at elevenlabs.io/app/usage |
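The suggested fix for `text_too_long` is to split the input before sending it. A minimal sketch of that chunking step (`chunk_text` is an illustrative helper, not an SDK function), which prefers to break at sentence boundaries and falls back to a hard split:

```python
# Illustrative chunker for the text_too_long error: keep each request
# under the 5,000-character limit, breaking at ". " where possible.
def chunk_text(text: str, limit: int = 5000) -> list[str]:
    chunks = []
    while len(text) > limit:
        cut = text.rfind(". ", 0, limit)
        if cut == -1:
            cut = limit      # no sentence boundary found — hard split
        else:
            cut += 1         # keep the period with its chunk
        chunks.append(text[:cut].strip())
        text = text[cut:].lstrip()
    if text:
        chunks.append(text.strip())
    return chunks

long_text = "An example sentence. " * 400  # well over 5,000 characters
pieces = chunk_text(long_text)
print(len(pieces), "chunks; longest:", max(len(p) for p in pieces))
```

Each chunk can then be sent as its own TTS request, and the resulting audio files concatenated; for very long text, the streaming endpoint is usually the better fit.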
## Resources
- [TTS API Reference](https://elevenlabs.io/docs/api-reference/text-to-speech/convert)
- [Stream API Reference](https://elevenlabs.io/docs/api-reference/text-to-speech/stream)
- [Models Overview](https://elevenlabs.io/docs/overview/models)
- [Voice Library](https://elevenlabs.io/voice-library)
## Next Steps
Proceed to `elevenlabs-local-dev-loop` for development workflow setup, or `elevenlabs-core-workflow-a` for voice cloning.
Related Skills
exa-hello-world
Create a minimal working Exa search example with real results. Use when starting a new Exa integration, testing your setup, or learning basic search, searchAndContents, and findSimilar patterns. Trigger with phrases like "exa hello world", "exa example", "exa quick start", "simple exa search", "first exa query".
evernote-hello-world
Create a minimal working Evernote example. Use when starting a new Evernote integration, testing your setup, or learning basic Evernote API patterns. Trigger with phrases like "evernote hello world", "evernote example", "evernote quick start", "simple evernote code", "create first note".
elevenlabs-webhooks-events
Implement ElevenLabs webhook HMAC signature verification and event handling. Use when setting up webhook endpoints for transcription completion, call recording, or agent conversation events from ElevenLabs. Trigger: "elevenlabs webhook", "elevenlabs events", "elevenlabs webhook signature", "handle elevenlabs notifications", "elevenlabs post-call webhook", "elevenlabs transcription webhook".
elevenlabs-upgrade-migration
Upgrade ElevenLabs SDK versions and migrate between API model generations. Use when upgrading the elevenlabs-js or elevenlabs Python SDK, migrating from v1 to v2 models, or handling deprecations. Trigger: "upgrade elevenlabs", "elevenlabs migration", "elevenlabs breaking changes", "update elevenlabs SDK", "migrate elevenlabs model", "eleven_v3 migration".
elevenlabs-security-basics
Apply ElevenLabs security best practices for API keys, webhook HMAC validation, and voice data protection. Use when securing API keys, validating webhook signatures, or auditing ElevenLabs security configuration. Trigger: "elevenlabs security", "elevenlabs secrets", "secure elevenlabs", "elevenlabs API key security", "elevenlabs webhook signature", "elevenlabs HMAC".
elevenlabs-sdk-patterns
Apply production-ready ElevenLabs SDK patterns for TypeScript and Python. Use when implementing ElevenLabs integrations, refactoring SDK usage, or establishing team coding standards for audio AI applications. Trigger: "elevenlabs SDK patterns", "elevenlabs best practices", "elevenlabs code patterns", "idiomatic elevenlabs", "elevenlabs typescript".
elevenlabs-reference-architecture
Implement ElevenLabs reference architecture for production TTS/voice applications. Use when designing new ElevenLabs integrations, reviewing project structure, or building a scalable audio generation service. Trigger: "elevenlabs architecture", "elevenlabs project structure", "how to organize elevenlabs", "TTS service architecture", "elevenlabs design patterns", "voice API architecture".
elevenlabs-rate-limits
Implement ElevenLabs rate limiting, concurrency queuing, and backoff patterns. Use when handling 429 errors, implementing retry logic, or managing concurrent TTS request throughput. Trigger: "elevenlabs rate limit", "elevenlabs throttling", "elevenlabs 429", "elevenlabs retry", "elevenlabs backoff", "elevenlabs concurrent requests".
elevenlabs-prod-checklist
Execute ElevenLabs production deployment checklist with health checks and rollback. Use when deploying TTS/voice integrations to production, preparing for launch, or implementing go-live procedures for ElevenLabs-powered apps. Trigger: "elevenlabs production", "deploy elevenlabs", "elevenlabs go-live", "elevenlabs launch checklist", "production TTS".
elevenlabs-performance-tuning
Optimize ElevenLabs TTS latency with model selection, streaming, caching, and audio format tuning. Use when experiencing slow TTS responses, implementing real-time voice features, or optimizing audio generation throughput. Trigger: "elevenlabs performance", "optimize elevenlabs", "elevenlabs latency", "elevenlabs slow", "fast TTS", "reduce elevenlabs latency", "TTS streaming".
elevenlabs-local-dev-loop
Configure local ElevenLabs development with mocking, hot reload, and audio testing. Use when setting up a dev environment for TTS/voice projects, configuring test workflows, or building a fast iteration cycle with ElevenLabs audio. Trigger: "elevenlabs dev setup", "elevenlabs local development", "elevenlabs dev environment", "develop with elevenlabs", "test elevenlabs locally".
elevenlabs-install-auth
Install and configure ElevenLabs SDK authentication for Node.js or Python. Use when setting up a new ElevenLabs project, configuring API keys, or initializing the elevenlabs npm/pip package. Trigger: "install elevenlabs", "setup elevenlabs", "elevenlabs auth", "configure elevenlabs API key", "elevenlabs credentials".