elevenlabs-core-workflow-b
Implement ElevenLabs speech-to-speech, sound effects, audio isolation, and speech-to-text. Use when converting voice to another voice, generating sound effects from text, removing background noise, or transcribing audio. Trigger: "elevenlabs speech to speech", "voice changer", "sound effects", "audio isolation", "remove background noise", "elevenlabs transcribe".
Best use case
elevenlabs-core-workflow-b is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using elevenlabs-core-workflow-b should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/elevenlabs-core-workflow-b/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
How elevenlabs-core-workflow-b Compares
| Feature / Agent | elevenlabs-core-workflow-b | Standard Approach |
|---|---|---|
| Platform Support | Claude Code, Cursor, Codex | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Low (single SKILL.md file) | N/A |
Frequently Asked Questions
What does this skill do?
Implement ElevenLabs speech-to-speech, sound effects, audio isolation, and speech-to-text. Use when converting voice to another voice, generating sound effects from text, removing background noise, or transcribing audio. Trigger: "elevenlabs speech to speech", "voice changer", "sound effects", "audio isolation", "remove background noise", "elevenlabs transcribe".
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# ElevenLabs Core Workflow B — Speech-to-Speech, Sound Effects & Audio Isolation
## Overview
Secondary ElevenLabs workflows beyond TTS: (1) Speech-to-Speech voice conversion, (2) Sound Effects generation from text descriptions, (3) Audio Isolation for noise removal, and (4) Speech-to-Text transcription.
## Prerequisites
- Completed `elevenlabs-install-auth` setup (a quick sanity check is sketched after this list)
- For STS: source audio file in MP3/WAV/M4A format
- For audio isolation: noisy audio file to clean
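If the `elevenlabs-install-auth` setup is complete, the client can be constructed directly from the environment. A minimal sanity check, assuming the API key is exported as `ELEVENLABS_API_KEY` (the same variable the cURL examples below use):
```typescript
import { ElevenLabsClient } from "@elevenlabs/elevenlabs-js";

// Fail fast if the key from elevenlabs-install-auth is missing.
if (!process.env.ELEVENLABS_API_KEY) {
  throw new Error("ELEVENLABS_API_KEY is not set; complete elevenlabs-install-auth first");
}

// Passing the key explicitly makes the dependency visible; the SDK can also
// pick it up from the environment on its own.
const client = new ElevenLabsClient({
  apiKey: process.env.ELEVENLABS_API_KEY,
});
```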
## Instructions
### Step 1: Speech-to-Speech (Voice Changer)
Transform audio from one voice to another using `POST /v1/speech-to-speech/{voice_id}`:
```typescript
import { ElevenLabsClient } from "@elevenlabs/elevenlabs-js";
import { createReadStream, createWriteStream } from "fs";
import { Readable } from "stream";
import { pipeline } from "stream/promises";
const client = new ElevenLabsClient();
async function speechToSpeech(
  sourceAudioPath: string,
  targetVoiceId: string,
  outputPath: string
) {
  const audio = await client.speechToSpeech.convert(targetVoiceId, {
    audio: createReadStream(sourceAudioPath),
    model_id: "eleven_english_sts_v2", // STS-specific model
    voice_settings: JSON.stringify({
      stability: 0.5,
      similarity_boost: 0.8,
      style: 0.0,
    }),
    remove_background_noise: true, // Built-in noise removal
  });
  await pipeline(Readable.fromWeb(audio as any), createWriteStream(outputPath));
  console.log(`Voice-converted audio saved to ${outputPath}`);
}

// Convert your voice recording to sound like "Rachel"
await speechToSpeech(
  "my_recording.mp3",
  "21m00Tcm4TlvDq8ikWAM",
  "converted.mp3"
);
```
**cURL equivalent:**
```bash
curl -X POST "https://api.elevenlabs.io/v1/speech-to-speech/21m00Tcm4TlvDq8ikWAM" \
-H "xi-api-key: ${ELEVENLABS_API_KEY}" \
-F "audio=@my_recording.mp3" \
-F "model_id=eleven_english_sts_v2" \
-F 'voice_settings={"stability":0.5,"similarity_boost":0.8}' \
-F "remove_background_noise=true" \
--output converted.mp3
```
### Step 2: Sound Effects Generation
Generate cinematic sound effects from text descriptions using `POST /v1/sound-generation`:
```typescript
async function generateSoundEffect(
  description: string,
  outputPath: string,
  options?: {
    duration?: number; // 0.5-30 seconds (omit for auto)
    promptInfluence?: number; // 0-1 (default 0.3, higher = follows prompt more closely)
    loop?: boolean; // Seamless looping (default false)
  }
) {
  const audio = await client.textToSoundEffects.convert({
    text: description,
    duration_seconds: options?.duration,
    prompt_influence: options?.promptInfluence ?? 0.3,
    loop: options?.loop, // forward the seamless-loop flag declared above
    // model_id: "eleven_text_to_sound_v2", // default
  });
  await pipeline(Readable.fromWeb(audio as any), createWriteStream(outputPath));
  console.log(`Sound effect saved to ${outputPath}`);
}

// Generate various sound effects
await generateSoundEffect(
  "Heavy rain on a tin roof with distant thunder",
  "rain.mp3",
  { duration: 10, promptInfluence: 0.6 }
);
await generateSoundEffect(
  "Sci-fi laser gun firing three quick bursts",
  "laser.mp3",
  { duration: 3, promptInfluence: 0.8 }
);
await generateSoundEffect(
  "Gentle forest ambiance with birds chirping",
  "forest_loop.mp3",
  { duration: 15, loop: true } // Seamless loop for background audio
);
```
**cURL equivalent:**
```bash
curl -X POST "https://api.elevenlabs.io/v1/sound-generation" \
-H "xi-api-key: ${ELEVENLABS_API_KEY}" \
-H "Content-Type: application/json" \
-d '{
"text": "Heavy rain on a tin roof with distant thunder",
"duration_seconds": 10,
"prompt_influence": 0.6
}' \
--output rain.mp3
```
### Step 3: Audio Isolation (Voice Isolator)
Remove background noise from audio using `POST /v1/audio-isolation`:
```typescript
async function isolateVoice(
  noisyAudioPath: string,
  cleanOutputPath: string
) {
  const cleanAudio = await client.audioIsolation.audioIsolation({
    audio: createReadStream(noisyAudioPath),
  });
  await pipeline(
    Readable.fromWeb(cleanAudio as any),
    createWriteStream(cleanOutputPath)
  );
  console.log(`Clean audio saved to ${cleanOutputPath}`);
}

// Remove background noise from a recording
await isolateVoice("noisy_interview.mp3", "clean_interview.mp3");
```
**Streaming variant** for large files (`POST /v1/audio-isolation/stream`):
```typescript
async function isolateVoiceStreaming(
  noisyAudioPath: string,
  cleanOutputPath: string
) {
  const stream = await client.audioIsolation.audioIsolationStream({
    audio: createReadStream(noisyAudioPath),
  });
  const writer = createWriteStream(cleanOutputPath);
  for await (const chunk of stream) {
    writer.write(chunk);
  }
  writer.end();
}
```
**cURL equivalent:**
```bash
curl -X POST "https://api.elevenlabs.io/v1/audio-isolation" \
-H "xi-api-key: ${ELEVENLABS_API_KEY}" \
-F "audio=@noisy_interview.mp3" \
--output clean_interview.mp3
```
### Step 4: Speech-to-Text (Transcription)
Transcribe audio with speaker diarization using `POST /v1/speech-to-text`:
```typescript
async function transcribeAudio(audioPath: string) {
  const result = await client.speechToText.convert({
    audio: createReadStream(audioPath),
    model_id: "scribe_v1", // ElevenLabs' STT model
    // language_code: "en", // Optional: force language
    // diarize: true, // Enable speaker detection
    // timestamps_granularity: "word", // "word" or "character"
  });
  console.log("Transcription:", result.text);

  // Word-level timestamps
  if (result.words) {
    for (const word of result.words) {
      console.log(`[${word.start.toFixed(2)}-${word.end.toFixed(2)}] ${word.text}`);
    }
  }
  return result;
}
await transcribeAudio("podcast_episode.mp3");
```
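With `diarize: true` enabled, each word in the response also carries a speaker label, which makes it easy to reassemble the transcript into speaker turns. A short sketch under that assumption (the per-word field is `speaker_id` in the Speech-to-Text API response; exact field names may vary by SDK version):
```typescript
async function transcribeWithSpeakers(audioPath: string) {
  const result = await client.speechToText.convert({
    audio: createReadStream(audioPath),
    model_id: "scribe_v1",
    diarize: true, // label each word with the speaker who said it
  });

  // Merge consecutive words from the same speaker into readable turns.
  const turns: { speaker: string; text: string }[] = [];
  for (const word of result.words ?? []) {
    const speaker = (word as any).speaker_id ?? "unknown";
    const last = turns[turns.length - 1];
    if (last && last.speaker === speaker) {
      last.text += ` ${word.text}`;
    } else {
      turns.push({ speaker, text: word.text });
    }
  }

  for (const turn of turns) {
    console.log(`${turn.speaker}: ${turn.text}`);
  }
  return turns;
}

await transcribeWithSpeakers("panel_discussion.mp3");
```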
## API Endpoint Summary
| Feature | Method | Endpoint | Billing |
|---------|--------|----------|---------|
| Speech-to-Speech | POST | `/v1/speech-to-speech/{voice_id}` | Per character |
| Sound Effects | POST | `/v1/sound-generation` | Per generation |
| Audio Isolation | POST | `/v1/audio-isolation` | 1,000 chars/min of audio |
| Audio Isolation Stream | POST | `/v1/audio-isolation/stream` | 1,000 chars/min of audio |
| Speech-to-Text | POST | `/v1/speech-to-text` | Per audio minute |
## Sound Effect Tips
- Be specific: "wooden door creaking slowly open in a quiet room" beats "door sound"
- Specify quantity: "three quick gunshots" vs "gunshots"
- Set mood: "eerie", "cheerful", "aggressive" changes the output character
- Use `prompt_influence: 0.6-0.8` for precise results, `0.2-0.4` for creative variation (see the sketch after this list)
- Max duration: 30 seconds per generation
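Applied with the `generateSoundEffect` helper from Step 2, here is the same door effect described three ways (the prompts themselves are only illustrative):
```typescript
// Vague: the model has to guess material, pacing, and mood.
await generateSoundEffect("door sound", "door_vague.mp3");

// Specific + high prompt_influence: material, pacing, and setting are all
// stated, so the output tracks the description closely.
await generateSoundEffect(
  "old wooden door creaking slowly open in a quiet, echoing hallway",
  "door_precise.mp3",
  { duration: 4, promptInfluence: 0.75 }
);

// Mood + lower prompt_influence: leaves room for creative variation.
await generateSoundEffect(
  "old wooden door creaking open, eerie horror-movie mood",
  "door_creative.mp3",
  { duration: 4, promptInfluence: 0.3 }
);
```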
## Audio Isolation Limits
| Aspect | Limit |
|--------|-------|
| Max file size | 500 MB |
| Max duration | 1 hour |
| Supported formats | MP3, WAV, M4A, FLAC, OGG, WEBM |
| PCM optimization | Use `file_format: "pcm_s16le_16"` for lowest latency |
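A minimal pre-upload check against these limits before calling the `isolateVoice` helper from Step 3; the thresholds mirror the table above:
```typescript
import { stat } from "fs/promises";
import { extname } from "path";

const MAX_ISOLATION_BYTES = 500 * 1024 * 1024; // 500 MB limit from the table
const SUPPORTED_EXTENSIONS = [".mp3", ".wav", ".m4a", ".flac", ".ogg", ".webm"];

async function isolateVoiceChecked(noisyAudioPath: string, cleanOutputPath: string) {
  const { size } = await stat(noisyAudioPath);
  if (size > MAX_ISOLATION_BYTES) {
    throw new Error(`File is ${(size / 1_000_000).toFixed(0)} MB; compress or split it below 500 MB`);
  }
  const ext = extname(noisyAudioPath).toLowerCase();
  if (!SUPPORTED_EXTENSIONS.includes(ext)) {
    throw new Error(`Unsupported format "${ext}"; convert to MP3, WAV, M4A, FLAC, OGG, or WEBM`);
  }
  await isolateVoice(noisyAudioPath, cleanOutputPath); // helper from Step 3
}
```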
## Error Handling
| Error | HTTP | Cause | Solution |
|-------|------|-------|----------|
| `model_can_not_do_voice_conversion` | 400 | Wrong model for STS | Use `eleven_english_sts_v2` |
| `audio_too_short` | 400 | STS input under 1 second | Use longer audio clip |
| `audio_too_long` | 400 | STS input over limit | Trim to under 5 minutes |
| `invalid_sound_prompt` | 400 | Nonsensical SFX description | Write descriptive, specific prompts |
| `file_too_large` | 413 | Audio isolation over 500MB | Compress or split the file |
| `quota_exceeded` | 401 | Character/generation limit hit | Check usage dashboard |
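A hedged wrapper for mapping these errors around any of the calls above. The exact error object shape depends on the SDK version, so this sketch inspects a generic status code and otherwise rethrows:
```typescript
async function withElevenLabsErrorHandling<T>(operation: () => Promise<T>): Promise<T> {
  try {
    return await operation();
  } catch (err: any) {
    const status = err?.statusCode ?? err?.status; // field name varies by SDK version
    if (status === 401) {
      throw new Error("Quota exceeded or invalid API key: check the usage dashboard");
    }
    if (status === 413) {
      throw new Error("File too large: compress or split the audio before retrying");
    }
    if (status === 400) {
      // Covers wrong-model voice conversion, too-short/too-long clips,
      // and unusable sound-effect prompts; the response body names the cause.
      throw new Error(`Bad request: ${err?.message ?? "see response body for details"}`);
    }
    throw err;
  }
}

// Usage: wrap any call from Steps 1-4.
await withElevenLabsErrorHandling(() =>
  speechToSpeech("my_recording.mp3", "21m00Tcm4TlvDq8ikWAM", "converted.mp3")
);
```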
## Resources
- [Speech-to-Speech API](https://elevenlabs.io/docs/api-reference/speech-to-speech/convert)
- [Sound Effects API](https://elevenlabs.io/docs/api-reference/text-to-sound-effects/convert)
- [Audio Isolation API](https://elevenlabs.io/docs/api-reference/audio-isolation/convert)
- [Speech-to-Text API](https://elevenlabs.io/docs/api-reference/speech-to-text/convert)
## Next Steps
For common errors, see `elevenlabs-common-errors`. For SDK patterns, see `elevenlabs-sdk-patterns`.
Related Skills
step-functions-workflow
Step Functions Workflow - Auto-activating skill for AWS Skills. Triggers on: "step functions workflow". Part of the AWS Skills skill category.
sprint-workflow
This skill should be used when the user asks about "how sprints work", "sprint phases", "iteration workflow", "convergent development", "sprint lifecycle", "when to use sprints", or wants to understand the sprint execution model and its convergent diffusion approach.
scorecard-marketing
Build quiz and assessment funnels that generate qualified leads at 30-50% conversion. Use when the user mentions "lead magnet", "quiz funnel", "assessment tool", "lead generation", or "score-based segmentation". Covers question design, dynamic results by tier, and automated follow-up sequences. For landing page conversion, see cro-methodology. For full marketing plans, see one-page-marketing. Trigger with 'scorecard', 'marketing'.
n8n-workflow-generator
N8N Workflow Generator - Auto-activating skill for Business Automation. Triggers on: "n8n workflow generator". Part of the Business Automation skill category.
jira-workflow-creator
Jira Workflow Creator - Auto-activating skill for Enterprise Workflows. Triggers on: "jira workflow creator". Part of the Enterprise Workflows skill category.
building-gitops-workflows
This skill enables Claude to construct GitOps workflows using ArgoCD and Flux. It is designed to generate production-ready configurations, implement best practices, and ensure a security-first approach for Kubernetes deployments. Use this skill when the user explicitly requests "GitOps workflow", "ArgoCD", "Flux", or asks for help with setting up a continuous delivery pipeline using GitOps principles. The skill will generate the necessary configuration files and setup code based on the user's specific requirements and infrastructure.
git-workflow-manager
Git Workflow Manager - Auto-activating skill for DevOps Basics. Triggers on: "git workflow manager". Part of the DevOps Basics skill category.
fathom-core-workflow-b
Sync Fathom meeting data to CRM and build automated follow-up workflows. Use when integrating Fathom with Salesforce, HubSpot, or custom CRMs, or creating automated post-meeting email summaries. Trigger with phrases like "fathom crm sync", "fathom salesforce", "fathom follow-up", "fathom post-meeting workflow".
fathom-core-workflow-a
Build a meeting analytics pipeline with Fathom transcripts and summaries. Use when extracting insights from meetings, building CRM sync, or creating automated meeting follow-up workflows. Trigger with phrases like "fathom analytics", "fathom meeting pipeline", "fathom transcript analysis", "fathom action items sync".
exa-core-workflow-b
Execute Exa findSimilar, getContents, answer, and streaming answer workflows. Use when finding pages similar to a URL, retrieving content for known URLs, or getting AI-generated answers with citations. Trigger with phrases like "exa find similar", "exa get contents", "exa answer", "exa similarity search", "findSimilarAndContents".
exa-core-workflow-a
Execute Exa neural search with contents, date filters, and domain scoping. Use when building search features, implementing RAG context retrieval, or querying the web with semantic understanding. Trigger with phrases like "exa search", "exa neural search", "search with exa", "exa searchAndContents", "exa query".
evernote-core-workflow-b
Execute Evernote secondary workflow: Search and Retrieval. Use when implementing search features, finding notes, filtering content, or building search interfaces. Trigger with phrases like "search evernote", "find evernote notes", "evernote search", "query evernote".