elevenlabs-rate-limits

Implement ElevenLabs rate limiting, concurrency queuing, and backoff patterns. Use when handling 429 errors, implementing retry logic, or managing concurrent TTS request throughput. Trigger: "elevenlabs rate limit", "elevenlabs throttling", "elevenlabs 429", "elevenlabs retry", "elevenlabs backoff", "elevenlabs concurrent requests".


Best use case

elevenlabs-rate-limits is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using elevenlabs-rate-limits should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/elevenlabs-rate-limits/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/jeremylongshore/claude-code-plugins-plus-skills/elevenlabs-rate-limits/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/elevenlabs-rate-limits/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How elevenlabs-rate-limits Compares

| Feature / Agent | elevenlabs-rate-limits | Standard Approach |
|-------------------------|------------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

It implements ElevenLabs rate limiting, concurrency queuing, and backoff patterns: queuing requests to stay within plan concurrency limits, retrying 429 responses with exponential backoff, and monitoring character quota for concurrent TTS workloads.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# ElevenLabs Rate Limits

## Overview

Handle ElevenLabs rate limits with plan-aware concurrency queuing, exponential backoff, and quota monitoring. ElevenLabs uses two rate limit mechanisms: concurrent request limits (per plan) and system-level throttling.

## Prerequisites

- ElevenLabs SDK installed
- Understanding of your subscription plan's limits
- `p-queue` package (recommended): `npm install p-queue`

## Instructions

### Step 1: Understand the Two 429 Error Types

ElevenLabs returns HTTP 429 for two different reasons:

| 429 Variant | Response Body | Cause | Strategy |
|-------------|--------------|-------|----------|
| `too_many_concurrent_requests` | `{"detail":{"status":"too_many_concurrent_requests"}}` | Exceeded plan concurrency | Queue requests, don't backoff |
| `system_busy` | `{"detail":{"status":"system_busy"}}` | Server overload | Exponential backoff |
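
A small helper can route a caught 429 to the right strategy from the table above. The error shape here (a `statusCode` plus a parsed `body.detail.status`) is an assumption about how the SDK surfaces errors, not a guaranteed type — adjust the field access to match what you actually catch:

```typescript
// classify-429.ts — sketch: map a caught API error to a retry strategy.
type RetryStrategy = "queue" | "backoff" | "fail";

interface ApiError {
  statusCode?: number;
  body?: { detail?: { status?: string } };
}

export function classify429(error: ApiError): RetryStrategy {
  if (error.statusCode !== 429) return "fail";
  const status = error.body?.detail?.status;
  if (status === "too_many_concurrent_requests") return "queue"; // plan limit: wait for a slot
  if (status === "system_busy") return "backoff"; // server load: exponential backoff
  return "backoff"; // unknown 429 variant: treat conservatively
}
```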

### Step 2: Plan Concurrency Limits

| Plan | Max Concurrent Requests | Characters/Month |
|------|------------------------|-------------------|
| Free | 2 | 10,000 |
| Starter | 3 | 30,000 |
| Creator | 5 | 100,000 |
| Pro | 10 | 500,000 |
| Scale | 15 | 2,000,000 |
| Business | 15 | Custom |
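
The concurrency cap also bounds throughput: with N concurrent slots and an average request latency of L seconds, steady-state throughput is roughly N / L requests per second. The latency you plug in is illustrative — it varies by model and text length, so measure your own:

```typescript
// Back-of-envelope throughput estimate from plan concurrency and observed latency.
export function estimateThroughputPerSecond(
  maxConcurrent: number,
  avgLatencySeconds: number
): number {
  if (maxConcurrent <= 0 || avgLatencySeconds <= 0) {
    throw new Error("maxConcurrent and avgLatencySeconds must be positive");
  }
  return maxConcurrent / avgLatencySeconds;
}
```

For example, a Pro plan (10 slots) at ~2 s per request sustains about 5 requests per second; a deeper queue only adds latency, not throughput.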

### Step 3: Concurrency-Aware Request Queue

```typescript
// src/elevenlabs/rate-limiter.ts
import PQueue from "p-queue";

type ElevenLabsPlan = "free" | "starter" | "creator" | "pro" | "scale" | "business";

const CONCURRENCY_LIMITS: Record<ElevenLabsPlan, number> = {
  free: 2,
  starter: 3,
  creator: 5,
  pro: 10,
  scale: 15,
  business: 15,
};

export function createRequestQueue(plan: ElevenLabsPlan) {
  const concurrency = CONCURRENCY_LIMITS[plan];

  const queue = new PQueue({
    concurrency,
    // Each queued request adds ~50ms to response time
    // so keep queue depth reasonable
    timeout: 120_000,  // 2 minute timeout per request
    throwOnTimeout: true,
  });

  queue.on("error", (error) => {
    console.error("[ElevenLabs Queue] Request failed:", error.message);
  });

  return queue;
}

// Usage — assumes `client` is an ElevenLabsClient instance and `texts` is a string[]
const queue = createRequestQueue("pro"); // 10 concurrent

async function generateWithQueue(voiceId: string, text: string) {
  return queue.add(async () => {
    return client.textToSpeech.convert(voiceId, {
      text,
      model_id: "eleven_flash_v2_5",
    });
  });
}

// e.g. with 20 texts, all 20 requests run with at most 10 in flight
const results = await Promise.all(
  texts.map(text => generateWithQueue("21m00Tcm4TlvDq8ikWAM", text))
);
```
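
If you would rather not take on the `p-queue` dependency, the same concurrency gate can be sketched as a minimal promise semaphore. This is an illustration of the idea, not a drop-in replacement — PQueue adds timeouts, events, and priority on top:

```typescript
// Minimal promise semaphore: at most `maxConcurrent` task bodies run at once.
export function createLimiter(maxConcurrent: number) {
  let active = 0;
  const waiting: Array<() => void> = [];

  const release = () => {
    const next = waiting.shift();
    if (next) {
      next(); // hand the slot directly to the next waiter; `active` stays the same
    } else {
      active--;
    }
  };

  return async function run<T>(task: () => Promise<T>): Promise<T> {
    if (active >= maxConcurrent) {
      // Park until a finishing task hands us its slot.
      await new Promise<void>((resolve) => waiting.push(resolve));
    } else {
      active++;
    }
    try {
      return await task();
    } finally {
      release();
    }
  };
}
```

Handing the slot directly to the next waiter (instead of decrementing and re-incrementing) avoids a race where a fresh caller could sneak in between the decrement and the waiter resuming.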

### Step 4: Exponential Backoff for system_busy

```typescript
// src/elevenlabs/backoff.ts
export async function withBackoff<T>(
  operation: () => Promise<T>,
  config = {
    maxRetries: 5,
    baseDelayMs: 1000,
    maxDelayMs: 32_000,
    jitterMs: 500,
  }
): Promise<T> {
  for (let attempt = 0; attempt <= config.maxRetries; attempt++) {
    try {
      return await operation();
    } catch (error: any) {
      const status = error.statusCode || error.status;
      const errorType = error.body?.detail?.status;

      // Don't retry non-retryable errors
      if (status === 401 || status === 400 || status === 404) throw error;

      // For concurrent limit, retry immediately (queue handles spacing)
      if (errorType === "too_many_concurrent_requests") {
        if (attempt === config.maxRetries) throw error;
        // Short pause — the queue is managing concurrency
        await new Promise(r => setTimeout(r, 50 * (attempt + 1)));
        continue;
      }

      // For system_busy or 5xx, exponential backoff with jitter
      if (attempt === config.maxRetries) throw error;

      const exponentialDelay = config.baseDelayMs * Math.pow(2, attempt);
      const jitter = Math.random() * config.jitterMs;
      const delay = Math.min(exponentialDelay + jitter, config.maxDelayMs);

      console.warn(`[ElevenLabs] ${errorType || status}. Retry ${attempt + 1}/${config.maxRetries} in ${delay.toFixed(0)}ms`);
      await new Promise(r => setTimeout(r, delay));
    }
  }
  throw new Error("Unreachable");
}
```
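
With the defaults above (base 1 s, cap 32 s, 5 retries), the pre-jitter retry schedule is 1 s, 2 s, 4 s, 8 s, 16 s — about 31 s of total waiting before giving up. A small helper makes it easy to sanity-check alternative settings before deploying them:

```typescript
// Compute the (pre-jitter) backoff schedule for a given config — handy for
// checking that maxRetries and maxDelayMs give the total wait you expect.
export function backoffSchedule(
  maxRetries: number,
  baseDelayMs: number,
  maxDelayMs: number
): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    delays.push(Math.min(baseDelayMs * Math.pow(2, attempt), maxDelayMs));
  }
  return delays;
}
```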

### Step 5: Quota Monitor

```typescript
// src/elevenlabs/quota-monitor.ts
import { ElevenLabsClient } from "@elevenlabs/elevenlabs-js";

export class QuotaMonitor {
  private characterCount = 0;
  private characterLimit = 0;
  private lastCheck = 0;

  constructor(
    private client: ElevenLabsClient,
    private warningThresholdPct = 80,
    private checkIntervalMs = 60_000
  ) {}

  async check(): Promise<{
    used: number;
    limit: number;
    remaining: number;
    pctUsed: number;
    warning: boolean;
  }> {
    const now = Date.now();
    if (now - this.lastCheck > this.checkIntervalMs) {
      const user = await this.client.user.get();
      this.characterCount = user.subscription.character_count;
      this.characterLimit = user.subscription.character_limit;
      this.lastCheck = now;
    }

    const remaining = this.characterLimit - this.characterCount;
    const pctUsed = (this.characterCount / this.characterLimit) * 100;

    return {
      used: this.characterCount,
      limit: this.characterLimit,
      remaining,
      pctUsed: Math.round(pctUsed * 10) / 10,
      warning: pctUsed >= this.warningThresholdPct,
    };
  }

  async guardRequest(textLength: number): Promise<void> {
    const quota = await this.check();
    if (textLength > quota.remaining) {
      throw new Error(
        `Insufficient quota: need ${textLength} chars, have ${quota.remaining} remaining (${quota.pctUsed}% used)`
      );
    }
    if (quota.warning) {
      console.warn(`[ElevenLabs] Quota warning: ${quota.pctUsed}% used (${quota.remaining} chars remaining)`);
    }
  }
}
```
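
The quota arithmetic inside `check()` can be factored into a pure helper, which makes it unit-testable without hitting the API. The shape below is an illustration mirroring the class above, not part of the SDK:

```typescript
// Pure quota math: derive remaining characters and warning state from raw counts.
export interface QuotaStatus {
  used: number;
  limit: number;
  remaining: number;
  pctUsed: number;
  warning: boolean;
}

export function computeQuotaStatus(
  used: number,
  limit: number,
  warningThresholdPct = 80
): QuotaStatus {
  const remaining = Math.max(limit - used, 0);
  // Round to one decimal place, matching QuotaMonitor.check() above.
  const pctUsed = limit > 0 ? Math.round((used / limit) * 1000) / 10 : 100;
  return { used, limit, remaining, pctUsed, warning: pctUsed >= warningThresholdPct };
}
```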

### Step 6: Combined Rate-Limited Client

```typescript
// src/elevenlabs/resilient-client.ts
import { ElevenLabsClient } from "@elevenlabs/elevenlabs-js";
import { createRequestQueue } from "./rate-limiter";
import { withBackoff } from "./backoff";
import { QuotaMonitor } from "./quota-monitor";

export function createResilientClient(plan: "free" | "starter" | "creator" | "pro" | "scale" = "pro") {
  const client = new ElevenLabsClient({ maxRetries: 0 }); // We handle retries
  const queue = createRequestQueue(plan);
  const quota = new QuotaMonitor(client);

  return {
    async generateSpeech(voiceId: string, text: string, modelId = "eleven_multilingual_v2") {
      await quota.guardRequest(text.length);

      return queue.add(() =>
        withBackoff(() =>
          client.textToSpeech.convert(voiceId, {
            text,
            model_id: modelId,
          })
        )
      );
    },

    getQueueStats() {
      return {
        pending: queue.pending,
        size: queue.size,
      };
    },

    checkQuota: () => quota.check(),
  };
}
```

## Model Cost Impact on Quota

| Model | Credits per Character | 10,000 Chars Cost |
|-------|-----------------------|-------------------|
| `eleven_v3` | 1.0 | 10,000 credits |
| `eleven_multilingual_v2` | 1.0 | 10,000 credits |
| `eleven_flash_v2_5` | 0.5 | 5,000 credits |
| `eleven_turbo_v2_5` | 0.5 | 5,000 credits |

Use Flash/Turbo models during development to conserve quota.
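
The per-model rates in the table can be folded into a small cost estimator for pre-flight checks. The rates below are hardcoded from the table above — verify them against current ElevenLabs pricing before relying on the numbers:

```typescript
// Estimate credit cost for a synthesis request, using the rates tabled above.
const CREDITS_PER_CHAR: Record<string, number> = {
  eleven_v3: 1.0,
  eleven_multilingual_v2: 1.0,
  eleven_flash_v2_5: 0.5,
  eleven_turbo_v2_5: 0.5,
};

export function estimateCredits(modelId: string, text: string): number {
  const rate = CREDITS_PER_CHAR[modelId] ?? 1.0; // assume full rate for unknown models
  return Math.ceil(text.length * rate);
}
```

This pairs naturally with `QuotaMonitor.guardRequest`: estimate credits first, then guard against the remaining quota.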

## Error Handling

| Scenario | Detection | Response |
|----------|-----------|----------|
| Concurrent limit hit | 429 + `too_many_concurrent_requests` | Queue; retry after ~50ms per queued request |
| System busy | 429 + `system_busy` | Exponential backoff (1s, 2s, 4s, 8s...) |
| Quota exhausted | 401 + `quota_exceeded` | Stop requests; alert; wait for reset |
| Server error | 500-599 | Exponential backoff; max 5 retries |

## Resources

- [ElevenLabs Rate Limits Help](https://help.elevenlabs.io/hc/en-us/articles/19571824571921)
- [ElevenLabs Pricing](https://elevenlabs.io/pricing)
- [p-queue Documentation](https://github.com/sindresorhus/p-queue)

## Next Steps

For security configuration, see `elevenlabs-security-basics`.

Related Skills

All from ComeOnOliver/skillshub:

  • versioning-strategy-helper — Auto-activating skill for API Development; part of the API Development skill category.
  • strategic-clarity — Guided workflow for establishing team identity, boundaries, and strategic clarity. Use when starting a new role, inheriting ambiguity, or defining "what we own" vs "what we don't".
  • rate-limiting-apis — Implement sophisticated rate limiting with sliding windows, token buckets, and quotas. Use when protecting APIs from excessive requests.
  • rate-limiter-config — Auto-activating skill for Security Fundamentals.
  • rate-limit-middleware — Auto-activating skill for Backend Development.
  • monitoring-error-rates — Monitor and analyze application error rates, including HTTP errors, exceptions, and database issues.
  • learning-rate-scheduler — Auto-activating skill for ML Training.
  • implementing-backup-strategies — Backup automation and disaster recovery with comprehensive guidance.
  • exa-rate-limits — Implement Exa rate limiting, exponential backoff, and request queuing for 429 handling, retry logic, and throughput optimization.
  • evernote-rate-limits — Handle Evernote API rate limits: rate limit handling, API usage optimization, and troubleshooting rate limit errors.
  • elevenlabs-webhooks-events — Implement ElevenLabs webhook HMAC signature verification and event handling for transcription completion, call recording, and agent conversation events.
  • elevenlabs-upgrade-migration — Upgrade ElevenLabs SDK versions and migrate between API model generations, including v1-to-v2 model migration and deprecation handling.