customerio-rate-limits

Implement Customer.io rate limiting and backoff. Use when handling high-volume API calls, implementing retry logic, or hitting 429 errors. Trigger: "customer.io rate limit", "customer.io throttle", "customer.io 429", "customer.io backoff", "customer.io too many requests".

25 stars

Best use case

customerio-rate-limits is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using customerio-rate-limits should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

curl -o ~/.claude/skills/customerio-rate-limits/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/jeremylongshore/claude-code-plugins-plus-skills/customerio-rate-limits/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/customerio-rate-limits/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How customerio-rate-limits Compares

| Feature / Agent | customerio-rate-limits | Standard Approach |
|-----------------|------------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

Implement Customer.io rate limiting and backoff. Use when handling high-volume API calls, implementing retry logic, or hitting 429 errors. Trigger: "customer.io rate limit", "customer.io throttle", "customer.io 429", "customer.io backoff", "customer.io too many requests".

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Customer.io Rate Limits

## Overview

Understand Customer.io's API rate limits and implement proper throttling: token bucket limiters, exponential backoff with jitter, queue-based processing, and 429 response handling.

## Rate Limit Reference

| API | Endpoint | Limit | Scope |
|-----|----------|-------|-------|
| Track API | `identify`, `track`, `trackAnonymous` | ~100 req/sec | Per workspace |
| Track API | Batch operations | ~100 req/sec | Per workspace |
| App API | Transactional email/push | ~100 req/sec | Per workspace |
| App API | Broadcasts, queries | ~10 req/sec | Per workspace |

These limits are approximate. Customer.io uses sliding-window rate limiting; when you exceed a limit, the API responds with `429 Too Many Requests`.
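As a preview of the handling detailed in the steps below, the core 429 decision can be reduced to one small helper: prefer the `Retry-After` header (seconds) when the server sends one, otherwise fall back to exponential backoff. This is a minimal sketch; the function name and defaults are illustrative, not part of any Customer.io SDK.

```typescript
// Sketch: choose a retry delay after a 429 response.
// retryAfterHeader is the raw Retry-After header value (seconds) or null;
// attempt is zero-based.
function retryDelayMs(
  retryAfterHeader: string | null,
  attempt: number,
  baseDelayMs = 1000,
  maxDelayMs = 60000
): number {
  const parsed = retryAfterHeader ? parseInt(retryAfterHeader, 10) : NaN;
  if (!Number.isNaN(parsed)) return parsed * 1000; // server told us how long
  return Math.min(baseDelayMs * 2 ** attempt, maxDelayMs); // exponential fallback
}

console.log(retryDelayMs("5", 0)); // 5000 — header wins
console.log(retryDelayMs(null, 2)); // 4000 — 1000 * 2^2
console.log(retryDelayMs(null, 10)); // 60000 — capped at maxDelayMs
```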

## Instructions

### Step 1: Token Bucket Rate Limiter

```typescript
// lib/rate-limiter.ts
export class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private readonly maxTokens: number = 80,  // Stay under 100/sec limit
    private readonly refillRate: number = 80   // Tokens per second
  ) {
    this.tokens = maxTokens;
    this.lastRefill = Date.now();
  }

  private refill(): void {
    const now = Date.now();
    const elapsed = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.maxTokens, this.tokens + elapsed * this.refillRate);
    this.lastRefill = now;
  }

  async acquire(): Promise<void> {
    this.refill();
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return;
    }
    // Wait until a token should be available, then re-check. Re-running
    // acquire() (rather than assuming the token is ours) keeps the bucket
    // correct when several callers are waiting concurrently.
    const waitMs = ((1 - this.tokens) / this.refillRate) * 1000;
    await new Promise((r) => setTimeout(r, Math.ceil(waitMs)));
    return this.acquire();
  }
}
```
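A quick usage sketch of the token bucket above, with a tiny rate so the pacing is visible. A compact copy of the class (named `DemoBucket` here) is inlined so the snippet runs standalone; in a real project you would import `TokenBucket` from `lib/rate-limiter`.

```typescript
// Compact standalone copy of the token bucket for demonstration
class DemoBucket {
  private tokens: number;
  private lastRefill = Date.now();
  constructor(private maxTokens = 2, private refillRate = 2) {
    this.tokens = maxTokens;
  }
  async acquire(): Promise<void> {
    const now = Date.now();
    this.tokens = Math.min(
      this.maxTokens,
      this.tokens + ((now - this.lastRefill) / 1000) * this.refillRate
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return;
    }
    const waitMs = ((1 - this.tokens) / this.refillRate) * 1000;
    await new Promise((r) => setTimeout(r, Math.ceil(waitMs)));
    return this.acquire();
  }
}

async function demo(): Promise<number> {
  const bucket = new DemoBucket(2, 2); // burst of 2, refills 2 tokens/sec
  const start = Date.now();
  for (let i = 0; i < 4; i++) {
    await bucket.acquire(); // first 2 pass immediately, next 2 are paced
  }
  return Date.now() - start; // roughly 1000ms: two waits of ~500ms each
}

demo().then((ms) => console.log(`4 acquires took ~${ms}ms`));
```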

### Step 2: Exponential Backoff with Jitter

```typescript
// lib/backoff.ts
interface BackoffOptions {
  maxRetries: number;
  baseDelayMs: number;
  maxDelayMs: number;
  jitter: number;         // 0 to 1
}

const DEFAULTS: BackoffOptions = {
  maxRetries: 4,
  baseDelayMs: 1000,
  maxDelayMs: 60000,
  jitter: 0.25,
};

export async function withBackoff<T>(
  fn: () => Promise<T>,
  opts: Partial<BackoffOptions> = {}
): Promise<T> {
  const { maxRetries, baseDelayMs, maxDelayMs, jitter } = { ...DEFAULTS, ...opts };
  let lastErr: Error | undefined;

  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      lastErr = err;
      const status = err.statusCode ?? err.status;

      // Don't retry 4xx errors (except 429)
      if (status >= 400 && status < 500 && status !== 429) throw err;

      if (attempt === maxRetries) break;

      // Check Retry-After header (429 responses)
      const retryAfter = err.headers?.["retry-after"];
      let delay: number;

      if (retryAfter) {
        delay = parseInt(retryAfter, 10) * 1000; // Retry-After is in seconds
      } else {
        delay = Math.min(baseDelayMs * Math.pow(2, attempt), maxDelayMs);
      }

      // Add jitter to prevent thundering herd
      delay += delay * jitter * Math.random();
      console.warn(`CIO retry ${attempt + 1}/${maxRetries} in ${Math.round(delay)}ms`);
      await new Promise((r) => setTimeout(r, delay));
    }
  }
  throw lastErr;
}
```
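A usage sketch for `withBackoff` above: a fake API call that returns 429 twice and then succeeds. A compact copy of the function (fixed small delays, no jitter) is inlined so the snippet runs standalone; in a real project you would import the full version from `lib/backoff`.

```typescript
// Compact standalone copy of withBackoff for demonstration
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 4,
  baseDelayMs = 10
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      lastErr = err;
      const status = err.status;
      // Retry only 429s and non-HTTP errors, like the full version
      if (status >= 400 && status < 500 && status !== 429) throw err;
      if (attempt === maxRetries) break;
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastErr;
}

// A fake API call that 429s twice, then succeeds
let calls = 0;
const flaky = async (): Promise<string> => {
  calls++;
  if (calls < 3) throw Object.assign(new Error("Too Many Requests"), { status: 429 });
  return "ok";
};

withBackoff(flaky).then((result) => {
  console.log(`${result} after ${calls} attempts`); // "ok after 3 attempts"
});
```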

### Step 3: Rate-Limited Client

```typescript
// lib/customerio-rate-limited.ts
import { TrackClient, RegionUS } from "customerio-node";
import { TokenBucket } from "./rate-limiter";
import { withBackoff } from "./backoff";

export class RateLimitedCioClient {
  private client: TrackClient;
  private limiter: TokenBucket;

  constructor(siteId: string, apiKey: string, ratePerSec: number = 80) {
    this.client = new TrackClient(siteId, apiKey, { region: RegionUS });
    this.limiter = new TokenBucket(ratePerSec, ratePerSec);
  }

  async identify(userId: string, attrs: Record<string, any>): Promise<void> {
    await this.limiter.acquire();
    return withBackoff(() => this.client.identify(userId, attrs));
  }

  async track(userId: string, event: { name: string; data?: any }): Promise<void> {
    await this.limiter.acquire();
    return withBackoff(() => this.client.track(userId, event));
  }

  async trackAnonymous(event: {
    anonymous_id: string;
    name: string;
    data?: any;
  }): Promise<void> {
    await this.limiter.acquire();
    return withBackoff(() => this.client.trackAnonymous(event));
  }

  async suppress(userId: string): Promise<void> {
    await this.limiter.acquire();
    return withBackoff(() => this.client.suppress(userId));
  }

  async destroy(userId: string): Promise<void> {
    await this.limiter.acquire();
    return withBackoff(() => this.client.destroy(userId));
  }
}
```

### Step 4: Queue-Based Processing with p-queue

For sustained high volume, use `p-queue` for cleaner concurrency control:

```typescript
// lib/customerio-queued.ts
import PQueue from "p-queue";
import { TrackClient, RegionUS } from "customerio-node";

const cio = new TrackClient(
  process.env.CUSTOMERIO_SITE_ID!,
  process.env.CUSTOMERIO_TRACK_API_KEY!,
  { region: RegionUS }
);

// Process at most 80 requests per second with max 10 concurrent
const queue = new PQueue({
  concurrency: 10,
  interval: 1000,
  intervalCap: 80,
});

// Queue operations instead of calling directly
export function queueIdentify(userId: string, attrs: Record<string, any>) {
  return queue.add(() => cio.identify(userId, attrs));
}

export function queueTrack(userId: string, name: string, data?: any) {
  return queue.add(() => cio.track(userId, { name, data }));
}

// Monitor queue health (unref the timer so it doesn't keep the process alive)
setInterval(() => {
  console.log(
    `CIO queue: pending=${queue.pending} size=${queue.size}`
  );
}, 10000).unref();
```

Install: `npm install p-queue`

### Step 5: Bulk Import Strategy

For large data imports (>10K users), avoid hitting rate limits with controlled batching:

```typescript
// scripts/bulk-import.ts
import { RateLimitedCioClient } from "../lib/customerio-rate-limited";

async function bulkImport(users: { id: string; attrs: Record<string, any> }[]) {
  const client = new RateLimitedCioClient(
    process.env.CUSTOMERIO_SITE_ID!,
    process.env.CUSTOMERIO_TRACK_API_KEY!,
    50  // Conservative rate — 50/sec for imports
  );

  let processed = 0;
  let errors = 0;

  for (const user of users) {
    try {
      await client.identify(user.id, user.attrs);
      processed++;
    } catch (err: any) {
      errors++;
      console.error(`Failed user ${user.id}: ${err.message}`);
    }

    const done = processed + errors;
    if (done % 1000 === 0) {
      console.log(`Progress: ${done}/${users.length} (${errors} errors)`);
    }
  }

  console.log(`Done: ${processed} processed, ${errors} errors`);
}
```
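For very large imports it helps to process the list in fixed-size chunks so progress can be checkpointed (e.g. persist the last completed chunk index) and a failed run resumed. A hypothetical helper, not part of the script above:

```typescript
// Split a list into fixed-size chunks for checkpointed bulk imports.
// Purely illustrative; the name and signature are not from any SDK.
function chunk<T>(items: T[], size: number): T[][] {
  if (size < 1) throw new Error("chunk size must be >= 1");
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

console.log(chunk([1, 2, 3, 4, 5], 2)); // [ [ 1, 2 ], [ 3, 4 ], [ 5 ] ]
```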

## Error Handling

| Scenario | Strategy |
|----------|----------|
| `429` received | Respect `Retry-After` header, fall back to exponential backoff |
| Burst traffic spike | Token bucket absorbs burst, queue holds overflow |
| Sustained high volume | Use p-queue with interval limiting |
| Bulk import | Use conservative rate (50/sec) with progress logging |
| Downstream timeout | Don't count as rate limit — retry normally |

## Resources

- [Track API Limits](https://docs.customer.io/integrations/api/track/)
- [App API Reference](https://docs.customer.io/integrations/api/app/)
- [p-queue npm](https://www.npmjs.com/package/p-queue)

## Next Steps

After implementing rate limits, proceed to `customerio-security-basics` for security best practices.

Related Skills

versioning-strategy-helper

25
from ComeOnOliver/skillshub

Versioning Strategy Helper - Auto-activating skill for API Development. Triggers on: versioning strategy helper, versioning strategy helper Part of the API Development skill category.

strategic-clarity

25
from ComeOnOliver/skillshub

Guided workflow for establishing team identity, boundaries, and strategic clarity. Use when starting a new role, inheriting ambiguity, when a team lacks clear identity, or when you need to define "what we own" vs "what we don't". Triggers include "strategic clarity", "team identity", "new role", "inherited ambiguity", "what does my team own", or "define our boundaries".

rate-limiting-apis

25
from ComeOnOliver/skillshub

Implement sophisticated rate limiting with sliding windows, token buckets, and quotas. Use when protecting APIs from excessive requests. Trigger with phrases like "add rate limiting", "limit API requests", or "implement rate limits".

rate-limiter-config

25
from ComeOnOliver/skillshub

Rate Limiter Config - Auto-activating skill for Security Fundamentals. Triggers on: rate limiter config, rate limiter config Part of the Security Fundamentals skill category.

rate-limit-middleware

25
from ComeOnOliver/skillshub

Rate Limit Middleware - Auto-activating skill for Backend Development. Triggers on: rate limit middleware, rate limit middleware Part of the Backend Development skill category.

monitoring-error-rates

25
from ComeOnOliver/skillshub

Monitor and analyze application error rates to improve reliability. Use when tracking errors in applications including HTTP errors, exceptions, and database issues. Trigger with phrases like "monitor error rates", "track application errors", or "analyze error patterns".

learning-rate-scheduler

25
from ComeOnOliver/skillshub

Learning Rate Scheduler - Auto-activating skill for ML Training. Triggers on: learning rate scheduler, learning rate scheduler Part of the ML Training skill category.

implementing-backup-strategies

25
from ComeOnOliver/skillshub

Execute use when you need to work with backup and recovery. This skill provides backup automation and disaster recovery with comprehensive guidance and automation. Trigger with phrases like "create backups", "automate backups", or "implement disaster recovery".

exa-rate-limits

25
from ComeOnOliver/skillshub

Implement Exa rate limiting, exponential backoff, and request queuing. Use when handling 429 errors, implementing retry logic, or optimizing API request throughput for Exa. Trigger with phrases like "exa rate limit", "exa throttling", "exa 429", "exa retry", "exa backoff", "exa QPS".

evernote-rate-limits

25
from ComeOnOliver/skillshub

Handle Evernote API rate limits effectively. Use when implementing rate limit handling, optimizing API usage, or troubleshooting rate limit errors. Trigger with phrases like "evernote rate limit", "evernote throttling", "api quota evernote", "rate limit exceeded".

elevenlabs-rate-limits

25
from ComeOnOliver/skillshub

Implement ElevenLabs rate limiting, concurrency queuing, and backoff patterns. Use when handling 429 errors, implementing retry logic, or managing concurrent TTS request throughput. Trigger: "elevenlabs rate limit", "elevenlabs throttling", "elevenlabs 429", "elevenlabs retry", "elevenlabs backoff", "elevenlabs concurrent requests".

documenso-rate-limits

25
from ComeOnOliver/skillshub

Implement Documenso rate limiting, backoff, and request throttling patterns. Use when handling rate limit errors, implementing retry logic, or optimizing API request throughput for Documenso. Trigger with phrases like "documenso rate limit", "documenso throttling", "documenso 429", "documenso retry", "documenso backoff".