apify-rate-limits

Handle Apify API rate limits with proper backoff and request queuing. Use when hitting 429 errors, optimizing API request throughput, or implementing rate-aware client wrappers. Trigger: "apify rate limit", "apify throttling", "apify 429", "apify retry", "apify backoff", "too many requests apify".

25 stars

Best use case

apify-rate-limits is best used when you need a repeatable AI agent workflow instead of a one-off prompt.


Teams using apify-rate-limits should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/apify-rate-limits/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/jeremylongshore/claude-code-plugins-plus-skills/apify-rate-limits/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/apify-rate-limits/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How apify-rate-limits Compares

| Feature / Agent | apify-rate-limits | Standard Approach |
|-----------------|-------------------|-------------------|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |

Frequently Asked Questions

What does this skill do?

It handles Apify API rate limits with proper backoff and request queuing, covering 429 errors, request-throughput optimization, and rate-aware client wrappers. See the full description at the top of the page.

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Apify Rate Limits

## Overview

The Apify API enforces rate limits per resource. The `apify-client` library auto-retries 429s (up to 8 times with exponential backoff), but you need to understand the limits for bulk operations and custom API calls.

## Apify Rate Limit Rules

| Scope | Limit | Notes |
|-------|-------|-------|
| Per resource (default) | 60 req/sec | Applies to each Actor, dataset, KV store independently |
| Dataset push | 60 req/sec per dataset | Batch items to reduce call count |
| Actor runs | 60 req/sec per Actor | Start runs in sequence or with delays |
| Platform-wide | Higher limit | Aggregate across all resources |

**"Per resource" means:** calls to dataset A and dataset B each get 60 req/sec independently.

Rate limit headers returned:
- `X-RateLimit-Limit` — max requests per interval
- `X-RateLimit-Remaining` — remaining requests
- `X-RateLimit-Reset` — epoch seconds when limit resets
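These headers can drive a simple client-side wait calculation before the next request. A minimal sketch (the helper name is ours, not part of `apify-client`):

```typescript
// Hypothetical helper: given the rate limit headers above, return how many
// milliseconds to wait before the next call. Header names are lowercased
// as most HTTP clients normalize them.
function waitMsFromHeaders(
  headers: Record<string, string>,
  now: number = Date.now(),
): number {
  const remaining = parseInt(headers['x-ratelimit-remaining'] ?? '1', 10);
  const resetAtMs = parseInt(headers['x-ratelimit-reset'] ?? '0', 10) * 1000;
  if (remaining > 0) return 0;          // budget left: no wait needed
  return Math.max(0, resetAtMs - now);  // otherwise wait until the reset epoch
}
```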

## Instructions

### Step 1: Understand Built-in Retries

The `apify-client` package handles rate limits automatically:

```typescript
import { ApifyClient } from 'apify-client';

// Default: retries up to 8 times on 429 and 500+ errors
const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

// Customize retry behavior (separate instance; `client` above is already declared)
const customClient = new ApifyClient({
  token: process.env.APIFY_TOKEN,
  maxRetries: 5,                      // Default: 8
  minDelayBetweenRetriesMillis: 500,  // Default: 500
});
```

### Step 2: Batch Operations to Reduce API Calls

```typescript
// BAD: 1000 API calls (easily rate limited)
for (const item of items) {
  await client.dataset(dsId).pushItems([item]);
}

// GOOD: 1 API call (up to 9MB payload)
await client.dataset(dsId).pushItems(items);

// GOOD: Chunked for very large datasets
function chunkArray<T>(arr: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < arr.length; i += size) {
    chunks.push(arr.slice(i, i + size));
  }
  return chunks;
}

for (const chunk of chunkArray(items, 1000)) {
  await client.dataset(dsId).pushItems(chunk);
}
```
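Since a single `pushItems` call accepts up to roughly 9MB, chunking by item count alone can still overshoot when individual items are large. A size-aware variant (the helper name and byte accounting are illustrative):

```typescript
// Sketch: chunk items by serialized JSON size so each pushItems call stays
// under the payload limit. The default reflects the ~9MB limit mentioned
// above; the exact accounting (brackets, commas) is an approximation.
function chunkBySize<T>(items: T[], maxBytes = 9 * 1024 * 1024): T[][] {
  const chunks: T[][] = [];
  let current: T[] = [];
  let currentBytes = 2; // account for the surrounding "[]"
  for (const item of items) {
    const itemBytes =
      new TextEncoder().encode(JSON.stringify(item)).length + 1; // + comma
    if (current.length > 0 && currentBytes + itemBytes > maxBytes) {
      chunks.push(current);
      current = [];
      currentBytes = 2;
    }
    current.push(item);
    currentBytes += itemBytes;
  }
  if (current.length > 0) chunks.push(current);
  return chunks;
}
```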

### Step 3: Queue-Based Rate Limiting for Custom Calls

```typescript
import PQueue from 'p-queue';

// 50 requests per second with max 10 concurrent
const apiQueue = new PQueue({
  concurrency: 10,
  interval: 1000,
  intervalCap: 50,
});

// All API calls go through the queue
async function rateLimitedCall<T>(fn: () => Promise<T>): Promise<T> {
  return apiQueue.add(fn) as Promise<T>;
}

// Usage
const results = await Promise.all(
  actorIds.map(id =>
    rateLimitedCall(() => client.actor(id).get())
  )
);
```

### Step 4: Stagger Actor Starts

```typescript
import { sleep } from 'crawlee';

// Start multiple Actor runs with delays to avoid 429 on the runs endpoint
// (uses the `client` instance created in Step 1)
async function staggeredRuns(
  actorId: string,
  inputs: Record<string, unknown>[],
  delayMs = 200,
) {
  const runs = [];
  for (const input of inputs) {
    const run = await client.actor(actorId).start(input);
    runs.push(run);
    await sleep(delayMs);
  }

  // Wait for all to finish
  const finished = await Promise.all(
    runs.map(run => client.run(run.id).waitForFinish())
  );
  return finished;
}
```

### Step 5: Rate Limit Monitor

```typescript
class ApifyRateLimitMonitor {
  private remaining = 60;
  private resetAt = Date.now();
  private warningThreshold: number;

  constructor(warningThreshold = 10) {
    this.warningThreshold = warningThreshold;
  }

  updateFromHeaders(headers: Record<string, string>) {
    if (headers['x-ratelimit-remaining']) {
      this.remaining = parseInt(headers['x-ratelimit-remaining'], 10);
    }
    if (headers['x-ratelimit-reset']) {
      this.resetAt = parseInt(headers['x-ratelimit-reset'], 10) * 1000;
    }

    if (this.remaining < this.warningThreshold) {
      const waitMs = Math.max(0, this.resetAt - Date.now());
      console.warn(
        `Rate limit warning: ${this.remaining} requests remaining. ` +
        `Resets in ${waitMs}ms.`
      );
    }
  }

  shouldPause(): boolean {
    return this.remaining <= 1 && Date.now() < this.resetAt;
  }

  getWaitMs(): number {
    return Math.max(0, this.resetAt - Date.now());
  }
}
```
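One way to wire the monitor into custom calls is to check `shouldPause()` before each request and sleep for `getWaitMs()` when the budget is exhausted. A minimal sketch (the `RateLimitGate` interface and function name are ours, matching the class above):

```typescript
// Anything exposing shouldPause()/getWaitMs() — such as the monitor class
// above — satisfies this interface.
interface RateLimitGate {
  shouldPause(): boolean;
  getWaitMs(): number;
}

// Gate a single API call: sleep until the rate limit window resets, then run.
async function callWithGate<T>(
  gate: RateLimitGate,
  fn: () => Promise<T>,
): Promise<T> {
  if (gate.shouldPause()) {
    await new Promise(resolve => setTimeout(resolve, gate.getWaitMs()));
  }
  return fn();
}
```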

## Crawlee-Level Concurrency (Target Website Rate Limits)

Separate from API rate limits, you must also respect the target website:

```typescript
import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
  // Limit concurrent requests to the target site
  maxConcurrency: 10,           // Max parallel requests
  minConcurrency: 1,            // Min parallel requests
  maxRequestsPerMinute: 120,    // Hard cap per minute

  // Fine-tune the auto-scaled pool (the top-level options above are
  // shorthands for the same settings, so avoid conflicting values here)
  autoscaledPoolOptions: {
    desiredConcurrency: 5,
  },

  // Timeout for each requestHandler invocation (not a delay between requests)
  requestHandlerTimeoutSecs: 30,
});
```

## Error Handling

| Scenario | Detection | Response |
|----------|-----------|----------|
| API 429 | `apify-client` auto-retries | Usually transparent; increase delays if persistent |
| Target site 429 | `statusCode === 429` in handler | Reduce `maxConcurrency`, add proxy rotation |
| Burst of starts | Starting 100+ runs at once | Stagger with 200ms delays |
| Large data push | Single 50MB dataset push | Chunk into 9MB batches |
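For raw HTTP calls that bypass `apify-client`'s built-in retries, the "increase delays if persistent" advice above can be implemented as exponential backoff with jitter. A minimal sketch (the retry count and base delay are illustrative defaults, not Apify-mandated values; for brevity this version retries on any error, not just 429s):

```typescript
// Retry an async operation with exponential backoff and full jitter:
// each attempt waits a random delay in [0, baseDelayMs * 2^attempt).
async function withBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 5,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err; // out of retries: rethrow
      const delay = Math.random() * baseDelayMs * 2 ** attempt;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}
```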

## Resources

- [Apify API Rate Limits](https://docs.apify.com/api/v2)
- [p-queue Documentation](https://github.com/sindresorhus/p-queue)
- [Crawlee Auto-scaling](https://crawlee.dev/js/docs/guides/configuration)

## Next Steps

For security configuration, see `apify-security-basics`.

Related Skills

All related skills below are from ComeOnOliver/skillshub (25 stars each).

  • versioning-strategy-helper — Auto-activating skill for API Development. Triggers on: "versioning strategy helper". Part of the API Development skill category.
  • strategic-clarity — Guided workflow for establishing team identity, boundaries, and strategic clarity. Use when starting a new role, inheriting ambiguity, when a team lacks clear identity, or when you need to define "what we own" vs "what we don't". Triggers include "strategic clarity", "team identity", "new role", "inherited ambiguity", "what does my team own", or "define our boundaries".
  • rate-limiting-apis — Implement sophisticated rate limiting with sliding windows, token buckets, and quotas. Use when protecting APIs from excessive requests. Trigger with phrases like "add rate limiting", "limit API requests", or "implement rate limits".
  • rate-limiter-config — Auto-activating skill for Security Fundamentals. Triggers on: "rate limiter config". Part of the Security Fundamentals skill category.
  • rate-limit-middleware — Auto-activating skill for Backend Development. Triggers on: "rate limit middleware". Part of the Backend Development skill category.
  • monitoring-error-rates — Monitor and analyze application error rates to improve reliability. Use when tracking errors in applications, including HTTP errors, exceptions, and database issues. Trigger with phrases like "monitor error rates", "track application errors", or "analyze error patterns".
  • learning-rate-scheduler — Auto-activating skill for ML Training. Triggers on: "learning rate scheduler". Part of the ML Training skill category.
  • implementing-backup-strategies — Use when you need to work with backup and recovery. Provides backup automation and disaster recovery with comprehensive guidance. Trigger with phrases like "create backups", "automate backups", or "implement disaster recovery".
  • exa-rate-limits — Implement Exa rate limiting, exponential backoff, and request queuing. Use when handling 429 errors, implementing retry logic, or optimizing API request throughput for Exa. Trigger with phrases like "exa rate limit", "exa throttling", "exa 429", "exa retry", "exa backoff", "exa QPS".
  • evernote-rate-limits — Handle Evernote API rate limits effectively. Use when implementing rate limit handling, optimizing API usage, or troubleshooting rate limit errors. Trigger with phrases like "evernote rate limit", "evernote throttling", "api quota evernote", "rate limit exceeded".
  • elevenlabs-rate-limits — Implement ElevenLabs rate limiting, concurrency queuing, and backoff patterns. Use when handling 429 errors, implementing retry logic, or managing concurrent TTS request throughput. Triggers: "elevenlabs rate limit", "elevenlabs throttling", "elevenlabs 429", "elevenlabs retry", "elevenlabs backoff", "elevenlabs concurrent requests".
  • documenso-rate-limits — Implement Documenso rate limiting, backoff, and request throttling patterns. Use when handling rate limit errors, implementing retry logic, or optimizing API request throughput for Documenso. Trigger with phrases like "documenso rate limit", "documenso throttling", "documenso 429", "documenso retry", "documenso backoff".