canva-rate-limits
Handle Canva Connect API rate limits with backoff, queuing, and monitoring. Use when hitting 429 errors, implementing retry logic, or optimizing API request throughput for Canva integrations. Trigger with phrases like "canva rate limit", "canva throttling", "canva 429", "canva retry", "canva backoff".
Best use case
canva-rate-limits is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using canva-rate-limits should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/canva-rate-limits/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How canva-rate-limits Compares
| Feature / Agent | canva-rate-limits | Standard Approach |
|---|---|---|
| Platform Support | Not specified | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Unknown | N/A |
Frequently Asked Questions
What does this skill do?
Handle Canva Connect API rate limits with backoff, queuing, and monitoring. Use when hitting 429 errors, implementing retry logic, or optimizing API request throughput for Canva integrations. Trigger with phrases like "canva rate limit", "canva throttling", "canva 429", "canva retry", "canva backoff".
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Canva Rate Limits
## Overview
The Canva Connect API enforces per-user, per-endpoint rate limits. Each endpoint has different thresholds. A 429 response means you must wait before retrying.
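As a minimal sketch of that detection step (assuming a standard `fetch`-style response object; the one-minute fallback when `Retry-After` is absent is an assumption, not documented Canva behavior):

```typescript
// Returns how long to wait (ms) if the response was rate-limited, else null.
// The 60s fallback is an assumption for when Retry-After is absent.
function rateLimitWaitMs(res: { status: number; headers: { get(name: string): string | null } }): number | null {
  if (res.status !== 429) return null;
  const retryAfter = res.headers.get('Retry-After');
  return retryAfter ? parseInt(retryAfter, 10) * 1000 : 60_000;
}
```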
## Canva Connect API Rate Limits
| Endpoint | Method | Limit |
|----------|--------|-------|
| `/v1/users/me` | GET | 10 req/min |
| `/v1/users/me/profile` | GET | 10 req/min |
| `/v1/designs` | GET | 100 req/min |
| `/v1/designs` | POST | 20 req/min |
| `/v1/designs/{id}` | GET | 100 req/min |
| `/v1/exports` | POST | 75 req/5min, 500/24hr per user |
| `/v1/exports` (integration) | POST | 750 req/5min, 5000/24hr |
| `/v1/exports` (per document) | POST | 75 req/5min |
| `/v1/asset-uploads` | POST | 30 req/min |
| `/v1/autofills` | POST | 60 req/min |
| `/v1/folders` | POST | 20 req/min |
| `/v1/brand-templates` | GET | 100 req/min |
All limits are **per user** of your integration unless noted otherwise.
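If you want these thresholds available to code (for example to drive a queue), the per-minute limits from the table can be expressed as a plain map. The `"METHOD path"` key format and `minIntervalMs` helper are illustrative conventions, not part of any Canva SDK:

```typescript
// Per-user limits from the table above, as requests per rolling window
const CANVA_LIMITS: Record<string, { limit: number; windowMs: number }> = {
  'GET /v1/users/me': { limit: 10, windowMs: 60_000 },
  'GET /v1/designs': { limit: 100, windowMs: 60_000 },
  'POST /v1/designs': { limit: 20, windowMs: 60_000 },
  'POST /v1/exports': { limit: 75, windowMs: 300_000 }, // plus 500/24hr per user
  'POST /v1/asset-uploads': { limit: 30, windowMs: 60_000 },
  'POST /v1/autofills': { limit: 60, windowMs: 60_000 },
};

// Minimum even spacing between requests to stay under an endpoint's limit
function minIntervalMs(key: string): number {
  const { limit, windowMs } = CANVA_LIMITS[key];
  return Math.ceil(windowMs / limit);
}
```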
## Exponential Backoff with Jitter
```typescript
async function canvaRequestWithBackoff<T>(
  fn: () => Promise<T>,
  config = { maxRetries: 5, baseDelayMs: 1000, maxDelayMs: 60000 }
): Promise<T> {
  for (let attempt = 0; attempt <= config.maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error: any) {
      if (attempt === config.maxRetries) throw error;
      // Only retry on 429 or 5xx; rethrow everything else immediately
      const status = error.status ?? error.response?.status;
      if (status !== 429 && !(status >= 500 && status < 600)) throw error;
      // Honor the Retry-After header if present
      const retryAfter = error.headers?.get?.('Retry-After');
      const delay = retryAfter
        ? parseInt(retryAfter, 10) * 1000
        : Math.min(
            config.baseDelayMs * Math.pow(2, attempt) + Math.random() * 1000,
            config.maxDelayMs
          );
      console.warn(`Rate limited (attempt ${attempt + 1}/${config.maxRetries}). Waiting ${(delay / 1000).toFixed(1)}s`);
      await new Promise(r => setTimeout(r, delay));
    }
  }
  throw new Error('Unreachable');
}
```
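To sanity-check the schedule this produces, the delay formula can be isolated with the jitter passed explicitly so the values are deterministic (`backoffDelayMs` is a helper name introduced for illustration):

```typescript
// Delay before retry `attempt` (0-based), mirroring the formula above
function backoffDelayMs(attempt: number, baseDelayMs = 1000, maxDelayMs = 60000, jitterMs = 0): number {
  return Math.min(baseDelayMs * Math.pow(2, attempt) + jitterMs, maxDelayMs);
}

// With zero jitter, attempts 0..5 wait 1s, 2s, 4s, 8s, 16s, 32s; later attempts cap at 60s
```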
## Queue-Based Rate Limiting
```typescript
import PQueue from 'p-queue';

// Match per-user endpoint limits
const canvaQueues = {
  designs: new PQueue({ concurrency: 1, interval: 3000, intervalCap: 1 }), // ~20/min
  exports: new PQueue({ concurrency: 1, interval: 4000, intervalCap: 1 }), // ~15/min (conservative)
  assets: new PQueue({ concurrency: 1, interval: 2000, intervalCap: 1 }),  // ~30/min
  reads: new PQueue({ concurrency: 1, interval: 600, intervalCap: 1 }),    // ~100/min (shared reads)
};

// Usage: calls are automatically queued to stay under the limits
// (`client` is assumed to be your Canva Connect API client instance)
const design = await canvaQueues.designs.add(() =>
  client.createDesign({ design_type: { type: 'custom', width: 1080, height: 1080 }, title: 'Queued' })
);

// Batch export with rate control
const designIds = ['DAV1', 'DAV2', 'DAV3', 'DAV4', 'DAV5'];
const exports = await Promise.all(
  designIds.map(id =>
    canvaQueues.exports.add(() =>
      client.createExport({ design_id: id, format: { type: 'pdf' } })
    )
  )
);
```
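If adding p-queue as a dependency isn't an option, a minimal serial queue that enforces a fixed gap between tasks can be hand-rolled. This is a sketch, not a drop-in replacement: it has no concurrency, priorities, or pause support.

```typescript
// Runs tasks strictly in order, waiting `intervalMs` after each settles
// before starting the next
class IntervalQueue {
  private last: Promise<void> = Promise.resolve();
  constructor(private intervalMs: number) {}

  add<T>(fn: () => Promise<T>): Promise<T> {
    const result = this.last.then(() => fn());
    // Next task starts intervalMs after this one settles, success or failure
    this.last = result
      .catch(() => {})
      .then(() => new Promise<void>(r => setTimeout(r, this.intervalMs)));
    return result;
  }
}
```

Each endpoint would get its own `IntervalQueue` instance sized from its limit, e.g. `new IntervalQueue(3000)` for the 20/min design-creation endpoint.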
## Rate Limit Monitor
```typescript
class CanvaRateLimitTracker {
  private windows: Map<string, { count: number; resetAt: number }> = new Map();

  track(endpoint: string, response: Response): void {
    // Header names here follow common conventions; confirm against actual Canva responses
    const remaining = response.headers.get('X-RateLimit-Remaining');
    const reset = response.headers.get('X-RateLimit-Reset');
    if (remaining !== null) {
      this.windows.set(endpoint, {
        count: parseInt(remaining, 10),
        resetAt: reset ? parseInt(reset, 10) * 1000 : Date.now() + 60000,
      });
    }
  }

  shouldThrottle(endpoint: string): boolean {
    const window = this.windows.get(endpoint);
    if (!window) return false;
    return window.count < 3 && Date.now() < window.resetAt;
  }

  getWaitMs(endpoint: string): number {
    const window = this.windows.get(endpoint);
    if (!window) return 0;
    return Math.max(0, window.resetAt - Date.now());
  }

  report(): Record<string, { remaining: number; resetsIn: string }> {
    const report: Record<string, any> = {};
    for (const [ep, w] of this.windows) {
      report[ep] = {
        remaining: w.count,
        resetsIn: `${Math.max(0, (w.resetAt - Date.now()) / 1000).toFixed(0)}s`,
      };
    }
    return report;
  }
}
```
## Proactive Throttling
```typescript
// Wrap the client to throttle before hitting limits
async function throttledCanvaRequest<T>(
  tracker: CanvaRateLimitTracker,
  endpoint: string,
  fn: () => Promise<T>
): Promise<T> {
  if (tracker.shouldThrottle(endpoint)) {
    const waitMs = tracker.getWaitMs(endpoint);
    console.log(`Proactively waiting ${waitMs}ms for ${endpoint}`);
    await new Promise(r => setTimeout(r, waitMs));
  }
  return fn();
}
```
## Error Handling
| Scenario | Detection | Action |
|----------|-----------|--------|
| Single 429 | HTTP status | Wait `Retry-After` seconds, retry |
| Sustained 429s | Multiple retries fail | Reduce request rate, increase backoff |
| Export quota hit | 500/24hr per user | Queue exports, spread across hours |
| Integration quota | 5000/24hr exports | Distribute across users |
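For the daily quotas in the last two rows, the arithmetic for spreading jobs evenly across the day is simple (the helper name is illustrative):

```typescript
// Minimum even spacing (ms) between jobs to stay under a daily quota
function minSpacingMs(dailyQuota: number): number {
  return Math.ceil((24 * 60 * 60 * 1000) / dailyQuota);
}

// 500 exports/24hr per user: one export every ~173 seconds
// 5000 exports/24hr per integration: one export every ~17.3 seconds
```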
## Resources
- [API Requests & Responses](https://www.canva.dev/docs/connect/api-requests-responses/)
- [p-queue](https://github.com/sindresorhus/p-queue)
## Next Steps
For security configuration, see `canva-security-basics`.
Related Skills
versioning-strategy-helper
Versioning Strategy Helper - Auto-activating skill for API Development. Triggers on: "versioning strategy helper". Part of the API Development skill category.
strategic-clarity
Guided workflow for establishing team identity, boundaries, and strategic clarity. Use when starting a new role, inheriting ambiguity, when a team lacks clear identity, or when you need to define "what we own" vs "what we don't". Triggers include "strategic clarity", "team identity", "new role", "inherited ambiguity", "what does my team own", or "define our boundaries".
rate-limiting-apis
Implement sophisticated rate limiting with sliding windows, token buckets, and quotas. Use when protecting APIs from excessive requests. Trigger with phrases like "add rate limiting", "limit API requests", or "implement rate limits".
rate-limiter-config
Rate Limiter Config - Auto-activating skill for Security Fundamentals. Triggers on: "rate limiter config". Part of the Security Fundamentals skill category.
rate-limit-middleware
Rate Limit Middleware - Auto-activating skill for Backend Development. Triggers on: "rate limit middleware". Part of the Backend Development skill category.
monitoring-error-rates
Monitor and analyze application error rates to improve reliability. Use when tracking errors in applications including HTTP errors, exceptions, and database issues. Trigger with phrases like "monitor error rates", "track application errors", or "analyze error patterns".
learning-rate-scheduler
Learning Rate Scheduler - Auto-activating skill for ML Training. Triggers on: "learning rate scheduler". Part of the ML Training skill category.
implementing-backup-strategies
Execute use when you need to work with backup and recovery. This skill provides backup automation and disaster recovery with comprehensive guidance and automation. Trigger with phrases like "create backups", "automate backups", or "implement disaster recovery".
exa-rate-limits
Implement Exa rate limiting, exponential backoff, and request queuing. Use when handling 429 errors, implementing retry logic, or optimizing API request throughput for Exa. Trigger with phrases like "exa rate limit", "exa throttling", "exa 429", "exa retry", "exa backoff", "exa QPS".
evernote-rate-limits
Handle Evernote API rate limits effectively. Use when implementing rate limit handling, optimizing API usage, or troubleshooting rate limit errors. Trigger with phrases like "evernote rate limit", "evernote throttling", "api quota evernote", "rate limit exceeded".
elevenlabs-rate-limits
Implement ElevenLabs rate limiting, concurrency queuing, and backoff patterns. Use when handling 429 errors, implementing retry logic, or managing concurrent TTS request throughput. Trigger: "elevenlabs rate limit", "elevenlabs throttling", "elevenlabs 429", "elevenlabs retry", "elevenlabs backoff", "elevenlabs concurrent requests".
documenso-rate-limits
Implement Documenso rate limiting, backoff, and request throttling patterns. Use when handling rate limit errors, implementing retry logic, or optimizing API request throughput for Documenso. Trigger with phrases like "documenso rate limit", "documenso throttling", "documenso 429", "documenso retry", "documenso backoff".