processing-api-batches

Optimize bulk API requests with batching, throttling, and parallel execution. Use when processing bulk API operations efficiently. Trigger with phrases like "process bulk requests", "batch API calls", or "handle batch operations".

25 stars

Best use case

processing-api-batches is best used when you need a repeatable AI agent workflow instead of a one-off prompt.

Teams using processing-api-batches should expect more consistent output, faster repeated execution, and less prompt rewriting.

When to use this skill

  • You want a reusable workflow that can be run more than once with consistent structure.

When not to use this skill

  • You only need a quick one-off answer and do not need a reusable workflow.
  • You cannot install or maintain the underlying files, dependencies, or repository context.

Installation

Claude Code / Cursor / Codex

$ curl -o ~/.claude/skills/processing-api-batches/SKILL.md --create-dirs "https://raw.githubusercontent.com/ComeOnOliver/skillshub/main/skills/jeremylongshore/claude-code-plugins-plus-skills/processing-api-batches/SKILL.md"

Manual Installation

  1. Download SKILL.md from GitHub
  2. Place it in .claude/skills/processing-api-batches/SKILL.md inside your project
  3. Restart your AI agent — it will auto-discover the skill

How processing-api-batches Compares

Feature / Agent          | processing-api-batches | Standard Approach
Platform Support         | Not specified          | Limited / Varies
Context Awareness        | High                   | Baseline
Installation Complexity  | Unknown                | N/A

Frequently Asked Questions

What does this skill do?

It optimizes bulk API requests using batching, throttling, and parallel execution, returning per-item results for each operation in a batch. It is designed to trigger on phrases like "process bulk requests", "batch API calls", or "handle batch operations".

Where can I find the source code?

You can find the source code on GitHub using the link provided at the top of the page.

SKILL.md Source

# Processing API Batches

## Overview

Optimize bulk API operations with batch request endpoints, parallel execution with concurrency control, partial failure handling, and progress tracking. Implement batch processing patterns that accept arrays of operations in a single request, execute them efficiently with database bulk operations, and return per-item results with individual success/failure status.

## Prerequisites

- Web framework capable of handling large request bodies (configure body size limits: 10MB+ for batch payloads)
- Database with bulk operation support (bulk insert, bulk update, transactions)
- Queue system for async batch processing: Bull/BullMQ (Node.js), Celery (Python), or SQS
- Progress tracking store (Redis) for long-running batch status polling
- Rate limiting aware of batch operations (count individual operations, not just requests)

## Instructions

1. Examine existing API endpoints using Read and Grep to identify operations frequently called in loops by consumers, which are candidates for batch equivalents.
2. Design the batch request format: accept an array of operations in the request body, each with an optional client-provided `id` for result correlation, e.g., `POST /batch` with `{operations: [{method: "POST", path: "/users", body: {...}, id: "op1"}]}`.
3. Implement synchronous batch processing for small batches (100 items or fewer): validate all items, execute in a database transaction, and return per-item results with `{id, status, result|error}` for each operation.
4. Add asynchronous batch processing for large batches (more than 100 items): accept the batch, return 202 Accepted with a `batchId` and status polling URL, process in a background worker, and update progress in Redis.
5. Implement concurrency control: process batch items in parallel with configurable concurrency limit (default: 10) using `p-limit` or `asyncio.Semaphore` to prevent database connection exhaustion.
6. Handle partial failures: do not abort the entire batch when individual items fail; collect per-item results with success/failure status, and return the batch result with summary counts (`succeeded`, `failed`, `total`).
7. Add progress tracking for async batches: expose `GET /batch/:batchId/status` returning `{total, completed, failed, progress: 0.75, status: "processing|completed|failed"}`.
8. Implement batch size limits and validation: maximum 1000 items per batch, reject oversized batches with 413, validate all items before processing any, and return all validation errors upfront.
9. Write tests covering: small sync batches, large async batches, partial failure handling, progress tracking, concurrency limits, and batch size validation.
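
As a sketch of the upfront validation in steps 2 and 8: reject oversized batches immediately, then validate every item before executing any, returning all errors at once. The function name, field names, and limits below are illustrative assumptions, not part of any published API:

```python
# Hypothetical batch validator; limits and error shapes are illustrative.
MAX_BATCH_ITEMS = 1000
ALLOWED_METHODS = {"GET", "POST", "PUT", "PATCH", "DELETE"}

def validate_batch(operations):
    """Validate every item up front; return all errors before any execution."""
    if len(operations) > MAX_BATCH_ITEMS:
        # The caller would map this to HTTP 413 Payload Too Large.
        return [{"error": f"batch exceeds {MAX_BATCH_ITEMS} items"}]
    errors = []
    for index, op in enumerate(operations):
        if op.get("method") not in ALLOWED_METHODS:
            errors.append({"index": index, "id": op.get("id"),
                           "error": "unsupported method"})
        if not op.get("path", "").startswith("/"):
            errors.append({"index": index, "id": op.get("id"),
                           "error": "path must start with '/'"})
    return errors  # empty list means the batch may proceed
```

Validating everything before processing anything is what makes the "return all validation errors upfront" behavior in step 8 possible, rather than failing item by item.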

See `${CLAUDE_SKILL_DIR}/references/implementation.md` for the full implementation guide.
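
The concurrency control and partial-failure handling in steps 5 and 6 can be sketched with the stdlib `asyncio.Semaphore` the instructions mention (result shapes and the default limit mirror the steps above, but the function itself is an illustrative sketch, not the skill's shipped implementation):

```python
import asyncio

async def process_batch(operations, handler, concurrency=10):
    """Run items in parallel under a semaphore; never abort on one failure."""
    semaphore = asyncio.Semaphore(concurrency)

    async def run_one(op):
        async with semaphore:  # caps in-flight items, protecting the DB pool
            try:
                result = await handler(op)
                return {"id": op.get("id"), "status": "succeeded", "result": result}
            except Exception as exc:  # collect per-item failures, don't propagate
                return {"id": op.get("id"), "status": "failed", "error": str(exc)}

    # gather preserves input order, so results correlate with operations
    results = await asyncio.gather(*(run_one(op) for op in operations))
    succeeded = sum(1 for r in results if r["status"] == "succeeded")
    return {"total": len(results), "succeeded": succeeded,
            "failed": len(results) - succeeded, "results": results}
```

A Node.js version would look the same in outline, with `p-limit` playing the role of the semaphore.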

## Output

- `${CLAUDE_SKILL_DIR}/src/routes/batch.js` - Batch request endpoint with sync/async routing
- `${CLAUDE_SKILL_DIR}/src/batch/processor.js` - Batch execution engine with concurrency control
- `${CLAUDE_SKILL_DIR}/src/batch/validator.js` - Batch request validation and size limit enforcement
- `${CLAUDE_SKILL_DIR}/src/batch/progress.js` - Redis-backed progress tracking for async batches
- `${CLAUDE_SKILL_DIR}/src/batch/workers/` - Background worker for async batch processing
- `${CLAUDE_SKILL_DIR}/src/batch/results.js` - Per-item result aggregation with summary statistics
- `${CLAUDE_SKILL_DIR}/tests/batch/` - Batch processing integration tests

## Error Handling

| Error | Cause | Solution |
|-------|-------|----------|
| 413 Payload Too Large | Batch exceeds maximum item count (1000) or body size limit | Return clear error with maximum allowed count; suggest splitting into multiple batch requests |
| 207 Multi-Status | Some batch items succeeded while others failed | Return per-item status array; include error details for failed items; provide summary counts |
| 408 Batch Timeout | Synchronous batch processing exceeded request timeout | Switch to async processing for large batches; return 202 with status polling URL |
| Partial transaction failure | Database transaction rolls back all items due to one failure | Use savepoints for per-item isolation; or process items individually outside a wrapping transaction |
| Progress tracking stale | Worker crashed mid-batch; progress stops updating | Implement heartbeat monitoring; mark batch as failed after heartbeat timeout; enable retry from last checkpoint |

Refer to `${CLAUDE_SKILL_DIR}/references/errors.md` for comprehensive error patterns.
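
One way to detect the stale-progress case in the last row is to derive the polled status from a worker heartbeat timestamp: if the heartbeat is older than a threshold and the batch is not done, report it as failed. This is a minimal in-memory sketch (the timeout value, field names, and function are assumptions; a real implementation would read these fields from Redis):

```python
import time

HEARTBEAT_TIMEOUT_SECONDS = 60  # illustrative threshold

def batch_status(progress, now=None):
    """Build a status-poll response; mark the batch failed if the heartbeat is stale."""
    now = time.time() if now is None else now
    stale = now - progress["last_heartbeat"] > HEARTBEAT_TIMEOUT_SECONDS
    done = progress["completed"] + progress["failed"]
    status = ("failed" if stale and done < progress["total"]
              else "completed" if done >= progress["total"]
              else "processing")
    return {"total": progress["total"], "completed": progress["completed"],
            "failed": progress["failed"],
            "progress": done / progress["total"], "status": status}
```

The worker would refresh `last_heartbeat` on every processed item, so a crash stops the updates and the next poll after the timeout surfaces the failure.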

## Examples

**Bulk user import**: Accept a CSV-uploaded batch of 5000 user records via `POST /batch/users/import`, return 202 with `batchId`, process asynchronously with progress updates, and provide a downloadable results file when complete.

**Multi-resource batch**: Accept mixed operations in a single batch: `[{method:"POST",path:"/users",...}, {method:"PUT",path:"/orders/123",...}, {method:"DELETE",path:"/products/456"}]`, executing each against the appropriate handler.

**Idempotent batch retry**: Client includes `idempotencyKey` per batch item; on retry, already-completed items return their cached result without re-execution, while failed items are re-attempted.
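
The idempotent-retry example above can be sketched as a cache keyed by `idempotencyKey` that only stores successful results, so retries replay successes from cache while failed items re-execute. The class below is an illustrative in-memory stand-in (production would back the cache with Redis and set a TTL):

```python
# Hypothetical in-memory idempotency cache; names are illustrative.
class IdempotentExecutor:
    def __init__(self, execute):
        self._execute = execute   # underlying per-item operation
        self._cache = {}          # idempotencyKey -> cached successful result

    def run(self, item):
        key = item.get("idempotencyKey")
        if key in self._cache:
            # Already completed on a previous attempt: replay without re-execution.
            return {**self._cache[key], "cached": True}
        result = self._execute(item)
        if result["status"] == "succeeded":
            self._cache[key] = result  # only cache successes; failures may retry
        return result
```

Caching only successes is the important design choice: a cached failure would make the item permanently unretryable.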

See `${CLAUDE_SKILL_DIR}/references/examples.md` for additional examples.

## Resources

- Google Cloud batch request format: https://cloud.google.com/storage/docs/json_api/v1/how-tos/batch
- Facebook Graph API batch requests pattern
- HTTP 207 Multi-Status (WebDAV) for partial success responses
- Bull queue documentation: https://docs.bullmq.io/

Related Skills
(all from ComeOnOliver/skillshub)

processing-computer-vision-tasks
Process images using object detection, classification, and segmentation. Use when requesting "analyze image", "object detection", "image classification", or "computer vision".

preprocessing-data-with-automated-pipelines
Automate data cleaning, transformation, and validation for ML tasks. Use when requesting "preprocess data", "clean data", "ETL pipeline", or "data transformation".

pdf-processing-pro
Production-ready PDF processing with forms, tables, OCR, validation, and batch operations. Use when working with complex PDF workflows in production environments, processing large volumes of PDFs, or requiring robust error handling and validation.

pdf-processing
Extract text and tables from PDF files, fill forms, merge documents. Use when working with PDF files or when the user mentions PDFs, forms, or document extraction.

data-processing
Process JSON with jq and YAML/TOML with yq. Filter, transform, and query structured data efficiently. Triggers on: parse JSON, extract from YAML, query config, Docker Compose, K8s manifests, GitHub Actions workflows, package.json, filter data.

nutrient-document-processing
Process, convert, OCR, extract, redact, sign, and fill documents using the Nutrient DWS API. Works with PDFs, DOCX, XLSX, PPTX, HTML, and images.

ImageMagick — Command-Line Image Processing
You are an expert in ImageMagick, the powerful command-line tool for creating, editing, compositing, and converting images. You help developers automate image processing pipelines using ImageMagick's `convert`, `mogrify`, `composite`, and `identify` commands — batch resizing, format conversion, watermarking, thumbnail generation, PDF manipulation, and complex image compositing for web applications, print production, and data visualization.

Ray Data - Scalable ML Data Processing
Distributed data processing library for ML and AI workloads.

laravel-background-processing
Scalable asynchronous workflows using Queues, Jobs, and Events. Use when implementing queued jobs, event-driven workflows, or async processing in Laravel. (triggers: app/Jobs/**/*.php, app/Events/**/*.php, app/Listeners/**/*.php, ShouldQueue, dispatch, batch, chain, listener)

axiom-background-processing-ref
Complete background task API reference - BGTaskScheduler, BGAppRefreshTask, BGProcessingTask, BGContinuedProcessingTask (iOS 26), beginBackgroundTask, background URLSession, with all WWDC code examples

axiom-background-processing-diag
Symptom-based background task troubleshooting - decision trees for 'task never runs', 'task terminates early', 'works in dev not prod', 'handler not called', with time-cost analysis for each diagnosis path

Daily Logs
Record the user's daily activities, progress, decisions, and learnings in a structured, chronological format.