batch-processor
Process multiple documents in bulk with parallel execution. Use when a user asks to batch process files, convert many documents at once, run parallel file operations, bulk rename, bulk transform, or process a directory of files concurrently. Covers parallel execution, error handling, and progress tracking.
Best use case
batch-processor is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
Teams using batch-processor should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Manual Installation (Claude Code / Cursor / Codex)
- Download SKILL.md from GitHub
- Place it in .claude/skills/batch-processor/SKILL.md inside your project
- Restart your AI agent; it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
It runs bulk file operations in parallel: format conversion, bulk rename and transform, data extraction, and validation across a directory of files, with error handling and progress tracking built in.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Batch Processor
## Overview
Process multiple documents and files in bulk using parallel execution. Handles large-scale file operations including format conversion, data extraction, transformation, and validation across hundreds or thousands of files with configurable concurrency, error recovery, and progress reporting.
## Instructions
When a user asks for batch processing, determine which approach fits their needs:
### Task A: Parallel file processing with shell tools
For simple transformations, use `xargs` or GNU `parallel`:
```bash
# Convert all PNG files to JPEG using ImageMagick (8 parallel jobs).
# -print0/-0 keep filenames with spaces or newlines intact.
find ./images -name "*.png" -print0 | xargs -0 -P 8 -I {} bash -c \
  'convert "$1" "${1%.png}.jpg"' _ {}

# Process files with GNU parallel and a progress bar.
# {.} is parallel's replacement string for the input path minus its extension.
find ./docs -name "*.csv" -print0 | parallel -0 --bar --jobs 8 \
  'python transform.py {} {.}_processed.csv'

# Bulk compress PDFs (4 parallel jobs). Pass the filename as a positional
# argument ($1) instead of interpolating {} into the command string, so
# quotes or shell metacharacters in filenames cannot break the command.
find ./reports -name "*.pdf" -print0 | xargs -0 -P 4 -I {} bash -c \
  'gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook \
   -dNOPAUSE -dBATCH -sOutputFile="$1.compressed" "$1" && mv "$1.compressed" "$1"' _ {}
```
### Task B: Python batch processor with concurrency control
Create a reusable batch processing script:
```python
import asyncio
from pathlib import Path
from dataclasses import dataclass, field


@dataclass
class BatchResult:
    total: int = 0
    success: int = 0
    failed: int = 0
    errors: list = field(default_factory=list)


async def process_file(filepath: Path, semaphore: asyncio.Semaphore) -> tuple[bool, str]:
    async with semaphore:
        try:
            # Replace with actual processing logic. Blocking file I/O is
            # offloaded to a worker thread so the event loop stays free;
            # without this, the tasks would effectively run one at a time.
            content = await asyncio.to_thread(filepath.read_text)
            output = content.upper()  # Example transformation
            out_path = filepath.with_suffix('.processed' + filepath.suffix)
            await asyncio.to_thread(out_path.write_text, output)
            return True, str(filepath)
        except Exception as e:
            return False, f"{filepath}: {e}"


async def batch_process(
    input_dir: str,
    pattern: str = "*.*",
    max_concurrent: int = 10
) -> BatchResult:
    semaphore = asyncio.Semaphore(max_concurrent)
    files = list(Path(input_dir).glob(pattern))
    result = BatchResult(total=len(files))
    tasks = [process_file(f, semaphore) for f in files]
    for coro in asyncio.as_completed(tasks):
        success, msg = await coro
        if success:
            result.success += 1
        else:
            result.failed += 1
            result.errors.append(msg)
        # Progress reporting
        done = result.success + result.failed
        print(f"\rProgress: {done}/{result.total}", end="", flush=True)
    print()  # Newline after progress
    return result


if __name__ == "__main__":
    result = asyncio.run(batch_process("./input", pattern="*.txt", max_concurrent=8))
    print(f"Done: {result.success} succeeded, {result.failed} failed")
    for err in result.errors:
        print(f"  ERROR: {err}")
```
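The asyncio pattern above fits I/O-bound work such as reading, converting, and writing files. For CPU-bound transformations, threads and the event loop do not help because of the GIL; the usual escape hatch is a process pool. A minimal sketch using only the standard library, with a hypothetical `transform` function standing in for the real work:

```python
from concurrent.futures import ProcessPoolExecutor, as_completed
from pathlib import Path


def transform(filepath: Path) -> str:
    # Hypothetical CPU-heavy work; replace with the real transformation
    return filepath.read_text().upper()


def batch_process_cpu(input_dir: str, pattern: str = "*.txt", workers: int = 4) -> None:
    files = list(Path(input_dir).glob(pattern))
    done = failed = 0
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Map each future back to its source file for reporting
        futures = {pool.submit(transform, f): f for f in files}
        for future in as_completed(futures):
            src = futures[future]
            try:
                output = future.result()
                src.with_suffix('.processed' + src.suffix).write_text(output)
                done += 1
            except Exception as e:
                failed += 1
                print(f"\nError on {src}: {e}")
            print(f"\rProgress: {done + failed}/{len(files)}", end="", flush=True)
    print(f"\n{done} succeeded, {failed} failed")


if __name__ == "__main__":
    # The guard matters here: worker processes re-import this module
    batch_process_cpu("./input", pattern="*.txt", workers=4)
```

ProcessPoolExecutor pickles arguments and results between processes, so keep the worker a top-level function and its inputs and outputs picklable.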
### Task C: Batch processing with error recovery
For long-running jobs, track progress and allow resuming:
```python
import json
from pathlib import Path

PROGRESS_FILE = ".batch_progress.json"


def load_progress() -> set:
    if Path(PROGRESS_FILE).exists():
        return set(json.loads(Path(PROGRESS_FILE).read_text()))
    return set()


def save_progress(completed: set):
    Path(PROGRESS_FILE).write_text(json.dumps(list(completed)))


def process_single_file(filepath: Path):
    # Placeholder: replace with your actual per-file processing
    filepath.read_bytes()


def batch_with_resume(input_dir: str, pattern: str = "*.*"):
    completed = load_progress()
    files = [f for f in Path(input_dir).glob(pattern) if str(f) not in completed]
    print(f"Resuming: {len(completed)} done, {len(files)} remaining")

    for i, filepath in enumerate(files):
        try:
            process_single_file(filepath)
            completed.add(str(filepath))
            if i % 10 == 0:  # Checkpoint every 10 files
                save_progress(completed)
        except KeyboardInterrupt:
            save_progress(completed)
            print(f"\nSaved progress at {len(completed)} files")
            raise
        except Exception as e:
            print(f"Error on {filepath}: {e}")

    Path(PROGRESS_FILE).unlink(missing_ok=True)  # Clean up on completion
```
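One caveat with the checkpoint file: if the process is killed mid-write, `.batch_progress.json` can be left truncated and the resume state lost. A small hardening sketch, assuming a POSIX filesystem where `os.replace` renames atomically:

```python
import json
import os
from pathlib import Path


def save_progress_atomic(completed: set, progress_file: str = ".batch_progress.json"):
    tmp = progress_file + ".tmp"
    Path(tmp).write_text(json.dumps(sorted(completed)))
    os.replace(tmp, progress_file)  # Atomic rename: no partially written checkpoint
```

Swapping this in for `save_progress` means an interrupted write leaves the previous complete checkpoint intact.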
### Task D: Shell-based batch with logging
```bash
#!/bin/bash
INPUT_DIR="$1"
OUTPUT_DIR="$2"
LOG_FILE="batch_$(date +%Y%m%d_%H%M%S).log"
PARALLEL_JOBS=8
mkdir -p "$OUTPUT_DIR"
process_file() {
    local file="$1"
    local outfile="$OUTPUT_DIR/$(basename "$file")"
    # Replace with your processing command; its exit status propagates
    # to parallel's joblog, which the summary below reads
    cp "$file" "$outfile"
}
export -f process_file
export OUTPUT_DIR
find "$INPUT_DIR" -type f | parallel --jobs "$PARALLEL_JOBS" --bar \
--joblog "$LOG_FILE" process_file {}
echo "Results logged to $LOG_FILE"
awk 'NR>1 {if($7!=0) fail++; else ok++} END {print ok" succeeded, "fail" failed"}' "$LOG_FILE"
```
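Invocation is positional, so a run might look like `./batch.sh ./input ./processed` (the script name and paths here are illustrative); the summary line at the end is computed from parallel's timestamped joblog.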
## Examples
### Example 1: Convert a directory of Markdown files to PDF
**User request:** "Convert all 200 Markdown files in docs/ to PDF"
```bash
# Install pandoc if needed
# Process in parallel with 6 workers
find ./docs -name "*.md" | parallel --bar --jobs 6 \
'pandoc {} -o {.}.pdf --pdf-engine=xelatex'
echo "Conversion complete. Check for errors above."
```
### Example 2: Extract text from hundreds of images
**User request:** "OCR all scanned documents in the scans/ folder"
```bash
# Using tesseract with parallel processing
find ./scans -name "*.png" -o -name "*.jpg" | parallel --bar --jobs 4 \
'tesseract {} {.} -l eng 2>/dev/null && echo "OK: {}"'
```
### Example 3: Bulk resize images for web
**User request:** "Resize all product images to 800px wide, keep aspect ratio"
```bash
mkdir -p ./resized
find ./products -name "*.jpg" -print0 | xargs -0 -P 8 -I {} bash -c \
  'convert "$1" -resize 800x -quality 85 "./resized/$(basename "$1")"' _ {}
echo "Resized $(ls ./resized | wc -l) images"
```
## Guidelines
- Always test batch operations on a small subset (5-10 files) before processing the full set.
- Set a reasonable concurrency limit. Start with the CPU core count for CPU-bound tasks, or 2-4x that for I/O-bound tasks (see the sizing sketch after this list).
- Implement progress reporting so users can monitor long-running jobs.
- Write errors to a log file rather than stopping the entire batch.
- Create a checkpoint/resume mechanism for batches over 100 files.
- Back up original files or write output to a separate directory; never overwrite in place without confirmation.
- Use `--dry-run` flags in scripts to preview operations before executing.
- Monitor system resources (RAM, disk space) during large batch operations.
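For the concurrency guideline above, a minimal sizing sketch in Python (the `io_bound` flag is a hypothetical parameter for your workload type):

```python
import os


def pick_workers(io_bound: bool = False) -> int:
    # Core count for CPU-bound work; oversubscribe ~3x for I/O-bound work
    cores = os.cpu_count() or 1
    return cores * 3 if io_bound else cores
```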
Related Skills
webhook-processor
Build and configure webhook processing systems with retry logic, signature verification, and dead letter queues. Use when you need to receive, validate, and reliably process incoming webhooks from payment providers, version control platforms, or third-party APIs. Trigger words: webhook, callback URL, event handler, retry, idempotency, payload processing.
orchestrate-batch-refactor
Plan and execute large refactor efforts with parallel multi-agent analysis. Use when: refactoring many files, splitting workstreams, or coordinating sub-agents for batch code changes.
file-upload-processor
When the user needs to build file upload functionality for a web application. Use when the user mentions "file upload," "image upload," "upload endpoint," "multipart upload," "presigned URL," "S3 upload," "file validation," "upload to cloud storage," or "accept user files." Handles upload endpoints, file validation (type, size, magic bytes), cloud storage integration, and upload status tracking. For image/video processing after upload, see media-transcoder.
excel-processor
Read, transform, analyze, and generate Excel and CSV files. Use when a user asks to open a spreadsheet, process Excel data, merge CSVs, create pivot tables, clean up data, convert between Excel and CSV, add formulas, filter rows, or generate reports from tabular data. Handles .xlsx, .xls, and .csv.