Best use case
Job Queue is best used when you need a repeatable AI agent workflow instead of a one-off prompt.
## Overview
Teams using Job Queue should expect more consistent output, faster repeated execution, and less prompt rewriting.
When to use this skill
- You want a reusable workflow that can be run more than once with consistent structure.
When not to use this skill
- You only need a quick one-off answer and do not need a reusable workflow.
- You cannot install or maintain the underlying files, dependencies, or repository context.
Installation
Works with Claude Code, Cursor, and Codex.
Manual Installation
- Download SKILL.md from GitHub
- Place it at `.claude/skills/job-queue/SKILL.md` inside your project
- Restart your AI agent — it will auto-discover the skill
Frequently Asked Questions
What does this skill do?
It helps you build production-grade background job processing systems: queue architecture, worker concurrency, job priorities, retry strategies, scheduled and recurring jobs, progress reporting, and graceful shutdown, with patterns for BullMQ, Celery, and Sidekiq.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# Job Queue
## Overview
This skill helps you build production-grade background job processing systems. It covers queue architecture, worker concurrency, job priorities, retry strategies, scheduled/recurring jobs, progress reporting, and graceful shutdown. The patterns work across BullMQ (Node.js), Celery (Python), and Sidekiq (Ruby).
## Instructions
### 1. Set up the queue and define job types
Create typed job definitions and a queue instance:
```typescript
// src/jobs/types.ts
export interface JobMap {
  "email:send": { to: string; template: string; data: Record<string, string> };
  "pdf:generate": { reportId: string; format: "a4" | "letter" };
  "export:csv": { userId: string; query: string; columns: string[] };
  "image:resize": { sourceUrl: string; widths: number[] };
}

// src/jobs/queues.ts
import { Queue } from "bullmq";
import { JobMap } from "./types";

const connection = { host: "localhost", port: 6379 };

export const emailQueue = new Queue<JobMap["email:send"]>("email", { connection });
export const pdfQueue = new Queue<JobMap["pdf:generate"]>("pdf", { connection });
export const exportQueue = new Queue<JobMap["export:csv"]>("export", { connection });
export const imageQueue = new Queue<JobMap["image:resize"]>("image", { connection });
```
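Retry behavior can also be set once per queue instead of on every `add` call, via BullMQ's `defaultJobOptions` on the `Queue` constructor. A minimal sketch of shared defaults — the specific numbers here are assumptions, not recommendations from this skill:

```typescript
// Hypothetical shared defaults; pass as `defaultJobOptions` when constructing
// each Queue, e.g. new Queue("email", { connection, defaultJobOptions }).
const defaultJobOptions = {
  attempts: 3,                                    // retry failed jobs up to 3 times
  backoff: { type: "exponential", delay: 5_000 }, // wait 5s, 10s, 20s between attempts
  removeOnComplete: { count: 1000 },              // keep only recent results in Redis
  removeOnFail: { age: 24 * 3600 },               // drop failed jobs after 24 hours
};
```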
### 2. Implement workers with concurrency control
```typescript
// src/workers/email-worker.ts
import { Worker } from "bullmq";
import { JobMap } from "../jobs/types";

// renderTemplate and sendEmail are app-specific helpers assumed to exist.
const emailWorker = new Worker<JobMap["email:send"]>(
  "email",
  async (job) => {
    const { to, template, data } = job.data;
    await job.updateProgress(10);
    const html = await renderTemplate(template, data);
    await job.updateProgress(50);
    await sendEmail(to, html);
    await job.updateProgress(100);
    return { sentAt: new Date().toISOString() };
  },
  {
    connection: { host: "localhost", port: 6379 },
    concurrency: 10, // Process 10 emails in parallel
    limiter: { max: 100, duration: 60000 }, // Rate limit: 100/minute
  }
);

emailWorker.on("completed", (job) => {
  console.log(`Email sent: job ${job.id} → ${job.data.to}`);
});

emailWorker.on("failed", (job, err) => {
  console.error(`Email failed: job ${job?.id} — ${err.message}`);
});
```
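The hard-coded 10/50/100 progress values work, but drift easily as stages are added. One way to keep them consistent, sketched here as a hypothetical helper (not part of the skill or of BullMQ):

```typescript
// Derive each stage's progress percentage from its position in the pipeline,
// so adding a stage automatically rebalances the numbers.
const STAGES = ["received", "rendered", "sent"] as const;
type Stage = (typeof STAGES)[number];

function progressFor(stage: Stage): number {
  return Math.round(((STAGES.indexOf(stage) + 1) / STAGES.length) * 100);
}
// In the worker: await job.updateProgress(progressFor("rendered"));
```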
### 3. Add job scheduling and priorities
```typescript
// Delayed job — send welcome email 30 minutes after signup
await emailQueue.add("email:send", {
  to: "newuser@example.com",
  template: "welcome",
  data: { name: "Alex" },
}, { delay: 30 * 60 * 1000 });

// Priority jobs — password resets jump the queue
await emailQueue.add("email:send", {
  to: "user@example.com",
  template: "password-reset",
  data: { resetLink: "https://app.example.com/reset/abc123" },
}, { priority: 1 }); // Lower number = higher priority

// Recurring job — daily digest at 8:00 AM UTC
await emailQueue.add("email:send", {
  to: "digest",
  template: "daily-digest",
  data: {},
}, {
  repeat: { pattern: "0 8 * * *" },
  jobId: "daily-digest", // Prevent duplicates
});
```
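The `jobId: "daily-digest"` trick generalizes: any enqueue can be made idempotent with a deterministic ID. A sketch of one possible naming scheme (the helper and its format are assumptions, not part of the skill):

```typescript
// Build a deterministic jobId; enqueuing the same job twice on the same
// day produces the same ID, so the queue keeps only the first copy.
function dedupeJobId(name: string, key: string, date: Date): string {
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return `${name}:${key}:${day}`;
}
// emailQueue.add("email:send", data, { jobId: dedupeJobId("daily-digest", userId, new Date()) })
```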
### 4. Implement graceful shutdown
```typescript
// src/workers/shutdown.ts
const workers = [emailWorker, pdfWorker, exportWorker, imageWorker];

async function gracefulShutdown(signal: string): Promise<void> {
  console.log(`Received ${signal}. Closing workers gracefully...`);
  await Promise.all(workers.map((w) => w.close()));
  console.log("All workers closed. Exiting.");
  process.exit(0);
}

process.on("SIGTERM", () => gracefulShutdown("SIGTERM"));
process.on("SIGINT", () => gracefulShutdown("SIGINT"));
```
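One gap in the shutdown code: if a job hangs, `worker.close()` may never resolve and the process sits until the orchestrator sends SIGKILL. A possible refinement, sketched under the assumption of a 30-second grace period, races the close against a deadline:

```typescript
// Race a promise against a deadline; resolves to "timeout" if the deadline wins.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T | "timeout"> {
  return Promise.race([
    p,
    new Promise<"timeout">((resolve) => setTimeout(() => resolve("timeout"), ms)),
  ]);
}

// In gracefulShutdown:
//   const result = await withTimeout(Promise.all(workers.map((w) => w.close())), 30_000);
//   process.exit(result === "timeout" ? 1 : 0);
```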
## Examples
### Example 1: PDF report generation queue
**Prompt:** "Build a background job system for generating PDF reports. Users request a report, get a job ID back immediately, and can poll for progress. Reports take 10-30 seconds to generate."
**Agent output:**
- Creates `src/jobs/pdf-queue.ts` with typed job definitions
- Creates `src/workers/pdf-worker.ts` with progress updates at each stage (query data → format → render → upload)
- Creates `src/routes/reports.ts` with `POST /reports` (enqueue, return job ID) and `GET /reports/:jobId/status` (return progress percentage and download URL when complete)
- Adds retry logic: 3 attempts with 10-second backoff
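The retry bullet maps directly onto BullMQ's per-job options. A minimal sketch of what would be passed at enqueue time (values from the example, shape from BullMQ's `attempts`/`backoff` options):

```typescript
// 3 attempts total, with a fixed 10-second wait between retries.
const pdfJobOptions = {
  attempts: 3,
  backoff: { type: "fixed", delay: 10_000 },
};
// pdfQueue.add("pdf:generate", { reportId, format: "a4" }, pdfJobOptions)
```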
### Example 2: Image processing pipeline
**Prompt:** "I need to process uploaded images: resize to 3 widths (200, 800, 1600px), convert to WebP, and upload to cloud storage. Handle up to 500 images per hour."
**Agent output:**
- Creates `src/workers/image-worker.ts` with sharp-based resize and conversion pipeline
- Sets concurrency to 4 (CPU-bound work, matches core count)
- Adds per-image progress tracking (useful for batch uploads)
- Creates `src/jobs/image-pipeline.ts` with a flow: resize → convert → upload as chained jobs
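The resize → convert → upload chain can be expressed as a BullMQ flow, where children must complete before their parent runs, so the tree is written upload-outermost. A sketch of the shape (queue names and data fields here are assumptions):

```typescript
// A flow tree: resize runs first, then convert, then upload.
const imageFlow = {
  name: "upload",
  queueName: "image-upload",
  data: { key: "uploads/photo.jpg" },
  children: [
    {
      name: "convert",
      queueName: "image-convert",
      data: { format: "webp" },
      children: [
        {
          name: "resize",
          queueName: "image-resize",
          data: { widths: [200, 800, 1600] },
        },
      ],
    },
  ],
};
// With BullMQ: await new FlowProducer({ connection }).add(imageFlow);
```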
## Guidelines
- **Keep jobs serializable** — job data must survive JSON round-trips. Pass IDs and URLs, not buffers or streams.
- **Set appropriate concurrency** — CPU-bound work (image processing): match core count. I/O-bound (email, API calls): 10-50 concurrent.
- **Always implement graceful shutdown** — `SIGTERM` should let running jobs finish before the process exits.
- **Use job IDs for idempotency** — set a deterministic `jobId` to prevent the same job from being enqueued twice.
- **Monitor queue depth** — a growing queue means workers can't keep up. Alert when backlog exceeds 5 minutes of processing time.
- **Separate queues by workload type** — don't let a slow PDF generation block fast email sends.
- **Store results externally** — BullMQ job results are cleaned up by default. Persist important results in your database.

## Related Skills
- **sqs-queue-setup** — queue setup skill for AWS SQS (AWS Skills category).
- **rabbitmq-queue-setup** — queue setup skill for RabbitMQ (Backend Development category).
- **cloud-tasks-queue-setup** — queue setup skill for Google Cloud Tasks (GCP Skills category).
- **azure-storage-queue-ts** — Azure Queue Storage JavaScript/TypeScript SDK (`@azure/storage-queue`): sending, receiving, peeking, and deleting messages, with visibility timeout, message encoding, and batch operations.
- **azure-storage-queue-py** — Azure Queue Storage SDK for Python: reliable message queuing, task distribution, and asynchronous processing.
- **Amazon SQS — Managed Message Queue** — expert guidance for Amazon SQS: standard and FIFO queues, dead-letter queues, and Lambda triggers for serverless processing.
- **BullMQ — Redis-Based Job Queue for Node.js** — expert guidance for BullMQ: delayed jobs, rate limiting, prioritization, repeatable cron jobs, job dependencies, concurrency control, and dead-letter handling.
- **Azure Queue Storage Skill** — best practices, limits and quotas, security, configuration, and integration patterns for Azure Queue Storage.