bullmq-specialist
BullMQ expert for Redis-backed job queues, background processing, and reliable async execution in Node.js/TypeScript applications.
About this skill
This skill turns the agent into a BullMQ specialist with deep expertise in designing, implementing, and debugging Redis-backed job queues. The agent understands asynchronous processing, background job management, and the architectural patterns behind scalable, resilient Node.js/TypeScript applications built on BullMQ. It can advise on queue design, tune worker concurrency for throughput, diagnose and resolve stuck or failing jobs, and explain complex multi-step job flows, acting as a consultant for robust backend systems.
Best use case
- Designing scalable application architectures that leverage asynchronous job processing.
- Troubleshooting stuck jobs, performance bottlenecks, or concurrency issues in BullMQ queues.
- Optimizing resource utilization for background workers and task processing.
- Implementing reliable execution for long-running or resource-intensive tasks such as data imports, image processing, or bulk email sending.
- Applying best practices for BullMQ queue management, monitoring, and error handling.
- Generating code snippets or command-line instructions for common BullMQ operations like enqueuing jobs, checking queue status, or retrying failed jobs.
The AI agent will provide expert advice, generate relevant code snippets, diagnose potential issues, or outline optimal solutions related to BullMQ and Redis job queues. It will help users confidently design, implement, debug, or optimize their asynchronous processing systems, leading to more scalable, reliable, and performant applications.
Practical example
Example input
My BullMQ worker is processing jobs too slowly, and I suspect a concurrency issue with a CPU-bound task. What are common causes and how can I optimize its settings?
Example output
Slow BullMQ worker processing for CPU-bound tasks often stems from:
1. **Insufficient CPU Resources:** Ensure your worker server has enough CPU cores available for the desired concurrency level.
2. **Suboptimal Concurrency Setting:** If `concurrency` is too high, workers might fight for CPU, leading to context switching overhead. If too low, you're under-utilizing resources. For CPU-bound tasks, a good starting point is `concurrency: number_of_cpu_cores`.
3. **Blocking Operations:** Even in async Node.js, poorly written code can block the event loop. Profile your worker code to identify synchronous bottlenecks.
**Optimization Steps:**
* **Adjust `concurrency`:** In your worker definition, try setting `new Worker('myQueue', myProcessor, { concurrency: os.cpus().length });` to match CPU cores.
* **Monitor CPU Usage:** Use tools like `htop` or cloud monitoring services to observe CPU utilization during job processing.
* **Batching/Chunking:** If a single job involves heavy processing, consider breaking it into smaller sub-jobs or processing data in chunks.
* **Worker Pooling (Advanced):** For extremely heavy or external process-dependent tasks, consider using separate worker processes for isolation.
Could you share your current worker configuration and an example of a slow job's processing logic for more specific advice?
When to use this skill
- When dealing with concepts like `bullmq`, `bull queue`, `redis queue`, or `background job`.
- When you need to process tasks asynchronously without blocking the main application thread or API responses.
- For decoupling microservices or different components of an application to improve resilience and maintainability.
- To smooth out traffic spikes and handle varying workloads gracefully by buffering tasks.
When not to use this skill
- For real-time, synchronous operations that demand immediate responses and cannot tolerate any delay.
- When the primary need is simple task scheduling without complex queueing semantics or retry mechanisms.
- If the application's backend is not built with Node.js/TypeScript or does not use Redis as its data store.
- As a primary database or caching solution; BullMQ relies on Redis but is not a general-purpose data store itself.
Installation
Claude Code / Cursor / Codex
Manual Installation
- Download SKILL.md from GitHub
- Place it in `.claude/skills/bullmq-specialist/SKILL.md` inside your project
- Restart your AI agent; it will auto-discover the skill
How bullmq-specialist Compares
| Feature / Agent | bullmq-specialist | Standard Approach |
|---|---|---|
| Platform Support | Claude | Limited / Varies |
| Context Awareness | High | Baseline |
| Installation Complexity | Easy | N/A |
Frequently Asked Questions
What does this skill do?
BullMQ expert for Redis-backed job queues, background processing, and reliable async execution in Node.js/TypeScript applications.
Which AI agents support this skill?
This skill is designed for Claude.
How difficult is it to install?
The installation complexity is rated as easy. You can find the installation instructions above.
Where can I find the source code?
You can find the source code on GitHub using the link provided at the top of the page.
SKILL.md Source
# BullMQ Specialist
BullMQ expert for Redis-backed job queues, background processing, and
reliable async execution in Node.js/TypeScript applications.
## Principles
- Jobs are fire-and-forget from the producer side - let the queue handle delivery
- Always set explicit job options - defaults rarely match your use case
- Idempotency is your responsibility - jobs may run more than once
- Backoff strategies prevent thundering herds - exponential beats linear
- Dead letter queues are not optional - failed jobs need a home
- Concurrency limits protect downstream services - start conservative
- Job data should be small - pass IDs, not payloads
- Graceful shutdown prevents orphaned jobs - handle SIGTERM properly
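Several of these principles can be combined in one place, sketched below. The queue name, the `EmailJobData` shape, and the `emailJobId` helper are illustrative, not part of BullMQ itself; the options object is what you would pass as `defaultJobOptions` when creating a queue.

```typescript
// Small payload: pass an ID, not the full record.
interface EmailJobData { emailId: string }

// Deterministic job IDs make enqueues idempotent: adding a job with an
// existing jobId is a no-op in BullMQ, so retrying the producer is safe.
function emailJobId(emailId: string): string {
  return `email:${emailId}`;
}

// Explicit options: never rely on defaults.
const defaultJobOptions = {
  attempts: 5,
  backoff: { type: 'exponential' as const, delay: 2000 }, // 2s, 4s, 8s, ...
  removeOnComplete: { count: 1000 }, // keep a bounded history
  removeOnFail: false,               // keep failures around for inspection
};

// Usage sketch (assumes a `queue` created as in the Basic Queue Setup pattern):
// await queue.add('send-email', { emailId: 'e_42' } satisfies EmailJobData, {
//   ...defaultJobOptions,
//   jobId: emailJobId('e_42'),
// });
```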
## Capabilities
- bullmq-queues
- job-scheduling
- delayed-jobs
- repeatable-jobs
- job-priorities
- rate-limiting-jobs
- job-events
- worker-patterns
- flow-producers
- job-dependencies
## Scope
- redis-infrastructure -> redis-specialist
- serverless-queues -> upstash-qstash
- workflow-orchestration -> temporal-craftsman
- event-sourcing -> event-architect
- email-delivery -> email-systems
## Tooling
### Core
- bullmq
- ioredis
### Hosting
- upstash
- redis-cloud
- elasticache
- railway
### Monitoring
- bull-board
- arena
- bullmq-pro
### Patterns
- delayed-jobs
- repeatable-jobs
- job-flows
- rate-limiting
- sandboxed-processors
## Patterns
### Basic Queue Setup
Production-ready BullMQ queue with proper configuration
**When to use**: Starting any new queue implementation
```typescript
import { Queue, Worker } from 'bullmq';
import IORedis from 'ioredis';

// Shared connection for all queues
const connection = new IORedis(process.env.REDIS_URL, {
  maxRetriesPerRequest: null, // Required for BullMQ
  enableReadyCheck: false,
});

// Create queue with sensible defaults
const emailQueue = new Queue('emails', {
  connection,
  defaultJobOptions: {
    attempts: 3,
    backoff: {
      type: 'exponential',
      delay: 1000,
    },
    removeOnComplete: { count: 1000 },
    removeOnFail: { count: 5000 },
  },
});

// Worker with concurrency limit (sendEmail is your delivery function)
const worker = new Worker('emails', async (job) => {
  await sendEmail(job.data);
}, {
  connection,
  concurrency: 5,
  limiter: {
    max: 100,
    duration: 60000, // 100 jobs per minute
  },
});

// Handle events
worker.on('failed', (job, err) => {
  console.error(`Job ${job?.id} failed:`, err);
});
```
### Delayed and Scheduled Jobs
Jobs that run at specific times or after delays
**When to use**: Scheduling future tasks, reminders, or timed actions
```typescript
// Delayed job - runs once after delay
await queue.add('reminder', { userId: 123 }, {
  delay: 24 * 60 * 60 * 1000, // 24 hours
});

// Repeatable job - runs on schedule
await queue.add('daily-digest', { type: 'summary' }, {
  repeat: {
    pattern: '0 9 * * *', // Every day at 9am
    tz: 'America/New_York',
  },
});

// Remove repeatable job
await queue.removeRepeatable('daily-digest', {
  pattern: '0 9 * * *',
  tz: 'America/New_York',
});
```
### Job Flows and Dependencies
Complex multi-step job processing with parent-child relationships
**When to use**: Jobs depend on other jobs completing first
```typescript
import { FlowProducer } from 'bullmq';

const flowProducer = new FlowProducer({ connection });

// Parent waits for all children to complete
await flowProducer.add({
  name: 'process-order',
  queueName: 'orders',
  data: { orderId: 123 },
  children: [
    {
      name: 'validate-inventory',
      queueName: 'inventory',
      data: { orderId: 123 },
    },
    {
      name: 'charge-payment',
      queueName: 'payments',
      data: { orderId: 123 },
    },
    {
      name: 'notify-warehouse',
      queueName: 'notifications',
      data: { orderId: 123 },
    },
  ],
});
```
### Graceful Shutdown
Properly close workers without losing jobs
**When to use**: Deploying or restarting workers
```typescript
const shutdown = async () => {
  console.log('Shutting down gracefully...');
  // Stop accepting new jobs
  await worker.pause();
  // Wait for current jobs to finish (with timeout)
  await worker.close();
  // Close queue connection
  await queue.close();
  process.exit(0);
};

process.on('SIGTERM', shutdown);
process.on('SIGINT', shutdown);
```
### Bull Board Dashboard
Visual monitoring for BullMQ queues
**When to use**: Need visibility into queue status and job states
```typescript
import { createBullBoard } from '@bull-board/api';
import { BullMQAdapter } from '@bull-board/api/bullMQAdapter';
import { ExpressAdapter } from '@bull-board/express';

const serverAdapter = new ExpressAdapter();
serverAdapter.setBasePath('/admin/queues');

createBullBoard({
  queues: [
    new BullMQAdapter(emailQueue),
    new BullMQAdapter(orderQueue),
  ],
  serverAdapter,
});

// `app` is your existing Express application
app.use('/admin/queues', serverAdapter.getRouter());
```
## Validation Checks
### Redis connection missing maxRetriesPerRequest
Severity: ERROR
BullMQ requires maxRetriesPerRequest null for proper reconnection handling
Message: BullMQ queue/worker created without maxRetriesPerRequest: null on Redis connection. This will cause workers to stop on Redis connection issues.
### No stalled job event handler
Severity: WARNING
Workers should handle stalled events to detect crashed workers
Message: Worker created without 'stalled' event handler. Stalled jobs indicate worker crashes and should be monitored.
### No failed job event handler
Severity: WARNING
Workers should handle failed events for monitoring and alerting
Message: Worker created without 'failed' event handler. Failed jobs should be logged and monitored.
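The two handler checks above can be satisfied with wiring like this sketch. `MinimalWorker` and `attachMonitoring` are illustrative names; the worker is typed minimally here so the wiring logic stands alone, and in real code the first argument is a BullMQ `Worker`.

```typescript
// Minimal shape of the event interface this sketch relies on.
type MinimalWorker = {
  on(event: string, handler: (...args: any[]) => void): void;
};

function attachMonitoring(worker: MinimalWorker, alert: (msg: string) => void): void {
  // Failed jobs should be logged and monitored.
  worker.on('failed', (job, err) => {
    alert(`job ${job?.id ?? '?'} failed: ${err?.message ?? err}`);
  });
  // A stalled job usually means a worker process died mid-job.
  worker.on('stalled', (jobId) => {
    alert(`job ${jobId} stalled; check for crashed workers`);
  });
}
```

In production, `alert` would typically forward to your logger or alerting system rather than a local callback.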
### No graceful shutdown handling
Severity: WARNING
Workers should gracefully shut down on SIGTERM/SIGINT
Message: Worker file without graceful shutdown handling. Jobs may be orphaned on deployment.
### Awaiting queue.add in request handler
Severity: INFO
Queue additions should be fire-and-forget in request handlers
Message: Queue.add awaited in request handler. Consider fire-and-forget for faster response.
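The fire-and-forget pattern this check suggests can be sketched as below; `enqueueInBackground` is an illustrative helper, not a BullMQ API.

```typescript
// Fire-and-forget enqueue: the HTTP response does not wait on Redis.
// Enqueue failures are logged rather than surfaced to the request.
function enqueueInBackground(
  add: () => Promise<unknown>,   // e.g. () => queue.add('send-email', { emailId })
  logError: (err: unknown) => void,
): void {
  // Intentionally not awaited by the caller.
  add().catch(logError);
}
```

The trade-off is that the client cannot learn about enqueue failures synchronously, so this fits endpoints where a lost job is tolerable or detectable elsewhere.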
### Potentially large data in job payload
Severity: WARNING
Job data should be small - pass IDs not full objects
Message: Job appears to have large inline data. Pass IDs instead of full objects to keep Redis memory low.
### Job without timeout configuration
Severity: INFO
Jobs should have timeouts to prevent infinite execution
Message: Job added without explicit timeout. Consider adding timeout to prevent stuck jobs.
### Retry without backoff strategy
Severity: WARNING
Retries should use exponential backoff to avoid thundering herd
Message: Job has retry attempts but no backoff strategy. Use exponential backoff to prevent thundering herd.
### Repeatable job without explicit timezone
Severity: WARNING
Repeatable jobs should specify timezone to avoid DST issues
Message: Repeatable job without explicit timezone. Will use server local time which can drift with DST.
### Potentially high worker concurrency
Severity: INFO
High concurrency can overwhelm downstream services
Message: Worker concurrency is high. Ensure downstream services can handle this load (DB connections, API rate limits).
## Collaboration
### Delegation Triggers
- redis infrastructure|redis cluster|memory tuning -> redis-specialist (Queue needs Redis infrastructure)
- serverless queue|edge queue|no redis -> upstash-qstash (Need queues without managing Redis)
- complex workflow|saga|compensation|long-running -> temporal-craftsman (Need workflow orchestration beyond simple jobs)
- event sourcing|CQRS|event streaming -> event-architect (Need event-driven architecture)
- deploy|kubernetes|scaling|infrastructure -> devops (Queue needs infrastructure)
- monitor|metrics|alerting|dashboard -> performance-hunter (Queue needs monitoring)
### Email Queue Stack
Skills: bullmq-specialist, email-systems, redis-specialist
Workflow:
```
1. Email request received (API)
2. Job queued with rate limiting (bullmq-specialist)
3. Worker processes with backoff (bullmq-specialist)
4. Email sent via provider (email-systems)
5. Status tracked in Redis (redis-specialist)
```
### Background Processing Stack
Skills: bullmq-specialist, backend, devops
Workflow:
```
1. API receives request (backend)
2. Long task queued for background (bullmq-specialist)
3. Worker processes async (bullmq-specialist)
4. Result stored/notified (backend)
5. Workers scaled per load (devops)
```
### AI Processing Pipeline
Skills: bullmq-specialist, ai-workflow-automation, performance-hunter
Workflow:
```
1. AI task submitted (ai-workflow-automation)
2. Job flow created with dependencies (bullmq-specialist)
3. Workers process stages (bullmq-specialist)
4. Performance monitored (performance-hunter)
5. Results aggregated (ai-workflow-automation)
```
### Scheduled Tasks Stack
Skills: bullmq-specialist, backend, redis-specialist
Workflow:
```
1. Repeatable jobs defined (bullmq-specialist)
2. Cron patterns with timezone (bullmq-specialist)
3. Jobs execute on schedule (bullmq-specialist)
4. State managed in Redis (redis-specialist)
5. Results handled (backend)
```
## Related Skills
Works well with: `redis-specialist`, `backend`, `nextjs-app-router`, `email-systems`, `ai-workflow-automation`, `performance-hunter`
## When to Use
- User mentions or implies: bullmq
- User mentions or implies: bull queue
- User mentions or implies: redis queue
- User mentions or implies: background job
- User mentions or implies: job queue
- User mentions or implies: delayed job
- User mentions or implies: repeatable job
- User mentions or implies: worker process
- User mentions or implies: job scheduling
- User mentions or implies: async processing