Long queues slow RabbitMQ down. When millions of messages pile up, the broker consumes more memory, paging kicks in, and throughput drops. The goal is for consumers to process messages roughly as fast as producers publish them.
If queues are growing consistently, consumers are falling behind: scale out consumers, speed up processing, or throttle producers until the backlog clears.
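One lightweight way to spot a growing backlog from application code is to poll queue stats. A minimal sketch, assuming the stats shape returned by amqplib's `channel.checkQueue(name)`; the queue name and 10,000-message threshold are illustrative assumptions:

```javascript
// Flag a queue that is falling behind, using the stats object returned
// by amqplib's channel.checkQueue(name). The threshold is an arbitrary
// assumption; tune it for your workload.
function isFallingBehind(stats, maxBacklog = 10000) {
  // No consumers at all, or a backlog past the threshold, means
  // producers are outpacing consumption.
  return stats.consumerCount === 0 || stats.messageCount > maxBacklog;
}

// Usage against a live channel:
// const stats = await channel.checkQueue("orders");
// if (isFallingBehind(stats)) { /* alert, or scale out consumers */ }
```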
By default, queues and messages do not survive a broker restart. In production, always declare queues as durable and publish messages with persistent delivery mode:
```javascript
await channel.assertQueue("orders", { durable: true });

channel.sendToQueue("orders", Buffer.from("order data"), {
  persistent: true,
});
```

Durable queue definitions survive a broker restart, and persistent messages are written to disk. Together, they protect you against data loss during crashes or restarts.
Without a prefetch limit, RabbitMQ pushes messages to consumers as fast as it can. This can overwhelm slow consumers, cause uneven load distribution, and spike memory usage.
Set a prefetch count to limit how many unacknowledged messages a consumer holds at once:
```javascript
await channel.prefetch(10);

channel.consume("orders", (msg) => {
  // process message
  channel.ack(msg);
});
```

A good starting point is a prefetch of 10–50 for most workloads. Tune it based on your processing time: shorter processing = higher prefetch, longer processing = lower prefetch.
Never use noAck: true in production unless you can afford to lose messages. With automatic acknowledgment, messages are removed from the queue the moment they're delivered, before your consumer finishes processing them. If the consumer crashes mid-processing, the message is gone.
Use manual acknowledgments instead:
```javascript
channel.consume("orders", (msg) => {
  try {
    processOrder(msg);
    channel.ack(msg);
  } catch (err) {
    // Requeue on the first failure; on redelivery, reject without
    // requeueing so a poison message dead-letters instead of looping.
    channel.nack(msg, false, !msg.fields.redelivered);
  }
});
```

This ensures messages are only removed from the queue after successful processing. See our dedicated article on RabbitMQ message acknowledgments for more detail.
Messages will inevitably fail. Instead of losing them or letting them loop forever, route failed messages to a dead-letter queue where you can inspect, debug, and retry them later.
```javascript
await channel.assertExchange("dlx", "direct");
await channel.assertQueue("orders.dlq");
await channel.bindQueue("orders.dlq", "dlx", "orders");

await channel.assertQueue("orders", {
  durable: true,
  deadLetterExchange: "dlx",
  deadLetterRoutingKey: "orders",
});
```

Every production queue should have a dead-letter strategy. It's the difference between losing failed messages silently and being able to investigate and recover from failures.
Message TTL (time-to-live) prevents stale messages from sitting in queues indefinitely. You can set TTL per-queue or per-message:
```javascript
// Per-queue TTL: all messages expire after 60 seconds
await channel.assertQueue("events", {
  messageTtl: 60000,
});

// Per-message TTL
channel.sendToQueue("events", Buffer.from("data"), {
  expiration: "30000",
});
```

Expired messages are either discarded or routed to a dead-letter exchange if one is configured. TTL is especially useful for time-sensitive data like notifications or session tokens.
Classic mirrored queues were deprecated in RabbitMQ 3.x and removed entirely in RabbitMQ 4.0. For high availability, use quorum queues, which replicate data across multiple nodes using the Raft consensus algorithm:
```javascript
await channel.assertQueue("critical_orders", {
  durable: true,
  arguments: { "x-queue-type": "quorum" },
});
```

Quorum queues are designed for data safety and consistency. Use them for any queue where losing messages is unacceptable.
Opening a new TCP connection for every operation is expensive. Instead, open one connection per application and create multiple channels on it for concurrent work:
```javascript
const amqp = require("amqplib");

const connection = await amqp.connect("amqp://localhost:5672");
const publishChannel = await connection.createChannel();
const consumeChannel = await connection.createChannel();
```

Channels are lightweight and multiplexed over a single connection. However, avoid sharing a single channel across threads or concurrent operations, as channels are not thread-safe.
Avoid auto-generated queue names in production. Use clear, consistent naming conventions that reflect purpose:
- `orders.created` instead of `amq.gen-xyz`
- `events.dlx` for dead-letter exchanges
- `notifications.email` for specific consumers

Good naming makes it easier to monitor, debug, and reason about your messaging topology. Tools like RabbitGUI make it even easier to navigate well-named resources.
If you expect queues to accumulate large numbers of messages (e.g., batch processing or periodic consumers), use lazy queues. Lazy queues store messages on disk instead of memory, reducing RAM pressure:
```javascript
await channel.assertQueue("batch_imports", {
  durable: true,
  arguments: { "x-queue-mode": "lazy" },
});
```

The tradeoff is slightly higher latency for consumers, but the memory savings are significant for queues that routinely hold millions of messages. Note that since RabbitMQ 3.12, classic queues behave this way by default and the `x-queue-mode` argument is ignored, so this setting only matters on older brokers.
A healthy RabbitMQ deployment requires active monitoring of key metrics: queue depth, publish and delivery rates, unacknowledged message counts, consumer counts, and node memory and disk usage.
The RabbitMQ monitoring API exposes these metrics. Pair it with a visual tool like RabbitGUI to get a real-time view of your broker's health.
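As a sketch of what polling that API can look like (assuming the management plugin is enabled on its default port, with illustrative credentials and an arbitrary backlog threshold), you can query `/api/queues` and flag queues with a large backlog:

```javascript
// Poll the RabbitMQ management HTTP API for queue stats.
// Host, credentials, and the 10000-message threshold are assumptions
// for a local broker; adjust for your deployment.
async function findBackloggedQueues(
  baseUrl = "http://localhost:15672",
  auth = "guest:guest",
  threshold = 10000
) {
  const res = await fetch(`${baseUrl}/api/queues`, {
    headers: {
      Authorization: "Basic " + Buffer.from(auth).toString("base64"),
    },
  });
  const queues = await res.json();
  // Keep only queues whose backlog exceeds the threshold.
  return queues
    .filter((q) => q.messages > threshold)
    .map((q) => ({
      name: q.name,
      messages: q.messages,
      unacked: q.messages_unacknowledged,
    }));
}
```

Running this on a schedule and alerting on the result catches growing backlogs long before they become memory pressure on the broker.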
A common anti-pattern is creating many queues and publishing directly to each one with sendToQueue. This tightly couples producers to consumers and makes the topology rigid.
Instead, publish to exchanges and let bindings handle routing. This gives you the flexibility to add, remove, or reroute consumers without changing producer code. Read more about exchange types and routing strategies.
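A minimal sketch of this pattern, written so the setup takes any amqplib-style channel; the exchange, queue, and routing-key names here are illustrative, not prescriptive:

```javascript
// Producers publish to a topic exchange; consumers own their queues
// and bindings, so the two sides evolve independently.
async function setupRouting(channel) {
  // Producer side: declare the exchange and publish an event to it.
  await channel.assertExchange("orders.events", "topic", { durable: true });
  channel.publish(
    "orders.events",
    "order.created",
    Buffer.from(JSON.stringify({ id: 42 })),
    { persistent: true }
  );

  // Consumer side: each consumer declares its own queue and binding.
  // Adding or removing a consumer never touches producer code.
  await channel.assertQueue("billing", { durable: true });
  await channel.bindQueue("billing", "orders.events", "order.*");
}
```

To add a second consumer, say an audit service, you declare one more queue and binding; the producer keeps publishing to `orders.events` unchanged.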
| Practice | Why it matters |
|---|---|
| Keep queues short | Prevents memory pressure and throughput degradation |
| Durable queues + persistent messages | Protects against data loss on restart |
| Prefetch limits | Balances load and prevents consumer overload |
| Manual acknowledgments | Ensures messages survive consumer crashes |
| Dead-letter queues | Captures failed messages for inspection and retry |
| Message TTL | Prevents stale messages from accumulating |
| Quorum queues | High availability with Raft-based replication |
| Reuse connections | Avoids expensive TCP connection overhead |
| Explicit naming | Improves observability and debugging |
| Lazy queues | Reduces memory usage for large backlogs |
| Monitor metrics | Catches issues before they become outages |
| Route via exchanges | Decouples producers from consumers |