When message processing fails, the most tempting approach is to requeue the message with nack:

```javascript
channel.consume("orders", (msg) => {
  try {
    processOrder(msg);
    channel.ack(msg);
  } catch (err) {
    channel.nack(msg, false, true); // requeue = true
  }
});
```

This puts the message back at the front of the queue, and it gets redelivered almost instantly. If the failure is persistent (bad payload, downstream service down, bug in processing logic), the message fails again immediately, creating a tight infinite loop that pins the consumer at full CPU, floods your logs, and starves other messages in the queue.
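As a minimal stopgap before building a full retry pipeline, you can reject without requeueing, so a persistently failing message is dropped (or dead-lettered, if the queue declares a dead-letter exchange) instead of looping. This is a sketch, not part of the setup below; `processOrder` and `makeConsumer` are placeholder names:

```javascript
// Sketch: wrap the handler so failures are rejected without requeue.
// `processOrder` is a placeholder for your own processing logic.
function makeConsumer(channel, processOrder) {
  return (msg) => {
    try {
      processOrder(msg);
      channel.ack(msg);
    } catch (err) {
      // requeue = false: the message is dropped, or routed to the queue's
      // dead-letter exchange if one is configured -- no tight redelivery loop
      channel.nack(msg, false, false);
    }
  };
}

// channel.consume("orders", makeConsumer(channel, processOrder));
```

This loses the message unless a dead-letter exchange catches it, which is exactly why the patterns below exist.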
You need a retry strategy that limits attempts and adds delay between retries.
The most common retry pattern in RabbitMQ uses a pair of queues: a work queue and a retry queue. Failed messages are sent to the retry queue, which has a TTL. When the TTL expires, the message is dead-lettered back to the work queue for another attempt.

```javascript
import amqp from "amqplib";

const connection = await amqp.connect("amqp://localhost:5672");
const channel = await connection.createChannel();

// Work exchange and queue
await channel.assertExchange("work_exchange", "direct");
await channel.assertQueue("work_queue", {
  durable: true,
  deadLetterExchange: "retry_exchange",
  deadLetterRoutingKey: "retry",
});
await channel.bindQueue("work_queue", "work_exchange", "work");

// Retry exchange and queue (messages wait here before being retried)
await channel.assertExchange("retry_exchange", "direct");
await channel.assertQueue("retry_queue", {
  durable: true,
  deadLetterExchange: "work_exchange",
  deadLetterRoutingKey: "work",
  messageTtl: 5000,
});
await channel.bindQueue("retry_queue", "retry_exchange", "retry");

// Dead-letter queue for messages that exceeded max retries
await channel.assertExchange("dlx_exchange", "direct");
await channel.assertQueue("dead_letter_queue", { durable: true });
await channel.bindQueue("dead_letter_queue", "dlx_exchange", "dead");
```

RabbitMQ doesn't track retry counts natively, so use a custom header to count attempts:
```javascript
const MAX_RETRIES = 3;

await channel.prefetch(10);

channel.consume("work_queue", (msg) => {
  const headers = msg.properties.headers || {};
  const retryCount = headers["x-retry-count"] || 0;
  try {
    processOrder(msg);
    channel.ack(msg);
  } catch (err) {
    channel.ack(msg);
    if (retryCount >= MAX_RETRIES) {
      // Send to dead-letter queue
      channel.publish("dlx_exchange", "dead", msg.content, {
        headers: { ...headers, "x-retry-count": retryCount, "x-error": err.message },
      });
    } else {
      // Send to retry queue with incremented count
      channel.publish("retry_exchange", "retry", msg.content, {
        headers: { ...headers, "x-retry-count": retryCount + 1 },
      });
    }
  }
});
```

Notice that we ack the original message and then republish it to either the retry queue or the dead-letter queue. This gives us full control over the headers and routing, rather than relying on nack, which doesn't let us modify the message.
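For completeness, here's one way a producer might enqueue work into this topology. The payload shape and the `buildOrderMessage` helper are illustrative assumptions; only the exchange, routing key, and header name come from the setup above:

```javascript
// Hypothetical helper: serialize an order and attach the initial retry header.
function buildOrderMessage(order) {
  return {
    content: Buffer.from(JSON.stringify(order)),
    options: {
      persistent: true, // survive broker restarts, matching the durable queues
      headers: { "x-retry-count": 0 },
    },
  };
}

const { content, options } = buildOrderMessage({ orderId: "1234" });
// channel.publish("work_exchange", "work", content, options);
```

Starting the count at 0 matches the consumer, which defaults `x-retry-count` to 0 when the header is missing.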
A fixed 5-second delay isn't always ideal. If a downstream service is down, you want to wait progressively longer between retries to give it time to recover. This is exponential backoff.
Create separate retry queues for each delay tier:
```javascript
const delays = [5000, 15000, 60000]; // 5s, 15s, 60s

for (const delay of delays) {
  await channel.assertQueue(`retry_${delay}ms`, {
    durable: true,
    deadLetterExchange: "work_exchange",
    deadLetterRoutingKey: "work",
    messageTtl: delay,
  });
  await channel.bindQueue(`retry_${delay}ms`, "retry_exchange", `retry_${delay}`);
}
```

Then route each failure to the tier that matches its retry count:

```javascript
const DELAYS = [5000, 15000, 60000];
const MAX_RETRIES = DELAYS.length;

channel.consume("work_queue", (msg) => {
  const headers = msg.properties.headers || {};
  const retryCount = headers["x-retry-count"] || 0;
  try {
    processOrder(msg);
    channel.ack(msg);
  } catch (err) {
    channel.ack(msg);
    if (retryCount >= MAX_RETRIES) {
      channel.publish("dlx_exchange", "dead", msg.content, {
        headers: { ...headers, "x-retry-count": retryCount, "x-error": err.message },
      });
    } else {
      const delay = DELAYS[retryCount];
      channel.publish("retry_exchange", `retry_${delay}`, msg.content, {
        headers: { ...headers, "x-retry-count": retryCount + 1 },
      });
    }
  }
});
```

The first failure retries after 5 seconds, the second after 15 seconds, and the third after 60 seconds. After that, the message goes to the dead-letter queue.
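If you'd rather derive the tiers than hardcode them, a small helper can generate a geometric progression. This is a sketch — the function name and parameters are illustrative, and with base 5s and factor 3 the last tier comes out to 45s rather than the 60s used above:

```javascript
// Sketch: compute exponential backoff tiers instead of hardcoding DELAYS.
function backoffDelays({ base = 5000, factor = 3, attempts = 3 } = {}) {
  return Array.from({ length: attempts }, (_, i) => base * factor ** i);
}
```

`backoffDelays()` yields `[5000, 15000, 45000]`. Adding random jitter to each tier is a common refinement to keep many failing messages from retrying in lockstep.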
If you have the delayed message exchange plugin installed, you can implement exponential backoff with a single exchange instead of multiple queues:
```javascript
await channel.assertExchange("retry_delayed", "x-delayed-message", {
  arguments: { "x-delayed-type": "direct" },
});
await channel.bindQueue("work_queue", "retry_delayed", "work");
```

Then in your consumer, set the delay dynamically per message:
```javascript
const DELAYS = [5000, 15000, 60000];

channel.consume("work_queue", (msg) => {
  const headers = msg.properties.headers || {};
  const retryCount = headers["x-retry-count"] || 0;
  try {
    processOrder(msg);
    channel.ack(msg);
  } catch (err) {
    channel.ack(msg);
    if (retryCount >= DELAYS.length) {
      channel.publish("dlx_exchange", "dead", msg.content, {
        headers: { ...headers, "x-retry-count": retryCount, "x-error": err.message },
      });
    } else {
      channel.publish("retry_delayed", "work", msg.content, {
        headers: {
          ...headers,
          "x-retry-count": retryCount + 1,
          "x-delay": DELAYS[retryCount],
        },
      });
    }
  }
});
```

This is the cleanest approach but requires plugin installation. See our delayed messages guide for setup instructions.
| Approach | Pros | Cons |
|---|---|---|
| Single retry queue (fixed delay) | Simple setup, no plugins | Fixed delay only, no backoff |
| Multiple retry queues (backoff) | Exponential backoff, no plugins | More queues to manage |
| Delayed exchange plugin | Cleanest code, flexible delays | Requires plugin, not cluster-safe |
For most production systems, multiple retry queues with exponential backoff is the sweet spot. It works on any RabbitMQ installation, supports backoff, and scales with standard queue features like quorum replication.
After exhausting retries, messages end up in your dead-letter queue. Use RabbitGUI to inspect these messages, check their x-retry-count and x-error headers, and decide whether to republish them or discard them.
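A redrive step often follows that inspection. As a sketch (the `redrive` helper and the `shouldReplay` predicate are assumptions, not part of the setup above), you can consume from the dead-letter queue and republish selected messages onto the work exchange with their retry count reset:

```javascript
// Sketch: replay a dead-lettered message, or discard it, based on a predicate.
function redrive(channel, msg, shouldReplay) {
  const headers = msg.properties.headers || {};
  if (shouldReplay(headers)) {
    channel.publish("work_exchange", "work", msg.content, {
      headers: { ...headers, "x-retry-count": 0 }, // fresh retry budget
    });
  }
  channel.ack(msg); // remove from the dead-letter queue either way
}

// Replay only messages that failed on a (hypothetical) transient error:
// channel.consume("dead_letter_queue", (msg) =>
//   redrive(channel, msg, (h) => h["x-error"] === "ECONNREFUSED"));
```

Resetting `x-retry-count` gives replayed messages the full retry budget again, so make sure the underlying cause is actually fixed before running a redrive.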
