An event bus is a messaging layer that lets services communicate through events without knowing about each other. A service publishes an event, and any number of other services can react to it, or none at all. The publisher doesn't care.
This decoupling is what makes event-driven architectures scale. The order service doesn't need to know that the mailer service sends a shipping confirmation, or that the analytics service tracks every order. It just says "an order was created" and moves on.
RabbitMQ is a natural fit for this pattern. A single topic exchange acts as the event bus: every service publishes events to it, and every service that cares creates a queue with the right bindings to receive the events it needs.
The entire event bus is a single topic exchange. Every publisher in your system publishes to this one exchange; there's no need to create one exchange per service or per event type.
```javascript
await channel.assertExchange("events", "topic", { durable: true });
```

That's it. The exchange is durable, so it survives broker restarts. All services publish to `events`, and the exchange handles routing based on the message's routing key.
The topic exchange type is the right choice here because it supports wildcard pattern matching on routing keys. This lets consumers subscribe to broad categories of events (v1.orders.#) or very specific ones (v1.orders.order.shipped) without any changes on the publisher side.
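The matching rules are worth internalizing: `*` matches exactly one dot-separated word, while `#` matches zero or more words. The sketch below illustrates those semantics as a pure function (this is for illustration only; the broker does this matching internally, and the `topicMatches` name is an assumption):

```javascript
// Sketch of RabbitMQ topic-matching semantics as a pure function.
// "*" matches exactly one dot-separated word; "#" matches zero or more words.
function topicMatches(routingKey, pattern) {
  const words = routingKey.split(".");
  const parts = pattern.split(".");

  // match(i, j): can words[i..] be matched by parts[j..]?
  function match(i, j) {
    if (j === parts.length) return i === words.length;
    if (parts[j] === "#") {
      // "#" can swallow zero or more words
      for (let k = i; k <= words.length; k++) {
        if (match(k, j + 1)) return true;
      }
      return false;
    }
    if (i === words.length) return false;
    if (parts[j] === "*" || parts[j] === words[i]) return match(i + 1, j + 1);
    return false;
  }

  return match(0, 0);
}

topicMatches("v1.orders.order.created", "v1.orders.#");   // true
topicMatches("v1.orders.order.created", "v1.*.order.*");  // true
topicMatches("v1.users.user.registered", "v1.orders.#");  // false
```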
Producers can already publish events to the event bus. Since there are no consumers yet, RabbitMQ will simply drop those messages.
A good routing key convention is critical. It determines how flexibly consumers can filter events. A flat key like order_created gives you no room to subscribe to subsets of events.
The recommended format is:
v1.<publisher>.<entity>.<action>
| Segment | Purpose | Example |
|---|---|---|
| `v1` | Schema version — lets you evolve the format without breaking consumers | `v1`, `v2` |
| `<publisher>` | The service that produced the event | `orders`, `users`, `payments` |
| `<entity>` | The domain object the event is about | `order`, `user`, `payment` |
| `<action>` | What happened | `created`, `shipped`, `deleted` |
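A convention like this is easiest to keep consistent when publishers build keys through one helper instead of concatenating strings by hand. Here is a minimal sketch (the `routingKey` helper is a hypothetical name, not part of amqplib):

```javascript
// Hypothetical helper that builds routing keys following the
// v1.<publisher>.<entity>.<action> convention, so publishers can't drift
// from the format or accidentally embed extra dots.
function routingKey({ version = "v1", publisher, entity, action }) {
  for (const part of [publisher, entity, action]) {
    if (!part || part.includes(".")) throw new Error(`invalid segment: ${part}`);
  }
  return [version, publisher, entity, action].join(".");
}

routingKey({ publisher: "orders", entity: "order", action: "created" });
// → "v1.orders.order.created"
```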
Real examples:
- `v1.orders.order.created`
- `v1.orders.order.shipped`
- `v1.users.user.registered`
- `v1.payments.payment.completed`

This structure lets consumers bind with patterns like:
- `v1.orders.#` — all events from the orders service
- `v1.*.order.*` — all order events regardless of publisher
- `v1.orders.order.created` — one specific event

The version prefix is worth including from day one. When you need to change an event's payload shape, you publish both `v1.orders.order.created` and `v2.orders.order.created` during the migration period, and consumers upgrade on their own schedule.
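That dual publish during a migration window can be sketched with a small helper. The `publishBothVersions` name and the payload shapes are assumptions, and the stub channel stands in for a real amqplib channel so the example is self-contained:

```javascript
// Hypothetical helper for a migration window: publish the same event under
// both the v1 and v2 routing keys so old and new consumers keep working.
function publishBothVersions(channel, baseKey, v1Payload, v2Payload) {
  // baseKey is the key without the version prefix, e.g. "orders.order.created"
  for (const [version, payload] of [["v1", v1Payload], ["v2", v2Payload]]) {
    channel.publish(
      "events",
      `${version}.${baseKey}`,
      Buffer.from(JSON.stringify(payload)),
      { persistent: true },
    );
  }
}

// Stub channel that records publishes, standing in for an amqplib channel.
const published = [];
const channel = {
  publish: (ex, key, buf) => published.push({ ex, key, body: JSON.parse(buf.toString()) }),
};

// Example: suppose v2 splits the flat total into an amount/currency object.
publishBothVersions(channel, "orders.order.created",
  { orderId: "abc-123", total: 99.0 },
  { orderId: "abc-123", total: { amount: 99.0, currency: "USD" } },
);
// published now holds one message under "v1.orders.order.created"
// and one under "v2.orders.order.created"
```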
Here's the key insight: publishers just fire events into the exchange. If no queue is bound to receive a particular event, the message is silently dropped. This is by design.
In the diagram above, two services are publishing events but no queues exist yet. Every message reaches the exchange and gets dropped. The publishers don't fail, don't block, and don't even know. This is exactly what you want during development — services can start publishing events before any consumer is ready to handle them.
```javascript
channel.publish(
  "events",
  "v1.orders.order.created",
  Buffer.from(JSON.stringify({ orderId: "abc-123", total: 99.00 })),
  { persistent: true },
);
```

Always set `persistent: true` so messages are written to disk. Combined with durable queues, this ensures events aren't lost if RabbitMQ restarts.
The consumer side is where the topology takes shape. Each service that wants to react to events creates a durable queue and binds it to the exchange with the routing patterns it cares about.
```javascript
await channel.assertQueue("v1.mailer.welcome", { durable: true });
await channel.bindQueue("v1.mailer.welcome", "events", "v1.users.user.registered");

await channel.assertQueue("v1.mailer.shipped", { durable: true });
await channel.bindQueue("v1.mailer.shipped", "events", "v1.orders.order.shipped");
```

Queues must be durable. A non-durable queue disappears when RabbitMQ restarts, and every event published while it was gone is lost forever. With a durable queue and persistent messages, RabbitMQ guarantees that events survive restarts.
Queue names should follow a convention too. The recommended format is:
v1.<service>.<purpose>
Where <service> identifies the consuming service and <purpose> describes what this specific consumer does. This maps naturally to the consumer's code:
| Queue name | Service | Purpose |
|---|---|---|
| `v1.mailer.welcome` | mailer | Send welcome email on registration |
| `v1.mailer.shipped` | mailer | Send shipping notification |
| `v1.stock.reserve` | stock | Reserve inventory on new order |
| `v1.stats.orders` | stats | Track all order-related events |
A single service can have multiple queues if it handles different events in different ways. The mailer service above has two queues because welcome emails and shipping notifications are independent workflows that should be processed and scaled separately.
The version prefix in queue names serves the same purpose as in routing keys: when you deploy v2 consumers alongside v1 consumers during a migration, queue names don't collide.
Here's a realistic setup with three publishers and four consumer queues. Watch how events flow through the exchange:
A few things to notice:
- `v1.orders.order.created` goes to both `v1.stock.reserve` (exact match) and `v1.stats.orders` (wildcard `v1.orders.#`). One event, two independent consumers.
- `v1.orders.order.shipped` goes to both `v1.mailer.shipped` and `v1.stats.orders`. The mailer sends a notification while stats tracks the event.
- `v1.users.user.registered` only goes to `v1.mailer.welcome`. No other queue cares about this event yet.
- `v1.users.user.deleted` gets dropped — no queue has a binding that matches. That's fine. When a service eventually needs to react to user deletions, it creates a queue and binding without touching the publisher.
- `v1.payments.payment.completed` also gets dropped. The payments service is already publishing events, ready for when a consumer needs them.

Here's the complete setup code for the topology above:
```javascript
import amqplib from "amqplib";

const conn = await amqplib.connect("amqp://localhost");
const ch = await conn.createChannel();

await ch.assertExchange("events", "topic", { durable: true });

await ch.assertQueue("v1.mailer.welcome", { durable: true });
await ch.bindQueue("v1.mailer.welcome", "events", "v1.users.user.registered");

await ch.assertQueue("v1.mailer.shipped", { durable: true });
await ch.bindQueue("v1.mailer.shipped", "events", "v1.orders.order.shipped");

await ch.assertQueue("v1.stock.reserve", { durable: true });
await ch.bindQueue("v1.stock.reserve", "events", "v1.orders.order.created");

await ch.assertQueue("v1.stats.orders", { durable: true });
await ch.bindQueue("v1.stats.orders", "events", "v1.orders.#");
```

In practice, each service only declares the exchange and its own queues. RabbitMQ's assert operations are idempotent — if the exchange or queue already exists with the same configuration, the call is a no-op. This means every service can safely assert the exchange on startup without coordination.
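That per-service startup routine can be captured in one helper each service runs on boot. A minimal sketch (the `setupTopology` name is an assumption, and the stub channel stands in for a real amqplib channel to show which calls are made):

```javascript
// Sketch of a per-service startup helper: every service asserts the shared
// exchange plus its own queues on boot. Because the assert operations are
// idempotent, running this on every restart is safe without coordination.
async function setupTopology(channel, serviceQueues) {
  await channel.assertExchange("events", "topic", { durable: true });
  for (const { queue, bindings } of serviceQueues) {
    await channel.assertQueue(queue, { durable: true });
    for (const pattern of bindings) {
      await channel.bindQueue(queue, "events", pattern);
    }
  }
}

// Stub channel that records calls, standing in for an amqplib channel.
const calls = [];
const channel = {
  assertExchange: async (...a) => calls.push(["assertExchange", ...a]),
  assertQueue: async (...a) => calls.push(["assertQueue", ...a]),
  bindQueue: async (...a) => calls.push(["bindQueue", ...a]),
};

// The mailer service would declare only its own two queues on startup.
await setupTopology(channel, [
  { queue: "v1.mailer.welcome", bindings: ["v1.users.user.registered"] },
  { queue: "v1.mailer.shipped", bindings: ["v1.orders.order.shipped"] },
]);
```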
```javascript
function publishEvent(channel, routingKey, payload) {
  channel.publish(
    "events",
    routingKey,
    Buffer.from(JSON.stringify(payload)),
    { persistent: true },
  );
}

publishEvent(ch, "v1.orders.order.created", {
  orderId: "abc-123",
  userId: "user-456",
  total: 99.00,
});
```

And on the consumer side:

```javascript
ch.consume("v1.stock.reserve", (msg) => {
  if (!msg) return;
  const event = JSON.parse(msg.content.toString());
  reserveStock(event.orderId);
  ch.ack(msg);
});
```

Always acknowledge messages explicitly. If your consumer crashes before acking, RabbitMQ redelivers the message to another consumer (or the same one after restart). This is how you get at-least-once delivery guarantees.
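A common refinement is to ack only after the handler succeeds and nack on failure. The sketch below shows this with a stubbed channel; the `safeConsumer` name and the requeue policy are assumptions (with `requeue: false`, a failed message goes to the queue's dead-letter exchange if one is configured, rather than redelivering in a tight loop):

```javascript
// Sketch: wrap a handler so the message is acked on success and nacked on
// failure. nack(msg, false, false) rejects just this message without requeue,
// which routes it to a dead-letter exchange if the queue has one configured.
function safeConsumer(channel, handler) {
  return async (msg) => {
    if (!msg) return;
    try {
      await handler(JSON.parse(msg.content.toString()));
      channel.ack(msg);
    } catch (err) {
      channel.nack(msg, false, false);
    }
  };
}

// Stub channel and message to show the flow without a broker.
const outcomes = [];
const channel = { ack: () => outcomes.push("ack"), nack: () => outcomes.push("nack") };
const msg = { content: Buffer.from(JSON.stringify({ orderId: "abc-123" })) };

await safeConsumer(channel, async () => { /* handler succeeds */ })(msg);
await safeConsumer(channel, async () => { throw new Error("db down"); })(msg);
// outcomes is now ["ack", "nack"]
```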
Building an event bus with RabbitMQ comes down to a few decisions:
- A single durable topic exchange named `events`
- A routing key convention of `v1.<publisher>.<entity>.<action>` that gives consumers maximum flexibility

Publishers and consumers evolve independently. A new service subscribes to existing events by creating a queue and binding — no changes to publishers, no deployments, no coordination. Events that nobody listens to are silently dropped until someone needs them.