How Pub/Sub Works — Decoupling Publishers from Subscribers

2026-03-24

A user clicks "buy." The order service needs to tell the inventory service, the email service, the analytics service, and the fraud detection service. If the order service calls each one directly, it has four dependencies. Adding a fifth consumer means changing the order service. This is tight coupling.

Pub/sub (publish-subscribe) eliminates this coupling. The order service publishes a message to a topic. Every service that subscribes to that topic receives the message. The publisher does not know how many subscribers there are, who they are, or what they do with the message.

The Core Concepts

Publisher — a component that sends messages. It publishes to a topic, not to a specific recipient. The order service publishes OrderPlaced to the orders topic.

Subscriber — a component that receives messages from a topic. The inventory service subscribes to the orders topic. So does the email service. Each receives every message published to that topic.

Topic (or channel) — a named category of messages. orders, payments, user-events. Publishers write to topics. Subscribers read from topics. The topic is the contract between them.

Message — the data published to a topic. Typically a serialized payload (JSON, Protobuf, Avro) containing the event data and metadata (timestamp, source, correlation ID).
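The four concepts above fit in a few lines of in-process Python. This is a conceptual sketch, not any real broker's API; the `Broker` class and the message fields are illustrative:

```python
import json
import time
from collections import defaultdict

class Broker:
    """Minimal in-process pub/sub: a topic is a name mapped to subscriber callbacks."""
    def __init__(self):
        self.topics = defaultdict(list)

    def subscribe(self, topic, callback):
        self.topics[topic].append(callback)

    def publish(self, topic, event):
        # Serialize a payload plus metadata, as a real broker message would carry.
        message = json.dumps({
            "event": event,
            "timestamp": time.time(),
            "source": "order-service",
        })
        # Fan-out: every subscriber to the topic gets its own copy.
        for callback in self.topics[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("orders", lambda m: received.append(("inventory", m)))
broker.subscribe("orders", lambda m: received.append(("email", m)))
broker.publish("orders", {"type": "OrderPlaced", "order_id": 42})
# received now holds two copies of the same OrderPlaced message.
```

Note that the publisher's call site never names a subscriber; adding a third subscriber is one more `subscribe` call, with no change to the publishing code.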

Fan-Out

The defining characteristic of pub/sub is fan-out: one message reaches many subscribers. When the order service publishes one OrderPlaced message, every subscriber to the orders topic gets a copy.

This is fundamentally different from a point-to-point queue where each message is consumed by exactly one worker. In pub/sub, the message is broadcast. In a queue, the message is distributed.

| | Pub/Sub | Point-to-Point Queue |
| --- | --- | --- |
| Delivery | Every subscriber gets every message | Each message goes to one consumer |
| Use case | Notifications, broadcasting, fan-out | Task distribution, load balancing |
| Coupling | Publisher knows nothing about subscribers | Producer knows the queue |
| Scaling consumers | More subscribers = more copies | More workers = faster processing |
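The delivery row of the table can be demonstrated directly: broadcasting copies each message to every subscriber, while a queue hands each message to exactly one of its competing workers. A sketch under simple in-process assumptions (round-robin stands in for "whichever worker is free"):

```python
import itertools
from collections import deque

# Pub/sub: every subscriber inbox gets its own copy of each message.
subscribers = {"inventory": [], "email": [], "analytics": []}
def broadcast(message):
    for inbox in subscribers.values():
        inbox.append(message)

# Point-to-point: workers compete for messages; each message is consumed once.
workers = {"worker-a": [], "worker-b": []}
rr = itertools.cycle(workers)
def distribute(message):
    workers[next(rr)].append(message)

for i in range(4):
    broadcast(f"event-{i}")
    distribute(f"task-{i}")

total_deliveries = sum(len(inbox) for inbox in subscribers.values())  # 12: 4 x 3 copies
total_processed = sum(len(done) for done in workers.values())         # 4: one each
```

Four published events produce twelve deliveries under broadcast but only four under distribution, which is why adding subscribers multiplies copies while adding workers speeds up processing.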

How Subscriptions Work

The mechanics differ between implementations, but the concept is consistent:

Ephemeral subscriptions — the subscriber connects and receives messages while connected. If it disconnects, it misses messages published during the gap. Redis pub/sub works this way. Good for real-time notifications where missing a message is acceptable.

Durable subscriptions — the broker tracks where each subscriber left off. If a subscriber disconnects and reconnects, it picks up where it stopped. Kafka consumer groups, Google Cloud Pub/Sub, and Amazon SNS+SQS work this way. Required for reliable event processing.
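The ephemeral/durable distinction comes down to whether the broker remembers a position for each subscriber. A conceptual sketch, with the log and offsets held in memory (real brokers such as Kafka persist both):

```python
class DurableTopic:
    """Broker-side message log plus a per-subscriber read offset."""
    def __init__(self):
        self.log = []       # every message ever published, in order
        self.offsets = {}   # subscriber name -> index of next unread message

    def publish(self, message):
        self.log.append(message)

    def poll(self, subscriber):
        """Return everything since this subscriber's last poll, then advance its offset."""
        start = self.offsets.get(subscriber, 0)
        self.offsets[subscriber] = len(self.log)
        return self.log[start:]

topic = DurableTopic()
topic.publish("order-1")
first = topic.poll("inventory")    # ["order-1"]

# The subscriber "disconnects"; messages keep arriving in the meantime.
topic.publish("order-2")
topic.publish("order-3")

# On reconnect it resumes from its stored offset -- nothing is missed.
resumed = topic.poll("inventory")  # ["order-2", "order-3"]
```

An ephemeral subscription is what you get by deleting `self.offsets`: a reconnecting subscriber starts from "now" and the gap is simply lost.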

Filtered subscriptions — some brokers allow subscribers to filter messages within a topic. RabbitMQ's topic exchange lets subscribers match on routing keys (e.g., orders.eu.* matches European orders only). This avoids processing irrelevant messages.
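Routing-key matching of the kind RabbitMQ's topic exchange performs can be sketched as word-by-word comparison on dot-separated keys, where `*` matches exactly one word and `#` matches zero or more. The `matches` function below is an illustration of those semantics, not RabbitMQ's implementation:

```python
def matches(pattern, key):
    """RabbitMQ-style topic match: '*' = exactly one word, '#' = zero or more words."""
    return _match(pattern.split("."), key.split("."))

def _match(pat, words):
    if not pat:
        return not words               # pattern exhausted: match only if key is too
    head, rest = pat[0], pat[1:]
    if head == "#":
        # '#' can absorb any number of words, including none: try every split.
        return any(_match(rest, words[i:]) for i in range(len(words) + 1))
    if not words:
        return False
    if head == "*" or head == words[0]:
        return _match(rest, words[1:])  # consume one word and continue
    return False

assert matches("orders.eu.*", "orders.eu.fr")       # European order: delivered
assert not matches("orders.eu.*", "orders.us.ca")   # US order: filtered out
```

With this filter in place, a subscriber bound to `orders.eu.*` never sees non-European messages at all, instead of receiving and discarding them.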

[Diagram: Pub/Sub — one message, many subscribers. The order service publishes OrderPlaced to the orders topic, which fans the message out to the inventory, email, and analytics subscribers. The publisher does not know who subscribes.]

Implementations

Apache Kafka — topics are partitioned, ordered logs. Consumers read from partitions using offsets. Consumer groups enable both fan-out (different groups each get all messages) and load balancing (consumers within a group split partitions). Messages are retained even after consumption. The de facto standard for high-throughput event streaming.

Redis Pub/Sub — simple, in-memory pub/sub. Messages are fire-and-forget — if no subscriber is listening, the message is lost. Extremely fast, zero durability. Good for real-time features where missed messages are tolerable. Redis Streams adds durability and consumer groups.

Google Cloud Pub/Sub — fully managed. Messages are stored until acknowledged. Supports push (HTTP webhook) and pull delivery. Dead letter topics for failed messages. Global by default.

Amazon SNS — a pub/sub service that fans out to multiple subscribers. Subscribers can be SQS queues, Lambda functions, HTTP endpoints, or email addresses. Commonly paired with SQS: SNS handles fan-out, SQS provides durable queues for each consumer.

RabbitMQ — supports pub/sub through exchanges. A fanout exchange broadcasts to all bound queues. A topic exchange routes based on pattern-matched routing keys. More routing flexibility than Kafka, lower throughput.

Pub/Sub vs Point-to-Point

The distinction matters. In pub/sub, adding a subscriber is a configuration change — the publisher is untouched. In point-to-point messaging (a queue), each message goes to one consumer, which is better for task distribution. See How Message Queues Work for the queue pattern.

Many systems combine both. SNS + SQS is a common AWS pattern: SNS fans out a message to multiple SQS queues (pub/sub), and each queue has competing consumers that process messages (point-to-point).
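The combined pattern can be sketched in-process: a fan-out topic copies each message into several queues (the SNS role), and each queue's competing workers split that queue's copies (the SQS role). Class and queue names here are illustrative:

```python
import itertools
from collections import deque

class FanOutTopic:
    """SNS-like topic: broadcasts every published message to each attached queue."""
    def __init__(self):
        self.queues = []

    def attach(self, queue):
        self.queues.append(queue)

    def publish(self, message):
        for q in self.queues:
            q.append(message)   # each queue receives its own independent copy

topic = FanOutTopic()
inventory_q, email_q = deque(), deque()
topic.attach(inventory_q)
topic.attach(email_q)

for i in range(4):
    topic.publish(f"order-{i}")

# Within one queue, competing consumers each take distinct messages
# (round-robin stands in for "whichever consumer polls first").
email_workers = {"email-worker-1": [], "email-worker-2": []}
rr = itertools.cycle(email_workers)
while email_q:
    email_workers[next(rr)].append(email_q.popleft())
```

After this runs, the inventory queue still holds its own full copy of all four orders, while the two email workers have split the email queue's four copies two apiece: pub/sub between services, point-to-point within each service.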

When to Use Pub/Sub

Good fits:

  • Notifications — a user action triggers reactions in multiple services.
  • Real-time updates — WebSocket servers subscribe to events and push to connected clients.
  • Decoupled event processing — analytics, audit logging, search indexing.
  • Cross-team boundaries — teams subscribe to events from other teams without coordination.

Poor fits:

  • Tasks that need exactly one worker to process each message (use a queue).
  • Request-response interactions where the caller needs an immediate answer.
  • Low-volume systems where the operational cost of a message broker outweighs the decoupling benefit.

Message Ordering and Delivery

Pub/sub systems vary in their ordering guarantees. Kafka guarantees order within a partition but not across partitions. Redis pub/sub delivers in order per connection. Cloud pub/sub services generally do not guarantee ordering unless explicitly configured.

For many use cases, strict ordering is unnecessary. Notifications, analytics events, and log aggregation tolerate out-of-order delivery. When ordering matters (financial transactions, state machines), partition by entity key so all events for the same entity go to the same partition.
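Partitioning by entity key is a hash away. This sketch uses CRC32 for a stable hash (Kafka's default partitioner uses murmur2, but the principle is the same): every event for a given key lands in the same partition, so its per-entity order survives.

```python
import zlib

NUM_PARTITIONS = 4
partitions = [[] for _ in range(NUM_PARTITIONS)]

def publish(entity_key, event):
    # Same key -> same hash -> same partition, every time.
    idx = zlib.crc32(entity_key.encode()) % NUM_PARTITIONS
    partitions[idx].append((entity_key, event))

# All events for order-42 go to one partition, in publish order.
for event in ["created", "paid", "shipped"]:
    publish("order-42", event)
publish("order-7", "created")   # may land in a different partition

p42 = zlib.crc32(b"order-42") % NUM_PARTITIONS
order_42_events = [e for k, e in partitions[p42] if k == "order-42"]
# order_42_events preserves the created -> paid -> shipped sequence
```

Events for different keys may interleave arbitrarily across partitions, which is acceptable precisely because no consumer needs a total order, only a per-entity one.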

Next Steps