How Container Networking Works — Bridges, Veth Pairs, and Port Mapping

2026-03-24

Each container runs in its own network namespace — it has its own network interfaces, IP addresses, routing table, and firewall rules. But an isolated network is useless unless it can communicate. Container networking connects these isolated namespaces to each other and to the outside world.

The default Docker networking setup uses three Linux kernel features: veth pairs (virtual ethernet cables), a bridge (a virtual switch), and iptables rules (for port mapping and NAT). No special networking hardware. No virtual machines. Just kernel networking primitives.

The Default Bridge Network

When Docker starts, it creates a Linux bridge called docker0 — a virtual network switch in the host's network namespace. Every container that uses the default bridge network gets connected to this switch.

[Diagram: external network → host eth0 (10.0.0.5) → iptables NAT/DNAT → docker0 bridge (172.17.0.1) → veth pairs → containers: nginx (eth0, 172.17.0.2), postgres (eth0, 172.17.0.3), redis (eth0, 172.17.0.4)]

Each container has its own network namespace, connected to the bridge via a veth pair.

The bridge assigns each container an IP address from a private subnet (typically 172.17.0.0/16). Containers on the same bridge can communicate directly by IP address. The bridge acts as a Layer 2 switch — it forwards frames between connected interfaces based on MAC addresses.
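You can inspect the bridge from the host. The interface name, addresses, and exact output below are illustrative and vary by machine:

```shell
# The bridge's address in the host namespace (the containers' default gateway)
$ ip addr show docker0
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0

# Interfaces attached to the bridge — one veth per running container
$ ip link show master docker0

# Docker's own view: subnet, gateway, and connected containers
$ docker network inspect bridge
```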

Veth Pairs

A veth pair is a virtual ethernet cable — two virtual network interfaces connected back-to-back. Whatever goes in one end comes out the other.

When a container starts, the runtime creates a veth pair. One end is placed in the container's network namespace (where it appears as eth0). The other end is attached to the docker0 bridge in the host namespace. This connects the container to the bridge, and through the bridge to other containers and the host.

From the container's perspective, it has a regular eth0 interface with an IP address. From the host's perspective, there is a vethXXX interface attached to the bridge. The container does not know it is virtualized — it sends and receives packets through its eth0 just like any network interface.

You can see the veth interfaces on the host:

$ ip link show type veth
12: veth7a3d4f@if11: <BROADCAST,MULTICAST,UP> mtu 1500
    link/ether 3a:5b:12:cd:ef:01 brd ff:ff:ff:ff:ff:ff link-netns 6f2a8c
14: vethb81e23@if13: <BROADCAST,MULTICAST,UP> mtu 1500
    link/ether 4e:7c:83:ab:cd:02 brd ff:ff:ff:ff:ff:ff link-netns 9d4b1e

Each vethXXX on the host corresponds to an eth0 inside a container.
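You can reproduce what the runtime does by hand with `ip netns`. This is a sketch, not Docker's actual code path — it assumes root, an existing docker0 bridge, and arbitrary names (`demo`, `veth-host`, `veth-ctr`) chosen for illustration:

```shell
# Create a namespace to stand in for a container
sudo ip netns add demo

# Create a veth pair and move one end into the namespace
sudo ip link add veth-host type veth peer name veth-ctr
sudo ip link set veth-ctr netns demo

# Attach the host end to the docker0 bridge and bring it up
sudo ip link set veth-host master docker0
sudo ip link set veth-host up

# Inside the namespace: rename to eth0, assign an address, set the default route
sudo ip netns exec demo ip link set veth-ctr name eth0
sudo ip netns exec demo ip addr add 172.17.0.100/16 dev eth0
sudo ip netns exec demo ip link set eth0 up
sudo ip netns exec demo ip route add default via 172.17.0.1

# The namespace can now reach the bridge (and anything attached to it)
sudo ip netns exec demo ping -c 1 172.17.0.1
```

Tearing it down is a single `sudo ip netns del demo` — deleting the namespace destroys its end of the veth pair, which removes the other end too.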

Port Mapping

Containers have private IP addresses (172.17.0.x) that are not reachable from outside the host. To expose a container service to the network, Docker uses port mapping — an iptables DNAT (Destination NAT) rule that forwards traffic from a host port to a container port.

When you run docker run -p 8080:80 nginx, Docker adds an iptables rule:

-A DOCKER -p tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80

Traffic arriving at the host's port 8080 is rewritten to destination 172.17.0.2:80 and forwarded through the bridge to the container. The response follows the reverse path — the kernel's connection tracking (conntrack) handles the reverse NAT automatically.

This is the same NAT mechanism used by home routers and firewalls. The container's private IP address is never exposed to the external network; clients only ever see the host's address and port.
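You can see the rules Docker installed by listing the NAT table (requires root; the output shown is an illustrative excerpt):

```shell
# DNAT rules for published ports
$ sudo iptables -t nat -L DOCKER -n
DNAT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:8080 to:172.17.0.2:80

# Source NAT for outbound container traffic
$ sudo iptables -t nat -L POSTROUTING -n
MASQUERADE  all  --  172.17.0.0/16  0.0.0.0/0
```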

Container DNS Resolution

Containers on user-defined bridge networks (created with docker network create) get built-in DNS resolution. Docker runs an embedded DNS server at 127.0.0.11 inside each container. When a container resolves another container's name, Docker's DNS server returns the target container's IP address.

# On user-defined network, containers resolve each other by name
$ docker run -d --network mynet --name web nginx
$ docker run --network mynet alpine ping -c 1 web
PING web (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.085 ms

The default bridge network does not have automatic DNS resolution — containers on the default bridge must use IP addresses or the legacy --link flag (deprecated).

User-defined bridge networks also provide better isolation. Containers on different user-defined networks cannot communicate unless explicitly connected to both networks.

Network Modes

Docker supports several network modes:

Bridge (default) — the container gets its own network namespace, connected to a bridge via a veth pair. Isolated from the host network. Port mapping required for external access.

Host — the container shares the host's network namespace. No network isolation. The container's processes bind directly to host ports. No performance overhead from NAT or bridging. Used when network performance is critical or when the container needs to see all host network traffic.

None — the container gets a network namespace with only a loopback interface. No external connectivity. Used for batch processing or security-sensitive workloads that should not communicate over the network.

Overlay — spans multiple hosts. Used by Docker Swarm and Kubernetes. Encapsulates container traffic in VXLAN tunnels between hosts. Each overlay network is a virtual Layer 2 network that spans the cluster.

Macvlan — assigns a MAC address to the container, making it appear as a physical device on the network. The container gets an IP address on the host's physical network. No NAT, no port mapping. Used when containers need to be directly addressable on the LAN.

Mode      Isolation           Performance               Port mapping   Use case
Bridge    Full                Moderate (NAT overhead)   Required       Default, most workloads
Host      None                Native                    Not needed     High-performance networking
None      Full (no network)   N/A                       N/A            Batch, security
Overlay   Full                Lower (encapsulation)     Per service    Multi-host clusters
Macvlan   Full                Native                    Not needed     LAN integration
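The difference between the modes is easy to observe by listing interfaces inside a throwaway container (a sketch, assuming a local Docker daemon):

```shell
# Bridge (default): a private eth0 plus loopback
docker run --rm alpine ip addr

# Host mode: the container sees the host's interfaces directly
docker run --rm --network host alpine ip addr

# None: only a loopback interface, no external connectivity
docker run --rm --network none alpine ip addr
```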

Container-to-Container Communication

On the same bridge network, containers communicate directly through the bridge. Container A sends a packet to 172.17.0.3 (Container B). The packet arrives at the container's eth0, traverses the veth pair to the bridge, and the bridge forwards it to Container B's veth pair.

Across different networks, containers are isolated. The bridge does not forward packets between networks. To allow cross-network communication, a container must be connected to both networks (docker network connect).

For TCP connections between containers on the same bridge, latency is on the order of microseconds — there is no physical network to traverse, just kernel memory copies between network namespaces.
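Cross-network isolation and `docker network connect` can be demonstrated directly. This sketch assumes a local Docker daemon; the network and container names are arbitrary:

```shell
# Two isolated user-defined bridge networks
docker network create netA
docker network create netB

docker run -d --network netA --name web nginx
docker run -d --network netB --name client alpine sleep 3600

# Fails: client and web are on different bridges, with no route between them
docker exec client ping -c 1 web

# Attach web to netB as well; now client can reach it by name
docker network connect netB web
docker exec client ping -c 1 web
```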

iptables and Connection Tracking

Docker manipulates the host's iptables rules extensively. It creates custom chains (DOCKER, plus DOCKER-ISOLATION — split into DOCKER-ISOLATION-STAGE-1 and DOCKER-ISOLATION-STAGE-2 in current versions) to manage port mapping, inter-container communication, and network isolation.

The DOCKER-ISOLATION chains prevent traffic between bridge networks. The DOCKER chain holds the DNAT rules for port mapping. A MASQUERADE rule in the POSTROUTING chain handles source NAT for outbound traffic — container traffic to the internet appears to come from the host's IP address.

Connection tracking (conntrack) maintains state for every connection, ensuring return packets are correctly NATed back to the originating container.
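You can inspect that state with the `conntrack` tool (from the conntrack-tools package, run as root). The addresses below are illustrative — note the two tuples per entry, one per direction, recording the NAT rewrite:

```shell
# Tracked connections for the published port 8080
$ sudo conntrack -L -p tcp --dport 8080
tcp 6 src=203.0.113.9 dst=10.0.0.5 sport=51514 dport=8080 \
      src=172.17.0.2 dst=203.0.113.9 sport=80 dport=51514 ESTABLISHED
```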

Next Steps