What is Orchestration

Container orchestration automates the deployment, scaling, networking, and self-healing of containers across a cluster of machines. Instead of starting containers by hand on individual servers, you declare the desired state (which containers, how many replicas, what resources) and the orchestrator continuously works to make the actual state match it.

How it works

Kubernetes, the dominant orchestrator, works through a declare-then-reconcile model:

  1. You submit a desired state: "run 3 replicas of nginx with 512MB memory each."
  2. The scheduler assigns pods to nodes based on available resources and constraints.
  3. The kubelet on each node creates the containers using containerd and runc.
  4. Controllers continuously compare actual state against desired state. If a pod crashes, its controller creates a replacement; if load increases, the autoscaler adds replicas.
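The reconcile step above can be sketched as a control loop. This is an illustrative Python sketch, not actual Kubernetes controller code; the function and parameter names (`reconcile`, `create_pod`, `delete_pod`) are hypothetical:

```python
def reconcile(desired_replicas, running_pods, create_pod, delete_pod):
    """One pass of a control loop: compare actual state to desired
    state and take the minimal action needed to converge.
    Sketch only -- real controllers also handle pod health,
    rolling updates, and ownership tracking."""
    actual = len(running_pods)
    if actual < desired_replicas:
        # A pod crashed or replicas were scaled up: start replacements.
        for _ in range(desired_replicas - actual):
            create_pod()
    elif actual > desired_replicas:
        # Scaled down: remove the surplus pods.
        for pod in running_pods[desired_replicas:]:
            delete_pod(pod)
```

The key property is that the loop is level-triggered: each pass looks at the current state rather than reacting to individual events, so a missed event is corrected on the next pass.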

Orchestration handles five concerns:

  - scheduling: which node runs which container
  - networking: service discovery, load balancing, network policies
  - storage: persistent volumes, storage classes
  - scaling: the horizontal pod autoscaler and the cluster autoscaler
  - self-healing: restarting failed containers, replacing unhealthy nodes
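To make the scheduling concern concrete, here is a minimal sketch of one possible placement policy: put the pod on the fitting node with the most free memory. This is an assumed "least-loaded" heuristic for illustration, not the real kube-scheduler, which also scores on CPU, affinity, taints, and topology spreading:

```python
def schedule(pod_memory_mb, nodes):
    """Pick the node with the most free memory that can still fit
    the pod. `nodes` is a list of dicts like
    {"name": "node-a", "free_mb": 1024} (hypothetical shape)."""
    candidates = [n for n in nodes if n["free_mb"] >= pod_memory_mb]
    if not candidates:
        # No node fits: in Kubernetes the pod would stay Pending
        # until resources free up or the cluster autoscaler adds a node.
        return None
    best = max(candidates, key=lambda n: n["free_mb"])
    best["free_mb"] -= pod_memory_mb  # reserve the memory on that node
    return best["name"]
```

The "no node fits" branch is where scheduling and scaling meet: a Pending pod is the signal the cluster autoscaler uses to decide a new node is needed.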

Docker Swarm and HashiCorp Nomad are alternatives to Kubernetes, but both have much smaller ecosystems.

Why it matters

Running a few containers on one machine is simple. Running hundreds across a cluster requires automated scheduling, failure recovery, network routing, and resource management. Orchestration solves the operational complexity of containerized applications at scale — turning a cluster of machines into a single platform.

See How Containers Work for the single-host foundations that orchestration builds on.