What is a Container

A container is a process running on the host operating system with restricted visibility and limited resources. It is not a virtual machine — there is no guest kernel, no hypervisor, no hardware emulation. The container shares the host kernel and uses Linux kernel features to isolate itself.
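That a container is "just a process" is easy to see on any Linux host: the namespaces a process belongs to are exposed as symlinks under /proc. A quick read-only check (standard procfs paths; the inode number in brackets will differ per system):

```shell
# Every Linux process belongs to a set of namespaces, visible in procfs.
# A containerized process simply points at different namespace inodes
# than processes on the host.
readlink /proc/self/ns/pid   # prints something like pid:[4026531836]
```

Two processes in the same container show identical inode numbers here; a process in a different PID namespace shows a different one.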

How it works

A container combines three kernel features: namespaces control what the process can see (its own PIDs, network, filesystem), cgroups control what it can consume (CPU, memory, I/O), and a union filesystem (typically overlayfs) provides a layered root filesystem built from an image.
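All three mechanisms can be observed from an ordinary shell on a Linux host, without creating a container at all. These are read-only procfs queries (the overlay line appears only if the kernel has overlayfs available):

```shell
# 1. Namespaces: each entry is a namespace this shell belongs to.
ls /proc/self/ns

# 2. Cgroups: the control group this shell is accounted under.
cat /proc/self/cgroup

# 3. Union filesystem: whether the kernel supports overlayfs.
grep -s overlay /proc/filesystems || true
```

A runtime wires these together: it puts the new process into fresh namespaces, places it in a cgroup with limits, and mounts an overlay of image layers as its root.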

A container runtime like runc creates the namespaces, configures the cgroup, mounts the filesystem, and executes the process. After that, the container is a regular Linux process — the kernel enforces all restrictions.
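For illustration, a heavily trimmed config.json of the kind runc consumes might look like the sketch below. Field names follow the OCI runtime specification; the specific values (the command, root path, and 256 MiB memory limit) are made up for the example:

```json
{
  "ociVersion": "1.0.2",
  "process": { "args": ["sh"], "cwd": "/" },
  "root": { "path": "rootfs" },
  "linux": {
    "namespaces": [
      { "type": "pid" },
      { "type": "mount" },
      { "type": "network" }
    ],
    "resources": { "memory": { "limit": 268435456 } }
  }
}
```

Everything described above maps onto this file: the namespaces list drives isolation, the resources block drives the cgroup, and root.path points at the mounted filesystem.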

Containers start in milliseconds (no OS boot), add negligible memory overhead beyond the process itself (no guest kernel), and hundreds can run on a single host. The tradeoff is weaker isolation than VMs — a kernel vulnerability can expose every container on the host.

Why it matters

Containers provide consistent, reproducible environments from development to production. The same image runs identically on a laptop, in CI, and in production. They are the deployment unit for Kubernetes, Docker Compose, and every modern cloud platform.

See How Containers Work for the full architecture.