Systems FAQ

Common questions about memory, processes, the kernel, threads, file systems, and containers. Each answer is short. Links go to the full explanation.

What is the difference between the stack and the heap?

The stack is fast, LIFO memory for local variables and function calls. Allocation is a pointer move. It's automatically managed — when a function returns, its frame is gone. Fixed size (typically 1-8 MB per thread). Exceeding it causes a stack overflow.

The heap is for memory that outlives a single function call. Allocation searches for free space via an allocator (malloc, jemalloc). Slower, flexible size, requires explicit free (or garbage collection). Can fragment over time.

See How Memory Works for the full breakdown with diagrams.

What is virtual memory?

Virtual memory gives each process its own private address space. The CPU's MMU translates virtual addresses to physical addresses using page tables maintained by the kernel. A process can't read another process's memory because the page tables simply don't map to it.

Virtual memory also enables overcommit (allocate more than physical RAM) and swapping (move inactive pages to disk). The tradeoff: accessing a swapped-out page is orders of magnitude slower than RAM — roughly 1,000x on an SSD, up to 100,000x on a spinning disk.

See How Memory Works for the full explanation of page faults, RSS vs VSZ, and why programs hit memory cliffs.

What is the difference between a process and a thread?

A process has its own memory space, file descriptors, and PID. A thread shares memory with other threads in the same process but has its own stack and CPU registers.

Creating a process is expensive (copy page tables). Creating a thread is cheap (new stack only). Processes are isolated — one can't corrupt another. Threads share everything — one can corrupt another's data without synchronization.

See How Processes Work and How Threads Work for the full explanations.

What is a syscall?

A syscall transfers control from your program (user space) to the kernel (kernel space). The CPU switches to privileged mode, the kernel performs the operation (open a file, send a packet, allocate memory), and returns. A round trip takes 100-500 nanoseconds.

Linux has approximately 450 syscalls. Most programs use 20-30. The rest are for specialized features like namespaces (containers) and eBPF (kernel programmability).

See How the Kernel Works for the full syscall pipeline.

What is a race condition?

A race condition occurs when two threads access shared data without synchronization and at least one writes. The result depends on how the threads' operations happen to interleave — non-deterministic and hard to reproduce.

Prevention: mutexes (lock shared data), atomic operations (hardware-guaranteed indivisible operations), channels (send data between threads instead of sharing), or Rust's borrow checker (compile-time prevention).

See How Threads Work for worked examples and synchronization patterns.

What is the difference between containers and virtual machines?

Containers share the host kernel and use namespaces for isolation and cgroups for resource limits. VMs run their own kernel on a hypervisor with hardware-enforced isolation.

Containers: millisecond startup, minimal memory, thousands per host. VMs: seconds to start, full OS overhead, tens per host. VMs provide stronger isolation (hardware boundary). Containers provide better density and speed.

See How Containers Work for the full architecture of namespaces, cgroups, and overlayfs.

What is a deadlock?

A deadlock occurs when two or more threads each hold a resource the other needs. Thread A holds mutex 1 and waits for mutex 2. Thread B holds mutex 2 and waits for mutex 1. Neither can proceed.

Prevention: always acquire locks in the same order. If every thread locks mutex 1 before mutex 2, circular wait can't happen.

See How Threads Work for the four Coffman conditions and prevention strategies.

Why does my program crash with "segmentation fault"?

Your program accessed a virtual memory address that isn't mapped to any physical memory. The CPU raised a page fault, the kernel found no valid mapping in the page table, and sent SIGSEGV (signal 11) to the process.

Common causes: dereferencing a null pointer, using a pointer after the memory was freed (use-after-free), writing past the end of an array (buffer overflow), or exceeding the stack size (stack overflow).

See How Memory Works for the three types of page faults.

What is an inode?

An inode stores file metadata (permissions, size, timestamps, block pointers) on Unix file systems. The file name is NOT in the inode — names live in directory entries. This is why hard links work: two names pointing to the same inode.

See How File Systems Work for the full explanation of inodes, journaling, and copy-on-write.

Why is fsync slow but important?

When you call write(), data goes to a kernel buffer, not disk. fsync() forces the buffer to disk and waits for confirmation. Without it, a power failure loses buffered data.

Databases call fsync after every transaction commit — it's what makes "committed" mean "durable." On SSDs, fsync costs 0.1-1ms. On spinning disks, 5-10ms. Database throughput is often limited by fsync latency.

See How File Systems Work for write caching, journaling, and fsync.