What is a Thread
A thread is an independent flow of execution within a process. A process starts with one thread, but can create more. All threads within a process share the same virtual memory, file descriptors, and heap — but each thread has its own stack and its own CPU register state.
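The shared-heap, private-stack split can be shown directly in code. A minimal Rust sketch (the `sum_shared` helper name is illustrative): two spawned threads read the same heap-allocated vector through an `Arc`, while each thread runs on its own stack with its own register state.

```rust
use std::sync::Arc;
use std::thread;

// All threads see the same heap allocation; only the Arc handle is
// cloned, never the underlying Vec.
fn sum_shared(data: Arc<Vec<i32>>) -> i32 {
    let mut handles = Vec::new();
    for (start, end) in [(0, 2), (2, 4)] {
        let data = Arc::clone(&data);
        // Each spawned thread gets its own stack and registers, but
        // reads the shared heap data without copying it.
        handles.push(thread::spawn(move || {
            data[start..end].iter().sum::<i32>()
        }));
    }
    // Joining collects each thread's partial sum.
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    println!("{}", sum_shared(Arc::new(vec![1, 2, 3, 4]))); // 10
}
```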
How it works
Creating a thread is lighter than creating a process. A new process needs its own address space, file descriptor table, and page tables. A new thread just needs a new stack (typically 1-8 MB) and an entry in the scheduler. Everything else is shared.
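Because the stack is the main per-thread resource, it is also the main knob you can turn at creation time. A sketch using Rust's standard `std::thread::Builder` (the 64 KiB figure is illustrative, not a recommendation):

```rust
use std::thread;

// Spawn a thread with an explicit, small stack instead of the
// platform default (often 1-8 MB).
fn run_on_small_stack() -> i32 {
    thread::Builder::new()
        .stack_size(64 * 1024) // bytes: the one resource that is per-thread
        .spawn(|| {
            // Shares the process's address space and file descriptors,
            // but runs on its own freshly allocated stack.
            21 * 2
        })
        .expect("spawn failed")
        .join()
        .expect("thread panicked")
}

fn main() {
    println!("{}", run_on_small_stack()); // 42
}
```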
The OS scheduler treats each thread as a separate unit of execution. On a multi-core CPU, threads from the same process can run on different cores simultaneously — true parallelism. On a single core, the scheduler time-slices between threads, giving the illusion of concurrency.
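The number of threads that can run truly in parallel can be queried at runtime. A small Rust sketch using the standard `std::thread::available_parallelism` (the `parallelism` helper name is ours), spawning one CPU-bound worker per hardware thread:

```rust
use std::thread;

// How many threads can run simultaneously; beyond this count the
// scheduler time-slices. Falls back to 1 if the count is unavailable.
fn parallelism() -> usize {
    thread::available_parallelism().map(|n| n.get()).unwrap_or(1)
}

fn main() {
    let cores = parallelism();
    // One CPU-bound worker per core: on a multi-core machine the OS is
    // free to place each thread on a different core at the same time.
    let handles: Vec<_> = (0..cores as u64)
        .map(|i| thread::spawn(move || (0..1_000_000u64).sum::<u64>() + i))
        .collect();
    let results: Vec<u64> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    println!("{} workers ran across up to {} cores", results.len(), cores);
}
```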
Shared memory is both the advantage and the danger of threads. Two threads can operate on the same data structure without copying it, which is fast. But if two threads modify shared data without coordination, you get race conditions. Preventing that requires synchronization primitives like mutexes — but misusing those leads to deadlocks.
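A mutex-protected counter is the classic illustration of this trade-off. A minimal Rust sketch (the `locked_count` helper is illustrative): without the lock, concurrent increments could interleave and be lost; with it, every increment lands.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Unsynchronized concurrent `+= 1` on shared data would be a race;
// the Mutex makes each increment atomic with respect to the others.
fn locked_count(n_threads: usize, incs_per_thread: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let handles: Vec<_> = (0..n_threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..incs_per_thread {
                    // The guard releases the lock when it goes out of
                    // scope; keeping critical sections short and taking
                    // locks in a fixed order helps avoid deadlocks.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    println!("{}", locked_count(4, 1000)); // always 4000
}
```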
Common threading models:
- 1:1 (kernel threads) — each language thread maps to an OS thread. Used by C (pthreads), Rust, and Java.
- M:N (green threads) — many language threads multiplexed onto fewer OS threads. Used by Go (goroutines) and Erlang.
- Async/await — one thread (or a small pool) handles many tasks by yielding at I/O boundaries. Used by Rust (tokio), JavaScript, and Python (asyncio).
Why it matters
Threads are how programs use multiple CPU cores. A web server handles thousands of concurrent requests by spreading work across threads. A video encoder processes frames in parallel. Without threads, your 8-core CPU would sit mostly idle. But with threads comes the challenge of concurrent access — making threads and synchronization one of the most important topics in systems programming.
See How Threads Work for concurrency patterns and pitfalls.