What is a Bit
A bit is a single binary digit — either 0 or 1. It is the smallest unit of information a computer can store or process. Every piece of data your computer handles, from text to video to running programs, is ultimately represented as a sequence of bits.
How it works
The word "bit" is a contraction of "binary digit." At the hardware level, a bit corresponds to a physical state: voltage high or low in a circuit, magnetic orientation on a disk, charge present or absent in a memory cell. The computer doesn't care what the state means — it only knows two possibilities.
A single bit carries very little information. It can answer exactly one yes-or-no question. But bits combine to represent anything:
- 2 bits give you 4 possible values (00, 01, 10, 11).
- 8 bits (one byte) give you 256 values — enough for every ASCII character.
- 32 bits give you about 4.3 billion values — enough for an IPv4 address.
- 64 bits give you roughly 18 quintillion values — enough for modern memory addresses.
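The pattern behind the list above is that n bits give 2^n distinct values. A quick Python sketch confirming each count:

```python
# n bits can represent 2**n distinct values.
for n in (1, 2, 8, 32, 64):
    print(f"{n} bits -> {2 ** n} values")

# The counts quoted above:
assert 2 ** 8 == 256                      # one byte
assert 2 ** 32 == 4_294_967_296           # ~4.3 billion (IPv4)
assert 2 ** 64 == 18_446_744_073_709_551_616  # ~18 quintillion
```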
Bits are the foundation of binary, the number system computers use. They are also the basis of boolean logic, where each bit represents true or false and can be manipulated with AND, OR, and NOT operations.
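Python exposes these boolean operations directly as bitwise operators, so the AND, OR, and NOT described above can be tried on single bits (NOT is masked to one bit, since Python integers are unbounded):

```python
a, b = 1, 0  # two single bits

print(a & b)   # AND: 1 AND 0 -> 0
print(a | b)   # OR:  1 OR 0  -> 1
print(~a & 1)  # NOT: NOT 1   -> 0 (mask with & 1 to keep a single bit)
```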
Why it matters
Understanding bits makes everything else in computing concrete. When someone says a color is "24-bit," that means 8 bits each for red, green, and blue — 16.7 million possible colors. When a network link is rated at 1 Gbps, that's 1 billion bits per second. When you learn about two's complement or floating point, you're learning how bits are interpreted as signed integers and fractional numbers.
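The 24-bit color example can be made concrete with bit shifts and masks. Here a hypothetical color value is split into its three 8-bit channels:

```python
color = 0xFF8040  # a hypothetical 24-bit RGB value, chosen for illustration

# Each channel occupies 8 bits; shift it down and mask off the low byte.
r = (color >> 16) & 0xFF  # top 8 bits
g = (color >> 8) & 0xFF   # middle 8 bits
b = color & 0xFF          # bottom 8 bits

print(r, g, b)  # 255 128 64
```

The same shift-and-mask pattern underlies how bits are reinterpreted everywhere: the hardware stores one 24-bit string, and the meaning (three color channels, one integer, part of an address) comes entirely from how software slices it.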
Every abstraction in computing — numbers, text, images, instructions — sits on top of bits. Start here.