What is two's complement?
Two's complement is the standard way computers represent negative integers in binary. It uses the highest bit as a sign indicator — 0 for positive, 1 for negative — but in a way that makes addition and subtraction work without special hardware.
How it works
To negate a number in two's complement, you flip all the bits and add 1. Take the 8-bit representation of 5:
5 in binary: 00000101
Flip all bits: 11111010
Add 1: 11111011 ← this is -5
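The flip-and-add-one rule is easy to check in code. A minimal Rust sketch (the variable names are illustrative):

```rust
fn main() {
    let x: i8 = 5;
    // Two's-complement negation: bitwise NOT, then add 1.
    // wrapping_add keeps the edge case i8::MIN from panicking.
    let neg = (!x).wrapping_add(1);
    assert_eq!(neg, -5);
    // Viewing the bits as unsigned shows the pattern from the example above.
    assert_eq!(format!("{:08b}", neg as u8), "11111011");
}
```

Casting to `u8` reinterprets the same eight bits without changing them, which is exactly what "look at the bit pattern" means in two's complement.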
The key insight is that addition just works. Adding 5 and -5 in binary:
   00000101  (5)
+  11111011  (-5)
  ----------
  100000000  → discard the carry bit → 00000000 (0)
The CPU doesn't need separate circuits for subtraction. It computes A - B as A + (-B) using the same addition hardware. This is why two's complement won the encoding war — simpler circuits, faster math.
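Both points can be verified directly. A small Rust sketch, using arbitrary example values:

```rust
fn main() {
    let a: i8 = 5;
    let b: i8 = -5;
    // Adding 5 and -5: the carry out of the top bit is simply discarded,
    // which wrapping_add models explicitly.
    assert_eq!(a.wrapping_add(b), 0);

    // Subtraction as addition of the negation: 12 - 5 == 12 + (-5),
    // where -5 is built by the flip-the-bits-and-add-1 rule.
    let (c, d): (i8, i8) = (12, 5);
    assert_eq!(c.wrapping_add((!d).wrapping_add(1)), c - d);
}
```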
An 8-bit two's complement number ranges from -128 to 127. The pattern generalizes: an N-bit signed integer ranges from -2^(N-1) to 2^(N-1) - 1. A 32-bit signed integer (i32) ranges from about -2.1 billion to 2.1 billion. A 64-bit signed integer (i64) covers roughly +/- 9.2 quintillion.
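The range formulas line up with the constants the language exposes; a quick Rust check (the shifts compute -2^(N-1) and 2^(N-1) - 1 in a wider type to avoid overflow):

```rust
fn main() {
    // 8-bit: -2^7 to 2^7 - 1
    assert_eq!(i8::MIN as i64, -(1i64 << 7));      // -128
    assert_eq!(i8::MAX as i64, (1i64 << 7) - 1);   //  127

    // 32-bit: about -2.1 billion to 2.1 billion
    assert_eq!(i32::MIN as i64, -(1i64 << 31));
    assert_eq!(i32::MAX as i64, (1i64 << 31) - 1);

    // 64-bit: roughly +/- 9.2 quintillion
    assert_eq!(i64::MIN as i128, -(1i128 << 63));
}
```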
Overflow is the gotcha. If you add 1 to 127 in an 8-bit signed integer, you get -128: the value wraps around. Whether that wrap is defined depends on the language. Java, for example, specifies wrapping, while in C and C++ signed overflow is undefined behavior, which is why compilers warn about it.
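Rust makes the choice explicit: plain `+` panics on overflow in debug builds, so the wraparound has to be requested by name.

```rust
fn main() {
    let x: i8 = 127; // i8::MAX
    // wrapping_add performs the two's-complement wraparound described above.
    assert_eq!(x.wrapping_add(1), -128);
    // checked_add makes the overflow observable instead of silent.
    assert_eq!(x.checked_add(1), None);
}
```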
Why it matters
Signed integers in essentially every modern language and CPU — int, i32, i64, short, long — use two's complement; C++20 and C23 even mandate it. When a debugger shows you a negative number's hex representation and it starts with FF, that's two's complement at work. When you hit an integer overflow bug, that's two's complement wrapping around.
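The leading-FF pattern is easy to reproduce; a quick Rust sketch:

```rust
fn main() {
    let x: i32 = -5;
    // Reinterpreting the bits as unsigned exposes the pattern a debugger
    // would show: small negative numbers begin with a run of F digits.
    assert_eq!(format!("{:08X}", x as u32), "FFFFFFFB");
    // -1 is all ones in two's complement, at any width.
    assert_eq!(format!("{:08X}", -1i32 as u32), "FFFFFFFF");
}
```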
Understanding this encoding also prepares you for floating point, which handles the sign bit differently but builds on the same foundation of binary representation.