What is an Array
An array is a sequence of elements stored in a contiguous block of memory. Every element occupies the same amount of space, and they sit side by side with no gaps. This layout is why arrays are fast at some operations and slow at others.
How it works
Because elements are contiguous, the address of any element can be calculated directly:
address = base_address + (index * element_size)
This gives O(1) random access — jumping to element 5,000 is just as fast as jumping to element 0. No scanning, no following pointers, just arithmetic.
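The address formula can be sketched directly. The base address and element size below are illustrative values, not real memory addresses:

```python
# A minimal sketch of the address arithmetic above, with a hypothetical
# base address and element size.
def element_address(base_address: int, index: int, element_size: int) -> int:
    """Compute the address of arr[index] in a contiguous block."""
    return base_address + index * element_size

base = 0x1000  # hypothetical start of the block
size = 8       # 8-byte elements

print(hex(element_address(base, 0, size)))     # 0x1000 (first element)
print(hex(element_address(base, 5000, size)))  # 0xac40 (element 5,000: same single multiply-add)
```

Whatever the index, the cost is one multiplication and one addition, which is exactly why random access is O(1).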
The tradeoff appears when you insert or delete. Adding an element in the middle means every element after it must shift one position to make room. Deleting means shifting everything back. Both are O(n) because the number of shifts grows with the array size. Appending to the end is typically O(1) if there's spare capacity, but if the array is full, a new larger block must be allocated and every element copied over. Because that larger block is usually sized geometrically (e.g. doubled), the copying cost is spread over many cheap appends, keeping appends at amortized O(1).
[10, 20, 30, 40, 50]
         ^
insert 25 here → shift 30, 40, 50 right
[10, 20, 25, 30, 40, 50]
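The shifting in the diagram can be spelled out in a short sketch, here using a plain Python list as the backing store (Python's own list.insert does this shifting internally; it is written out to make the O(n) cost visible):

```python
# Insert into a fixed-capacity array by shifting later elements right.
# `arr` is the backing store; `length` is how many slots are in use.
def insert_at(arr, length, index, value):
    """Insert value at index; returns the new length."""
    if length >= len(arr):
        raise OverflowError("array is full")
    # Shift elements [index, length) one slot right, back to front.
    for i in range(length, index, -1):
        arr[i] = arr[i - 1]
    arr[index] = value
    return length + 1

arr = [10, 20, 30, 40, 50, None]  # capacity 6, length 5
n = insert_at(arr, 5, 2, 25)
print(arr[:n])  # [10, 20, 25, 30, 40, 50]
```

The loop runs once per element to the right of the insertion point, which is the source of the O(n) bound: inserting at the front of a million-element array does a million shifts.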
Arrays also have excellent cache performance. Modern CPUs load memory in chunks (cache lines). Because array elements are adjacent, accessing one element often preloads its neighbors into the cache. This makes sequential scans over arrays significantly faster in practice than their O(n) complexity suggests.
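One rough way to see this is to traverse a large 2D structure in storage order versus against it. The sketch below assumes row-major storage (rows laid out one after another); visiting column by column jumps a full row between accesses. Timings vary by machine, so none are asserted, and Python's lists of boxed objects blunt the effect compared to a flat C array:

```python
import time

N = 1000
grid = [[1] * N for _ in range(N)]  # N x N grid, stored row by row

def row_major_sum(g):
    # Visits elements in the order they sit in memory.
    return sum(g[r][c] for r in range(N) for c in range(N))

def col_major_sum(g):
    # Visits the same elements, but jumps a row between consecutive accesses.
    return sum(g[r][c] for c in range(N) for r in range(N))

start = time.perf_counter()
s1 = row_major_sum(grid)
row_time = time.perf_counter() - start

start = time.perf_counter()
s2 = col_major_sum(grid)
col_time = time.perf_counter() - start

print(s1 == s2)  # True: same elements, different visit order
# row_time is typically lower than col_time on large inputs, because the
# sequential scan keeps hitting data already pulled into the cache.
```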
Why it matters
Arrays are the most fundamental data structure. They underlie virtually everything else — hash maps use arrays of buckets, stacks and queues are often implemented on top of arrays, and B-trees store sorted arrays inside each node. Understanding when O(1) access matters more than O(n) insertion — and when it doesn't — is the first decision in choosing any data structure.
See How Arrays and Linked Lists Work for the full comparison with linked lists.