What is a Buffer Pool
A buffer pool is a region of memory where the database caches pages read from disk. When a query needs data, the storage engine checks the buffer pool first. If the page is already in memory -- a cache hit -- no disk I/O is needed. If it is not -- a cache miss -- the engine reads the page from disk into the buffer pool before using it. A well-tuned database serves the vast majority of reads from the buffer pool.
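The lookup path above can be sketched in a few lines of Python. This is an illustrative model, not any real engine's API; the class and method names are made up for the example.

```python
# Minimal sketch of the buffer-pool lookup path: check memory first,
# fall back to disk only on a miss. Names here are hypothetical.
class BufferPool:
    def __init__(self):
        self.pages = {}   # page_id -> page contents held in memory
        self.hits = 0
        self.misses = 0

    def get_page(self, page_id, read_from_disk):
        if page_id in self.pages:
            self.hits += 1                            # cache hit: no I/O
        else:
            self.misses += 1                          # cache miss: do the I/O
            self.pages[page_id] = read_from_disk(page_id)
        return self.pages[page_id]
```

The first access to a page pays the disk read; every later access to the same page is served from memory.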
How it works
The buffer pool is a fixed-size array of page-sized slots. When it is full and a new page must be loaded, the engine must evict an existing page. Most databases use a variant of the LRU (Least Recently Used) algorithm to choose which page to evict. InnoDB uses a modified LRU with a midpoint insertion strategy to prevent a single large table scan from flushing the entire pool.
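Plain LRU eviction can be sketched with an ordered map. Note this shows the textbook algorithm only; as mentioned above, InnoDB's actual policy is a modified LRU with midpoint insertion, which this sketch does not model.

```python
from collections import OrderedDict

# Sketch of a fixed-capacity pool with plain LRU eviction (illustrative;
# real engines use more elaborate variants of this policy).
class LRUBufferPool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # page_id -> page, least recent first

    def get(self, page_id, read_from_disk):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)           # mark most-recently-used
        else:
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)        # evict the LRU page
            self.pages[page_id] = read_from_disk(page_id)
        return self.pages[page_id]
```

With a capacity of two, touching pages 1, 2, then 3 evicts page 1, the least recently used.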
Pages in the buffer pool can be clean (identical to the on-disk copy) or dirty (modified in memory but not yet written back to disk). When a transaction modifies a page, the change is first recorded in the WAL, and the in-memory page is marked dirty. Dirty pages are flushed to disk in the background by a checkpoint process, not immediately on each write.
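The ordering of those steps is the important part: log first, modify in memory second, flush later. A minimal sketch, with hypothetical structures standing in for the real WAL and page formats:

```python
# Sketch of the write path under write-ahead logging: the log record is
# appended before the in-memory page changes, and the dirty page is only
# written back later by a checkpoint. All names here are illustrative.
class Page:
    def __init__(self, page_id, data):
        self.page_id = page_id
        self.data = data
        self.dirty = False   # clean: matches the on-disk copy

def modify_page(wal, page, new_data):
    wal.append(("update", page.page_id, new_data))   # 1. log the change first
    page.data = new_data                             # 2. apply it in memory
    page.dirty = True                                # 3. mark the page dirty

def checkpoint(pages, write_to_disk):
    for page in pages:
        if page.dirty:
            write_to_disk(page)                      # background flush
            page.dirty = False                       # now clean again
```

Because the WAL record is durable before the page is flushed, the change survives a crash even if the dirty page never made it to disk.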
The buffer pool hit ratio -- the percentage of page accesses served from memory -- is one of the most important metrics for database performance. A ratio below 99% on a production OLTP database usually means the working set does not fit in memory, and disk I/O is becoming the bottleneck. In InnoDB, you can check this with SHOW ENGINE INNODB STATUS, which reports a "Buffer pool hit rate" line. In PostgreSQL, the equivalent structure is shared buffers, sized via shared_buffers, and per-database hit and read counters are exposed in pg_stat_database.
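The arithmetic behind the metric is simple: hits divided by total page accesses. A small helper makes the definition concrete (the function name is our own, not a server API):

```python
# Buffer pool hit ratio from raw counters: the fraction of page
# accesses served from memory rather than disk.
def hit_ratio(memory_reads, disk_reads):
    total = memory_reads + disk_reads
    if total == 0:
        return 1.0   # no accesses yet; report a perfect ratio by convention
    return memory_reads / total

# 990,000 accesses from memory and 10,000 from disk is a 99% hit ratio,
# right at the threshold discussed above.
```

Plugging in counters from your server's statistics views gives the same number the monitoring dashboards report.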
Why it matters
Disk I/O is orders of magnitude slower than memory access, even with SSDs. The buffer pool is what makes databases fast in practice. Sizing it correctly -- large enough to hold your working set, small enough to leave room for the OS and connections -- is one of the first things you tune on any database server.
See How Storage Engines Work for the full walkthrough of pages, buffer pools, and disk I/O.