Algorithms FAQ

Common questions about sorting, searching, recursion, and graph algorithms. Each answer is short. Links go to the full explanation.

What is the difference between quicksort and merge sort?

Merge sort is always O(n log n) but uses O(n) extra space for the merge buffer. Quicksort is O(n log n) on average but O(n²) in the worst case (consistently bad pivot choices). Quicksort sorts in place, needing only O(log n) stack space for recursion rather than an auxiliary array.

In practice, quicksort is usually faster because it has better cache performance — partitioning operates on contiguous memory. Most standard libraries use hybrid algorithms: Timsort (merge sort + insertion sort) or introsort (quicksort + heap sort).
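A minimal sketch of both algorithms in Python (illustrative, not production code — real libraries add the hybrid tricks mentioned above):

```python
def merge_sort(a):
    """Always O(n log n); allocates O(n) scratch space in the merge."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]   # append whichever side has leftovers

def quicksort(a, lo=0, hi=None):
    """In-place; O(n log n) average, O(n²) if pivots are consistently bad."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot = a[(lo + hi) // 2]   # middle pivot avoids the already-sorted worst case
    i, j = lo, hi
    while i <= j:               # Hoare-style partition over contiguous memory
        while a[i] < pivot: i += 1
        while a[j] > pivot: j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1; j -= 1
    quicksort(a, lo, j)
    quicksort(a, i, hi)
    return a
```

Note that `merge_sort` returns a new list while `quicksort` mutates its input — that asymmetry is exactly the space trade-off described above.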

See How Sorting Works for the full comparison of all sorting algorithms.

When should I use binary search?

Whenever data is sorted and you need to find a specific value. Binary search is O(log n) — 30 steps for 1 billion items. If you're scanning sorted data linearly, you're wasting time.

The pattern extends beyond arrays: B-trees in databases, git bisect for finding bugs, and binary search on answer spaces in optimization problems.
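The classic array form, sketched in Python (the standard library's bisect module provides the same search over sorted sequences):

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent.

    Each iteration halves the remaining range, hence O(log n).
    """
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1     # target can only be in the right half
        else:
            hi = mid - 1     # target can only be in the left half
    return -1
```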

See How Searching Works for binary search, two-pointer technique, and hash-based alternatives.

What is the difference between BFS and DFS?

BFS explores layer by layer using a queue. DFS explores as deep as possible using a stack or recursion. Both are O(V + E).

BFS finds shortest paths in unweighted graphs. DFS detects cycles and enables topological sorting. Use BFS when distance matters, DFS when exhaustive exploration or ordering matters.
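Both traversals in Python over an adjacency-dict graph, a minimal sketch — BFS with an explicit queue, DFS with an explicit stack:

```python
from collections import deque

def bfs_distances(graph, start):
    """Shortest hop counts from start in an unweighted graph."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()          # FIFO: finish a layer before the next
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

def dfs_order(graph, start):
    """Preorder visit sequence using an explicit stack instead of recursion."""
    seen, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()              # LIFO: dive deep before backtracking
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        stack.extend(reversed(graph[node]))  # reversed so neighbors pop in listed order
    return order
```

The only code difference is popleft() versus pop() — swapping the queue for a stack is what turns layer-by-layer exploration into depth-first exploration.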

See How Graph Algorithms Work for both algorithms with worked examples.

What is dynamic programming?

Dynamic programming solves problems with overlapping subproblems by caching results. Bottom-up DP fills a table from smallest subproblems to largest. Top-down DP (memoization) caches recursive calls.

Classic examples: Fibonacci (O(2^n) → O(n)), shortest paths (Bellman-Ford), edit distance (spell checking), longest common subsequence (git diff).
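The Fibonacci progression in Python, a minimal sketch of all three stages:

```python
from functools import lru_cache

def fib_naive(n):
    """O(2^n): recomputes the same subproblems exponentially many times."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Top-down DP: identical recursion, but each subproblem is computed once."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_table(n):
    """Bottom-up DP: build from the smallest subproblems; O(n) time, O(1) space
    since only the last two table entries are ever needed."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

fib_memo(100) returns instantly; fib_naive(100) would take longer than the age of the universe. The code barely changed — only the caching did.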

See How Recursion Works for the full progression from naive recursion to memoization to DP.

Is recursion slower than iteration?

Function call overhead makes recursion slightly slower per operation — each call pushes a stack frame. Deep recursion can cause stack overflow. But for naturally recursive problems (trees, divide and conquer, graph traversal), recursion produces clearer code.

The performance difference is rarely significant compared to algorithmic complexity. An O(n log n) recursive merge sort beats an O(n²) iterative bubble sort by orders of magnitude.
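A small illustration of the trade-off in Python (summing 1..n is deliberately trivial so the two versions are directly comparable):

```python
def sum_recursive(n):
    """Clear and direct, but each call pushes a stack frame; large n
    overflows the stack (Python's default recursion limit is about 1000)."""
    return 0 if n == 0 else n + sum_recursive(n - 1)

def sum_iterative(n):
    """Same result at constant stack depth, so n can be arbitrarily large."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total
```

For a flat loop like this, iteration is the obvious choice; the recursive form earns its keep on trees and divide-and-conquer, where the call stack mirrors the problem's structure.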

See How Recursion Works for when to use recursion vs iteration.

Can you sort faster than O(n log n)?

Not with comparison sorts — the Ω(n log n) lower bound is mathematically proven: any comparison sort is a decision tree with at least n! leaves, so its depth is at least log₂(n!) = Θ(n log n). But non-comparison sorts bypass this: counting sort and radix sort achieve O(n) by exploiting the data's structure (integers in a known range). They don't compare elements.

For general-purpose sorting on arbitrary data, O(n log n) is the best you can do.
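Counting sort in Python, a minimal sketch for integers in a known range [0, max_value] — note there is not a single element-to-element comparison anywhere:

```python
def counting_sort(a, max_value):
    """O(n + k) for n integers in [0, max_value], where k = max_value + 1."""
    counts = [0] * (max_value + 1)
    for x in a:                          # tally each value: one pass, no comparisons
        counts[x] += 1
    out = []
    for value, c in enumerate(counts):   # the index order IS the sorted order
        out.extend([value] * c)
    return out
```

The catch is the k term: sorting a few integers scattered across a huge range wastes time and memory on an enormous counts array, which is why counting sort only wins when the range is comparable to n.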

See How Sorting Works for the proof sketch and non-comparison sort alternatives.