Big O Notation
Big O is a way to describe how an algorithm's running time grows as the input grows. It's not a stopwatch measurement — it's a shape. Two algorithms can both take a millisecond on a small list and yet behave wildly differently when the list has a million items.
Why Constants Don't Matter
When n is huge, a curve's shape dwarfs its constants. Running an O(n²) algorithm over a million-item list means roughly a trillion operations; no amount of micro-optimization rescues that. Switching to O(n log n) drops the count to roughly twenty million, a factor of about fifty thousand. So Big O ignores constants and lower-order terms and keeps only the dominant growth shape.
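The gap is easy to see with a quick back-of-the-envelope comparison. This sketch counts idealized operations, not measured times:

```python
import math

n = 1_000_000
quadratic = n * n                    # ~1e12 operations
linearithmic = n * math.log2(n)      # ~2e7 operations

# The quadratic count is roughly 50,000x larger for the same input.
print(f"O(n^2):     {quadratic:.0e}")
print(f"O(n log n): {linearithmic:.0e}")
```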
The Common Classes
- O(1) — constant. Same time no matter what. Looking up an array element by index, popping a stack.
- O(log n) — logarithmic. Cutting the problem in half each step. Binary search, balanced tree lookup.
- O(n) — linear. Touching every item once. Summing a list, finding a max.
- O(n log n) — linearithmic. Efficient sorts like merge sort and quicksort.
- O(n²) — quadratic. Nested loops over the input. Naive duplicate-finding, bubble sort.
- O(2^n) — exponential. Brute-forcing every subset. Avoid for non-trivial n.
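Three of these classes side by side, as tiny illustrative functions (the names are just for this sketch):

```python
def get_first(items):
    # O(1): a single index lookup, regardless of list size.
    return items[0]

def total(items):
    # O(n): touches every element exactly once.
    s = 0
    for x in items:
        s += x
    return s

def has_pair_sum(items, target):
    # O(n^2): nested loops compare every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] + items[j] == target:
                return True
    return False
```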
How to Read Code
Look for loops and recursion. One loop over n items is usually O(n). Nested loops multiply: a loop inside a loop is O(n²). Halving the input each step gives O(log n). Recursion multiplies by the branching factor: a function that makes two recursive calls per step, each on an input only one smaller, does O(2^n) work.
A function with two separate loops is O(n + n) = O(2n) = O(n). A loop containing a constant amount of work stays O(n).
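Halving in action: a standard binary search does O(log n) comparisons because each iteration discards half of the remaining range. A minimal sketch over a sorted list:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # halve the search range each step
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```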
Worst, Average, Best
Big O usually refers to worst case. Average case can differ — quicksort is O(n log n) average but O(n²) worst case. Knowing both helps you choose well.
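A minimal first-element-pivot quicksort makes the worst case concrete: on already-sorted input every partition is maximally lopsided, so the recursion does n levels of O(n) work instead of log n levels. This is a teaching sketch, not a production sort:

```python
def quicksort(items):
    # Naive quicksort using the first element as pivot. On random input
    # the partitions tend to balance (O(n log n) average); on sorted
    # input one side is always empty, giving the O(n^2) worst case.
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left) + [pivot] + quicksort(right)
```

Real libraries dodge this with randomized or median-of-three pivots, which make the worst case vanishingly unlikely.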
Try It Yourself
- Classify: a function that prints every pair (i, j) where i and j range over n items
- Classify: searching a sorted array by halving the range each step
- Rewrite an O(n²) duplicate-checker as O(n) using a set
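One way the last exercise can come out (a sketch, assuming the items are hashable): a set gives O(1) average-case membership checks, so a single pass over the list suffices.

```python
def has_duplicate(items):
    # O(n): one pass, with O(1) average-case set lookups,
    # replaces the O(n^2) all-pairs comparison.
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```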