Big O & Complexity Analysis

Asymptotic notation, complexity classes O(1) through O(n!), analyzing loops & recursion, Master Theorem, data structure operations & practical limits
Growth Rate Comparison
[Chart: operations vs input size (n = 0–25) for O(1), O(log n), O(√n), O(n), O(n log n), O(n²), O(n³), O(2ⁿ), O(n!) — the slow-growing curves stay feasible across the whole range, while O(2ⁿ) and O(n!) cross into infeasible territory almost immediately.]
What Is Complexity Analysis?
How runtime grows as n increases:

| Complexity | n=10 | n=1000 | n=10⁶ |
|---|---|---|---|
| O(1) | 1 | 1 | 1 |
| O(log n) | 3 | 10 | 20 |
| O(n) | 10 | 1,000 | 10⁶ |
| O(n log n) | 30 | 10,000 | 2×10⁷ |
| O(n²) | 100 | 10⁶ | 10¹² |
| O(2ⁿ) | 1,024 | ∞ | ∞ |

No hardware fixes a bad algorithm.
| Measure | Question |
|---|---|
| Time complexity | How many operations as n grows? |
| Space complexity | How much memory as n grows? |
| Best case | Minimum ops for any input of size n |
| Average case | Expected ops over all inputs of size n |
| Worst case | Maximum ops for any input of size n (most common) |
Big O — Formal Definition & Intuition
Formal: f(n) = O(g(n)) if there exist constants c, n₀ such that:

    f(n) ≤ c · g(n) for all n ≥ n₀

Intuition: f(n) grows no faster than g(n), ignoring constants.

Example: f(n) = 3n² + 7n + 15. Is f(n) = O(n²)?
Let c = 4, n₀ = 10: 3n² + 7n + 15 ≤ 4n² for all n ≥ 10 ✓
The constants 3, 7, 15 don't matter.

What Big O hides:

    Algorithm A: 1000n    → O(n)   ← slower for small n
    Algorithm B: 0.001n²  → O(n²)  ← faster for small n
    For n > 1,000,000: A wins. Big O = asymptotic behavior.
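The worked example above can be spot-checked numerically. A minimal Kotlin sketch (the names `f` and `g` are just illustrative):

```kotlin
// f(n) = 3n² + 7n + 15 and g(n) = n², with witnesses c = 4, n₀ = 10.
fun f(n: Long): Long = 3 * n * n + 7 * n + 15
fun g(n: Long): Long = n * n

fun main() {
    val c = 4L
    val n0 = 10L
    // The definition requires f(n) ≤ c·g(n) for ALL n ≥ n₀;
    // a finite scan can't prove that, but it can catch a bad witness.
    val holds = (n0..100_000L).all { n -> f(n) <= c * g(n) }
    println(holds)   // true
}
```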
Big Ω, Big Θ, and Other Notations
| Notation | Meaning | Analogy |
|---|---|---|
| O(g(n)) | f grows at most as fast as g | f ≤ g (upper bound) |
| Ω(g(n)) | f grows at least as fast as g | f ≥ g (lower bound) |
| Θ(g(n)) | f grows exactly as fast as g | f = g (tight bound) |
| o(g(n)) | f grows strictly slower | f < g (strict upper) |
| ω(g(n)) | f grows strictly faster | f > g (strict lower) |
Common Complexity Classes — Ranked
O(1) < O(log n) < O(√n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(n!)
| Class | Name | n=10 | n=100 | n=1000 | n=10⁶ | Feasible? |
|---|---|---|---|---|---|---|
| O(1) | Constant | 1 | 1 | 1 | 1 | Always |
| O(log n) | Logarithmic | 3 | 7 | 10 | 20 | Always |
| O(√n) | Square root | 3 | 10 | 32 | 1,000 | Always |
| O(n) | Linear | 10 | 100 | 1,000 | 10⁶ | Always |
| O(n log n) | Linearithmic | 33 | 664 | 9,966 | 2×10⁷ | Always |
| O(n²) | Quadratic | 100 | 10⁴ | 10⁶ | 10¹² | n ≤ ~10⁴ |
| O(n³) | Cubic | 1,000 | 10⁶ | 10⁹ | 10¹⁸ | n ≤ ~500 |
| O(2ⁿ) | Exponential | 1,024 | 10³⁰ | — | — | n ≤ ~25 |
| O(n!) | Factorial | 3.6×10⁶ | — | — | — | n ≤ ~12 |
O(1) — Constant Time
```kotlin
val x = arr[5]                 // Array access: O(1)
val v = map["key"]             // HashMap get: O(1) avg
val exists = set.contains(42)  // HashSet: O(1) avg
stack.addLast(10)              // Stack push: O(1)
stack.removeLast()             // Stack pop: O(1)
head = ListNode(42, head)      // LL insert at head: O(1)
val n = arr.size               // Array length: O(1)

// O(1) ≠ "fast" — it means "bounded by a constant"
fun constantWork() {
    for (i in 0 until 1_000_000) { /* ... */ }
}
// 1,000,000 ops but CONSTANT regardless of input → still O(1)
```
O(log n) — Logarithmic Time
Input is halved at each step. Doubling n adds only ONE extra op.

    n = 1,000,000
    Step 1:  1,000,000 → 500,000
    Step 2:    500,000 → 250,000
    ...
    Step 20: 1  ← done!

    log₂(1,000,000) ≈ 20
```kotlin
// Binary search
fun binarySearch(arr: IntArray, target: Int): Int {
    var lo = 0
    var hi = arr.size - 1
    while (lo <= hi) {                    // runs log₂(n) times
        val mid = lo + (hi - lo) / 2
        when {
            arr[mid] == target -> return mid
            arr[mid] < target  -> lo = mid + 1
            else               -> hi = mid - 1
        }
    }
    return -1
}
```
| Source | Why log n |
|---|---|
| Binary search | Halves the search space |
| Balanced BST ops | Halves the tree at each level |
| Heap insert/delete | Height of complete tree = log n |
| Exponentiation by squaring | Halves the exponent |
O(n) Linear & O(n log n) Linearithmic
```kotlin
// O(n) — process each element once
fun findMax(arr: IntArray): Int {
    var max = arr[0]
    for (x in arr) max = maxOf(max, x)   // n iterations
    return max
}
// Multiple passes are still O(n): 3n = O(n)

// O(n log n) — divide-and-conquer with linear work per level
// Merge sort: log n levels × n work each = O(n log n)
arr.sort()   // O(n log n) — TimSort

// Comparison sorting lower bound: Ω(n log n)
// Proof: n! orderings, each comparison halves the space
// → log₂(n!) = Θ(n log n) comparisons needed
```
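The "log n levels × n merge work per level" shape is easiest to see in a minimal merge sort sketch (a teaching version, not the stdlib's TimSort):

```kotlin
// Each call splits in half (log₂ n levels); the merge below does
// O(n) work per level → O(n log n) overall.
fun mergeSort(a: IntArray): IntArray {
    if (a.size <= 1) return a
    val mid = a.size / 2
    val left = mergeSort(a.copyOfRange(0, mid))
    val right = mergeSort(a.copyOfRange(mid, a.size))
    val out = IntArray(a.size)
    var i = 0; var j = 0; var k = 0
    while (i < left.size && j < right.size)          // linear merge
        out[k++] = if (left[i] <= right[j]) left[i++] else right[j++]
    while (i < left.size) out[k++] = left[i++]
    while (j < right.size) out[k++] = right[j++]
    return out
}

fun main() {
    println(mergeSort(intArrayOf(5, 2, 9, 1, 5, 6)).joinToString())
    // 1, 2, 5, 5, 6, 9
}
```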
O(n²) Quadratic & O(n³) Cubic
```kotlin
// O(n²) — nested loops
for (i in 0 until n) {
    for (j in i + 1 until n) {           // n(n-1)/2 = O(n²) pairs
        if (arr[i] + arr[j] == target) return true
    }
}
// Triangular loops (j < i) are still O(n²):
// 0 + 1 + 2 + ... + (n-1) = n(n-1)/2 = O(n²)

// O(n³) — three nested loops
// Matrix multiplication, Floyd-Warshall
for (k in 0 until n)
    for (i in 0 until n)
        for (j in 0 until n)
            dist[i][j] = minOf(dist[i][j], dist[i][k] + dist[k][j])
```
O(2ⁿ) Exponential & O(n!) Factorial
```kotlin
// O(2ⁿ) — each element: include or exclude = 2 choices
fun fib(n: Int): Long {
    if (n <= 1) return n.toLong()
    return fib(n - 1) + fib(n - 2)   // 2 calls per level → O(2ⁿ)
}
// Subsets: 2ⁿ subsets of n elements
// Usually a starting point → add memoization → drops to polynomial (DP)

// O(n!) — all permutations
// n=10 → 3,628,800 | n=15 → 1.3×10¹² | n=20 → 2.4×10¹⁸
// Brute-force TSP, N-Queens without pruning
```
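As the comment says, memoization collapses the O(2ⁿ) recursion tree to O(n), because each subproblem is computed once and cached. A sketch:

```kotlin
// Memoized Fibonacci: O(n) time, O(n) space for the cache.
fun fibMemo(n: Int, memo: LongArray = LongArray(n + 1) { -1L }): Long {
    if (n <= 1) return n.toLong()
    if (memo[n] != -1L) return memo[n]       // already computed → O(1)
    memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo)
    return memo[n]
}

fun main() {
    // Naive fib(50) makes ~2⁵⁰ calls; the memoized version makes ~50.
    println(fibMemo(50))   // 12586269025
}
```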
Factorial growth:

| n | n! |
|---|---|
| 5 | 120 |
| 10 | 3,628,800 |
| 12 | 479,001,600 |
| 15 | 1.3 × 10¹² |
| 20 | 2.4 × 10¹⁸ ← at one op per nanosecond, ~76 years |
O(√n) — Square Root Time
```kotlin
// Primality test
fun isPrime(n: Int): Boolean {
    if (n < 2) return false
    var i = 2
    while (i * i <= n) {       // √n iterations
        if (n % i == 0) return false
        i++
    }
    return true
}
// Why √n? If n = a × b and a ≤ b, then a ≤ √n
```
Amortized Analysis
Average cost per op over a sequence, even when some ops are expensive.

ArrayList doubling:

    Size:   1 → 2 → 4 → 8 → 16 → ...
    Copies: 1   2   4   8   16   ...

After n pushes, total copies = 1 + 2 + 4 + ... + n ≈ 2n
Amortized cost per push: 2n/n = O(1)

Amortized ≠ Average case:
- Amortized: guaranteed avg over ANY sequence of n ops
- Average case: expected value over random inputs
- Amortized is STRONGER — holds for worst-case sequences
| Operation | Worst Case | Amortized |
|---|---|---|
| ArrayList.add (append) | O(n) resize | O(1) |
| HashMap.put | O(n) rehash | O(1) |
| ArrayDeque.addFirst/Last | O(n) resize | O(1) |
| Union-Find (find) | O(log n) | O(α(n)) ≈ O(1) |
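The doubling argument is easy to check by simulation. This sketch only counts copies in a doubling-capacity model (the JVM's actual ArrayList grows by ~1.5×, but the amortized argument works the same way):

```kotlin
fun main() {
    val n = 1_000_000
    var capacity = 1
    var size = 0
    var totalCopies = 0L
    repeat(n) {
        if (size == capacity) {   // full: double capacity, copy everything
            totalCopies += size
            capacity *= 2
        }
        size++                    // the append itself is one O(1) write
    }
    println(totalCopies)            // 1048575 ≈ n
    println(totalCopies <= 2L * n)  // true → O(1) amortized per push
}
```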
Space Complexity
```kotlin
// O(1) — constant extra memory
fun findMax(arr: IntArray): Int {
    var max = arr[0]
    for (x in arr) max = maxOf(max, x)   // one variable
    return max
}

// O(n) — HashMap grows with input
val freq = mutableMapOf<Int, Int>()
for (x in arr) freq[x] = (freq[x] ?: 0) + 1

// O(h) — recursion stack on a tree
// Balanced: h = O(log n). Skewed: h = O(n)

// O(n²) — 2D DP table
val dp = Array(n) { IntArray(n) }
```
| More Space | Less Time | Example |
|---|---|---|
| HashMap | O(1) lookup vs O(n) scan | Two Sum |
| Memoization | O(n) → O(1) per subproblem | Fibonacci |
| Prefix sum array | O(1) range sum query | Range sum |
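The first row of the table as code — spending O(n) space on a HashSet drops Two Sum from the O(n²) pair scan to a single O(n) pass:

```kotlin
// Does any pair in arr sum to target? O(n) time, O(n) space.
fun hasPairWithSum(arr: IntArray, target: Int): Boolean {
    val seen = HashSet<Int>()                // extra O(n) memory
    for (x in arr) {
        if (target - x in seen) return true  // O(1) avg lookup
        seen.add(x)
    }
    return false
}

fun main() {
    println(hasPairWithSum(intArrayOf(3, 8, 11, 2), 10))  // true (8 + 2)
    println(hasPairWithSum(intArrayOf(1, 2, 3), 100))     // false
}
```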
Analyzing Loops
```kotlin
// Single loop: O(n)
for (i in 0 until n) { /* O(1) */ }

// Nested, same range: O(n²)
for (i in 0 until n)
    for (j in 0 until n) { /* O(1) */ }

// Nested, dependent: O(n²)
for (i in 0 until n)
    for (j in 0 until i) { /* 0+1+...+(n-1) = n(n-1)/2 */ }

// Different variables: O(n × m) — NOT O(n²)
for (i in 0 until n)
    for (j in 0 until m) { /* O(1) */ }

// Halving/doubling: O(log n)
var i = n
while (i > 0) { i /= 2 }
var k = 1
while (k < n) { k *= 2 }

// Nested linear × log: O(n log n)
for (i in 0 until n) {
    var j = 1
    while (j < n) { j *= 2 }
}

// Harmonic series (Sieve pattern): O(n log n)
for (i in 1..n)
    for (j in i..n step i) { /* n/i per outer iteration */ }
// Total: n/1 + n/2 + ... + n/n = n × H(n) ≈ O(n log n)

// Two pointers: O(n) despite looking nested
var left = 0
var right = n - 1
while (left < right) { /* each pointer moves at most n times */ }
```
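The harmonic-series claim is easy to verify empirically — count the sieve pattern's inner iterations and compare against n ln n (`ln` from `kotlin.math`):

```kotlin
import kotlin.math.ln

fun main() {
    val n = 100_000
    var count = 0L
    for (i in 1..n) {
        var j = i
        while (j <= n) { count++; j += i }    // ⌊n/i⌋ inner iterations
    }
    // n·H(n) ≈ n(ln n + 0.577) — nowhere near the n² = 10¹⁰ that a
    // nested same-range loop would cost.
    println(count)                             // ≈ 1.2 × 10⁶
    println(count < 2 * n * ln(n.toDouble()))  // true
}
```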
Analyzing Recursion & Master Theorem
| Recurrence | Solution | Example |
|---|---|---|
| T(n) = T(n-1) + O(1) | O(n) | Factorial, linear scan |
| T(n) = T(n-1) + O(n) | O(n²) | Selection sort |
| T(n) = 2T(n/2) + O(1) | O(n) | Tree traversal |
| T(n) = 2T(n/2) + O(n) | O(n log n) | Merge sort |
| T(n) = T(n/2) + O(1) | O(log n) | Binary search |
| T(n) = T(n/2) + O(n) | O(n) | Quickselect (average) |
| T(n) = 2T(n-1) + O(1) | O(2ⁿ) | Tower of Hanoi |
Master Theorem: T(n) = aT(n/b) + O(nᵈ)
a = number of recursive calls, b = division factor, d = work exponent

Compare log_b(a) with d:
- Case 1: d < log_b(a) → O(n^log_b(a))  (recursion dominates)
- Case 2: d = log_b(a) → O(nᵈ × log n)  (balanced)
- Case 3: d > log_b(a) → O(nᵈ)          (work dominates)

Merge sort: a=2, b=2, d=1. log₂(2) = 1 = d. Case 2 → O(n log n) ✓
Binary search: a=1, b=2, d=0. log₂(1) = 0 = d. Case 2 → O(log n) ✓
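The three-way comparison can be sketched as code. A toy classifier — it assumes the recurrence fits the exact aT(n/b) + O(nᵈ) shape and ignores the Case-3 regularity condition:

```kotlin
import kotlin.math.abs
import kotlin.math.ln

// Returns which Master Theorem case T(n) = aT(n/b) + O(n^d) falls into.
fun masterCase(a: Double, b: Double, d: Double): Int {
    val critical = ln(a) / ln(b)        // log_b(a), via change of base
    return when {
        abs(d - critical) < 1e-9 -> 2   // O(n^d log n)
        d < critical             -> 1   // O(n^log_b(a))
        else                     -> 3   // O(n^d)
    }
}

fun main() {
    println(masterCase(2.0, 2.0, 1.0))  // 2 — merge sort → O(n log n)
    println(masterCase(1.0, 2.0, 0.0))  // 2 — binary search → O(log n)
    println(masterCase(2.0, 2.0, 2.0))  // 3 — 2T(n/2)+O(n²) → O(n²)
}
```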
Simplification Rules
1. Drop constants: 3n → O(n), 100n² → O(n²)
2. Drop lower-order terms: n² + n → O(n²), 2ⁿ + n³ → O(2ⁿ)
3. Different variables stay: O(n + m), NOT O(n) unless m ≤ n
4. Sequential = Add: O(n) + O(n²) = O(n²)
5. Nested = Multiply: O(n) × O(m) = O(n×m)
6. Conditional = Worst branch: if O(n) else O(n²) → O(n²)
7. Log base is irrelevant: O(log₂ n) = O(log₁₀ n) = O(ln n) = O(log n)
Log Properties Every Programmer Should Know
Essential identities:

    log(a × b) = log(a) + log(b)
    log(a / b) = log(a) - log(b)
    log(aⁿ)    = n × log(a)
    log_b(a)   = log(a) / log(b)   ← change of base

Common values:

    log₂(1) = 0    log₂(2) = 1    log₂(1024) = 10
    log₂(10⁶) ≈ 20    log₂(10⁹) ≈ 30
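The change-of-base identity is also how you compute log₂ when a library only gives you `ln` (Kotlin happens to ship both in `kotlin.math`):

```kotlin
import kotlin.math.ln
import kotlin.math.log2

fun main() {
    println(log2(1024.0))          // 10.0
    println(ln(1024.0) / ln(2.0))  // same value, within floating-point error
    println(log2(1_000_000.0))     // ≈ 19.93 → ~20 halvings, as in the table
}
```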
| Source | Why log n |
|---|---|
| Binary search | Halving the search space |
| Balanced trees | Height = log₂(n) |
| Merge sort levels | Halving log₂(n) times |
| Heap operations | Height of complete binary tree |
| Bit manipulation | Bits in n = ⌊log₂(n)⌋ + 1 |
Sorting Algorithm Complexities
| Algorithm | Best | Average | Worst | Space | Stable? |
|---|---|---|---|---|---|
| Bubble sort | O(n) | O(n²) | O(n²) | O(1) | Yes |
| Selection sort | O(n²) | O(n²) | O(n²) | O(1) | No |
| Insertion sort | O(n) | O(n²) | O(n²) | O(1) | Yes |
| Merge sort | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes |
| Quick sort | O(n log n) | O(n log n) | O(n²) | O(log n) | No |
| Heap sort | O(n log n) | O(n log n) | O(n log n) | O(1) | No |
| Timsort | O(n) | O(n log n) | O(n log n) | O(n) | Yes |
| Counting sort | O(n+k) | O(n+k) | O(n+k) | O(k) | Yes |
| Radix sort | O(d(n+k)) | O(d(n+k)) | O(d(n+k)) | O(n+k) | Yes |
Data Structure Operations
| Operation | Array | ArrayList | LinkedList | HashMap | TreeMap |
|---|---|---|---|---|---|
| Access [i] | O(1) | O(1) | O(n) | — | — |
| Search | O(n) | O(n) | O(n) | O(1) avg | O(log n) |
| Insert end | — | O(1)* | O(1) | O(1) avg | O(log n) |
| Insert start | O(n) | O(n) | O(1) | — | — |
| Delete | O(n) | O(n) | O(1)** | O(1) avg | O(log n) |
| Min/Max | O(n) | O(n) | O(n) | O(n) | O(log n) |

\* amortized (occasional resize) · \*\* given a reference to the node
Practical Limits — What N Can Your Algorithm Handle?
| Complexity | Max n (1 sec) | Max n (10 sec) |
|---|---|---|
| O(log n) | Any | Any |
| O(n) | 10⁸ | 10⁹ |
| O(n log n) | ~5×10⁶ | ~5×10⁷ |
| O(n²) | ~10⁴ | ~3×10⁴ |
| O(n³) | ~500 | ~1000 |
| O(2ⁿ) | ~25 | ~28 |
| O(n!) | ~12 | ~13 |
| Constraint | Expected Complexity | Common Approach |
|---|---|---|
| n ≤ 10 | O(n!) or O(2ⁿ) | Backtracking, brute force |
| n ≤ 20 | O(2ⁿ) or O(n×2ⁿ) | Bitmask DP |
| n ≤ 500 | O(n³) | DP, Floyd-Warshall |
| n ≤ 5,000 | O(n²) | DP, nested loops |
| n ≤ 10⁵ | O(n log n) | Sorting, binary search |
| n ≤ 10⁶ | O(n) | Two pointers, greedy |
| n ≤ 10¹⁸ | O(log n) or O(√n) | Binary search, math |
Best, Average, and Worst Case
Quicksort:
- Best: O(n log n) — pivot always lands in the middle
- Average: O(n log n) — random pivots split reasonably
- Worst: O(n²) — pivot always at an extreme (e.g., sorted input)

Binary search: Best O(1), Avg O(log n), Worst O(log n)
HashMap: Best O(1), Avg O(1), Worst O(n)

Interview tip: state the worst case by default.
- "Quicksort is O(n log n) average, O(n²) worst case."
- "HashMap lookup is O(1) average, O(n) worst case."
- "ArrayList.add is O(1) amortized, O(n) worst case."
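The worst case is easy to trigger. A quicksort sketch that always takes the first element as pivot (deliberately naive, for illustration) does exactly n(n-1)/2 comparisons on already-sorted input:

```kotlin
var comparisons = 0L

// Lomuto-style partition with the FIRST element as pivot — sorted
// input makes every partition maximally unbalanced.
fun quickSort(a: IntArray, lo: Int, hi: Int) {
    if (lo >= hi) return
    val pivot = a[lo]
    var i = lo + 1                                 // boundary of "< pivot"
    for (j in lo + 1..hi) {
        comparisons++
        if (a[j] < pivot) {
            a[i] = a[j].also { a[j] = a[i] }       // swap a[i], a[j]
            i++
        }
    }
    a[lo] = a[i - 1].also { a[i - 1] = a[lo] }     // place pivot
    quickSort(a, lo, i - 2)
    quickSort(a, i, hi)
}

fun main() {
    val n = 1000
    quickSort(IntArray(n) { it }, 0, n - 1)        // already sorted
    println(comparisons)   // 499500 = n(n-1)/2 → the O(n²) worst case
}
```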
Kotlin/JVM Standard Library Complexities
| List (ArrayList) | Time |
|---|---|
| list[i] | O(1) |
| list.add(e) (append) | O(1) amortized |
| list.add(i, e) (insert) | O(n) |
| list.removeAt(i) | O(n) |
| list.contains(e) | O(n) |
| list.sort() | O(n log n) |
| list.subList(from, to) | O(1) view |
| Map / Set | HashMap | TreeMap |
|---|---|---|
| get / contains | O(1) avg | O(log n) |
| put / add | O(1) avg* | O(log n) |
| remove | O(1) avg | O(log n) |
| Iterate | O(n + capacity) | O(n) |
| floor/ceiling | — | O(log n) |

\* amortized over occasional rehashes
| String | Time |
|---|---|
| s.length, s[i] | O(1) |
| s + t (concatenation) | O(n + m) |
| s.substring(i, j) | O(j - i) |
| s.contains(t) | O(n × m) worst |
| StringBuilder.append | O(len) amortized |
| Functional | Time | New Collection? |
|---|---|---|
| map { }, filter { } | O(n) | Yes |
| forEach { }, reduce { } | O(n) | No |
| groupBy { }, associate { } | O(n) | Yes |
| any { }, all { } | O(n) | No |
| distinct() | O(n) | Yes |
Complexity Pitfalls
```kotlin
// 1. Hidden O(n) in the standard library
if (element in list) { /* O(n)! Use a Set for O(1) */ }

var result = ""
for (i in 0 until n) result += "x"   // O(n²) total! Use StringBuilder

list.add(0, element)                 // O(n) shift! Use ArrayDeque.addFirst()
map.containsValue(v)                 // O(n) scan! Must check all values

// 2. Amortized ≠ worst case
// ArrayList.add: O(1) amortized, O(n) worst (resize)
// HashMap.get: O(1) avg, O(n) worst (collisions)

// 3. Recursion uses stack space: O(h) for trees
// Balanced: O(log n). Skewed: O(n) → StackOverflow!

// 4. Different variables: O(n × m), NOT O(n²)

// 5. Two pointers look nested but are O(n) total
// Each pointer moves at most n times

// 6. Log base doesn't matter: O(log₂ n) = O(log n)

// 7. Sorting doesn't help if you still scan O(n) per element
// Sort O(n log n) + nested scan O(n²) = O(n²)

// 8. Check constraints: n ≤ 10⁵ but O(n²) = 10¹⁰ → TLE

// 9. Subsets (2ⁿ) vs Permutations (n!)
// n=20: 2²⁰ ≈ 10⁶ (ok), 20! ≈ 10¹⁸ (not ok)

// 10. Interview: state complexity, explain WHY, mention tradeoffs
// "O(n log n) time, O(n) space. Sort is n log n, scan is n."
```
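Pitfall 1 in runnable form: `+=` on a String copies the whole string every iteration (Θ(n²) characters copied in total), while `StringBuilder.append` is amortized O(1) per call:

```kotlin
fun concatNaive(n: Int): String {
    var result = ""
    for (i in 0 until n) result += "x"   // iteration i copies i chars
    return result
}

fun concatBuilder(n: Int): String {
    val sb = StringBuilder()             // resizable buffer, no recopying
    for (i in 0 until n) sb.append("x")
    return sb.toString()
}

fun main() {
    // Same output, wildly different work: ~n²/2 vs ~n char copies.
    println(concatNaive(1_000) == concatBuilder(1_000))   // true
}
```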