Asymptotic notation, complexity classes O(1) through O(n!), analyzing loops & recursion, Master Theorem, data structure operations & practical limits
📈
Growth Rate Comparison
𝒪
What Is Complexity Analysis?
How runtime grows as n increases:
n=10 n=1000 n=10⁶
O(1) 1 1 1
O(log n) 3 10 20
O(n) 10 1,000 10⁶
O(n log n) 30 10,000 2×10⁷
O(n²) 100 10⁶ 10¹²
O(2ⁿ) 1,024 ∞ ∞
No hardware fixes a bad algorithm.
Measure            Question
Time complexity    How many operations as n grows?
Space complexity   How much memory as n grows?
Best case          Minimum ops for any input of size n
Average case       Expected ops over all inputs of size n
Worst case         Maximum ops for any input of size n (most common)
We count dominant operations — ignore constant factors (2n and 5n are both O(n))
Drop lower-order terms — n² + 100n is O(n²)
This gives a machine-independent measure of efficiency
Fundamentals
≤
Big O — Formal Definition & Intuition
Formal: f(n) = O(g(n)) if there exist constants c, n₀ such that:
f(n) ≤ c · g(n) for all n ≥ n₀
Intuition: f(n) grows no faster than g(n), ignoring constants.
Example: f(n) = 3n² + 7n + 15
Is f(n) = O(n²)? Let c=4, n₀=10:
3n² + 7n + 15 ≤ 4n² for all n ≥ 10 ✓
The constants 3, 7, 15 don't matter.
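The witness constants can be spot-checked mechanically. A minimal sketch (function names are mine, not from any library):

```kotlin
// f(n) = 3n² + 7n + 15; claimed witnesses: c = 4, n₀ = 10
fun f(n: Long): Long = 3 * n * n + 7 * n + 15
fun g(n: Long): Long = n * n

// Spot-check f(n) ≤ c·g(n) for every n in n₀..upTo
fun witnessesHold(c: Long, n0: Long, upTo: Long): Boolean =
    (n0..upTo).all { n -> f(n) <= c * g(n) }
```

Here `witnessesHold(4, 10, 100_000)` is true, while c=3 fails immediately (3n² can never absorb the 7n + 15 term).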
What Big O hides:
Algorithm A: 1000n → O(n) ← slower for small n
Algorithm B: 0.001n² → O(n²) ← faster for small n
For n > 1,000,000: A wins. Big O = asymptotic behavior.
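The crossover can be made concrete with the (hypothetical) step counts above:

```kotlin
// Hypothetical step counts from the comparison above
fun costA(n: Double): Double = 1000.0 * n      // O(n), large constant
fun costB(n: Double): Double = 0.001 * n * n   // O(n²), tiny constant

// B wins for small n; A wins past the crossover where
// 1000n = 0.001n², i.e. n = 1,000,000
```

For n = 1,000 B does 1,000 steps vs A's 1,000,000; for n = 10⁷ B does 10¹¹ steps vs A's 10¹⁰.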
Θ
Big Ω, Big Θ, and Other Notations
Notation   Meaning                          Analogy
O(g(n))    f grows at most as fast as g     f ≤ g (upper bound)
Ω(g(n))    f grows at least as fast as g    f ≥ g (lower bound)
Θ(g(n))    f grows exactly as fast as g     f = g (tight bound)
o(g(n))    f grows strictly slower than g   f < g (strict upper)
ω(g(n))    f grows strictly faster than g   f > g (strict lower)
In interviews, "O(n²)" usually means Θ(n²) — the tight bound
Big O = worst-case upper bound (most common usage)
Ω = lower bound (important in proofs like "comparison sorting requires Ω(n log n)")
val x = arr[5]                // Array access: O(1)
val v = map["key"]            // HashMap get: O(1) avg
val exists = set.contains(42) // HashSet: O(1) avg
stack.addLast(10)             // Stack push: O(1)
stack.removeLast()            // Stack pop: O(1)
head = ListNode(42, head)     // LL insert at head: O(1)
val n = arr.size              // Array length: O(1)

// O(1) ≠ "fast" — it means "bounded by a constant"
fun constantWork() {
    for (i in 0 until 1_000_000) { /* ... */ }
}
// 1,000,000 ops but CONSTANT regardless of input → still O(1)
½
O(log n) — Logarithmic Time
Input is halved at each step. Doubling n adds only ONE extra op.
n = 1,000,000
Step 1: 1,000,000 → 500,000
Step 2: 500,000 → 250,000
...
Step 20: 1 ← done! log₂(1,000,000) ≈ 20
// Binary search
fun binarySearch(arr: IntArray, target: Int): Int {
    var lo = 0
    var hi = arr.size - 1
    while (lo <= hi) {               // runs log₂(n) times
        val mid = lo + (hi - lo) / 2
        when {
            arr[mid] == target -> return mid
            arr[mid] < target -> lo = mid + 1
            else -> hi = mid - 1
        }
    }
    return -1
}
Source                       Why log n
Binary search                Halves the search space
Balanced BST ops             Halves the tree at each level
Heap insert/delete           Height of complete tree = log n
Exponentiation by squaring   Halves the exponent
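The last row of the table deserves code: exponentiation by squaring halves the exponent each iteration, so bⁿ costs O(log n) multiplications instead of n. A minimal sketch (overflow handling omitted):

```kotlin
// Exponentiation by squaring: O(log exp) multiplications
fun fastPow(base: Long, exp: Long): Long {
    var b = base
    var e = exp
    var result = 1L
    while (e > 0) {
        if (e and 1L == 1L) result *= b // odd exponent: peel off one factor
        b *= b                          // square the base
        e = e shr 1                     // halve the exponent
    }
    return result
}
```

For example, `fastPow(2, 10)` computes 1024 in 4 squarings rather than 10 multiplications.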
─
O(n) Linear & O(n log n) Linearithmic
// O(n) — process each element once
fun findMax(arr: IntArray): Int {
    var max = arr[0]
    for (x in arr) max = maxOf(max, x) // n iterations
    return max
}
// Multiple passes are still O(n): 3n = O(n)

// O(n log n) — divide-and-conquer with linear work per level
// Merge sort: log n levels × n work each = O(n log n)
arr.sort() // O(n log n) — dual-pivot quicksort for primitives, TimSort for objects

// Comparison sorting lower bound: Ω(n log n)
// Proof: n! orderings, each comparison halves the space
// → log₂(n!) = Θ(n log n) comparisons needed
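The "log n levels × n work per level" structure is easiest to see in a merge sort written out. A minimal sketch (returns a new array rather than sorting in place):

```kotlin
// Merge sort: log₂(n) levels of splitting, O(n) merge work per level → O(n log n)
fun mergeSort(a: IntArray): IntArray {
    if (a.size <= 1) return a
    val mid = a.size / 2
    val left = mergeSort(a.copyOfRange(0, mid))
    val right = mergeSort(a.copyOfRange(mid, a.size))
    // Merge two sorted halves in O(n)
    val out = IntArray(a.size)
    var i = 0; var j = 0; var k = 0
    while (i < left.size && j < right.size)
        out[k++] = if (left[i] <= right[j]) left[i++] else right[j++]
    while (i < left.size) out[k++] = left[i++]
    while (j < right.size) out[k++] = right[j++]
    return out
}
```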
▦
O(n²) Quadratic & O(n³) Cubic
// O(n²) — nested loops
for (i in 0 until n) {
    for (j in i + 1 until n) {  // n(n-1)/2 = O(n²)
        if (arr[i] + arr[j] == target) return true
    }
}
// Triangular loops (j < i) are still O(n²):
// 0 + 1 + 2 + ... + (n-1) = n(n-1)/2 = O(n²)

// O(n³) — three nested loops
// Matrix multiplication, Floyd-Warshall
for (k in 0 until n)
    for (i in 0 until n)
        for (j in 0 until n)
            dist[i][j] = minOf(dist[i][j], dist[i][k] + dist[k][j])
💥
O(2ⁿ) Exponential & O(n!) Factorial
// O(2ⁿ) — each element: include or exclude = 2 choices
fun fib(n: Int): Long {
    if (n <= 1) return n.toLong()
    return fib(n - 1) + fib(n - 2) // 2 calls per level → O(2ⁿ)
}
// Subsets: 2ⁿ subsets of n elements
// Usually a starting point → add memoization → drops to polynomial (DP)

// O(n!) — all permutations
// n=10 → 3,628,800 | n=15 → 1.3×10¹² | n=20 → 2.4×10¹⁸
// Brute-force TSP, N-Queens without pruning
Factorial growth:
n n!
5 120
10 3,628,800
12 479,001,600
15 1.3 × 10¹²
20 2.4 × 10¹⁸ ← roughly the number of nanoseconds in 76 years
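The "add memoization → drops to polynomial" step mentioned above, sketched for Fibonacci: caching each subproblem collapses the O(2ⁿ) call tree to O(n), since every fib(k) is computed once.

```kotlin
// Memoized Fibonacci: O(n) time, O(n) space for the cache
fun fibMemo(n: Int, memo: LongArray = LongArray(n + 1) { -1L }): Long {
    if (n <= 1) return n.toLong()
    if (memo[n] != -1L) return memo[n]          // cache hit → O(1)
    memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo)
    return memo[n]
}
```

fibMemo(50) returns instantly; the naive O(2ⁿ) version would make on the order of 2⁵⁰ calls.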
√
O(√n) — Square Root Time
// Primality test
fun isPrime(n: Int): Boolean {
    if (n < 2) return false
    var i = 2
    while (i * i <= n) {  // √n iterations
        if (n % i == 0) return false
        i++
    }
    return true
}
// Why √n? If n = a × b and a ≤ b, then a ≤ √n
Analysis Techniques
∑
Amortized Analysis
Average cost per op over a sequence, even when some ops are expensive.

ArrayList doubling:
Size:   1 → 2 → 4 → 8 → 16 → ...
Copies: 1   2   4   8   16  ...
After n pushes, total copies = 1+2+4+...+n ≈ 2n
Amortized cost per push: 2n/n = O(1)

Amortized ≠ Average case
Amortized: guaranteed avg over ANY sequence of n ops
Average case: expected value over random inputs
Amortized is STRONGER — holds for worst-case sequences
Operation                  Worst Case    Amortized
ArrayList.add (append)     O(n) resize   O(1)
HashMap.put                O(n) rehash   O(1)
ArrayDeque.addFirst/Last   O(n) resize   O(1)
Union-Find (find)          O(log n)      O(α(n)) ≈ O(1)
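The doubling argument can be verified by simulation. A throwaway sketch that counts element copies under an ArrayList-style growth policy:

```kotlin
// Simulate capacity doubling and count total element copies over n pushes
fun totalCopies(pushes: Int): Long {
    var capacity = 1
    var size = 0
    var copies = 0L
    repeat(pushes) {
        if (size == capacity) {  // full → allocate 2× and copy everything over
            copies += size
            capacity *= 2
        }
        size++
    }
    return copies
}
```

For a million pushes the total copy count stays under 2n, so each push is O(1) amortized even though individual resizes copy up to n elements.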
⌸
Space Complexity
// O(1) — constant extra memory
fun findMax(arr: IntArray): Int {
    var max = arr[0]
    for (x in arr) max = maxOf(max, x) // one variable
    return max
}

// O(n) — HashMap grows with input
val freq = mutableMapOf<Int, Int>()
for (x in arr) freq[x] = (freq[x] ?: 0) + 1

// O(h) — recursion stack on a tree
// Balanced: h = O(log n). Skewed: h = O(n)

// O(n²) — 2D DP table
val dp = Array(n) { IntArray(n) }
More Space         Less Time                    Example
HashMap            O(1) lookup vs O(n) scan     Two Sum
Memoization        O(n) → O(1) per subproblem   Fibonacci
Prefix sum array   O(1) range sum query         Range sum
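The first row of the tradeoff table, sketched out: Two Sum trades an O(n) HashMap for a single O(n) pass instead of an O(n²) nested scan.

```kotlin
// Two Sum: O(n) time, O(n) space — returns indices of a pair summing to target
fun twoSum(nums: IntArray, target: Int): Pair<Int, Int>? {
    val seen = HashMap<Int, Int>()                 // value → index
    for ((i, x) in nums.withIndex()) {
        seen[target - x]?.let { return it to i }   // complement already seen?
        seen[x] = i
    }
    return null
}
```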
↻
Analyzing Loops
// Single loop: O(n)
for (i in 0 until n) { /* O(1) */ }

// Nested, same range: O(n²)
for (i in 0 until n)
    for (j in 0 until n) { /* O(1) */ }

// Nested, dependent: O(n²)
for (i in 0 until n)
    for (j in 0 until i) { /* 0+1+...+(n-1) = n(n-1)/2 */ }

// Different variables: O(n × m) — NOT O(n²)
for (i in 0 until n)
    for (j in 0 until m) { /* O(1) */ }

// Halving/doubling: O(log n)
var i = n; while (i > 0) { i /= 2 }
var j = 1; while (j < n) { j *= 2 }

// Nested linear × log: O(n log n)
for (i in 0 until n) { var j = 1; while (j < n) { j *= 2 } }

// Harmonic series (Sieve pattern): O(n log n)
for (i in 1..n) for (j in i..n step i) { /* n/i iterations per outer */ }
// Total: n/1 + n/2 + ... + n/n = n×H(n) ≈ O(n log n)

// Two pointers: O(n) despite looking nested
var left = 0; var right = n - 1
while (left < right) { /* each pointer moves at most n times */ }
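The two-pointer pattern as a complete, runnable function — a minimal sketch of pair-sum on a sorted array, where each iteration moves exactly one pointer so the loop runs at most n times:

```kotlin
// O(n): left and right together advance at most n steps total
fun hasPairWithSum(sorted: IntArray, target: Int): Boolean {
    var left = 0
    var right = sorted.size - 1
    while (left < right) {
        val sum = sorted[left] + sorted[right]
        when {
            sum == target -> return true
            sum < target -> left++   // need a larger sum
            else -> right--          // need a smaller sum
        }
    }
    return false
}
```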
↺
Analyzing Recursion & Master Theorem
Recurrence              Solution     Example
T(n) = T(n-1) + O(1)    O(n)         Factorial, linear scan
T(n) = T(n-1) + O(n)    O(n²)        Selection sort
T(n) = 2T(n/2) + O(1)   O(n)         Tree traversal
T(n) = 2T(n/2) + O(n)   O(n log n)   Merge sort
T(n) = T(n/2) + O(1)    O(log n)     Binary search
T(n) = T(n/2) + O(n)    O(n)         Quickselect (average)
T(n) = 2T(n-1) + O(1)   O(2ⁿ)        Tower of Hanoi
Master Theorem: T(n) = aT(n/b) + O(nᵈ)
a = recursive calls, b = division factor, d = work exponent
Compare log_b(a) with d:
Case 1: d < log_b(a) → O(n^log_b(a))  (recursion dominates)
Case 2: d = log_b(a) → O(nᵈ × log n)  (balanced)
Case 3: d > log_b(a) → O(nᵈ)          (work dominates)

Merge sort:    a=2, b=2, d=1. log₂(2)=1=d. Case 2 → O(n log n) ✓
Binary search: a=1, b=2, d=0. log₂(1)=0=d. Case 2 → O(log n) ✓
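The case selection is mechanical, so it can be coded. A toy helper (the function name and result strings are mine) that classifies a recurrence from a, b, d:

```kotlin
import kotlin.math.ln

// Classify T(n) = a·T(n/b) + O(n^d) by comparing d with log_b(a)
fun masterCase(a: Double, b: Double, d: Double): String {
    val critical = ln(a) / ln(b)   // log_b(a), by change of base
    return when {
        d < critical - 1e-9 -> "recursion dominates: O(n^log_b(a))"
        d > critical + 1e-9 -> "work dominates: O(n^d)"
        else -> "balanced: O(n^d log n)"
    }
}
```

masterCase(2.0, 2.0, 1.0) reports "balanced" (merge sort); masterCase(8.0, 2.0, 1.0) reports recursion dominating, giving O(n³) for naive 8-way matrix-multiply recursion.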
✂
Simplification Rules
1. Drop constants: 3n → O(n), 100n² → O(n²)
2. Drop lower terms: n² + n → O(n²), 2ⁿ + n³ → O(2ⁿ)
3. Different vars stay: O(n + m), NOT O(n) unless m ≤ n
4. Sequential = Add: O(n) + O(n²) = O(n²)
5. Nested = Multiply: O(n) × O(m) = O(n×m)
6. Conditional = Worst: if O(n) else O(n²) → O(n²)
7. Log base irrelevant: O(log₂ n) = O(log₁₀ n) = O(ln n) = O(log n)
㏒
Log Properties Every Programmer Should Know
Essential identities:
log(a × b) = log(a) + log(b)
log(a / b) = log(a) - log(b)
log(aⁿ) = n × log(a)
log_b(a) = log(a) / log(b) ← change of base

Common values:
log₂(1)=0   log₂(2)=1   log₂(1024)=10   log₂(10⁶)≈20   log₂(10⁹)≈30
Source              Why log n
Binary search       Halving the search space
Balanced trees      Height = log₂(n)
Merge sort levels   Halving log₂(n) times
Heap operations     Height of complete binary tree
Bit manipulation    Bits in n = ⌊log₂(n)⌋ + 1
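The last row maps directly to one Kotlin stdlib call: the bit length of n is ⌊log₂(n)⌋ + 1, computable in O(1) from the leading-zero count.

```kotlin
// Bits needed to represent a positive Int = ⌊log₂(n)⌋ + 1
fun bitLength(n: Int): Int = 32 - n.countLeadingZeroBits()
```

bitLength(1024) is 11 (since 1024 = 2¹⁰) and bitLength(10⁶) is 20, matching the common-values row above.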
Quick Reference
↕
Sorting Algorithm Complexities
Algorithm        Best         Average      Worst        Space      Stable?
Bubble sort      O(n)         O(n²)        O(n²)        O(1)       Yes
Selection sort   O(n²)        O(n²)        O(n²)        O(1)       No
Insertion sort   O(n)         O(n²)        O(n²)        O(1)       Yes
Merge sort       O(n log n)   O(n log n)   O(n log n)   O(n)       Yes
Quick sort       O(n log n)   O(n log n)   O(n²)        O(log n)   No
Heap sort        O(n log n)   O(n log n)   O(n log n)   O(1)       No
TimSort          O(n)         O(n log n)   O(n log n)   O(n)       Yes
Counting sort    O(n+k)       O(n+k)       O(n+k)       O(k)       Yes
Radix sort       O(d(n+k))    O(d(n+k))    O(d(n+k))    O(n+k)     Yes
Comparison sorting lower bound: Ω(n log n) — merge sort and heap sort are optimal
To beat n log n, use non-comparison sorts (counting, radix, bucket)
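One of those non-comparison sorts, sketched minimally — counting sort runs in O(n + k) for non-negative keys bounded by k, sidestepping the Ω(n log n) comparison bound:

```kotlin
// Counting sort: O(n + k) time, O(k) extra space; no comparisons at all
fun countingSort(a: IntArray, maxValue: Int): IntArray {
    val counts = IntArray(maxValue + 1)
    for (x in a) counts[x]++               // O(n) tally
    val out = IntArray(a.size)
    var idx = 0
    for (v in 0..maxValue)                 // O(k) sweep in key order
        repeat(counts[v]) { out[idx++] = v }
    return out
}
```

It only pays off when k is comparable to n — sorting 10 values spread over 0..10⁹ this way would allocate a billion-entry count array.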
≡
Data Structure Operations
Operation      Array   ArrayList   LinkedList   HashMap    TreeMap
Access [i]     O(1)    O(1)        O(n)         —          —
Search         O(n)    O(n)        O(n)         O(1) avg   O(log n)
Insert end     —       O(1)*       O(1)         O(1) avg   O(log n)
Insert start   O(n)    O(n)        O(1)         —          —
Delete         O(n)    O(n)        O(1)**       O(1) avg   O(log n)
Min/Max        O(n)    O(n)        O(n)         O(n)       O(log n)
*Amortized O(1). **O(1) with reference to node; O(n) to find it
Practical Limits — What N Can Your Algorithm Handle?
Complexity   Max n (1 sec)   Max n (10 sec)
O(log n)     Any             Any
O(n)         10⁸             10⁹
O(n log n)   ~5×10⁶          ~5×10⁷
O(n²)        ~10⁴            ~3×10⁴
O(n³)        ~500            ~1000
O(2ⁿ)        ~25             ~28
O(n!)        ~12             ~13
Constraint   Expected Complexity   Common Approach
n ≤ 10       O(n!) or O(2ⁿ)        Backtracking, brute force
n ≤ 20       O(2ⁿ) or O(n×2ⁿ)      Bitmask DP
n ≤ 500      O(n³)                 DP, Floyd-Warshall
n ≤ 5,000    O(n²)                 DP, nested loops
n ≤ 10⁵      O(n log n)            Sorting, binary search
n ≤ 10⁶      O(n)                  Two pointers, greedy
n ≤ 10¹⁸     O(log n) or O(√n)     Binary search, math
↔
Best, Average, and Worst Case
Quicksort:
Best: O(n log n) — pivot always lands in the middle
Average: O(n log n) — random pivots split reasonably
Worst: O(n²) — pivot always at extreme (sorted input)
Binary Search: Best O(1), Avg O(log n), Worst O(log n)
HashMap: Best O(1), Avg O(1), Worst O(n)
Interview tip: State worst case by default.
"QuickSort is O(n log n) average, O(n²) worst case."
"HashMap lookup is O(1) average, O(n) worst case."
"ArrayList.add is O(1) amortized, O(n) worst case."
Kotlin/JVM Standard Library
Ⓚ
Kotlin/JVM Standard Library Complexities
List (ArrayList)          Time
list[i]                   O(1)
list.add(e) (append)      O(1) amortized
list.add(i, e) (insert)   O(n)
list.removeAt(i)          O(n)
list.contains(e)          O(n)
list.sort()               O(n log n)
list.subList(from, to)    O(1) view
Map / Set        HashMap           TreeMap
get / contains   O(1) avg          O(log n)
put / add        O(1) avg*         O(log n)
remove           O(1) avg          O(log n)
Iterate          O(n + capacity)   O(n)
floor/ceiling    —                 O(log n)
*Amortized; an occasional rehash is O(n)
String                 Time
s.length, s[i]         O(1)
s + t (concatenation)  O(n + m)
s.substring(i, j)      O(j - i)
s.contains(t)          O(n × m) worst
StringBuilder.append   O(len) amortized
Functional                   Time   New Collection?
map { }, filter { }          O(n)   Yes
forEach { }, reduce { }      O(n)   No
groupBy { }, associate { }   O(n)   Yes
any { }, all { }             O(n)   No
distinct()                   O(n)   Yes
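Each "Yes" above means an intermediate collection is allocated per step. When chaining several operators, `asSequence()` fuses them into one lazy pass with no intermediates — a small sketch (the function is illustrative, not a stdlib API):

```kotlin
// Chained map/filter on a List allocate a new list per step;
// asSequence() evaluates lazily in a single O(n) pass instead
fun sumOfDoubledEvens(nums: List<Int>): Int =
    nums.asSequence()
        .filter { it % 2 == 0 }   // lazy: no intermediate list
        .map { it * 2 }           // lazy
        .sum()                    // terminal op drives one pass
```

The asymptotic cost is O(n) either way; sequences reduce the constant factor and the memory traffic, which matters on large inputs.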
Common Pitfalls
⚠
Complexity Pitfalls
// 1. Hidden O(n) in the standard library
if (element in list) { /* O(n)! Use a Set for O(1) */ }

var result = ""
for (i in 0 until n) result += "x" // O(n²) total! Use StringBuilder

list.add(0, element)  // O(n) shift! Use ArrayDeque.addFirst()
map.containsValue(v)  // O(n) scan! Must check all values

// 2. Amortized ≠ worst case
//    ArrayList.add: O(1) amortized, O(n) worst (resize)
//    HashMap.get: O(1) avg, O(n) worst (collisions)
// 3. Recursion uses stack space: O(h) for trees
//    Balanced: O(log n). Skewed: O(n) → StackOverflow!
// 4. Different variables: O(n × m), NOT O(n²)
// 5. Two pointers look nested but are O(n) total
//    Each pointer moves at most n times
// 6. Log base doesn't matter: O(log₂ n) = O(log n)
// 7. Sorting doesn't help if you still scan O(n) per element
//    Sort O(n log n) + nested scan O(n²) = O(n²)
// 8. Check constraints: n ≤ 10⁵ but O(n²) = 10¹⁰ → TLE
// 9. Subsets (2ⁿ) vs permutations (n!)
//    n=20: 2²⁰ ≈ 10⁶ (ok), 20! ≈ 10¹⁸ (not ok)
// 10. Interview: state complexity, explain WHY, mention tradeoffs
//     "O(n log n) time, O(n) space. Sort is n log n, scan is n."