Linked List Fundamentals
Linked List = linear data structure where each node contains data + pointer(s) to other nodes
Array (contiguous): [10][20][30][40][50]
0 1 2 3 4
Linked List:
[10|→]→[20|→]→[30|→]→[40|→]→[50|∅]
head tail
Key Trade-offs vs Arrays:
Insert/delete at known position → O(1) — just rewire pointers
Random access → O(n) — must traverse from head
No pre-allocated capacity → grows/shrinks one node at a time
| Type | Pointers per Node | Traversal | Use Case |
|---|---|---|---|
| Singly Linked | next only | Forward only | Simple sequences, stacks |
| Doubly Linked | next + prev | Both directions | Deques, LRU caches, undo history |
| Circular Singly | next (last → first) | Forward, wraps | Round-robin, circular buffers |
| Circular Doubly | next + prev (circular) | Both, wraps | Ring buffers, playlists |
Array:
┌────┬────┬────┬────┬────┐
│ 10 │ 20 │ 30 │ 40 │ 50 │ ← contiguous in memory
└────┴────┴────┴────┴────┘
↑ CPU cache-friendly: prefetching works well
Linked List:
┌──────┐ ┌──────┐ ┌──────┐
│10|──┼──→│20|──┼──→│30| ∅ │
└──────┘ └──────┘ └──────┘
0x48 0xA0 0x1C
↑ scattered in heap — cache misses
- Arrays are generally faster for iteration even though both are O(n) — the constant factor for arrays is much smaller due to spatial locality
- Linked list nodes are scattered in the heap — each node.next traversal may cause a cache miss
- CPU prefetching works well with arrays (sequential addresses) but not with linked lists (random addresses)
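These locality effects are easy to observe directly. A rough micro-benchmark sketch — absolute timings depend on your machine and JIT warm-up, so treat the printed numbers as illustrative only:

```kotlin
import java.util.LinkedList
import kotlin.system.measureNanoTime

/** Sum via plain iteration — identical O(n) work for both list types. */
fun sumOf(list: Iterable<Int>): Long {
    var s = 0L
    for (x in list) s += x
    return s
}

fun main() {
    val n = 1_000_000
    val arrayBacked = ArrayList<Int>(n).apply { for (i in 0 until n) add(i) }
    val nodeBacked = LinkedList<Int>().apply { for (i in 0 until n) add(i) }

    val tArray = measureNanoTime { sumOf(arrayBacked) }
    val tLinked = measureNanoTime { sumOf(nodeBacked) }

    // Same result, same asymptotic cost — the gap is pure cache behavior.
    println("ArrayList:  ${tArray / 1_000} µs")
    println("LinkedList: ${tLinked / 1_000} µs")
}
```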
// ── Node Definition ──
class ListNode<T>(
var value: T,
var next: ListNode<T>? = null
)
// ── Singly Linked List ──
class SinglyLinkedList<T> {
var head: ListNode<T>? = null
var size: Int = 0
/** Add to front: O(1) */
fun addFirst(value: T) {
head = ListNode(value, head)
size++
}
/** Add to back: O(n) — must traverse to find tail */
fun addLast(value: T) {
val newNode = ListNode(value)
if (head == null) { head = newNode }
else {
var current = head
        while (current?.next != null) current = current!!.next
current?.next = newNode
}
size++
}
/** Remove from front: O(1) */
fun removeFirst(): T {
val node = head ?: throw NoSuchElementException()
head = node.next
size--
return node.value
}
/** Get value at index: O(n) */
    operator fun get(index: Int): T {
        if (index < 0 || index >= size) throw IndexOutOfBoundsException("index $index, size $size")
        var current = head
repeat(index) { current = current?.next }
return current!!.value
}
}
addFirst is O(1) — create a new node pointing to the current head
addLast is O(n) — must walk to the end without a tail pointer
- Adding a tail pointer makes addLast O(1), but removeLast stays O(n) — can't find second-to-last without prev
- This limitation is why doubly linked lists exist
Node Memory:
Singly Node: ~32 bytes (value + next + header)
Doubly Node: ~40 bytes (value + prev + next + header)
For 1M nodes: ~8 MB more for doubly linked — worth it for O(1) at both ends
// ── DLL with Sentinel (circular) ──
class DListNode<T>(
var value: T,
var prev: DListNode<T>? = null,
var next: DListNode<T>? = null
)
class DoublyLinkedList<T> {
private val sentinel = DListNode<T?>(null)
var size = 0
init { sentinel.prev = sentinel; sentinel.next = sentinel }
/** Insert after node: O(1) */
private fun insertAfter(node: DListNode<T?>, value: T) {
val n = DListNode<T?>(value)
n.prev = node; n.next = node.next
node.next!!.prev = n; node.next = n; size++
}
fun addFirst(value: T) { insertAfter(sentinel, value) }
fun addLast(value: T) { insertAfter(sentinel.prev!!, value) }
/** Remove node: O(1) — if you have a reference to it */
private fun removeNode(node: DListNode<T?>): T {
node.prev!!.next = node.next
node.next!!.prev = node.prev
size--
return node.value as T
}
fun removeFirst() = removeNode(sentinel.next!!)
fun removeLast() = removeNode(sentinel.prev!!)
}
- All operations at both ends are O(1) — addFirst, addLast, removeFirst, removeLast
- The circular sentinel eliminates all null checks and edge cases
- Empty list: sentinel ↔ sentinel (points to itself)
Without sentinel: edge cases everywhere
if (head == null) → special case: empty list
if (head.next == null) → special case: single element
if (removing head) → special case: must reassign head
With sentinel: clean and uniform
headSentinel ⇄ [actual nodes] ⇄ tailSentinel
Empty list: headSentinel ⇄ tailSentinel
Same four-line pointer swap works for ANY node, at ANY position
Variants:
Dual sentinels: [HEAD_DUMMY] ⇄ [A] ⇄ [B] ⇄ [C] ⇄ [TAIL_DUMMY]
Single circular: [DUMMY] ⇄ [A] ⇄ [B] ⇄ [C] ⇄ [DUMMY]
Singly linked: [DUMMY] → [A] → [B] → [C] → null
- Always use sentinel nodes for doubly linked list problems in interviews — eliminates an entire category of bugs
- The sentinel's next is the real head; its prev is the real tail
- Insertion/deletion logic becomes uniform — no special cases for head/tail
| Feature | java.util.LinkedList | ArrayList |
|---|---|---|
| Implementation | Doubly linked list | Dynamic array |
| Random access get(i) | O(n) | O(1) |
| Add/remove at head | O(1) | O(n) — shifts |
| Add/remove at tail | O(1) | O(1) amortized |
| Add/remove in middle | O(1) after positioning* | O(n) — shifts |
| Memory per element | ~40 bytes | ~4-8 bytes (ref only) |
| Cache locality | Poor | Excellent |
| Implements Deque? | Yes | No |
// ── java.util.LinkedList ──
import java.util.LinkedList
val list = LinkedList(listOf(1, 2, 3, 4, 5))
list.addFirst(0) // O(1)
list.addLast(6) // O(1)
list.removeFirst() // O(1)
list.removeLast() // O(1)
list[2] // O(n) — traverses from nearer end
// ── Kotlin's ArrayDeque — prefer this in most cases ──
val deque = ArrayDeque<Int>()
deque.addFirst(1); deque.addLast(2)
deque.removeFirst(); deque.removeLast()
mutableListOf() returns ArrayList, not LinkedList — construct explicitly if needed
- Kotlin's ArrayDeque (circular array) is preferred for deque operations — better cache locality, less memory
- *Finding the position is O(n); the actual insert/remove once positioned is O(1)
Core Operations
// ── Using java.util.LinkedList ──
val list = LinkedList<Int>()
val fromList = LinkedList(listOf(1, 2, 3, 4, 5))
val fromRange = LinkedList((1..10).toList())
// ── Custom ListNode from array (interview style) ──
class ListNode(var value: Int, var next: ListNode? = null)
fun buildList(vararg values: Int): ListNode? {
val dummy = ListNode(0) // sentinel
var tail = dummy
for (v in values) {
tail.next = ListNode(v)
tail = tail.next!!
}
return dummy.next
}
val head = buildList(1, 2, 3, 4, 5)
- The "dummy head + tail pointer" pattern is extremely common — avoids special-casing the first insertion
- Always build with tail insertion to avoid O(n²) repeated traversals
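When testing hand-rolled lists, a tiny extension that flattens a chain into a regular List keeps assertions readable. This helper is not part of any standard API; ListNode and buildList are repeated from above so the snippet stands alone:

```kotlin
class ListNode(var value: Int, var next: ListNode? = null)

fun buildList(vararg values: Int): ListNode? {
    val dummy = ListNode(0)
    var tail = dummy
    for (v in values) {
        tail.next = ListNode(v)
        tail = tail.next!!
    }
    return dummy.next
}

/** Flatten a chain into a List — handy for printing and asserting. */
fun ListNode?.toList(): List<Int> {
    val out = mutableListOf<Int>()
    var curr = this
    while (curr != null) {
        out.add(curr.value)
        curr = curr.next
    }
    return out
}
```

For example, `buildList(1, 2, 3).toList()` yields `[1, 2, 3]`.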
// ── java.util.LinkedList ──
list.addFirst(1) // O(1) — at front
list.addLast(5) // O(1) — at back
list.add(7) // add defaults to addLast
list.add(2, 42) // O(n) — at index, must traverse
list.push(99) // stack push = addFirst
// ── Custom ListNode ──
/** Insert after a given node: O(1) */
fun insertAfter(node: ListNode, value: Int) {
node.next = ListNode(value, node.next)
}
/** Insert into sorted list: O(n) */
fun insertSorted(head: ListNode?, value: Int): ListNode? {
val dummy = ListNode(0, head)
    var curr: ListNode = dummy
    while (curr.next != null && curr.next!!.value < value)
        curr = curr.next!!
    curr.next = ListNode(value, curr.next)
return dummy.next
}
// ── java.util.LinkedList ──
list.removeFirst() // O(1)
list.removeLast() // O(1)
list.pollFirst() // O(1), returns null if empty
list.pop() // stack pop = removeFirst
list.remove(20) // O(n) — by value (first)
list.removeAt(1) // O(n) — by index
list.removeIf { it > 15 } // conditional removal
// ── Custom: remove ALL with value (dummy head pattern) ──
fun removeAll(head: ListNode?, target: Int): ListNode? {
val dummy = ListNode(0, head)
var curr: ListNode? = dummy
    while (curr != null && curr.next != null) {
        if (curr.next!!.value == target)
            curr.next = curr.next!!.next
        else curr = curr.next
}
return dummy.next
}
- Almost every removal uses a dummy head to avoid special-casing "what if the node to remove is the head?"
- Use removeIf { } for safe conditional removal — manual iteration + mutation causes ConcurrentModificationException
// ── java.util.LinkedList ──
for (item in list) println(item)
list.forEach { println(it) }
list.forEachIndexed { i, item -> println("$i: $item") }
// Iterator with safe removal
val iter = list.iterator()
while (iter.hasNext()) {
    if (iter.next() % 2 == 0) iter.remove()
}
// Descending iterator (doubly linked benefit)
val desc = list.descendingIterator()
// ── Custom ListNode traversal ──
var current = head
while (current != null) {
print("${current.value} → ")
current = current.next
}
// Recursive reverse print
fun printReverse(node: ListNode?) {
if (node == null) return
printReverse(node.next)
print("${node.value} ")
}
// ── java.util.LinkedList — returns List, not LinkedList ──
val doubled = list.map { it * 2 }
val evens = list.filter { it % 2 == 0 }
val sum = list.fold(0) { acc, n -> acc + n }
val chunks = list.chunked(2)
val windows = list.windowed(3)
// ── Custom: map & filter with dummy head ──
fun mapList(head: ListNode?, f: (Int) -> Int): ListNode? {
val dummy = ListNode(0)
var tail = dummy; var curr = head
while (curr != null) {
tail.next = ListNode(f(curr.value))
tail = tail.next!!; curr = curr.next
}
return dummy.next
}
// Convert result back if needed
val filtered = LinkedList(list.filter { it > 3 })
Algorithm Patterns
Many linked list patterns build on array techniques:
- Two Pointers → same/different speed pointers on linked lists (see Arrays page)
- LRU Cache → HashMap + Doubly Linked List (see HashMaps page)
- Merge Sort → linked lists are better suited for merge sort than arrays
Iterative Reversal:
Before: 1 → 2 → 3 → 4 → null
Step 1: null ← 1 2 → 3 → 4 → null
Step 2: null ← 1 ← 2 3 → 4 → null
Step 3: null ← 1 ← 2 ← 3 4 → null
Step 4: null ← 1 ← 2 ← 3 ← 4
After: 4 → 3 → 2 → 1 → null
Three pointers: prev, curr, next
1. Save next = curr.next
2. Reverse: curr.next = prev
3. Advance: prev = curr, curr = next
fun reverseList(head: ListNode?): ListNode? {
var prev: ListNode? = null
var curr = head
while (curr != null) {
val next = curr.next
curr.next = prev
prev = curr
curr = next
}
return prev // new head
}
// Reverse sublist between positions left and right
fun reverseBetween(head: ListNode?, left: Int, right: Int): ListNode? {
val dummy = ListNode(0, head)
var prev: ListNode? = dummy
repeat(left - 1) { prev = prev?.next }
var curr = prev?.next
for (i in 0 until right - left) {
val next = curr?.next
curr?.next = next?.next
next?.next = prev?.next
prev?.next = next
}
return dummy.next
}
When to Use
- reverse entire list — Why: Three-pointer technique (prev, curr, next). Save next, reverse pointer, advance. The most fundamental LL algorithm — building block for many others.
- reverse between positions — Why: Navigate to the node before position left, then repeatedly move the next node to the front of the sublist. Dummy head handles edge case when left=1.
- palindrome check — Why: Find middle with fast/slow, reverse second half, compare with first half. Uses reversal as a sub-routine for O(1) space palindrome detection.
- reorder list — Why: Find middle, reverse second half, then merge alternating nodes from each half. Three-step pattern: split → reverse → interleave.
- Most fundamental LL algorithm — master this, it's a building block for many others
- Always save next before overwriting curr.next — losing this reference is the #1 reversal bug
- Recursive version: O(n) space due to call stack — prefer iterative for production
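The recursive version mentioned above, sketched for completeness (ListNode repeated so the snippet compiles on its own):

```kotlin
class ListNode(var value: Int, var next: ListNode? = null)

/** Recursive reversal: O(n) time but O(n) call-stack space. */
fun reverseRecursive(head: ListNode?): ListNode? {
    if (head == null || head.next == null) return head
    val newHead = reverseRecursive(head.next)  // reverse everything after head
    head.next!!.next = head                    // old successor now points back
    head.next = null                           // old head becomes the new tail
    return newHead
}
```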
Two pointers, different speeds:
slow moves 1 step, fast moves 2 steps
Cycle Detection:
If cycle exists → they must meet (gap shrinks by 1 each step)
If no cycle → fast reaches null first
Find Middle:
1 → 2 → 3 → 4 → 5
start:  slow = 1, fast = 1
step 1: slow = 2, fast = 3
step 2: slow = 3, fast = 5 → fast at end, slow at middle
Cycle Entry (Phase 2):
After meeting, reset one pointer to head
Move both at speed 1 → they meet at cycle entry
Math: L = kC - d, where L=dist to cycle, C=cycle len
// Detect cycle
fun hasCycle(head: ListNode?): Boolean {
var slow = head; var fast = head
while (fast?.next != null) {
        slow = slow?.next; fast = fast?.next?.next
if (slow === fast) return true
}
return false
}
// Find middle node
fun middleNode(head: ListNode?): ListNode? {
var slow = head; var fast = head
while (fast?.next != null) {
        slow = slow?.next; fast = fast?.next?.next
}
return slow
}
// Find cycle entry point
fun detectCycle(head: ListNode?): ListNode? {
var slow = head; var fast = head
while (fast?.next != null) {
        slow = slow?.next; fast = fast?.next?.next
if (slow === fast) {
var entry = head
while (entry !== slow) {
entry = entry?.next; slow = slow?.next
}
return entry
}
}
return null
}
When to Use
- detect cycle — Why: If there's a cycle, fast and slow must meet (gap shrinks by 1 each step). If no cycle, fast reaches null first. O(n) time, O(1) space vs O(n) space for HashSet approach.
- find middle node — Why: When fast reaches the end, slow has traveled exactly half the distance. For even-length lists, check fast?.next?.next for first vs second middle.
- find cycle entry point — Why: After detecting the meeting point (Phase 1), reset one pointer to head and move both at speed 1. They meet at the cycle entry. Mathematical proof: L = kC - d.
- happy number — Why: Sum of squared digits eventually loops or reaches 1. Use fast/slow on the digit-sum sequence instead of a HashSet — O(1) space cycle detection.
- The Swiss Army knife of linked list algorithms — used in ~40% of LL interview problems
- Always use === (reference equality) when comparing nodes, not == (structural equality)
- Alternative: HashSet approach is O(n) space — Floyd's is O(1) space
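The happy-number entry above applies Floyd's idea to an implicit list: each "node" is a number and the digit-square sum acts as its next pointer. A sketch (function names are illustrative):

```kotlin
/** Sum of squared digits — the "next pointer" of the implicit sequence. */
fun squareDigitSum(n: Int): Int {
    var x = n
    var sum = 0
    while (x > 0) {
        val d = x % 10
        sum += d * d
        x /= 10
    }
    return sum
}

/** Happy number via Floyd's cycle detection on the digit-sum sequence. */
fun isHappy(n: Int): Boolean {
    var slow = n
    var fast = squareDigitSum(n)
    while (fast != 1 && slow != fast) {
        slow = squareDigitSum(slow)
        fast = squareDigitSum(squareDigitSum(fast))
    }
    return fast == 1   // reached 1 ⇒ happy; met in a cycle ⇒ not happy
}
```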
Merge two sorted lists:
list1: 1 → 3 → 5
list2: 2 → 4 → 6
result: 1 → 2 → 3 → 4 → 5 → 6
Pattern: dummy head + two pointers
Compare heads, attach smaller, advance that pointer
Attach remaining when one list exhausted
Merge K sorted lists:
Min-heap: O(N log k) time, O(k) space
Divide & conquer: O(N log k) time, O(log k) space
fun mergeTwoLists(l1: ListNode?, l2: ListNode?): ListNode? {
val dummy = ListNode(0)
var tail: ListNode = dummy
var p1 = l1; var p2 = l2
while (p1 != null && p2 != null) {
if (p1.value <= p2.value) {
tail.next = p1; p1 = p1.next
} else {
tail.next = p2; p2 = p2.next
}
tail = tail.next!!
}
tail.next = p1 ?: p2 // attach remaining
return dummy.next
}
// ── Merge K lists with min-heap ──
import java.util.PriorityQueue
fun mergeKLists(lists: List<ListNode?>): ListNode? {
val heap = PriorityQueue<ListNode>(compareBy { it.value })
lists.forEach { it?.let { heap.add(it) } }
val dummy = ListNode(0)
var tail = dummy
while (heap.isNotEmpty()) {
val node = heap.poll()
tail.next = node; tail = tail.next!!
node.next?.let { heap.add(it) }
}
return dummy.next
}
When to Use
- merge two sorted lists — Why: Dummy head + two pointers. Compare heads, attach smaller, advance that pointer. When one is exhausted, attach the remainder. O(n+m) time, O(1) space — reuses existing nodes.
- merge k sorted lists — Why: Min-heap holds the head of each list. Poll smallest, attach to result, push its next into heap. O(N log k) time where N = total nodes. Or divide-and-conquer pairwise merging.
- sort list (merge sort) — Why: Merge sort is ideal for linked lists — find middle (fast/slow), split, recursively sort, merge. O(n log n) time, O(log n) space. No extra O(n) space needed like array merge sort.
- Linked lists are better suited for merge sort than arrays — merge step is O(1) space (just rewire pointers)
- Bottom-up merge sort avoids recursion entirely — true O(1) extra space
- The dummy head pattern is essential here — avoids special-casing the first node
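The merge-sort entry above just combines patterns already shown: find the middle, split, and merge. A sketch (ListNode repeated; mergeSorted mirrors the two-list merge from this section):

```kotlin
class ListNode(var value: Int, var next: ListNode? = null)

/** Top-down merge sort on a linked list: O(n log n) time, O(log n) stack. */
fun sortList(head: ListNode?): ListNode? {
    if (head == null || head.next == null) return head
    // Split in half: slow stops at the last node of the first half.
    var slow: ListNode = head
    var fast = head.next
    while (fast?.next != null) {
        slow = slow.next!!
        fast = fast?.next?.next
    }
    val second = slow.next
    slow.next = null                      // cut the chain in two
    return mergeSorted(sortList(head), sortList(second))
}

/** Dummy-head merge of two sorted chains. */
fun mergeSorted(a: ListNode?, b: ListNode?): ListNode? {
    val dummy = ListNode(0)
    var tail = dummy
    var p1 = a
    var p2 = b
    while (p1 != null && p2 != null) {
        if (p1.value <= p2.value) { tail.next = p1; p1 = p1.next }
        else { tail.next = p2; p2 = p2.next }
        tail = tail.next!!
    }
    tail.next = p1 ?: p2                  // attach the leftover chain
    return dummy.next
}
```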
Two-pointer gap technique:
1 → 2 → 3 → 4 → 5, n = 2
Advance fast by n+1: slow=dummy fast=node 3
Move both until fast=null:
slow=1, fast=4 → slow=2, fast=5 → slow=3, fast=null
slow.next = slow.next.next → skip node 4
Result: 1 → 2 → 3 → 5
fun removeNthFromEnd(head: ListNode?, n: Int): ListNode? {
val dummy = ListNode(0, head)
var fast: ListNode? = dummy
var slow: ListNode? = dummy
repeat(n + 1) { fast = fast?.next }
while (fast != null) {
fast = fast?.next; slow = slow?.next
}
slow?.next = slow?.next?.next
return dummy.next
}
When to Use
- remove nth from end — Why: Advance fast n+1 steps ahead of slow (starting from dummy). When fast reaches null, slow is just before the target. One pass, O(1) space.
- find kth from end — Why: Same gap technique without dummy. Advance fast by k, then move both until fast reaches null. Slow is at the kth node from the end.
- The dummy head handles edge case where the head itself is removed (n = list length)
- Gap between fast and slow is always n+1 — when fast=null, slow is the node before the target
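The "find kth from end" variant mentioned above, sketched with the same gap idea but no dummy (k is taken as 1-based here — an assumption, since conventions vary):

```kotlin
class ListNode(var value: Int, var next: ListNode? = null)

/** Kth node from the end, 1-based; null if the list has fewer than k nodes. */
fun kthFromEnd(head: ListNode?, k: Int): ListNode? {
    var fast = head
    for (i in 0 until k) {
        if (fast == null) return null   // list shorter than k
        fast = fast.next                // open a gap of exactly k
    }
    var slow = head
    while (fast != null) {              // slide both until fast falls off
        fast = fast.next
        slow = slow?.next
    }
    return slow
}
```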
Elegant two-pointer technique:
A: a1 → a2 → c1 → c2 → c3
B: b1 → b2 → b3 → c1 → c2 → c3
Concatenate virtually: A+B and B+A have same length
pA: a1→a2→c1→c2→c3→b1→b2→b3→c1 ← meet
pB: b1→b2→b3→c1→c2→c3→a1→a2→c1 ← meet
fun getIntersectionNode(
headA: ListNode?, headB: ListNode?
): ListNode? {
var pA = headA; var pB = headB
while (pA !== pB) {
pA = if (pA != null) pA.next else headB
pB = if (pB != null) pB.next else headA
}
return pA
}
When to Use
- find intersection node — Why: Virtual concatenation (A+B and B+A) aligns both pointers at the intersection. When one reaches null, redirect to the other list's head. O(n+m) time, O(1) space.
- check if two lists converge — Why: If they intersect, the two-pointer approach returns the shared node. If not, both become null simultaneously → returns null. No need to modify either list.
- find merge point (length diff) — Why: Alternative approach: compute length difference, advance the longer list by the difference, then walk both together until they meet. Same complexity, slightly less elegant.
- When one pointer reaches null, redirect to the other list's head — this aligns them at the intersection
- If no intersection, both become null simultaneously → returns null
- Alternative: compute length difference, advance the longer list by the difference, then walk together
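The length-difference alternative in the last note can be sketched as follows (the lengthOf helper is illustrative):

```kotlin
class ListNode(var value: Int, var next: ListNode? = null)

/** Length-difference variant: align the longer list, then walk in lockstep. */
fun intersectionByLength(headA: ListNode?, headB: ListNode?): ListNode? {
    fun lengthOf(head: ListNode?): Int {
        var n = 0
        var curr = head
        while (curr != null) { n++; curr = curr.next }
        return n
    }
    var a = headA
    var b = headB
    var diff = lengthOf(a) - lengthOf(b)
    while (diff > 0) { a = a?.next; diff-- }   // advance the longer list
    while (diff < 0) { b = b?.next; diff++ }
    while (a !== b) {                          // meet at intersection or at null
        a = a?.next
        b = b?.next
    }
    return a
}
```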
Strategy: find middle → reverse second half → compare
1 → 2 → 3 → 2 → 1
1. Middle: 3
2. Reverse 2nd half (from middle): 3 → 2 → 1 becomes 1 → 2 → 3
3. Compare: 1==1, 2==2 ✓ → palindrome
fun isPalindrome(head: ListNode?): Boolean {
if (head?.next == null) return true
// Find middle
var slow = head; var fast = head
while (fast?.next != null) {
        slow = slow?.next; fast = fast?.next?.next
}
// Reverse second half
var prev: ListNode? = null; var curr = slow
while (curr != null) {
val next = curr.next
curr.next = prev; prev = curr; curr = next
}
// Compare
var left = head; var right = prev
while (right != null) {
if (left?.value != right.value) return false
left = left?.next; right = right.next
}
return true
}
When to Use
- palindrome check — Why: Find middle (fast/slow), reverse second half, compare with first half. O(n) time, O(1) space. The classic three-pattern combo: fast/slow + reverse + compare.
- check symmetry of a sequence — Why: Same middle + reverse + compare approach works whenever you need to verify the first half mirrors the second half. Stack approach is simpler but uses O(n) space.
- Combines fast/slow pointers (find middle) + reversal (reverse second half) + comparison
- Stack approach is simpler but O(n) space — push all, then compare with a second traversal
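The stack approach mentioned above, sketched for comparison — O(n) space, but it never mutates the input list:

```kotlin
import java.util.ArrayDeque

class ListNode(var value: Int, var next: ListNode? = null)

/** Stack variant of palindrome check: simpler, but O(n) extra space. */
fun isPalindromeStack(head: ListNode?): Boolean {
    val stack = ArrayDeque<Int>()
    var curr = head
    while (curr != null) {          // pass 1: push every value
        stack.push(curr.value)
        curr = curr.next
    }
    curr = head
    while (curr != null) {          // pass 2: compare against reversed order
        if (curr.value != stack.pop()) return false
        curr = curr.next
    }
    return true
}
```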
Reorder: L0 → L1 → ... → Ln
to: L0 → Ln → L1 → Ln-1 → L2 → ...
1 → 2 → 3 → 4 → 5  ⇒  1 → 5 → 2 → 4 → 3
Three steps:
1. Find middle (fast/slow)
2. Reverse second half
3. Interleave the two halves
fun reorderList(head: ListNode?) {
if (head?.next == null) return
// 1. Find middle
var slow = head; var fast = head
while (fast?.next?.next != null) {
        slow = slow?.next; fast = fast?.next?.next
}
// 2. Reverse second half
var prev: ListNode? = null
var curr = slow?.next
slow?.next = null
while (curr != null) {
val next = curr.next
curr.next = prev; prev = curr; curr = next
}
// 3. Interleave
var first = head; var second = prev
while (second != null) {
val t1 = first?.next; val t2 = second.next
first?.next = second; second.next = t1
first = t1; second = t2
}
}
When to Use
- reorder list — Why: Three-step combo: find middle (fast/slow), reverse second half, interleave both halves. O(n) time, O(1) space. The interleaving merge is the key step.
- zigzag / alternating merge — Why: Same split + reverse + interleave pattern works whenever you need to weave elements from both ends of a list together.
Digits stored in reverse order:
342 ⇒ 2 → 4 → 3
465 ⇒ 5 → 6 → 4
807 ⇒ 7 → 0 → 8
Process digit by digit with carry:
2+5=7 carry 0 → 4+6=10 carry 1 → 3+4+1=8
fun addTwoNumbers(l1: ListNode?, l2: ListNode?): ListNode? {
val dummy = ListNode(0)
var tail = dummy
var p1 = l1; var p2 = l2; var carry = 0
while (p1 != null || p2 != null || carry > 0) {
val sum = (p1?.value ?: 0) + (p2?.value ?: 0) + carry
carry = sum / 10
tail.next = ListNode(sum % 10)
tail = tail.next!!
p1 = p1?.next; p2 = p2?.next
}
return dummy.next
}
When to Use
- add two numbers (reverse order) — Why: Process digit by digit from least significant (head). Sum corresponding digits + carry. Dummy head builds the result list. while loop continues while either list has nodes OR carry > 0.
- add two numbers (forward order) — Why: Use stacks to process digits from least significant, or reverse both lists first, add, then reverse the result. Stack approach avoids modifying the input.
- multiply two numbers as lists — Why: Similar digit-by-digit processing. Multiply each digit of one number with every digit of the other, accumulating partial results with proper positional offsets and carry propagation.
- Keep processing while either list has nodes OR carry > 0
- Forward-order variant: use stacks to reverse, or reverse both lists first
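The forward-order stack variant mentioned above can be sketched like this — stacks hand back digits least-significant first, and prepending builds the result most-significant first:

```kotlin
import java.util.ArrayDeque

class ListNode(var value: Int, var next: ListNode? = null)

/** Forward-order addition: digits stored most-significant-first, inputs untouched. */
fun addForward(l1: ListNode?, l2: ListNode?): ListNode? {
    val s1 = ArrayDeque<Int>()
    val s2 = ArrayDeque<Int>()
    var p = l1
    while (p != null) { s1.push(p.value); p = p.next }
    p = l2
    while (p != null) { s2.push(p.value); p = p.next }
    var head: ListNode? = null
    var carry = 0
    while (s1.isNotEmpty() || s2.isNotEmpty() || carry > 0) {
        val sum = (if (s1.isNotEmpty()) s1.pop() else 0) +
                  (if (s2.isNotEmpty()) s2.pop() else 0) + carry
        carry = sum / 10
        head = ListNode(sum % 10, head)   // prepend: result comes out forward
    }
    return head
}
```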
Partition around x=3:
1 → 4 → 3 → 2 → 5 → 2
Less: 1 → 2 → 2
Greater: 4 → 3 → 5
Result: 1 → 2 → 2 → 4 → 3 → 5
fun partition(head: ListNode?, x: Int): ListNode? {
val lessHead = ListNode(0)
val greaterHead = ListNode(0)
var less = lessHead; var greater = greaterHead
var curr = head
while (curr != null) {
if (curr.value < x) {
less.next = curr; less = less.next!!
} else {
greater.next = curr; greater = greater.next!!
}
curr = curr.next
}
greater.next = null // terminate!
less.next = greaterHead.next
return lessHead.next
}
When to Use
- partition around pivot — Why: Two dummy heads collect nodes into less-than and greater-than-or-equal chains. Connect them at the end. Preserves relative order within each partition. O(n) time, O(1) space.
- odd/even positioned nodes — Why: Same two-chain pattern. Walk the list, alternately appending nodes to odd and even chains, then connect odd tail to even head. Dummy heads simplify the logic.
- three-way partition (LL) — Why: Extend to three dummy heads (low/mid/high) for three-way partitioning. Same connect-at-end pattern. Each node is visited exactly once.
- Two dummy heads build two separate chains, then connect them
- Must set greater.next = null — otherwise the last node may still point to a node in the less chain, creating a cycle
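The odd/even-positioned variant mentioned above uses the same two-chain idea; a sketch (positions taken as 1-based):

```kotlin
class ListNode(var value: Int, var next: ListNode? = null)

/** Odd-positioned nodes first, then even-positioned (positions are 1-based). */
fun oddEvenList(head: ListNode?): ListNode? {
    val oddDummy = ListNode(0)
    val evenDummy = ListNode(0)
    var odd = oddDummy
    var even = evenDummy
    var curr = head
    var pos = 1
    while (curr != null) {
        if (pos % 2 == 1) { odd.next = curr; odd = curr }
        else              { even.next = curr; even = curr }
        curr = curr.next
        pos++
    }
    even.next = null            // terminate — same cycle pitfall as partition
    odd.next = evenDummy.next   // connect odd tail to even head
    return oddDummy.next
}
```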
Rotate right by k=2:
1 → 2 → 3 → 4 → 5  ⇒  4 → 5 → 1 → 2 → 3
Strategy:
1. Find length & tail
2. k = k % length (handle k > length)
3. Make circular: tail.next = head
4. Find new tail at position (length - k - 1)
5. Break the circle
fun rotateRight(head: ListNode?, k: Int): ListNode? {
if (head?.next == null || k == 0) return head
var length = 1; var tail = head
    while (tail?.next != null) { length++; tail = tail!!.next }
val rotate = k % length
if (rotate == 0) return head
tail?.next = head // make circular
var newTail = head
repeat(length - rotate - 1) { newTail = newTail?.next }
val newHead = newTail?.next
newTail?.next = null // break circle
return newHead
}
When to Use
- rotate list right by k — Why: Find length and tail, normalize k (k % length), make circular, walk to new tail at (length - k - 1), break the circle. O(n) time, O(1) space.
- shift list by k positions — Why: Same circular technique. Making the list temporarily circular simplifies finding the new head/tail without special-casing the wrap-around.
// Keep one of each duplicate
fun deleteDuplicates(head: ListNode?): ListNode? {
var curr = head
    while (curr != null && curr.next != null) {
        if (curr.value == curr.next!!.value)
            curr.next = curr.next!!.next
        else curr = curr.next
}
return head
}
// Remove ALL nodes with duplicate values
fun deleteDuplicatesAll(head: ListNode?): ListNode? {
val dummy = ListNode(0, head)
var prev: ListNode? = dummy
var curr = head
while (curr != null) {
if (curr.next != null && curr.value == curr.next!!.value) {
val dup = curr.value
while (curr != null && curr.value == dup)
curr = curr.next
prev?.next = curr
} else { prev = curr; curr = curr.next }
}
return dummy.next
}
When to Use
- keep one of each duplicate — Why: Compare current with next. If equal, skip next. If not, advance. Simple — no dummy head needed since head is always retained.
- remove all duplicates — Why: Skip ALL nodes with the duplicate value. Dummy head is needed since the head itself might be a duplicate. Track prev to reconnect after skipping.
Swap every two adjacent nodes:
1 → 2 → 3 → 4 → 5  ⇒  2 → 1 → 4 → 3 → 5
Before: prev → A → B → rest
After: prev → B → A → rest
fun swapPairs(head: ListNode?): ListNode? {
val dummy = ListNode(0, head)
var prev: ListNode? = dummy
    while (prev?.next?.next != null) {
        val a = prev!!.next!!
        val b = a.next!!
        prev!!.next = b; a.next = b.next; b.next = a
prev = a
}
return dummy.next
}
When to Use
- swap pairs — Why: Dummy head + three-pointer swap (prev, A, B). Rewire: prev→B→A→rest, then advance prev to A. Handles odd-length lists naturally (last node stays).
- pairwise rearrangement — Why: This is the k=2 special case of reverse-k-group. The simpler swap logic (3 pointer reassignments vs full reversal) makes it worth knowing separately.
Reverse in groups of k=3:
1 → 2 → 3 → 4 → 5
Group 1: [1,2,3] → 3 → 2 → 1
Group 2: [4,5] → not enough, keep as-is
Result: 3 → 2 → 1 → 4 → 5
fun reverseKGroup(head: ListNode?, k: Int): ListNode? {
// Check if k nodes remaining
var check = head
repeat(k) {
if (check == null) return head
check = check?.next
}
// Reverse k nodes
var prev: ListNode? = null; var curr = head
repeat(k) {
val next = curr?.next
curr?.next = prev; prev = curr; curr = next
}
// head is now tail of reversed group
head?.next = reverseKGroup(curr, k)
return prev
}
When to Use
- reverse k-group — Why: First check if k nodes remain (return as-is if not). Reverse k nodes with the standard three-pointer technique. Recursively process the rest. O(n) time, O(n/k) stack or O(1) iterative.
- reverse alternating k-group — Why: Variant: reverse group 1, skip group 2, reverse group 3, etc. Same k-group reversal logic with an alternating flag to decide whether to reverse or just advance.
- First check if k nodes exist — if not, return as-is
- Recursive approach is clean; iterative version avoids O(n/k) stack depth
HashMap approach: O(n) space
Pass 1: create all new nodes, map old → new
Pass 2: wire up next & random via the map
Interleaving approach: O(1) space
Step 1: Insert copy after each original
A → A' → B → B' → C → C'
Step 2: Set random: A'.random = A.random.next
Step 3: Separate into two lists
// HashMap approach
class RandomNode(var value: Int) {
    var next: RandomNode? = null
    var random: RandomNode? = null
}

fun copyRandomList(head: RandomNode?): RandomNode? {
    val map = HashMap<RandomNode, RandomNode>()
    var curr = head
    while (curr != null) {
        map[curr] = RandomNode(curr.value); curr = curr.next
    }
    curr = head
    while (curr != null) {
        map[curr]!!.next = curr.next?.let { map[it] }
        map[curr]!!.random = curr.random?.let { map[it] }
        curr = curr.next
    }
    return head?.let { map[it] }
}
When to Use
- deep copy with hashmap — Why: HashMap maps old→new nodes. Pass 1: create copies. Pass 2: wire next and random via the map. O(n) time, O(n) space. The same old→clone pattern generalizes to any structure with arbitrary pointers.
- deep copy with interleaving — Why: Insert each clone directly after its original (A → A' → B → B' …), set A'.random = A.random.next, then split the interleaved list back into two. Same O(n) time, but O(1) extra space.
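The interleaving approach from the visualization, sketched end to end (RandomNode is repeated so the snippet stands alone):

```kotlin
/** Node with next + random pointers, as used throughout this section. */
class RandomNode(var value: Int) {
    var next: RandomNode? = null
    var random: RandomNode? = null
}

/** O(1)-extra-space deep copy via interleaved clones. */
fun copyRandomListInterleaved(head: RandomNode?): RandomNode? {
    if (head == null) return null
    // 1. Insert a clone right after each original: A → A' → B → B' → …
    var curr: RandomNode? = head
    while (curr != null) {
        val clone = RandomNode(curr.value)
        clone.next = curr.next
        curr.next = clone
        curr = clone.next
    }
    // 2. Wire randoms: the clone of X.random sits right after X.random.
    curr = head
    while (curr != null) {
        curr.next!!.random = curr.random?.next
        curr = curr.next!!.next
    }
    // 3. Unzip originals and clones back into two separate lists.
    val newHead = head.next
    curr = head
    while (curr != null) {
        val clone = curr.next!!
        curr.next = clone.next
        clone.next = curr.next?.next
        curr = curr.next
    }
    return newHead
}
```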
Structure:
HashMap: key → DLL node (O(1) lookup)
DLL: tracks access order (most recent at front)
State changes:
Initial: head ⇄ tail
put(1,10): head ⇄ [1:10] ⇄ tail
put(2,20): head ⇄ [2:20] ⇄ [1:10] ⇄ tail
put(3,30): head ⇄ [3:30] ⇄ [2:20] ⇄ [1:10] ⇄ tail
get(1) → 10: head ⇄ [1:10] ⇄ [3:30] ⇄ [2:20] ⇄ tail
put(4,40): head ⇄ [4:40] ⇄ [1:10] ⇄ [3:30] ⇄ tail
↑ evicts [2:20] (was at tail = least recently used)
class LRUCache(private val capacity: Int) {
    private class Node(
val key: Int, var value: Int,
var prev: Node? = null, var next: Node? = null
)
private val map = HashMap<Int, Node>(capacity)
private val head = Node(-1, -1) // sentinel
private val tail = Node(-1, -1) // sentinel
init { head.next = tail; tail.prev = head }
fun get(key: Int): Int {
val node = map[key] ?: return -1
moveToFront(node)
return node.value
}
fun put(key: Int, value: Int) {
map[key]?.let { it.value = value; moveToFront(it); return }
val node = Node(key, value)
map[key] = node; addToFront(node)
if (map.size > capacity) {
val lru = tail.prev!!
removeNode(lru); map.remove(lru.key)
}
}
private fun addToFront(n: Node) {
n.next = head.next; n.prev = head
head.next!!.prev = n; head.next = n
}
private fun removeNode(n: Node) {
n.prev!!.next = n.next; n.next!!.prev = n.prev
}
private fun moveToFront(n: Node) {
removeNode(n); addToFront(n)
}
}
When to Use
- LRU cache — Why: HashMap for O(1) key lookup, DLL for O(1) access-order tracking. On get: move to front. On put: add to front, evict tail if over capacity. Both operations O(1).
- LFU cache — Why: Extend LRU with frequency buckets. Each frequency has its own DLL. HashMap maps key → node. On access, move node to the next frequency's DLL. Evict from lowest-frequency DLL's tail.
- ordered access tracking — Why: Any scenario where you need O(1) reordering based on access/usage patterns. Browser history, recently used files, connection pools with idle timeout eviction.
- One of the most important data structure combinations in system design
- HashMap gives O(1) lookup; DLL gives O(1) reordering and eviction
- Sentinel head/tail eliminate all edge cases in add/remove operations
Common Pitfalls
// 1. Losing the head reference
var head = buildList(1, 2, 3)
while (head != null) head = head?.next // head = null!
// RIGHT: use a separate traversal pointer
var current = head
// 2. Not using a dummy head
// FRAGILE: special-casing head deletion
// CLEAN: val dummy = ListNode(0, head)
// 3. Creating cycles accidentally
c.next = a // cycle! traversal loops forever
// Always ensure last node's next = null
// 4. Not saving next before overwriting
curr.next = prev // lost original next!
// RIGHT: val next = curr.next first
// 5. == vs === for node comparison
// == compares values, === compares references
// For cycle detection, intersection: use ===
// 6. ConcurrentModificationException
for (x in list) list.remove(x) // crashes!
// RIGHT: list.removeIf { ... }
// 7. Not terminating after rearrangement
// After partition, reorder, etc. — set tail.next = null
// 8. Recursive solutions on large lists
// 100K nodes → StackOverflowError
// Prefer iterative for production code
// 9. Choosing LinkedList over ArrayDeque
val deque = ArrayDeque<Int>() // ← prefer this
// LinkedList only for O(1) arbitrary removal via iterator
- Always use a separate pointer for traversal — never advance head
- Dummy head eliminates head-deletion special cases — use it by default
- Save next before reversing any pointer — the #1 linked list bug
- Set tail.next = null after any rearrangement — prevents accidental cycles
Quick Reference
| Operation | Singly (head only) | Singly (head + tail) | Doubly Linked |
|---|---|---|---|
| Access by index | O(n) | O(n) | O(n)* |
| Search by value | O(n) | O(n) | O(n) |
| Insert at head | O(1) | O(1) | O(1) |
| Insert at tail | O(n) | O(1) | O(1) |
| Insert after node | O(1) | O(1) | O(1) |
| Delete at head | O(1) | O(1) | O(1) |
| Delete at tail | O(n) | O(n) | O(1) |
| Delete given node | O(n)** | O(n)** | O(1) |
| Reverse | O(n) | O(n) | O(n) |
| Find middle | O(n) | O(n) | O(n) |
- *DLL can start from nearer end: O(min(i, n-i))
- **Singly linked: need to find the previous node first; O(1) if you have prev reference
- Memory per node: Singly ~32 bytes, Doubly ~40 bytes, vs Array ~4-8 bytes per element
| Algorithm | Time | Space |
|---|---|---|
| Reverse (iterative) | O(n) | O(1) |
| Detect cycle (Floyd) | O(n) | O(1) |
| Find middle (fast/slow) | O(n) | O(1) |
| Merge two sorted lists | O(n + m) | O(1) |
| Merge k sorted lists (heap) | O(N log k) | O(k) |
| Merge sort | O(n log n) | O(log n) |
| Remove nth from end | O(n) | O(1) |
| Find intersection | O(n + m) | O(1) |
| Palindrome check | O(n) | O(1) |
| Reorder list | O(n) | O(1) |
| Add two numbers | O(max(n,m)) | O(max(n,m)) |
| Partition | O(n) | O(1) |
| Rotate by k | O(n) | O(1) |
| Reverse in k-groups | O(n) | O(1) |
| Copy with random pointer | O(n) | O(n) or O(1) |
| LRU Cache (get/put) | O(1) | O(capacity) |
| If You See... | Try This |
|---|---|
| "reverse" / "flip" | Three-pointer reversal |
| "cycle" / "loop" / "circular" | Fast/slow pointers (Floyd's) |
| "middle" / "half" | Fast/slow pointers |
| "merge sorted lists" | Dummy head + two pointers |
| "sort linked list" | Merge sort (find mid + merge) |
| "nth from end" | Two-pointer gap technique |
| "intersection" | Virtual concatenation (A+B, B+A) |
| "palindrome" | Middle + reverse 2nd half + compare |
| "reorder" / "interleave" | Middle + reverse + merge |
| "partition around x" | Two dummy heads (less/greater) |
| "rotate by k" | Make circular, find new head |
| "add numbers" (as lists) | Digit-by-digit with carry |
| "swap pairs" / "k-group" | Iterative pointer rewiring |
| "LRU cache" | HashMap + Doubly Linked List |
| "copy with random pointer" | HashMap clone or interleaving |
Master these 5 → solve ~90% of LL problems:
1. Dummy head → any problem that modifies the head
2. Two pointers → nth from end, intersection (same speed)
3. Fast/slow → middle, cycle detection, palindrome
4. Reverse → palindrome, reorder, k-group reverse
5. Merge → merge sorted lists, sort list
- Dummy head — use by default in any LL function that returns a modified list
- Most problems combine 2-3 of these patterns (e.g., palindrome = fast/slow + reverse)
- Interview tip: use arrays/ArrayDeque by default; switch to linked lists only when O(1) insert/delete at arbitrary positions is needed
| Scenario | Winner |
|---|---|
| Random access by index | Array — O(1) vs O(n) |
| Insert/delete at front | Linked List — O(1) vs O(n) |
| Insert/delete at back | Tie — both O(1) |
| Insert/delete in middle* | Linked List — O(1) with node ref |
| Sequential iteration | Array — cache locality |
| Memory efficiency | Array — no pointer overhead |
| Implementing a queue | Linked List — O(1) both ends |
| LRU cache | Linked List — O(1) reordering |
- *Only O(1) if you already have a reference to the node — finding it is still O(n)
- Modern CPUs are designed for sequential access — arrays are 10-100x faster for iteration due to cache lines and prefetching
- Default to arrays/ArrayDeque; use linked lists when you need O(1) arbitrary insert/delete