1. Brute Force (Ordered List) O(n) per op
Intuition
Store key–value pairs in an ordered list: oldest (LRU) at index 0, newest (MRU) at the end. Every get or put that touches a key moves its pair to the end. Eviction removes list[0]. Simple, but O(n) per operation due to the linear scan and the shift caused by pop.
Code
class LRUCache:
    def __init__(self, capacity):
        self.cache = []
        self.capacity = capacity

    def get(self, key):
        for i in range(len(self.cache)):
            if self.cache[i][0] == key:
                pair = self.cache.pop(i)
                self.cache.append(pair)   # move to MRU position
                return pair[1]
        return -1

    def put(self, key, value):
        for i in range(len(self.cache)):
            if self.cache[i][0] == key:
                self.cache.pop(i)
                self.cache.append([key, value])
                return
        if len(self.cache) == self.capacity:
            self.cache.pop(0)             # evict LRU at index 0
        self.cache.append([key, value])
Time: O(n) per get/put | Space: O(n)
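To see the eviction order concretely, here is the list-based cache from above exercised at capacity 2 (the class is repeated so the snippet runs standalone):

```python
class LRUCache:
    def __init__(self, capacity):
        self.cache = []
        self.capacity = capacity

    def get(self, key):
        for i in range(len(self.cache)):
            if self.cache[i][0] == key:
                pair = self.cache.pop(i)
                self.cache.append(pair)   # move to MRU position
                return pair[1]
        return -1

    def put(self, key, value):
        for i in range(len(self.cache)):
            if self.cache[i][0] == key:
                self.cache.pop(i)
                self.cache.append([key, value])
                return
        if len(self.cache) == self.capacity:
            self.cache.pop(0)             # evict LRU at index 0
        self.cache.append([key, value])

cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
assert cache.get(1) == 1   # touching key 1 makes it MRU; order is now [2, 1]
cache.put(3, 3)            # at capacity: evicts key 2, the LRU at index 0
assert cache.get(2) == -1
assert cache.get(3) == 3
```

Note how the get(1) call changes which key gets evicted next; without the move-to-end step, key 1 would have been evicted instead.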
2. Doubly Linked List + HashMap O(1) per op ★ optimal
Intuition
Two data structures combined:
HashMap: cache[key] → node, giving O(1) lookup of any node by key.
Doubly Linked List: nodes ordered LRU→MRU. O(1) insert at tail, O(1) remove from any position (given a direct pointer).
Two dummy sentinels (LEFT and RIGHT) eliminate all edge cases — the real LRU is always left.next and the real MRU is always right.prev.
get(key): find node via hashmap → remove(node) → insert(node) before RIGHT → return val. All O(1).
put(key, value): if key exists: remove old node. Create new node, insert at MRU. If over capacity: evict left.next from list and hashmap. All O(1).
The visualization breaks every remove and insert into separate steps so you can see exactly which pointer updates happen and why.
def put(self, key, value):
    if key in self.cache:
        self.remove(self.cache[key])
    self.cache[key] = Node(key, value)
    self.insert(self.cache[key])
    if len(self.cache) > self.cap:
        lru = self.left.next
        self.remove(lru)
        del self.cache[lru.key]
Time: O(1) per get/put | Space: O(n)
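The fragment above is the body of put; a complete runnable sketch, filling in the Node class and the remove/insert helpers it relies on (names follow the snippet: left, right, cache, cap), might look like this:

```python
class Node:
    def __init__(self, key, val):
        self.key, self.val = key, val
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.cap = capacity
        self.cache = {}   # key -> node
        # Dummy sentinels: LEFT <-> RIGHT; real LRU is left.next, real MRU is right.prev.
        self.left, self.right = Node(0, 0), Node(0, 0)
        self.left.next, self.right.prev = self.right, self.left

    def remove(self, node):
        # Unlink node from the list in O(1) given a direct pointer.
        prev, nxt = node.prev, node.next
        prev.next, nxt.prev = nxt, prev

    def insert(self, node):
        # Splice node in just before the RIGHT sentinel (MRU position).
        prev = self.right.prev
        prev.next = node
        node.prev = prev
        node.next = self.right
        self.right.prev = node

    def get(self, key):
        if key not in self.cache:
            return -1
        node = self.cache[key]
        self.remove(node)       # detach from current position...
        self.insert(node)       # ...and reattach at MRU
        return node.val

    def put(self, key, value):
        if key in self.cache:
            self.remove(self.cache[key])
        self.cache[key] = Node(key, value)
        self.insert(self.cache[key])
        if len(self.cache) > self.cap:
            lru = self.left.next
            self.remove(lru)
            del self.cache[lru.key]   # node stores its key, so the map entry is findable
```

Each helper touches only a constant number of pointers, which is where the O(1) per-operation bound comes from.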
3. Built-in Ordered Map O(1) · language-specific
Intuition
Python's OrderedDict and Java's LinkedHashMap (accessOrder=true) maintain insertion/access order internally, giving the same O(1) behavior without manually managing the DLL. move_to_end marks a key as MRU; popitem(last=False) evicts the LRU.
Concise — fewer lines, same time complexity as the DLL approach.
Not allowed in interviews that ask you to implement from scratch.
Code
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.cache = OrderedDict()
        self.cap = capacity

    def get(self, key):
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)   # mark MRU
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.cap:
            self.cache.popitem(last=False)   # evict LRU
Time: O(1) | Space: O(n)
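The two OrderedDict calls do all the ordering work; isolated, their behavior looks like this (the dictionary contents here are purely illustrative):

```python
from collections import OrderedDict

d = OrderedDict([('a', 1), ('b', 2), ('c', 3)])
d.move_to_end('a')                 # 'a' becomes most-recently-used (moved to the end)
assert list(d) == ['b', 'c', 'a']

key, val = d.popitem(last=False)   # pop from the front, i.e. the LRU end
assert (key, val) == ('b', 2)
assert list(d) == ['c', 'a']
```

So the OrderedDict's front plays the role of left.next and its back the role of right.prev in the hand-rolled DLL.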
Common Pitfalls
Forgetting to update on get: get() must move the accessed node to MRU, not just return the value. Skipping this breaks the LRU invariant.
Missing 4-pointer update on insert: inserting a node before RIGHT requires updating 4 pointers — node.prev, node.next, right.prev.next, and right.prev. Missing any one corrupts the list.
Not storing the key in the node: when evicting, you need to remove the node from both the list and the hashmap. If your list node only stores the value, you can't find the hashmap key to delete it.
Evicting at the wrong point in put: insert the new node first, then check capacity and evict left.next. Evicting before the insert can throw out a live entry when the put is merely updating an existing key, since the cache's size isn't actually growing in that case.
Dummy node confusion: left.next is the LRU (not left itself). right.prev is the MRU. The dummies are never evicted or returned.
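As a concrete check of the 4-pointer pitfall above, this standalone sketch (its own minimal Node class, sentinel names matching the text) performs one insert before RIGHT and asserts every link:

```python
class Node:
    def __init__(self, key, val):
        self.key, self.val = key, val
        self.prev = self.next = None

# Build LEFT <-> RIGHT, then insert a node before RIGHT,
# touching all four pointers named in the pitfall.
left, right = Node(0, 0), Node(0, 0)
left.next, right.prev = right, left

node = Node(1, 1)
prev = right.prev    # read the old right.prev BEFORE overwriting it
node.prev = prev     # 1. node.prev
node.next = right    # 2. node.next
prev.next = node     # 3. (old right.prev).next
right.prev = node    # 4. right.prev

assert left.next is node and right.prev is node
assert node.prev is left and node.next is right
```

Dropping any one of the four assignments leaves a stale link, which the assertions would catch, and a later remove() through that stale pointer would corrupt the list.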
Discussion
…