LRU algorithm breakdown

Piotr Wójtowicz
Aug 8, 2022


In my recent article I mentioned an algorithm called LRU (least recently used). Today we’ll try to understand its role in the system and in memory management, as well as its structure.

Quick introduction

Virtual memory is a great concept and quite possibly the most important part of every operating system. It’s almost impossible to describe briefly, but I will try to highlight the most significant elements. The main role of virtual memory is to separate physical and logical memory. The translation from a virtual address to a physical one takes place in the MMU (memory management unit); this piece of hardware/software uses the PT (page table) and the TLB (translation lookaside buffer) as a sort of ‘dictionary’ to decode physical addresses. That way each process is provided with the abstraction that it operates within a contiguous space in memory (a virtual address space) containing its stack, heap, program text etc. (that’s where all your (m)allocated memory was waiting for you to free() it). Besides that, virtual memory helps with memory protection and allows shared libraries to even exist; in short, it simplifies our lives.
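To make the translation step concrete, here is a toy model of what the MMU does: split the virtual address into a VPN and an offset, look the VPN up in a page table, and glue the resulting frame number back onto the offset. The page size and page-table contents are made up for illustration; real hardware does this with the TLB as a cache in front of the table.

```python
PAGE_SIZE = 4096        # assume 4 KiB pages, so the offset is the low 12 bits
OFFSET_BITS = 12

# toy page table: VPN -> physical frame number (contents invented for the demo)
page_table = {0: 7, 1: 3, 2: 9}

def translate(vaddr: int) -> int:
    vpn = vaddr >> OFFSET_BITS            # high bits select the page
    offset = vaddr & (PAGE_SIZE - 1)      # low bits pass through unchanged
    if vpn not in page_table:
        raise KeyError(f"page fault: VPN {vpn} not mapped")
    pfn = page_table[vpn]                 # physical frame number
    return (pfn << OFFSET_BITS) | offset

print(hex(translate(0x1ABC)))  # VPN 1 maps to frame 3 -> 0x3abc
```

The unmapped-VPN branch is exactly where a real MMU would raise a page fault and hand control to the operating system.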

What does all that have to do with LRU?

Well, let’s consider this scenario. You have written a gallery app. The user can load photos from either directory A or directory B. Let’s say they choose to share some memes from directory A and then leave. Would it be necessary to load the photos from B into memory? No. To stay responsive, the application should not even load all the photos from directory A. The technique of waiting until the last moment to load ‘new’ data into memory is called demand paging. But what if the system decides it can run a couple of apps and then some of them need to load more data? That’s where the page replacement algorithms come along.
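The gallery scenario can be sketched as demand paging in miniature: a photo is ‘loaded’ only on first access, never when the gallery opens. The class and file names below are illustrative, not from the actual app.

```python
class LazyGallery:
    """Loads photos on demand, one at a time, on first access (a sketch)."""

    def __init__(self, directory: str):
        self.directory = directory
        self.loaded = {}                  # photo name -> decoded data

    def _load_from_disk(self, name: str) -> str:
        # stand-in for real, expensive I/O and decoding
        return f"<pixels of {self.directory}/{name}>"

    def get(self, name: str) -> str:
        if name not in self.loaded:       # the 'page fault': not in memory yet
            self.loaded[name] = self._load_from_disk(name)
        return self.loaded[name]

g = LazyGallery("A")
g.get("meme1.png")                        # loaded now, on demand
assert "meme2.png" not in g.loaded        # never requested, so never loaded
```

Directory B never touches memory at all, and even within A only the requested photo pays the loading cost.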

Page replacement happens when a requested page is not in memory (page fault) and a free page cannot be used to satisfy the allocation, either because there are none, or because the number of free pages is lower than some threshold.
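That rule can be written out as a small decision function. The threshold value and return strings are invented for illustration; real kernels track free frames in a free-frame list.

```python
FREE_THRESHOLD = 2   # illustrative; real systems tune this

def on_page_fault(resident: set, free_frames: int, vpn: int) -> str:
    """Decide what the fault handler does, per the rule above (toy model)."""
    if vpn in resident:
        return "no fault: page already in memory"
    if free_frames > FREE_THRESHOLD:
        return "take a free frame"
    return "run the page-replacement algorithm"

assert on_page_fault({1, 2}, free_frames=5, vpn=3) == "take a free frame"
assert on_page_fault({1, 2}, free_frames=1, vpn=3) == "run the page-replacement algorithm"
```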

If you would like to dive into a more detailed description of page replacement techniques and over-allocation, here are some terms you’d need to read about first: free-frame list, copy-on-write, dirty bit, virtual memory fork, zero-fill-on-demand.

So now that we’ve learned what role LRU plays in the operating system, we can talk about the actual algorithm.

LRU breakdown

LRU can be implemented in a few ways; the most popular are:

  • Linked list method — a linked list contains all the pages in memory; the least recently used page sits at the back of the list and the most recently used page at the front.
  • Counter method — every page-table entry holds a counter field; whenever a page is accessed, it is stamped with the value of a logical clock (counter) at the time of the access. Whenever a page needs to be replaced, the operating system selects the page with the lowest counter value and swaps it out.
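The linked list method can be sketched with Python’s `OrderedDict`, which is a hash map over a doubly linked list, so both ends of the list are reachable in O(1). This is my own minimal sketch, not the article’s GitHub implementation.

```python
from collections import OrderedDict

class LRU:
    """Linked-list LRU sketch: leftmost entry = least recently used."""

    def __init__(self, frames: int):
        self.frames = frames
        self.pages = OrderedDict()       # key = VPN; order = recency

    def access(self, vpn) -> bool:
        """Touch a page; return True if the access caused a page fault."""
        if vpn in self.pages:
            self.pages.move_to_end(vpn)  # hit: move to the most-recent end
            return False
        if len(self.pages) >= self.frames:
            self.pages.popitem(last=False)  # evict the least recently used
        self.pages[vpn] = True           # load the new page at the front
        return True
```

For example, with two frames the request sequence 1, 2, 1, 3 faults on 1, 2 and 3, and the fault on 3 evicts page 2, because the hit on 1 made 2 the least recently used page.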

There are also a couple of variants of traditional LRU: LRU-K, ARC (which is really worth checking out), and 2Q.

Let me show you LRU in action. I have attempted to write an implementation of this algorithm using the linked list method (you can find it on my GitHub, where you can also read about the system assumptions I’ve made).

These are some random virtual addresses that will imitate CPU requests. With a few calculations we can decode the VPNs (virtual page numbers), which are as follows: 1, 2, 1, 3, 1, 16, 1, 5 … Below you can see how LRU deals with that arrangement of virtual pages.
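The decoding step is just the translation split from earlier without the page-table lookup: the VPN is the address divided by the page size. The 4 KiB page size and the sample addresses below are my assumptions for illustration; they reproduce the start of the sequence above.

```python
PAGE_SIZE = 4096   # assumed 4 KiB pages; the real value is system-dependent

def vpn(vaddr: int) -> int:
    return vaddr // PAGE_SIZE   # equivalently: vaddr >> 12

# invented addresses whose VPNs match the first requests above
requests = [0x1200, 0x2F00, 0x1010, 0x3ABC]
print([vpn(a) for a in requests])   # [1, 2, 1, 3]
```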

Notice how LRU deals with incoming virtual pages that are already stored: it lifts them to the top of the list. When it encounters a page that is not stored in the memory frames, LRU evicts the page at the bottom if the frames are full and pushes the new page on top.
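Tracing that behaviour over the VPN sequence from above makes the top/bottom movement visible. The frame count of 3 is my assumption; the article’s figure may use a different one.

```python
def trace(sequence, frames=3):
    """Run LRU over a VPN sequence; stack[0] is the most recently used page."""
    stack, faults = [], 0
    for vpn in sequence:
        if vpn in stack:
            stack.remove(vpn)        # already stored: lift it off its spot ...
        else:
            faults += 1              # page fault
            if len(stack) == frames:
                stack.pop()          # frames full: evict the bottom (LRU) page
        stack.insert(0, vpn)         # ... and push it on top either way
    return stack, faults

final, faults = trace([1, 2, 1, 3, 1, 16, 1, 5])
print(final, faults)   # [5, 1, 16] 5
```

With 3 frames the sequence causes 5 faults; the hits on page 1 keep lifting it to the top, which is why 2 and then 3 get evicted before it.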

References

  • Wikipedia
  • Operating System Concepts, by Abraham Silberschatz, Peter Baer Galvin and Greg Gagne
