Process Scheduling Policies : Part 2

Aditya Raghuvanshi
7 min read · Jul 1, 2024


In our previous exploration of process scheduling, we journeyed through various scheduling policies, each with its own strengths and weaknesses. We concluded with a pressing question: How can we effectively schedule processes when we don’t know their runtime in advance? Enter the Multi-Level Feedback Queue (MLFQ), a sophisticated scheduling algorithm that adapts to process behavior in real-time.

The Librarian’s New Strategy

Let’s return to our library analogy. Imagine our librarian has implemented a new system for managing the reference section. Instead of relying on patrons to accurately estimate their research time, the librarian decides to observe their behavior and adjust priorities accordingly. This is essentially what MLFQ does in the world of process scheduling.

CS3.301 Operating Systems and Networks, IIIT Hyderabad

The MLFQ Approach: Learning on the Job

MLFQ is designed with two primary goals:

  1. Reduce turnaround time by running shorter jobs first
  2. Minimize response time for interactive processes

The key constraint? It must achieve these goals without any prior knowledge of job lengths. Let’s dive into how MLFQ accomplishes this feat.

The Basic Rules of MLFQ

  1. MLFQ uses multiple queues, each with a different priority level.
  2. A job’s priority determines which queue it’s in.
  3. The scheduler always runs the job from the highest priority non-empty queue.

Think of these queues as different lines in our library, each with a different level of priority.

The Fundamental MLFQ Rules

  • Jobs that keep giving up the CPU (interactive jobs) retain higher priority.
  • Jobs that use the CPU for long stretches have their priority reduced.
  • MLFQ learns about processes as they run and uses their past behavior to predict the future (no AI required).

Two basic rules:

  1. If Priority(A) > Priority(B), A runs (and B doesn’t).
  2. If Priority(A) = Priority(B), A and B run in Round Robin fashion.

These rules form the foundation of MLFQ’s decision-making process. But how does MLFQ determine and adjust these priorities? That’s where the magic happens.
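These two rules can be sketched in a few lines of Python. The queue layout here (a list of deques, highest-priority queue at the highest index) is an illustrative assumption, not taken from any particular kernel:

```python
from collections import deque

def pick_next(queues):
    """Rules 1 & 2: scan queues from highest priority down and return
    the next job from the first non-empty one. Jobs at the same level
    sit in a deque, so popping from the front and re-appending to the
    back yields round-robin order among equal-priority jobs."""
    for level in range(len(queues) - 1, -1, -1):  # highest priority first
        if queues[level]:
            job = queues[level].popleft()
            queues[level].append(job)  # rotate for round robin
            return job
    return None  # no runnable jobs

# Two jobs at the top level, one below: A and B alternate, C never runs.
queues = [deque(["C"]), deque(["A", "B"])]
print([pick_next(queues) for _ in range(4)])  # → ['A', 'B', 'A', 'B']
```

Note that `C` starves under these two rules alone; that is exactly the problem the priority-boost rule (introduced below) exists to fix.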

MLFQ in Action

Let’s consider a scenario with two types of processes:

  1. Interactive processes (like a text editor) that frequently relinquish the CPU
  2. CPU-bound processes (like a complex calculation) that use their entire time slice

MLFQ starts all processes at the highest priority queue. As processes run, MLFQ observes their behavior:

  • If a process uses its entire time slice, it’s moved down to a lower priority queue.
  • If a process gives up the CPU before its time slice expires, it stays at its current priority level.

Queue 2 (Highest Priority) | P1 P2
Queue 1 |
Queue 0 (Lowest Priority) |

Initially, both P1 (interactive) and P2 (CPU-bound) start in Queue 2. After one time slice:

Queue 2 (Highest Priority) | P1
Queue 1 | P2
Queue 0 (Lowest Priority) |

P1, being interactive, quickly gives up the CPU and stays in Queue 2. P2 uses its full time slice and moves down to Queue 1.

This behavior allows MLFQ to prioritize interactive processes while ensuring that CPU-bound processes still get a chance to run.
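The demotion decision can be sketched as a small handler that runs when a job stops executing. The function and variable names are hypothetical, chosen to mirror the P1/P2 example above:

```python
from collections import deque

def on_stop(job, used_full_slice, queues, level):
    """Demotion rule: a job that burns its whole time slice drops one
    priority level; a job that yields early keeps its level. Returns
    the level the job was re-queued at."""
    new_level = max(level - 1, 0) if used_full_slice else level
    queues[new_level].append(job)
    return new_level

# Queue 2 is highest priority. P1 yields early, P2 uses its full slice.
queues = [deque(), deque(), deque()]
p1_level = on_stop("P1", used_full_slice=False, queues=queues, level=2)
p2_level = on_stop("P2", used_full_slice=True, queues=queues, level=2)
print(p1_level, p2_level)  # → 2 1
```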

The Challenges of MLFQ: Starvation and Gaming the System

While MLFQ is clever, it’s not without its challenges. Two significant issues can arise:

  1. Starvation: Long-running, CPU-bound processes might never get a chance to run if there are too many short, interactive processes.
  2. Gaming the System: A process could trick the scheduler by voluntarily giving up the CPU just before its time slice expires, maintaining its high priority unfairly.

To address these issues, MLFQ implementations often include additional rules.

Priority Boost: Giving Everyone a Chance


To prevent starvation, MLFQ periodically boosts the priority of all processes to the top queue. This is like our librarian occasionally resetting all patrons to the highest priority line.

Non-Priority Boost Scenario


With Priority Boost (S = 50)


This periodic boost ensures that no process starves indefinitely and allows CPU-bound processes that might have become more interactive to get a fair shot at running.
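The boost itself is a simple operation: drain every lower queue into the top one. In a real scheduler this would be triggered by a periodic timer with period S; the sketch below just shows the queue manipulation:

```python
from collections import deque

def priority_boost(queues):
    """Every S time units, move every job in the system back to the
    top queue so that long-running, demoted jobs cannot starve."""
    top = len(queues) - 1
    for level in range(top):
        while queues[level]:
            queues[top].append(queues[level].popleft())

# Three jobs scattered across the levels all end up back at the top.
queues = [deque(["P3"]), deque(["P2"]), deque(["P1"])]
priority_boost(queues)
print([len(q) for q in queues])  # → [0, 0, 3]
```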

Better Accounting: Preventing Gaming

To prevent processes from gaming the system, MLFQ can implement a rule that once a process uses up its time allotment at a given level (regardless of how many times it relinquishes the CPU), it is moved to a lower priority queue.

Source: https://velog.io/@dnr6054/OS-Scheduling-Basics

This is like our librarian keeping track of the total time a patron spends in the reference section, regardless of how many times they leave and come back.
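A minimal sketch of this accounting rule, assuming a per-job usage counter charged in milliseconds (the names and the 20 ms allotment are illustrative):

```python
def charge_allotment(job, ran_ms, allotment_ms, usage, level):
    """Gaming-proof demotion: charge the job for however long it ran,
    regardless of whether it yielded early. Once its total usage at
    this level exceeds the allotment, demote it and reset the counter."""
    usage[job] = usage.get(job, 0) + ran_ms
    if usage[job] >= allotment_ms:
        usage[job] = 0
        return max(level - 1, 0)  # demote one level
    return level

# A sneaky job yields after 9 ms of every 10 ms slice, hoping to stay
# on top. With a 20 ms allotment it is demoted on its third run anyway.
usage, level = {}, 2
for _ in range(3):  # 27 ms of total CPU time at level 2
    level = charge_allotment("sneaky", 9, 20, usage, level)
print(level)  # → 1
```

The key difference from the earlier demotion rule is that yielding early no longer resets anything: only total CPU time at the level matters.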

Tuning MLFQ: The Art of Balancing

Implementing MLFQ requires careful tuning of several parameters:

  1. Number of Queues: How many priority levels should there be?
  2. Time Slice Length: How long should each time slice be?
  3. Priority Boost Interval: How often should all processes be moved to the top queue?

These parameters can significantly impact system performance. For example:

  • Higher priority queues often have shorter time slices to favor interactive processes.
  • Lower priority queues may have longer time slices to give CPU-bound processes a chance to make progress.
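One common way to express this tuning is a per-level time-slice table. The doubling rule below is a plausible illustration, not a standard; real schedulers pick these values empirically:

```python
def time_slice_ms(level, num_levels=8, base_ms=10):
    """Illustrative tuning: the slice doubles at each level down, so
    interactive jobs at the top get short slices (fast response) and
    CPU-bound jobs at the bottom get long ones (less context-switch
    overhead). Level num_levels-1 is the highest priority."""
    depth = (num_levels - 1) - level  # 0 at the top queue
    return base_ms * (2 ** depth)

# Slices from the top queue down to the bottom queue.
print([time_slice_ms(l) for l in range(7, -1, -1)])
# → [10, 20, 40, 80, 160, 320, 640, 1280]
```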

Real-World MLFQ: The Solaris Scheduler

Let’s look at a real-world implementation of MLFQ: the Solaris operating system scheduler.

Solaris uses a sophisticated version of MLFQ with the following characteristics:

  • 60 queues
  • Time slice lengths ranging from 20ms to several hundred milliseconds
  • Priority boost every 1 second

This configuration allows Solaris to handle a wide range of process types efficiently, from highly interactive GUI applications to long-running background jobs.

Summing Up MLFQ: The Five Golden Rules

To recap, here are the five key rules that govern most MLFQ implementations:

  1. If Priority(A) > Priority(B), A runs (B doesn’t).
  2. If Priority(A) = Priority(B), A & B run in Round Robin fashion.
  3. When a job enters the system, it’s placed at the highest priority.
  4. Once a job uses up its time allotment at a given level (regardless of how many times it gives up the CPU), its priority is reduced.
  5. After some time period S, move all the jobs in the system to the topmost queue.

These rules allow MLFQ to adapt to different types of processes, prioritize interactive jobs, prevent starvation, and maintain overall system responsiveness.

Conclusion: MLFQ — The Adaptive Maestro

MLFQ represents a significant leap forward in process scheduling. By learning from the behavior of processes as they run, MLFQ can effectively prioritize interactive processes while still ensuring fair execution time for CPU-bound tasks. Its ability to adapt makes it well-suited for general-purpose operating systems where the workload can vary widely.

Many modern operating systems, including BSD Unix derivatives, Windows NT, and Solaris, use variations of MLFQ as their basic scheduling algorithm. This widespread adoption is a testament to MLFQ’s effectiveness in balancing the complex demands of modern computing environments.

As we’ve seen, process scheduling is as much an art as it is a science. The perfect balance between responsiveness, fairness, and efficiency often depends on the specific use case and workload of the system. MLFQ provides a flexible framework that can be tuned to meet these diverse needs.

In our journey through the world of process scheduling, we’ve seen how algorithms have evolved from simple First-Come-First-Served approaches to the adaptive, learning-based approach of MLFQ. Each step in this evolution has brought us closer to the ideal of a responsive, fair, and efficient operating system.

As you continue your exploration of operating systems, remember that scheduling is just one piece of the puzzle. The interplay between scheduling, memory management, I/O handling, and other OS components creates the complex symphony that is a modern operating system. And at the heart of this symphony is the MLFQ, constantly adapting and adjusting to keep your computer humming along smoothly.

These articles are inspired by the Operating Systems and Networks (OSN) course taught at IIIT Hyderabad.

List of articles for OS&N :

  1. Introduction to Operating systems and Networks
  2. Process Virtualization: Creating the Illusion
  3. The Art of Juggling Processes: Unraveling Process Virtualization Mechanisms
  4. Process Scheduling Policies : Part 1
  5. Process Scheduling Policies : Part 2
  6. Networking Fundamentals: Connecting Processes Across Machines
  7. Understanding Networking from Layers to Sockets

I feel extremely happy sharing all this knowledge. Thank you for reading, and do let me know if this article helped you.

Aditya Raghuvanshi ( IIIT Hyderabad, INDIA )

Connect me on the following :

Github | linkedin | Medium | Gmail : tanalpha.aditya@gmail.com
