Round-Robin Scheduling Algorithm

Francesco Franco
4 min read · Aug 1, 2023


In my last post I discussed two versions of the Shortest Job First (SJF) CPU scheduling algorithm, preemptive and non-preemptive, and provided links to their implementations in C. Today I want to continue in the usual order and discuss another very important and even more straightforward algorithm used in CPU scheduling, sometimes alone and sometimes combined with priority scheduling in various degrees and forms: round-robin (RR).

Round-robin is basically a preemptive version of FCFS (First Come First Served) scheduling. A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100 milliseconds in length. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to one time slice.

To implement RR scheduling, we again treat the ready queue as a FIFO queue of processes. New processes are added at the tail of the ready queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time slice, and dispatches the process.

One of two things can then happen. The process may have a CPU burst of less than one time slice. In such a case, the process itself will release the CPU voluntarily. The scheduler will then proceed to the next process in the ready queue. If the CPU burst of the currently running process is longer than one time quantum, the timer will go off and will cause an interrupt to the operating system. A context switch will be executed, and the process will be placed at the tail of the ready queue. The scheduler will then select the next process in the ready queue.

The average waiting time under the RR policy is often long. Let us consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:

Process   Burst Time
-------   ----------
P1            24
P2             3
P3             3

If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it requires another 20 milliseconds, it is preempted after the first time slice, and the CPU is given to the next process in the queue, process P2. Process P2 doesn’t need 4 milliseconds, so it quits before its allocated time slice expires. The CPU is then given to the next process, process P3. Once each process has received one time slice, the CPU is returned to process P1 for an additional time slice. The resulting RR schedule is as follows:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

The average waiting time for this schedule is computed as follows: P1 waits for 6 milliseconds (10 − 4), P2 waits for 4 milliseconds, and P3 waits for 7 milliseconds. Thus, the average is 17/3 ≈ 5.66 milliseconds.

In the RR scheduling algorithm, no process is allocated the CPU for more than one time slice in a row (unless it is the only runnable process). If a process’s CPU burst time exceeds one time slice, it is preempted and is put back in the ready queue.

The performance of the RR algorithm depends heavily on the size of the time slice. At one extreme, if the time slice is extremely large, the RR policy is the same as FCFS. On the other hand, if the time slice is extremely small (say, 1 millisecond), the RR approach can result in a large number of context switches. Assume, for example, that we have only one process with a burst of 10 time units. If the time slice is 12 time units, the process finishes in less than one time slice, with no overhead. If the quantum is 6 time units, however, the process requires 2 slices, resulting in a context switch. If the time slice is 1 time unit, then 9 context switches will occur, slowing the execution of the process accordingly.

So we want the time slice to be large with respect to the context-switch time. In practice, most modern systems have time slices ranging from 10 to 100 milliseconds. The time required for a context switch is usually less than 10 microseconds; so the context-switch time is a small fraction of the time slice.

Turnaround time also depends on the size of the time slice. The average turnaround time of a set of processes does not necessarily improve as the time slice size increases. Although the time slice should be large compared to the context-switch time, it should not be too large. As noted above, if the time slice is too large, RR scheduling degenerates to the FCFS algorithm. A rule of thumb is that 80% of the CPU bursts should be shorter than the time slice.

Round Robin Scheduling Example: Wikipedia

The C implementations of Round Robin scheduling with the same arrival times can be found here and with different arrival times here.


Francesco Franco

BS in Computer Science | studying ML | amateur philosopher | compulsive reader