The Art of Juggling Processes: Unraveling Process Virtualization Mechanisms

Aditya Raghuvanshi
5 min read · Jun 30, 2024

CS3.301 Operating Systems and Networks, IIIT Hyderabad

In our previous articles, we’ve explored the basics of operating systems and delved into the concept of processes. Now, let’s embark on a fascinating journey into the heart of process virtualization. We’ll uncover the mechanisms that allow our computers to seemingly perform multiple tasks simultaneously, even with limited resources. Along the way, we’ll tackle the two major challenges that operating systems face: preventing processes from misbehaving, and switching between processes without relying on their cooperation.

The Shell Game: Understanding How Commands Work

Before we dive into the intricacies of process virtualization, let’s start with a familiar example: the command-line interface, or shell. Have you ever wondered what happens when you type a command like wc process_sample3.c > output.txt into your terminal?

Here’s the behind-the-scenes magic:

  1. The shell, itself a process, reads your command.
  2. It then performs a clever trick by forking a child process.
  3. This child process redirects its standard output to the file output.txt.
  4. Finally, it executes the wc command on process_sample3.c.
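The four steps above can be sketched in a few lines of Python using `os.fork`, `os.dup2`, and `os.execvp` (a POSIX system with `wc` is assumed; a real shell also parses the command line and handles errors, and the sample file here is created just so the sketch is self-contained):

```python
import os

def run_with_redirect(cmd, argv, outfile):
    """Fork a child, point its stdout at outfile, then exec cmd."""
    pid = os.fork()
    if pid == 0:  # child process
        fd = os.open(outfile, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        os.dup2(fd, 1)        # file descriptor 1 (stdout) now refers to outfile
        os.close(fd)
        os.execvp(cmd, argv)  # replace the child's image with cmd; never returns
    os.waitpid(pid, 0)        # the shell waits for the child to finish

# Create a small sample file so the sketch runs anywhere.
with open("process_sample3.c", "w") as f:
    f.write("int main(void) { return 0; }\n")

run_with_redirect("wc", ["wc", "process_sample3.c"], "output.txt")
```

Because the redirection happens in the child after the fork but before the exec, the shell’s own standard output is untouched.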

This simple example illustrates the power and complexity of process management in operating systems. But it also raises an important question: How does the operating system manage to run multiple processes, switching between them seamlessly?

The Juggling Act: Running Multiple Processes

Imagine a circus performer juggling several balls. Each ball represents a process, and the performer is our CPU. The challenge is to keep all the balls in the air, giving each one just enough attention to keep it moving. This is essentially what the operating system does with processes.

To achieve this feat, we need two key components:

  1. Hardware Support: Low-level mechanisms to switch between processes.
  2. Software Support: Policies to decide which process should be executed next.

But before we can implement these solutions, we need to address two major challenges.

Challenge 1: Preventing Unintended Behavior

Imagine giving a visitor unrestricted access to a library. They might inadvertently damage rare books or access confidential records. Similarly, we can’t allow processes to do whatever they want with the system resources. This brings us to our first solution: Limited Direct Execution (LDE).

Limited Direct Execution: The Gatekeeper Approach

LDE is like having a vigilant librarian who allows visitors to browse freely but requires special permission for accessing restricted areas. In our operating system:

  1. We let processes run directly on the CPU for efficiency.
  2. We impose limits on what processes can do.
  3. We provide controlled access to privileged operations through the operating system.

To implement this, we introduce two modes of operation:

  1. User Mode: Where processes run with restricted privileges.
  2. Kernel Mode: Where the operating system runs with full access to hardware.

The TRAP Door: Entering Kernel Mode

But how does a process request privileged operations? This is where the TRAP instruction comes in. It’s like a special door that allows controlled movement between user and kernel modes.

When a process needs to perform a privileged operation:

  1. It executes a TRAP instruction.
  2. The CPU switches to kernel mode.
  3. The hardware saves the process context and jumps to a pre-defined trap handler in the kernel.
  4. The kernel performs the requested operation.
  5. It then uses a return-from-trap instruction to switch back to user mode and resume the process.

This mechanism ensures that processes can request privileged operations without compromising system security.
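You can watch this door from user space: a call like `os.write` is a thin wrapper over the `write()` system call, which enters the kernel through exactly the trap path described above (POSIX assumed; the register details mentioned in the comment are x86-64-specific):

```python
import os

# os.write maps onto the write() system call: the C library loads the
# syscall number and arguments into registers, executes the TRAP
# instruction (`syscall` on x86-64), and the kernel performs the I/O.
n = os.write(1, b"written via a trap into the kernel\n")
# The byte count in n travels back through the return-from-trap path.
```

Running the program under a tool such as `strace` on Linux shows every one of these kernel crossings.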

Challenge 2: Switching Between Processes

Now that we’ve secured our system, we face our second challenge: how to switch between processes? There are two main approaches:

1. The Cooperative Approach: Trusting Processes

In this approach, the operating system trusts processes to behave and voluntarily give up control. It’s like a group of polite people taking turns to speak in a conversation.

Processes transfer control back to the OS by making system calls or causing exceptions (like dividing by zero). While simple, this method has a major flaw: a misbehaving process can hog the CPU indefinitely.
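In a cooperative world, “giving up the CPU” is itself a system call. A minimal sketch (Linux assumed, since Python exposes `sched_yield` only where the OS provides it):

```python
import os

# A cooperative process periodically volunteers the CPU back to the OS.
# sched_yield() is itself a system call, so the hand-off happens through
# the same trap mechanism described above.
for _ in range(3):
    # ... do one slice of work ...
    os.sched_yield()  # let another runnable process go first
```

The flaw is visible in the sketch: delete the `sched_yield()` call and the loop never hands control back voluntarily.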

2. The Preemptive Approach: Taking Control

To overcome the limitations of the cooperative approach, we need a way for the OS to regain control periodically. This is where timer interrupts come in.

Imagine a chess clock that limits each player’s turn. Similarly, the OS sets up a timer that interrupts the CPU at regular intervals. When the timer goes off:

  1. The current process is interrupted.
  2. Control transfers to the OS.
  3. The OS can then decide whether to continue with the current process or switch to another one; switching is known as a “context switch”.
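Real timer interrupts are fielded by the kernel, but the effect can be mimicked in user space with a POSIX interval timer and a SIGALRM handler (this is only an analogy; the interval and loop are arbitrary choices for the sketch):

```python
import signal, time

ticks = []

def on_timer(signum, frame):
    # Stand-in for the kernel's timer-interrupt handler: control is
    # forcibly taken from the running loop at regular intervals.
    ticks.append(time.monotonic())

signal.signal(signal.SIGALRM, on_timer)
signal.setitimer(signal.ITIMER_REAL, 0.01, 0.01)  # fire every 10 ms

busy = 0
while len(ticks) < 3:   # this loop never cooperates, yet it gets interrupted
    busy += 1

signal.setitimer(signal.ITIMER_REAL, 0, 0)        # disarm the timer
```

The busy loop never yields, yet the handler still runs: control is taken, not given.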

Context Switching

When the OS decides to switch processes, it performs a context switch. This is like a magician quickly swapping objects without the audience noticing.

During a context switch:

  1. The OS saves the state of the current process (registers, program counter, etc.) to memory.
  2. It then loads the state of the next process to run.
  3. Finally, it resumes execution at the instruction where the new process left off.

This sleight of hand is what creates the illusion of multiple processes running simultaneously on a single CPU.
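The save/restore dance can be mimicked in miniature with Python generators: each suspended generator frame plays the role of a saved context (program counter plus local state), and a toy round-robin “scheduler” switches between them (the names and step counts here are arbitrary):

```python
def process(name, steps):
    # Suspending at `yield` saves this frame's instruction pointer and
    # locals; resuming with next() restores them -- a context switch
    # in miniature.
    for i in range(steps):
        trace.append(f"{name}:{i}")
        yield                    # hand control back to the "scheduler"

trace = []
ready = [process("A", 2), process("B", 2)]   # the ready queue
while ready:
    p = ready.pop(0)             # pick the next process to run
    try:
        next(p)                  # restore its context, run one step
        ready.append(p)          # "preempted": back on the ready queue
    except StopIteration:
        pass                     # process finished
```

The interleaved trace (`A:0, B:0, A:1, B:1`) is the single-CPU illusion in action: neither “process” knows it was ever paused.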

Conclusion

By implementing these mechanisms — Limited Direct Execution, TRAP instructions, timer interrupts, and context switching — operating systems perform a remarkable balancing act. They ensure system security while providing the illusion of multiple processes running concurrently.

This delicate dance allows your computer to juggle numerous tasks, from playing music and checking emails to running complex calculations, all while maintaining system integrity and responsiveness.

In our next article, we’ll explore the policies that decide which process should run next, diving into the fascinating world of CPU scheduling algorithms. Stay tuned!

These articles are inspired by the Operating Systems and Networks (OSN) course taught at IIIT Hyderabad.

List of articles for OS&N:

  1. Introduction to Operating systems and Networks
  2. Process Virtualization: Creating the Illusion
  3. The Art of Juggling Processes: Unraveling Process Virtualization Mechanisms
  4. Process Scheduling Policies: Part 1
  5. Process Scheduling Policies: Part 2
  6. Networking Fundamentals: Connecting Processes Across Machines
  7. Understanding Networking from Layers to Sockets

I feel extremely happy sharing all this knowledge, and I’d love to hear whether this article helped you. Thank you for reading!

Aditya Raghuvanshi ( IIIT Hyderabad, INDIA )

Connect with me on the following:

GitHub | LinkedIn | Medium | Gmail: tanalpha.aditya@gmail.com
