From Top to Bottom: Demystifying Computing with Insights into Process, Context Switching, IPC, Services, Threads, Parallelism, and Concurrency

Sanjay Nellutla
5 min read · Mar 11, 2024


Before we get into technical terms, let’s first understand two basics of computing: parallelism and concurrency. These two ideas play a crucial role in how computers manage tasks and processes, affecting the speed and efficiency of operations.

Parallelism vs Concurrency

Parallelism:

Parallelism means executing multiple tasks at literally the same instant, each on its own processing unit. Referring to the diagram provided, all 4 tasks finish within a span of 1 minute because each one runs on a separate worker. Translating this to the computing domain: the more cores your server or computer has, the greater its capacity to execute tasks truly simultaneously.

Concurrency:

Concurrency, on the other hand, involves making progress on multiple tasks within the same timeframe, even if only one of them runs at any given instant. The operating system rapidly switches between these tasks based on their current states, giving the appearance of simultaneous execution.

Now that we’ve clarified the distinction between parallelism and concurrency, let’s explore how the operating system efficiently handles multiple tasks, both on a single core and across multiple cores.

Process:

The operating system keeps each program’s instructions and associated data in an isolated memory space, preventing one program from corrupting another’s memory. A process encompasses the program code, relevant data, and the execution context (program counter, registers, open files, and so on).

Your operating system manages a list of such processes, accomplishing their tasks concurrently or in parallel depending on the number of cores available.

Processes can create child processes (subprocesses) using system calls like fork(), which duplicates the parent process. Note that when a parent process terminates, its children are not automatically terminated: on Unix-like systems, orphaned children are adopted by the init process. Additionally, processes can communicate with each other using the concept of IPC (Inter-Process Communication). However, before delving into IPC, let’s first understand the different states of a process.
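First, here’s a minimal sketch of fork() in C, assuming a POSIX system (error handling trimmed for brevity):

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              // duplicate the calling process
    if (pid < 0) {
        perror("fork");              // fork failed; no child was created
        return 1;
    }
    if (pid == 0) {
        // Child: fork() returned 0 here, in a copy of the parent's memory.
        printf("child:  pid=%d, parent=%d\n", (int)getpid(), (int)getppid());
    } else {
        // Parent: fork() returned the child's process ID.
        printf("parent: pid=%d, child=%d\n", (int)getpid(), (int)pid);
        wait(NULL);                  // reap the child so it doesn't linger as a zombie
    }
    return 0;
}
```

Both processes continue from the fork() call; the return value is the only way each side knows which one it is.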

When a process is initiated, it enters the NEW state, and during this phase, all the necessary memory space required for the process is allocated. Once the allocation is complete, the process transitions to the READY state, signifying its readiness for execution on the processor.

Depending on the operating system’s priority management, the OS selects a process and transitions its state to RUNNING. This indicates that the instructions are currently being executed. However, during the execution, three possibilities may arise:

  1. If an interrupt occurs (for example, a hardware device needs attention, or the scheduler preempts the process for a higher-priority task), the process returns to the READY state.
  2. If the process is waiting for an I/O operation to complete (initiated by the process itself), the state changes to WAITING. Upon completion of the I/O operation, the process returns to the READY state.
  3. If all instructions in the process are executed, the process gracefully transitions to the TERMINATED state.
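To make these transitions concrete, here’s a small sketch (POSIX assumed) annotated with the state the process is in at each step:

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    // By this point the process has already gone NEW -> READY -> RUNNING.
    char buf[64];
    printf("type something: ");
    fflush(stdout);

    // read() on stdin blocks: the OS moves the process to WAITING
    // until input arrives, then back to READY, and eventually the
    // scheduler makes it RUNNING again.
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    printf("got %zd bytes\n", n);

    return 0;  // last instruction executed: the process becomes TERMINATED
}
```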

IPC (Inter Process Communication):

In the context discussed earlier, where the operating system manages a list of processes and strives to accomplish tasks concurrently or in parallel, processes can communicate with each other using IPC. There are two primary methods for this communication:

Shared Memory:

  • Involves a region of memory that the operating system maps into the address space of two or more processes, so each of them can read and write it directly.
  • Facilitates efficient data exchange among processes, since no copying through the kernel is needed once the region is set up (see the sketch below).
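A minimal POSIX shared-memory sketch in C: the parent writes a string, and a forked child reads it through the same mapping. The name "/demo_shm" is arbitrary, and error handling is omitted for brevity (on some systems, link with -lrt):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    // Create a named shared-memory object and size it.
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);

    // Map the region; MAP_SHARED makes writes visible across processes.
    char *mem = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    if (fork() == 0) {
        sleep(1);                      // crude wait; real code would synchronize properly
        printf("child read: %s\n", mem);
        return 0;
    }
    strcpy(mem, "hello from parent");  // the child sees this through its own mapping
    wait(NULL);
    munmap(mem, 4096);
    shm_unlink("/demo_shm");           // remove the named object
    return 0;
}
```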

Messages:

  • Involves processes exchanging discrete messages through channels the operating system provides, such as pipes, message queues, or sockets.
  • Offers a structured and controlled way of exchanging data and signals between processes (see the pipe sketch below).
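As a sketch of message passing, here’s a minimal C example using a pipe, one of the simplest OS-provided message channels (POSIX assumed, error handling omitted):

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    pipe(fds);                        // fds[0] = read end, fds[1] = write end

    if (fork() == 0) {                // child: receive the message
        close(fds[1]);
        char buf[64];
        ssize_t n = read(fds[0], buf, sizeof buf - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("child received: %s\n", buf);
        return 0;
    }
    close(fds[0]);                    // parent: send the message
    write(fds[1], "ping", strlen("ping"));
    close(fds[1]);                    // signals EOF to the reader
    wait(NULL);
    return 0;
}
```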

IPC plays a crucial role in enabling processes to collaborate, share information, and synchronize their activities, enhancing the overall efficiency of the operating system.

Service:

In the realm of operating systems, a service is typically defined as a persistent background or daemon process. These processes run continuously and serve a fundamental purpose: actively monitoring specific events, especially network-related ones, and responding by executing pre-defined actions. These background processes are instrumental in coordinating system functionality, providing ongoing, automated support for various tasks without necessitating direct user interaction.

Several examples of network services include web servers, file servers, and DNS servers, each designed to facilitate communication and resource-sharing within a network. On the other hand, certain system-level services, such as the task scheduler, contribute to the overall management and scheduling of tasks within the operating system. Together, these services collectively ensure the seamless operation, efficiency, and responsiveness of the computing environment.

Abstracting all the implementation details of how the internet, network, OSI layers, and TCP/IP connections work, here are the steps outlining how network services operate within an operating system:

  • When the Network Interface Card (NIC) receives a packet from the network, the operating system’s network stack inspects the packet headers and extracts the destination socket information (IP address + port).
  • Meanwhile, a web server runs as a background process bound to a specific port number. The OS maintains a data structure mapping sockets to process IDs, so each incoming message can be routed to the process bound to the designated port (a minimal sketch of such a bound service follows below).
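Here’s a minimal sketch of a service bound to a port, in C with POSIX sockets. Port 8080 is an arbitrary choice, and error handling is omitted; it accepts a single connection and replies:

```c
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);   // TCP socket

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);    // any local interface
    addr.sin_port = htons(8080);                 // the port the OS maps to this process

    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 8);
    printf("listening on port 8080...\n");

    int client = accept(srv, NULL, NULL);        // block until a connection arrives
    const char *reply = "hello from the service\n";
    write(client, reply, strlen(reply));
    close(client);
    close(srv);
    return 0;
}
```

Once this process calls bind(), the OS records the mapping from port 8080 to this process’s socket, which is exactly the data structure used to route incoming packets.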

Threads:

A thread is a unit of execution inside a process. Threads share the process’s code, data, and open files, but each thread has its own program counter, registers, and call stack.

A process can make use of one or more threads to execute a set of instructions. Threads within a process have the ability to be executed on different CPU cores, enabling the process to complete tasks in parallel.

Threads can be executed concurrently even on a server or computer with a single core; how efficiently depends on the operating system’s scheduling.
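A minimal pthreads sketch in C (compile with -pthread): two threads share the process’s global data but run with their own stacks and program counters:

```c
#include <pthread.h>
#include <stdio.h>

int shared = 42;                      // lives in the process's shared data segment

void *worker(void *arg) {
    long id = (long)arg;              // locals like `id` live on each thread's own stack
    printf("thread %ld sees shared=%d\n", id, shared);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);           // wait for both threads to finish
    pthread_join(t2, NULL);
    return 0;
}
```

On a multi-core machine the two threads may run in parallel on different cores; on a single core, the OS interleaves them.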

Context Switching:

Context switching is a crucial mechanism in operating systems that allows multiple threads or processes to execute concurrently on a single CPU core. When multiple threads are running concurrently, the CPU switches between them, giving the illusion of simultaneous execution. Here’s how it works:

Context switching is triggered when the operating system decides to pause the execution of one thread and start or resume the execution of another. The OS saves the outgoing thread’s context (program counter, registers, stack pointer) and restores the previously saved context of the incoming thread.

Thread Blocking: If a thread is waiting for an I/O operation or another event, the operating system may switch to a different thread that is ready to run.

Context switching introduces some overhead due to the need to save and restore thread contexts. Minimizing this overhead is crucial for efficient multitasking.

Care must be taken to synchronize shared resources among threads to avoid data corruption or inconsistencies during context switches.
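As a sketch of that synchronization, here’s a minimal pthread mutex example in C (compile with -pthread) protecting a shared counter; without the lock, increments can be lost when a context switch lands mid-update:

```c
#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    // only one thread may update at a time
        counter++;                    // read-modify-write, safe under the lock
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter=%ld (expected 200000)\n", counter);
    return 0;
}
```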

Conclusion:

In summary, this exploration has demystified key computing concepts, from parallelism and concurrency to processes, threads, IPC, services, and context switching. The seamless collaboration of these components within an operating system orchestrates the efficient execution of tasks. As we navigate this computing landscape, it’s clear that the harmonious interplay of these elements is essential for the robust functionality and responsiveness of modern systems.
