Inter-Process Communication in Operating Systems: A Comprehensive Guide with Real-life Examples and Code

Vishal Sharma
13 min read · Mar 7, 2023


Introduction: Inter-Process Communication (IPC) is a crucial aspect of modern operating systems that allows processes to communicate with each other. This blog will explore the different IPC mechanisms available in operating systems, along with real-life examples and code snippets.

Section 1: Overview of Inter-Process Communication. In this section, we will provide a high-level overview of IPC in operating systems and discuss its importance. We will also briefly discuss the different types of IPC mechanisms and their use cases.

Section 2: Pipes and FIFOs. In this section, we will discuss pipes and FIFOs, two simple IPC mechanisms used to communicate between processes. We will explain how they work, their advantages and disadvantages, and provide real-life examples and code snippets.

Section 3: Shared Memory. In this section, we will dive into shared memory, one of the most popular IPC mechanisms. We will discuss how shared memory works, its advantages and disadvantages, and provide some real-life examples and code snippets.

Section 4: Message Passing. In this section, we will explore message passing, another popular IPC mechanism. We will discuss the different types of message passing, such as synchronous and asynchronous, and their use cases. We will also provide some real-life examples and code snippets.

Section 5: Semaphores and Mutexes. In this section, we will discuss the synchronization primitives used to prevent race conditions in IPC. We will explain how they work, their advantages and disadvantages, and provide real-life examples and code snippets.

Section 6: Signals. In this section, we will explore signals used to notify processes of events or errors. We will discuss how signals work, their advantages and disadvantages, and provide real-life examples and code snippets.

Section 7: Sockets. In this section, we will discuss sockets, a popular IPC mechanism used for communication across a network. We will discuss the different types of sockets, such as TCP and UDP, and their use cases. We will also provide some real-life examples and code snippets.

Conclusion: In conclusion, IPC is a crucial aspect of modern operating systems, and various mechanisms are available for communication between processes. This blog explored some of the most popular IPC mechanisms, such as pipes and FIFOs, shared memory, message passing, semaphores and mutexes, signals, and sockets. We provided real-life examples and code snippets to help you understand how these mechanisms work. With this knowledge, you should be able to choose the right IPC mechanism for your application and implement it successfully.

Introduction:

Inter-process communication (IPC) is an essential concept in modern operating systems that allows processes to exchange data and synchronize their activities. IPC mechanisms enable processes to work together, share resources, and coordinate operations to achieve a common goal. In a world where software applications are becoming more complex and systems are becoming more distributed, understanding IPC is crucial for any software developer or system administrator.


This blog will provide a comprehensive guide to IPC in operating systems. We will start with an overview of IPC and its importance and then explore different IPC mechanisms with real-life examples and code snippets. By the end of this blog, you will have a solid understanding of IPC and be able to choose the right mechanism for your application’s needs.


Section 1: Overview of Inter-Process Communication

Inter-process communication (IPC) is a mechanism that allows multiple processes running on an operating system to communicate and share resources with each other. IPC is an essential concept in modern computing systems because most applications are composed of multiple processes that need to work together to achieve their goals.

IPC enables processes to share data, synchronize activities, and coordinate operations. It enables processes to work independently and yet cooperate seamlessly.

Different types of IPC mechanisms are available in operating systems, each with advantages and disadvantages. These mechanisms include pipes, FIFOs, shared memory, message passing, semaphores and mutexes, signals, and sockets.

Pipes are used for communication between related processes that share a common ancestor, while FIFOs (named pipes) extend the same idea to unrelated processes. Shared memory is used when two or more processes must share a large amount of data. Message passing is used when processes need to exchange discrete messages, whether or not they are related. Semaphores and mutexes prevent race conditions when two or more processes access shared resources simultaneously. Signals are used to notify processes of events or errors. Finally, sockets are used for communication across a network between processes running on different systems.

In the next sections, we will explore each IPC mechanism in detail, with examples and code snippets, to help you understand how and when to use them.

Section 2: Pipes and FIFOs

Pipes and FIFOs are two simple IPC mechanisms for streaming bytes between processes: pipes connect related processes with a common ancestor, while FIFOs also work between unrelated processes. They are commonly used in Unix-based systems, such as Linux and macOS.

Pipes

A pipe is a unidirectional communication channel that enables the transfer of data between two related processes. A pipe has two ends: one for writing data (write end) and one for reading data (read end). The data written by one process can be read by another process. A pipe is a simple way for two related processes to communicate with each other.

Typically, one process creates the pipe with the pipe() system call and then calls fork(); the parent and child (or two of its children) then communicate through it. pipe() returns two file descriptors, one for the read end and one for the write end of the pipe.

Here’s an example of how to create and use a pipe in C:

#include <unistd.h>
#include <stdio.h>

int main() {
    int fd[2];
    char buffer[20];

    pipe(fd);

    if (fork() == 0) {                      // Child process
        close(fd[0]);                       // Close unused read end
        write(fd[1], "Hello World!", 13);   // Write 12 chars + '\0' to the pipe
        close(fd[1]);                       // Close write end when done
    } else {                                // Parent process
        close(fd[1]);                       // Close unused write end
        read(fd[0], buffer, 20);            // Read data from the pipe
        printf("%s\n", buffer);
        close(fd[0]);                       // Close read end when done
    }

    return 0;
}

In this example, the parent process creates a pipe and forks a child process. The child process writes the string “Hello World!” to the write end of the pipe, and the parent process reads the data from the read end of the pipe and prints it to the console.

Pipes have the advantage of being simple to use and easy to understand. However, they have some limitations: a single pipe is unidirectional, meaning data can only flow in one direction, and the amount of data it can hold is limited by the system’s maximum pipe buffer size, so writes block once that buffer is full. For two-way communication, two pipes are often paired, as sketched below.
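A minimal sketch of that request/response pattern, assuming a parent and one forked child (the names to_child and to_parent are just illustrative):

#include <stdio.h>
#include <unistd.h>

int main() {
    int to_child[2], to_parent[2];       // One pipe per direction
    char buffer[32];

    pipe(to_child);
    pipe(to_parent);

    if (fork() == 0) {                   // Child: read the request, send a reply
        close(to_child[1]);
        close(to_parent[0]);
        read(to_child[0], buffer, sizeof(buffer));
        write(to_parent[1], "pong", 5);  // 4 chars + '\0'
        close(to_child[0]);
        close(to_parent[1]);
    } else {                             // Parent: send a request, read the reply
        close(to_child[0]);
        close(to_parent[1]);
        write(to_child[1], "ping", 5);
        read(to_parent[0], buffer, sizeof(buffer));
        printf("Parent received: %s\n", buffer);
        close(to_child[1]);
        close(to_parent[0]);
    }

    return 0;
}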

FIFOs

A FIFO, or named pipe, is a special type of pipe that enables communication between unrelated processes. Unlike an anonymous pipe, a FIFO is created as a file in the file system and has a name associated with it, so any process with permission can open it. Like a pipe, data flows through a FIFO in one direction at a time; for two-way communication, two FIFOs are typically used.

FIFOs are created using the mkfifo() system call, which creates a file in the file system with a name and the properties of a FIFO. Processes can then open the file and communicate with each other.

Here’s an example of how to create and use a FIFO in C:

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>   // for mkfifo()

int main() {
    char buffer[20];

    // Create the named pipe; it appears as a file in the current directory
    mkfifo("myfifo", 0666);

    // Opening for reading blocks until a writer opens the other end
    int fd = open("myfifo", O_RDONLY);

    ssize_t n = read(fd, buffer, sizeof(buffer) - 1);
    buffer[n > 0 ? n : 0] = '\0';       // Null-terminate what was read
    printf("%s\n", buffer);

    close(fd);
    unlink("myfifo");                   // Remove the FIFO from the file system

    return 0;
}

In this example, the process creates a FIFO named “myfifo” and opens it for reading. When another process writes data to the FIFO, this process reads the data and prints it to the console.
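For completeness, here is a minimal sketch of such a writer, assuming it uses the same FIFO name ("myfifo") as the reader above:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main() {
    // Open the FIFO for writing; this blocks until a reader has opened it
    int fd = open("myfifo", O_WRONLY);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    write(fd, "Hello, FIFO!", 13);      // 12 chars + '\0'
    close(fd);

    return 0;
}

Running the reader first and then this writer in another terminal should print the message in the reader's console.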

FIFOs have the advantage of having a name in the file system, so they can be used for communication between unrelated processes. However, like pipes, they carry data in one direction per channel, have a limited buffer size, and can block when the buffer is full or when no process has opened the other end.

Section 3: Shared Memory

Shared memory is a mechanism used for communication between processes that need to share a common block of memory. In this mechanism, a region of memory is created that can be accessed by multiple processes. This allows for efficient communication between processes, as data can be accessed directly from memory without the overhead of copying data between processes.

Shared memory is created using the shmget() system call, which returns a shared memory identifier. Processes can then attach to the shared memory segment using the shmat() system call, which returns a pointer to the shared memory segment. Processes can read and write to the shared memory segment like any other memory block.

Here’s an example of how to create and use shared memory in C:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main() {
    int shmid;
    char *shmaddr;
    int *shared_data;

    // Create a shared memory segment large enough for one int
    shmid = shmget(IPC_PRIVATE, sizeof(int), IPC_CREAT | 0666);

    // Attach the segment to this process's address space
    shmaddr = shmat(shmid, NULL, 0);

    // Write a value into the shared memory
    shared_data = (int *)shmaddr;
    *shared_data = 42;

    // Detach from the segment
    shmdt(shmaddr);

    // Attach to the segment again
    shmaddr = shmat(shmid, NULL, 0);

    // Read the value back
    shared_data = (int *)shmaddr;
    printf("Shared data: %d\n", *shared_data);

    // Detach from the segment
    shmdt(shmaddr);

    // Remove the shared memory segment
    shmctl(shmid, IPC_RMID, NULL);

    return 0;
}

In this example, the process creates a shared memory segment using the shmget() system call and attaches to it using the shmat() system call. The process writes the value 42 to the shared memory segment and then detaches from it. The process then attaches to the shared memory segment again and reads the value 42 from it.

Shared memory is very efficient and fast for communication between processes that need to share large amounts of data. However, it has some disadvantages, such as the need for synchronization to prevent race conditions when multiple processes access the shared memory segment simultaneously.
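The example above writes and reads within a single process, which keeps it short but hides the interesting part. Here is a minimal sketch of the more typical pattern, where a parent creates the segment and a forked child writes into it (synchronization is omitted here and covered in Section 5):

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    // Parent creates the segment before forking, so the child inherits shmid
    int shmid = shmget(IPC_PRIVATE, sizeof(int), IPC_CREAT | 0666);

    if (fork() == 0) {                      // Child: write into shared memory
        int *data = (int *)shmat(shmid, NULL, 0);
        *data = 42;
        shmdt(data);
        return 0;
    }

    wait(NULL);                             // Parent: wait, then read the value
    int *data = (int *)shmat(shmid, NULL, 0);
    printf("Value written by child: %d\n", *data);
    shmdt(data);
    shmctl(shmid, IPC_RMID, NULL);          // Remove the segment

    return 0;
}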

Section 4: Message Passing

Message passing is another popular IPC mechanism used for communication between processes. In message passing, processes communicate by sending messages to each other.

There are two types of message passing: synchronous and asynchronous. In synchronous message passing, the sender blocks until the receiver acknowledges receipt of the message. In asynchronous message passing, the sender does not block and continues to execute while the message is being delivered.

Message passing can be implemented using various mechanisms, such as sockets, named pipes, and message queues. In this section, we will focus on message queues.

A message queue is a mechanism used for communication between processes, where messages are stored in a queue until they are received by the receiver process. Messages can be sent to a message queue using the msgsnd() system call and received from a message queue using the msgrcv() system call.

Here’s an example of how to use message queues in C:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct msgbuf {
    long mtype;
    char mtext[100];
};

int main() {
    int msqid;
    key_t key;
    struct msgbuf message;

    // Create (or open) the message queue
    key = ftok("/tmp", 'a');
    msqid = msgget(key, IPC_CREAT | 0666);

    // Send a message; the size argument covers only the payload, not mtype
    message.mtype = 1;
    sprintf(message.mtext, "Hello, world!");
    msgsnd(msqid, &message, sizeof(message.mtext), 0);

    // Receive a message of type 1 from the queue
    msgrcv(msqid, &message, sizeof(message.mtext), 1, 0);
    printf("Received message: %s\n", message.mtext);

    // Remove the message queue
    msgctl(msqid, IPC_RMID, NULL);

    return 0;
}

In this example, the process creates a message queue using the msgget() system call and sends a message to it using the msgsnd() system call. The process then receives a message from the message queue using the msgrcv() system call and prints it out.

Message passing has the advantage of being a flexible and reliable IPC mechanism that can be used for both synchronous and asynchronous communication between processes. However, it has some disadvantages, such as the overhead of copying data between processes and the need for synchronization to prevent race conditions when multiple processes access the message queue simultaneously.
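One way to approximate asynchronous behaviour with System V queues is the IPC_NOWAIT flag, which makes msgrcv() return immediately instead of blocking when the queue is empty. A minimal sketch, reusing the msgbuf structure and queue key from the example above:

#include <errno.h>
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct msgbuf {
    long mtype;
    char mtext[100];
};

int main() {
    key_t key = ftok("/tmp", 'a');
    int msqid = msgget(key, IPC_CREAT | 0666);
    struct msgbuf message;

    // IPC_NOWAIT: return immediately instead of blocking on an empty queue
    if (msgrcv(msqid, &message, sizeof(message.mtext), 1, IPC_NOWAIT) == -1) {
        if (errno == ENOMSG)
            printf("No message available right now.\n");
        else
            perror("msgrcv");
    } else {
        printf("Received message: %s\n", message.mtext);
    }

    return 0;
}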

Section 5: Semaphores and Mutexes

In inter-process communication, race conditions can occur when multiple processes access shared resources simultaneously. To prevent race conditions, synchronization primitives such as semaphores and mutexes are used.

Semaphores

A semaphore is a synchronization primitive that can be used to limit the number of processes accessing a shared resource simultaneously. In general, a semaphore holds a non-negative counter; a binary semaphore, as used below, behaves like a lock with two states, locked (0) and unlocked (1). A process may only enter the protected section when it can decrement the semaphore, that is, when the semaphore is in the unlocked state.

In Unix-like operating systems, semaphores are available both as System V semaphores (via <sys/sem.h>, used below) and as POSIX semaphores (via <semaphore.h>). Here's an example of how to use System V semaphores in C:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

int main() {
    int semid;
    key_t key;
    struct sembuf operation;

    // Create a semaphore set containing one semaphore
    key = ftok("/tmp", 'a');
    semid = semget(key, 1, IPC_CREAT | 0666);

    // Initialize the semaphore to 1 (unlocked)
    semctl(semid, 0, SETVAL, 1);

    // Acquire the semaphore (decrement by 1; blocks if it is already 0)
    operation.sem_num = 0;
    operation.sem_op = -1;
    operation.sem_flg = 0;
    semop(semid, &operation, 1);

    // Critical section
    printf("This is a critical section.\n");
    sleep(10);

    // Release the semaphore (increment by 1)
    operation.sem_num = 0;
    operation.sem_op = 1;
    operation.sem_flg = 0;
    semop(semid, &operation, 1);

    // Remove the semaphore set
    semctl(semid, 0, IPC_RMID);

    return 0;
}

In this example, the process creates a semaphore using the semget() system call and initializes it to the unlocked state using the semctl() system call. The process then acquires the semaphore using the semop() system call, enters the critical section, sleeps for 10 seconds, and then releases the semaphore using the semop() system call.
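POSIX semaphores from <semaphore.h> offer the same discipline with a simpler interface than the System V calls above. A minimal sketch using a named semaphore (the name "/demo_sem" is just illustrative):

#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

int main() {
    // Open (or create) a named semaphore with an initial value of 1
    sem_t *sem = sem_open("/demo_sem", O_CREAT, 0666, 1);
    if (sem == SEM_FAILED) {
        perror("sem_open");
        return 1;
    }

    sem_wait(sem);                      // Acquire (decrement)
    printf("This is a critical section.\n");
    sleep(2);
    sem_post(sem);                      // Release (increment)

    sem_close(sem);
    sem_unlink("/demo_sem");            // Remove the name when done

    return 0;
}

Depending on the platform, you may need to link with -pthread (or -lrt on older systems).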

Mutexes

A mutex is another synchronization primitive used to prevent race conditions. Unlike a semaphore, a mutex can only be owned by one process at a time. If a process attempts to acquire a mutex already owned by another process, it will block until the mutex is released.

In Unix-like operating systems, mutexes are provided by the POSIX threads (pthreads) library via the <pthread.h> header. Here's an example of how to use mutexes in C:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>

pthread_mutex_t mutex;

void* thread_function(void* arg) {
    pthread_mutex_lock(&mutex);

    // Critical section
    printf("This is a critical section.\n");
    sleep(10);

    pthread_mutex_unlock(&mutex);

    return NULL;
}

int main() {
    pthread_t thread1, thread2;

    // Initialize the mutex
    pthread_mutex_init(&mutex, NULL);

    // Create two threads that both try to enter the critical section
    pthread_create(&thread1, NULL, thread_function, NULL);
    pthread_create(&thread2, NULL, thread_function, NULL);

    // Wait for both threads to finish
    pthread_join(thread1, NULL);
    pthread_join(thread2, NULL);

    // Destroy the mutex
    pthread_mutex_destroy(&mutex);

    return 0;
}

In this example, two threads are created, each attempting to acquire the mutex using the pthread_mutex_lock() function. The first thread to acquire the mutex enters the critical section, sleeps for 10 seconds, and then releases the mutex using pthread_mutex_unlock(). The second thread then acquires the mutex and repeats the same steps. This ensures that only one thread at a time can execute the critical section, preventing race conditions and ensuring data consistency. Note that this example synchronizes threads within a single process; to share a pthread mutex between separate processes, it must be placed in shared memory and initialized with the PTHREAD_PROCESS_SHARED attribute, as sketched below.
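Here is a minimal sketch of that process-shared variant, combining a pthread mutex with the System V shared memory from Section 3 (compile with -pthread):

#include <pthread.h>
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    // Place the mutex in a shared memory segment visible to both processes
    int shmid = shmget(IPC_PRIVATE, sizeof(pthread_mutex_t), IPC_CREAT | 0666);
    pthread_mutex_t *mutex = (pthread_mutex_t *)shmat(shmid, NULL, 0);

    // Mark the mutex as shareable between processes
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(mutex, &attr);

    if (fork() == 0) {                      // Child
        pthread_mutex_lock(mutex);
        printf("Child in critical section.\n");
        sleep(1);
        pthread_mutex_unlock(mutex);
        return 0;
    }

    pthread_mutex_lock(mutex);              // Parent
    printf("Parent in critical section.\n");
    sleep(1);
    pthread_mutex_unlock(mutex);

    wait(NULL);
    pthread_mutex_destroy(mutex);
    shmdt(mutex);
    shmctl(shmid, IPC_RMID, NULL);

    return 0;
}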

Section 6: Signals

Signals are a form of IPC used to notify processes of events or errors. Signals can be sent to a process from the kernel or from another process. When a signal is sent to a process, the operating system interrupts the process’s normal execution and jumps to a signal handler, a function specified by the process that handles the signal.

There are many different types of signals, including SIGINT (generated by pressing CTRL+C on the keyboard), SIGTERM (used to request that a process terminate gracefully), and SIGSEGV (generated when a process attempts to access memory it does not have permission to access).

One of the advantages of signals is their simplicity. They are easy to use and require very little overhead. However, they can also be dangerous if not used properly, as they can interrupt a process’s execution at any time, potentially leading to data corruption or other issues.

Here is an example of using signals in C:

#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>

void sigint_handler(int sig) {
    // Note: printf() is not async-signal-safe; it is used here only to keep
    // the example short.
    printf("Received SIGINT signal!\n");
}

int main() {
    // Register the signal handler for SIGINT
    signal(SIGINT, sigint_handler);

    while (1) {
        printf("Waiting for SIGINT...\n");
        sleep(1);
    }

    return 0;
}

In this example, we register a signal handler for the SIGINT signal (generated by pressing CTRL+C on the keyboard). When the signal is received, the signal handler prints a message to the console. The main function then enters an infinite loop, waiting for the SIGINT signal to be generated. When the signal is received, the signal handler is executed, and the message is printed to the console.
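Signals can also be sent programmatically with the kill() system call, which, despite its name, can deliver any signal to a target process. A minimal sketch that sends SIGINT to a PID given on the command line (you could point it at the program above):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char *argv[]) {
    if (argc != 2) {
        fprintf(stderr, "Usage: %s <pid>\n", argv[0]);
        return 1;
    }

    pid_t pid = (pid_t)atoi(argv[1]);

    // Deliver SIGINT to the target process, just like pressing CTRL+C
    if (kill(pid, SIGINT) == -1) {
        perror("kill");
        return 1;
    }

    printf("Sent SIGINT to process %d\n", pid);
    return 0;
}

Running it with the PID of the program above has the same effect as pressing CTRL+C in that program's terminal.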

Section 7: Sockets

Sockets are a popular IPC mechanism used for communication across a network. Sockets allow processes on different machines to communicate with each other by sending and receiving messages.

There are two main types of sockets: TCP and UDP. TCP (Transmission Control Protocol) is a reliable, connection-oriented protocol that ensures that all data is received in the correct order and without errors. On the other hand, UDP (User Datagram Protocol) is a connectionless protocol that does not guarantee reliable delivery or ordering of messages but is faster and more efficient.
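The example later in this section uses TCP from Python; for contrast, here is a minimal UDP sketch in C that sends a single datagram from a forked child to its parent over the loopback interface (the port 9090 is arbitrary):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9090);                  // Arbitrary loopback port
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    if (fork() == 0) {                            // Child: send one datagram
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        sleep(1);                                 // Crude way to let the parent bind first
        sendto(s, "hello", 6, 0, (struct sockaddr *)&addr, sizeof(addr));
        close(s);
        return 0;
    }

    int s = socket(AF_INET, SOCK_DGRAM, 0);       // Parent: receive it
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    char buffer[64];
    ssize_t n = recvfrom(s, buffer, sizeof(buffer) - 1, 0, NULL, NULL);
    if (n < 0) n = 0;
    buffer[n] = '\0';
    printf("Received datagram: %s\n", buffer);

    close(s);
    wait(NULL);
    return 0;
}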

One of the advantages of sockets is their flexibility. They can be used to communicate between processes on the same machine and between processes on different machines across a network. Sockets also support a wide range of message types, including text, binary data, and multimedia.

Here is an example of using sockets in Python:

import socket

# Create a TCP/IP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Bind the socket to a specific address and port
server_address = ('localhost', 8080)
sock.bind(server_address)

# Listen for incoming connections
sock.listen(1)

while True:
    # Wait for a connection
    print('Waiting for a connection...')
    connection, client_address = sock.accept()

    try:
        print('Connection from', client_address)

        # Receive the data in small chunks and retransmit it
        while True:
            data = connection.recv(16)
            print('Received {!r}'.format(data))
            if data:
                print('Sending data back to the client')
                connection.sendall(data)
            else:
                print('No more data from', client_address)
                break

    finally:
        # Clean up the connection
        connection.close()
In this example, we create a TCP/IP socket and bind it to a specific address and port. We then listen for incoming connections and, when a connection is made, receive data from the client in small chunks and retransmit it back to the client. When there is no more data to receive, the connection is closed.
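Because sockets are language-agnostic, the client does not have to be written in Python. Here is a minimal sketch of a C client for the echo server above, assuming the server is listening on localhost:8080:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main() {
    // Connect to the echo server from the Python example (localhost:8080)
    int sock = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port = htons(8080);
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    if (connect(sock, (struct sockaddr *)&server, sizeof(server)) == -1) {
        perror("connect");
        return 1;
    }

    // Send a message and read the echoed reply
    const char *msg = "Hello, server!";
    send(sock, msg, strlen(msg), 0);

    char buffer[64];
    ssize_t n = recv(sock, buffer, sizeof(buffer) - 1, 0);
    if (n < 0) n = 0;
    buffer[n] = '\0';
    printf("Echoed back: %s\n", buffer);

    close(sock);
    return 0;
}

Start the Python server first, then run the client; the message should be echoed back and printed on the client side.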

In conclusion, inter-process communication (IPC) is an essential aspect of modern operating systems that enables processes to communicate and collaborate with each other. There are several IPC mechanisms available, each with its advantages and disadvantages.

Pipes and FIFOs are simple and efficient for streaming data between processes, while shared memory offers the fastest path when large amounts of data must be shared, since no copying is involved. Message passing provides a flexible and reliable way to exchange discrete messages between processes, while semaphores and mutexes ensure synchronization and prevent race conditions.

Signals are useful for notifying processes of events or errors, and sockets are widely used for communication across networks. The choice of IPC mechanism depends on the specific requirements of the application.

Understanding IPC is essential for developers who want to write efficient and scalable applications. By using IPC mechanisms effectively, developers can ensure that their applications are robust and efficient and can easily communicate and collaborate with other processes.

In summary, IPC is a critical concept in modern operating systems and is used extensively in applications that require communication between processes. By mastering the different IPC mechanisms described here, developers can pick the right tool for each communication pattern and build applications whose processes cooperate reliably.


Vishal Sharma

Computer Science Research Scholar at IIT Guwahati, exploring machine learning and AI in mathematics, cosmology and history.