[OS] Intro to Concurrency — part 1

Yong Seung lee
3 min read · Dec 14, 2022


  • This story is about what I learned from the OS class.

I think one of the hardest concepts in operating systems is concurrency. Previously, all the code I wrote was based on synchronous execution: at any given time, only one thing is happening. We do not have to worry about anything else… right?

The first real-world experience that made me think about multi-threading was when I was writing a crypto exchange program in Node.js. I knew Node.js uses something called asynchronous non-blocking I/O, but I had never thought deeply about it (I will write a separate post about Node.js’s architecture; it is pretty interesting). The problem I faced was a possible race condition arising from async/await programming.


I had a function that ran periodically to process transactions from the blockchain. When I then implemented an admin feature that allowed processing a transaction manually, async/await could cause the same transaction to be processed twice.

Now, let’s turn to operating systems. What is concurrency?

“the ability of different parts or units of a program, algorithm, or problem to be executed out-of-order or in partial order” — Wikipedia

It does not say anything about things happening at the same time… and that is actually true! In concurrent programming, we cannot assume anything about the order of execution. I would say concurrency is executing units of a program in an interleaved order. And the unit of concurrency is the THREAD! If we translate that Wikipedia definition, concurrency means “the ability to execute different threads of a program in an arbitrary order.”

I want to mention parallelism real quick here. Parallelism is actually executing multiple instructions simultaneously on multiple CPUs. We can also talk about multi-processing, which is running multiple processes simultaneously on multiple CPUs. In contrast, with concurrency on a single CPU, the units are executed one after another, but the switching is so fast that humans simply cannot notice it (human response time is ~100 ms). Thus, multi-threading does not guarantee simultaneous execution; it just uses threads to do concurrent programming.

Visualization of multi-threading using a single CPU (https://www.chipestimate.com/Hardware-Multi-threading-a-Primer/Imagination-Technologies/Technical-Article/2017/01/31)

As we can see from the picture above, we can use multi-threading even though we only have one CPU. We just execute multiple threads alternately, switching after a very short time period (the timer-interrupt period).
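A quick way to watch this interleaving is Python’s `threading` module (a sketch; conveniently, CPython runs only one thread at a time, which mirrors the single-CPU picture above). Each thread appends its own events in order, but the two threads’ events end up mixed together:

```python
import threading
import time

events = []

def worker(name):
    for i in range(3):
        events.append((name, i))
        time.sleep(0.001)   # give up the CPU so the scheduler can switch

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The exact interleaving can differ from run to run; only each thread's
# own internal order is guaranteed.
print(events)
```

This is exactly the “arbitrary order” from the definition: per-thread order is preserved, but the global order across threads is up to the scheduler.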

Now we sort of understand the basics. So why use this? An operating system uses concurrency to handle multiple applications at the same time. When we open Chrome, Zoom, and VS Code, we do not want to be blocked by a Zoom call. We all want to keep working in VS Code or google something in Chrome while taking a Zoom lecture. The concurrency provided by the OS allows us to do this.

Another easy example of multi-threading is a web server. When thousands of people send requests to the server, the server does not want to be blocked by one request that takes a long time. Therefore, we create a thread for each request, and each thread handles its own request. This way, we can process multiple requests at the same time. There is actually another mechanism to do this: just like Node.js, we can use non-blocking I/O. This is an event-driven architecture where receiving a request, processing it, and sending a response are separated. We basically issue the processing of a request and go do other work. Once the request is fully processed, it fires an event, and a handler catches it and sends the proper response (we will talk more about this when we cover the structure of Node.js).
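The thread-per-request idea can be sketched like this (a toy model, not a real server: the request ids, the delays, and the `handle_request` function are made up, and `time.sleep` stands in for slow I/O such as a database query):

```python
import threading
import time

responses = {}
finish_order = []

def handle_request(req_id, work_seconds):
    # Simulate an I/O-bound request, e.g. a slow database query.
    time.sleep(work_seconds)
    responses[req_id] = f"done:{req_id}"
    finish_order.append(req_id)

# One slow request plus several fast ones. Because each request gets its
# own thread, the slow request does not block the fast ones.
requests = [("slow", 0.2), ("fast-1", 0.01), ("fast-2", 0.01), ("fast-3", 0.01)]
threads = [threading.Thread(target=handle_request, args=r) for r in requests]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(finish_order)   # the fast requests finish before the slow one
```

The non-blocking alternative mentioned above would replace the threads with an event loop (asyncio in Python, or Node.js’s loop): the handler awaits the slow I/O, and the loop serves the other requests in the meantime.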

Now that we know what concurrency is, let’s talk about how the OS does this in Part 2!
