This is the first blog of a two-part series about synchronization, and it will focus on the synchronization of computer processes. Process synchronization is nothing new; it has been around for decades, and there are various methods for solving the problem. We'll get to those later, though. First we need to look at what the problem actually is.
Synchronization goes far beyond computer science; it is an issue that affects many aspects of our civilization, from agriculture to the military. In its simplest form, synchronization is about coordinating multiple actions so that they can occur simultaneously without issue, or ensuring that one action will always occur before another. This may sound like a simple problem to solve: just always do one action after the other, right? But how do you do that if the actions are being performed by two separate people, nowhere near each other, and the one that goes first is picked at random? Without some form of coordination there is no way to guarantee the order of the actions. This is the problem that synchronization solves.
While for the purposes of this blog and the next we will mostly be talking about the synchronization of threads, it is important to know that the problem, and its solutions, span far more than just programming.
There are a great number of ways to solve this problem, each with its own advantages and drawbacks, but we are only going to cover a few.
Firstly, you can choose to simply ignore the issue entirely, or at least pretend it does not exist. This is by no means any type of solution, and will only lead to problems later on. C++, prior to C++11, is actually a great example of this, as the language simply pretended that multi-threading did not exist at all, and placed the responsibility of synchronization onto programmers and system designers. The obvious advantage here is speed and low overhead: you don't have to worry about communication or coordination.
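To see why ignoring the issue only leads to problems, here is a minimal sketch of what goes wrong (the class name RacyCounter and the iteration counts are just for illustration): two threads increment a shared counter with no coordination, and because `count++` is really a read-modify-write sequence, increments from the two threads can interleave and get lost.

```java
public class RacyCounter {
    static int count; // shared, unsynchronized

    // Runs two uncoordinated threads that each increment `count` a
    // million times, and returns the final (usually wrong) total.
    static int run() {
        count = 0;
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                count++; // read-modify-write: NOT atomic
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return count;
    }

    public static void main(String[] args) {
        // Expected 2,000,000, but lost updates typically leave it lower.
        System.out.println("count = " + run());
    }
}
```

Run it a few times and you will almost certainly see a different, too-small total each time; that nondeterminism is exactly what makes this class of bug so hard to track down.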
The second strategy, and the least commonly used, is to acknowledge that synchronization is an issue and to ensure that threads simply don't overlap in ways that would cause any problems. This essentially requires that each thread performs its task in isolation, without any form of overarching coordination. As I am sure you can tell, this eliminates much of the advantage of making a program multi-threaded, which is why it is rarely used. The drawback here is that you can't break a larger task into smaller, more manageable pieces without running into race conditions at some point; on the other hand, like the first approach, it has the advantage of both speed and low overhead.
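A minimal sketch of this non-overlapping approach (the class name DisjointWork is just for illustration): each thread sums its own half of an array and writes its result into its own slot, so the threads never touch the same data and no locking is needed beyond waiting for them to finish.

```java
public class DisjointWork {
    // Each thread sums a disjoint half of the array and writes only to
    // its own slot of `partial`, so there is no shared mutable state to
    // fight over; join() is the only coordination required.
    static long sum(int[] data) {
        long[] partial = new long[2]; // one result slot per thread
        int mid = data.length / 2;
        Thread lower = new Thread(() -> {
            for (int i = 0; i < mid; i++) partial[0] += data[i];
        });
        Thread upper = new Thread(() -> {
            for (int i = mid; i < data.length; i++) partial[1] += data[i];
        });
        lower.start(); upper.start();
        try {
            lower.join(); upper.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return partial[0] + partial[1];
    }

    public static void main(String[] args) {
        int[] data = new int[100];
        for (int i = 0; i < 100; i++) data[i] = i + 1;
        System.out.println(sum(data)); // 1 + 2 + ... + 100 = 5050
    }
}
```

This works here only because summing an array happens to partition cleanly; most interesting tasks don't, which is exactly the limitation described above.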
And lastly, the most used method is to only allow one thread access to a resource at a time, while all the others wait. If the resource happens to be the CPU, this means you limit the program to running only one thread at a time. If one thread holds on to the resource for a long period of time, the others effectively starve, and once again you eliminate any advantage of multi-programming. This approach is clearly going to be slower, and have more overhead, than the previous options, as you have to communicate with the other threads and wait if the resource is already in use.
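A minimal sketch of this mutual-exclusion approach, using Java's `java.util.concurrent.locks.ReentrantLock` (the class name LockedCounter and the counts are just for illustration): the shared counter is the contested resource, and only the thread holding the lock may touch it, so no increments are ever lost.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockedCounter {
    static final ReentrantLock lock = new ReentrantLock();
    static int count;

    // Same two-thread workload as before, but each increment happens
    // while holding the lock, so the threads take turns instead of
    // trampling each other's updates.
    static int run() {
        count = 0;
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                lock.lock();       // wait here if another thread holds it
                try {
                    count++;       // only one thread is ever in here
                } finally {
                    lock.unlock(); // always release, even on exception
                }
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 2000000
    }
}
```

The answer is now always correct, but note the cost: every increment pays for acquiring and releasing the lock, and each thread spends time waiting its turn.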
We’ll talk about how synchronization is actually done in computers in the next blog, but it should be noted that not every language is as change-blind as C++ once was. For example, Java has a synchronized keyword which allows a programmer to ensure that only one thread at a time will execute the code in a given method or block.
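A quick sketch of that keyword in action (the class name SyncCounter is just for illustration): marking a method `synchronized` turns its whole body into a critical section, with the language acquiring and releasing the underlying lock for you.

```java
public class SyncCounter {
    static int count;

    // `synchronized` on a static method locks on the class itself, so
    // only one thread at a time can be inside this method body.
    static synchronized void increment() {
        count++;
    }

    static int run() {
        count = 0;
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) increment();
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 2000000
    }
}
```

Compared with managing a lock object by hand, the keyword is harder to misuse: you can't forget to unlock, because the language does it for you when the method returns.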
Until next time, good hunting.