Concurrency vs Parallelism (& Issues In Concurrent Systems)

Jessica Torres
5 min read · Jul 15, 2020

First, it’s important we determine what can be considered a concurrent system and where the differences between concurrency and parallelism lie.

Concurrency vs. Parallelism

Concurrency is the composition of independently executing processes. Parallelism is the simultaneous execution of multiple things (related or unrelated). Because the ideas are so closely related, it is easy to confuse one for the other, or to assume that parallelism is the ultimate goal of concurrency, which it is not. A useful shorthand: concurrency is about structure (dealing with several things at once), while parallelism is about execution (doing several things at once).

Defining them individually like this might give the impression that they exist only separately from each other, but in reality that is not the case. Concurrent designs are not intrinsically parallel, but they are capable of becoming so.
They become parallel the moment their different processes are actually performed at the same time, not merely when the structure is in place for the system to be able to perform multiple processes at the same time.

CONCURRENT MODELS

In a general sense, a small example of concurrency would be a single-core machine that is multitasking (where at least two tasks can start, run, and complete in overlapping time periods). The parallel version of the same idea might be a multi-core processor (where at least two threads are executing in the same instant).
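As a rough sketch of the single-core case, the interleaving can be illustrated with Python generators acting as cooperative tasks and a toy round-robin scheduler (the task names and scheduler here are invented for illustration):

```python
def task(name, steps):
    # A cooperative task: yields control back after each step.
    for i in range(steps):
        yield f"{name} step {i}"

def run_concurrently(tasks):
    # Toy round-robin scheduler: interleaves tasks in overlapping
    # time periods on a single thread -- concurrent, not parallel.
    log = []
    while tasks:
        for t in tasks[:]:
            try:
                log.append(next(t))
            except StopIteration:
                tasks.remove(t)
    return log

log = run_concurrently([task("A", 2), task("B", 2)])
print(log)  # ['A step 0', 'B step 0', 'A step 1', 'B step 1']
```

Both tasks overlap in time (each starts before the other finishes), yet no two steps ever execute in the same instant, which is exactly the concurrent-but-not-parallel situation described above.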

Concurrency is a property of a system or program, and as such does not exist by default in every design. A program can be built in a way that never allows multiple tasks to be in progress (started but not yet completed) at once; in such a design, the order in which the tasks execute is predetermined.

PROBLEMS IN CONCURRENCY

We’ll look at some of the most frequently encountered issues in dealing with concurrency. Understand that several of these issues can occur, but might do so in only a fraction of the runs in which the same processes are performed.

This makes identifying and eliminating them while building the system all the more important: because some of these bugs only show themselves after a particular sequence and combination of operations, you may otherwise not be aware they exist until a user has suffered some inconvenience.

— — — — — — — — — — — — — — — — — — — — — — — — — —

Deadlock

Often the result of concurrent use of shared resources, deadlock and resource starvation are issues born of indeterminacy.

Deadlock is a state in which every process is on hold, waiting for another process to release the resource it presently needs before it can move on.

Characteristics:

  • Every resource a process needs to access is being “held” by some other process. All processes are then forced to wait to proceed — A deadlock has occurred.
  • A process is indefinitely unable to change its state, due to the above situation — The system itself is said to be in a deadlock.

There is, however, only a certain environment in which a deadlock can take place. Specifically, that environment arises when four conditions hold simultaneously within a system.
These conditions are known as the Coffman conditions and are as follows:

  • Mutual Exclusion
  • Resource Holding/ Hold & Wait
  • No Preemption
  • Circular Wait

While there are no recognized “solutions” to deadlocks (in the sense that there is no sure-fire way to ensure they never happen), the preventative approach most often taken is to eliminate the possibility of one or more of the Coffman conditions ever emerging.

The reasoning is that if at least one condition can be stopped from manifesting within a system, then all four conditions can never coincide with one another, and future deadlock is “blocked” as a result.
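One common way to break the Circular Wait condition is to impose a global ordering on lock acquisition, so every thread grabs locks in the same order. A minimal Python sketch (the lock names and `transfer` function are hypothetical, and ordering by `id()` is just one possible global order):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def transfer(first, second, log, name):
    # Sort the locks into a fixed global order (here: by id())
    # before acquiring, so no circular wait can ever form.
    lo, hi = sorted([first, second], key=id)
    with lo:
        with hi:
            log.append(name)

log = []
# The two threads request the same locks in opposite order, which
# would risk deadlock without the sorting step above.
t1 = threading.Thread(target=transfer, args=(lock_a, lock_b, log, "t1"))
t2 = threading.Thread(target=transfer, args=(lock_b, lock_a, log, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(log))  # ['t1', 't2'] -- both threads complete, no deadlock
```

Because both threads always acquire the locks in the same order, one of the four Coffman conditions can never hold, and the deadlock is “blocked” as described above.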

— — — — — — — — — — — — — — — — — — — — — — — — — —

Resource Starvation

In concurrent computing, it is possible for a process to be perpetually denied the resources it requires to complete its assigned task.

When certain processes hold a higher priority than others and are allowed to constantly revisit and use a resource, while a lower-priority process is forced to sit stagnant in a state of potentially indefinite waiting, this is called resource starvation.

Common causes include:

  • Errors in a scheduling algorithm
  • Errors in a Mutual Exclusion algorithm
  • Resource Leaks
  • Fork Bombs (or other denial-of-service attacks)
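Starvation under a flawed (strict-priority) scheduler can be sketched with a toy simulation in Python; the scheduler, job names, and parameters below are all invented for illustration:

```python
import heapq

def run(jobs, high_priority_arrivals, quanta):
    # Toy strict-priority scheduler: lower number = higher priority.
    # Each tick, the highest-priority ready job runs; while new
    # high-priority work keeps arriving, low-priority jobs starve.
    ready = list(jobs)
    heapq.heapify(ready)
    executed = []
    for tick in range(quanta):
        if tick < high_priority_arrivals:
            heapq.heappush(ready, (0, f"high-{tick}"))
        if ready:
            prio, name = heapq.heappop(ready)
            executed.append(name)
    return executed

executed = run([(9, "low")], high_priority_arrivals=5, quanta=5)
print(executed)           # only the high-priority jobs ever ran
print("low" in executed)  # False -- the low-priority job starved
```

The low-priority job is ready the entire time, yet it is perpetually denied the processor because a higher-priority job is always available, which is the scheduling-algorithm failure mode listed above.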

— — — — — — — — — — — — — — — — — — — — — — — — — —

Data Races

Data races are often, but not always, described as a subset of race conditions (situations where a program, to operate correctly, depends on the timing or sequence of its threads or processes). A data race usually involves one thread accessing a memory location at the same time that another thread is writing to that memory location, and in most contexts this has great potential for causing nondeterminism.

It can happen that the data stored in memory as a result of a data race will be inaccurate for any of these reasons (or others):

  • It winds up holding some arbitrary value that is a meaningless combination of the values that the individual threads were trying to write to memory.
    In this case, some value is stored that was not intended by either thread (a ‘torn write’).
  • It can also be that the value observed is some meaningless combination of the value being written by one thread and the previously stored value that is simultaneously being read by another thread.
    In this second case, the value read represents neither the attempted written value nor the original version of the stored value (a ‘torn read’).
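A classic data race is the unsynchronized read-modify-write of a shared counter. A minimal Python sketch, here with a lock serializing the critical section so every increment survives (the names are illustrative; remove the lock and updates can silently be lost):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    # Without the lock, `counter += 1` is a non-atomic
    # read-modify-write: two threads can read the same old value
    # and one update is lost -- a data race.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 40000 -- every increment preserved
```

The lock is one way to remove the simultaneous-access window the paragraph above describes; the result then no longer depends on how the threads happen to interleave.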

CONCLUSION

Mentioned at the beginning of this article is the fact that parallelism is not the goal of concurrency; by this point, it should be clear that a good, efficient structure is.

It can be easy to mix up parallelism and concurrency, but examining the status of the operations or processes will usually give a quick answer. If two or more actions are in progress at the same time, it is a concurrent design; if two or more actions are executing in the same instant, it is additionally parallel in nature.

It should also be obvious from what has been covered that, in trying to create a system that can handle more work at a faster pace, additional openings for error will be unearthed that then need to be addressed and managed.

Thank you for reading!

Jessica Torres

Former train mechanic, current Full Stack Developer & Instructor documenting+illustrating my experience with technologies & the conceptual side of programming.