Illustration of the famous “Dining philosophers problem”

Concurrency vs Event Loop vs Event Loop + Concurrency

Tigran Bayburtsyan
5 min read · Mar 4, 2017


First of all, let's define the terminology.
Concurrency — means you have multiple task queues running on multiple processor cores/threads. It is not the same as parallel execution: true parallelism would have no task queues at all, because it would need one CPU core/thread per task, which in most cases we can't guarantee. That's why in modern software development "parallel programming" often means concurrency in practice. I know it's strange, but it's what we have for the moment (it depends on the OS CPU/thread model).
Event Loop — means a single-threaded infinite cycle that executes one task at a time. It doesn't just maintain a single task queue; it also prioritizes tasks, because with an event loop you have only one execution resource (one thread), so to execute some tasks right away you need prioritization. This approach is sometimes called thread-safe programming, because only one task/function/operation can execute at a time, so if you change something, the change is already in place when the next task executes.

Concurrent Programming

Modern computers have at least 2 CPU cores and at least 4 CPU threads, and an average server now has at least 16 CPU threads. So if you are writing software that needs performance, you should definitely consider making it use all CPU cores available on the server.

This image displays the basic model of concurrency, but of course it's not as easy as it looks :)

Concurrent programming becomes really difficult once there are shared resources. For example, let's take a look at this simple concurrent Go code.

// Wrong concurrency with Go: two goroutines write to a shared map
package main

import (
	"fmt"
	"time"
)

var SharedMap = make(map[string]string)

func changeMap(value string) {
	SharedMap["test"] = value
}

func main() {
	go changeMap("value1")
	go changeMap("value2")
	time.Sleep(time.Millisecond * 500)
	fmt.Println(SharedMap["test"])
}

// This will print "value1" or "value2", we don't know which!

In this case Go will run 2 concurrent jobs, probably on different CPU cores, and we can't predict which one executes first, so we don't know what will be displayed at the end.
Why? It's simple: we are scheduling 2 different tasks on different CPU cores, but they use a single shared variable/memory, so both of them mutate that memory, and in some cases that will crash the program. (The Go runtime actually detects concurrent map writes and panics with a fatal error.)

So to make concurrent execution predictable, we need to use locking primitives like a Mutex. With it we can lock the shared memory resource and make it available to only one task at a time.
This style of programming is called blocking, because we actually block all other tasks until the current task is done with the shared memory.

Most developers don't like concurrent programming, because concurrency doesn't always mean performance. It depends on the specific case.

Single Threaded Event Loop

This software development approach is way simpler than concurrent programming, because the principle is very simple: only one task executes at a time. In this case you don't have any problem with shared variables/memory, because the program is far more predictable when it runs a single task at a time.

The general flow is the following:
1. The event emitter adds a task to the event queue, to be executed on the next loop cycle
2. The event loop takes a task from the event queue and processes it based on its handlers

Let's write the same example with Node.js:

let SharedMap = {};

const changeMap = (value) => {
  return () => {
    SharedMap["test"] = value;
  };
};

// A 0 timeout means we are adding a new task to the queue for the next cycle
setTimeout(changeMap("value1"), 0);
setTimeout(changeMap("value2"), 0);

setTimeout(() => {
  console.log(SharedMap["test"]);
}, 500);

// In this case Node.js will print "value2", because it is single
// threaded and has only one task queue

As you can see, this code is way more predictable than the concurrent Go example, because Node.js runs in single-threaded mode using the JavaScript event loop.

In some cases an event loop gives more performance than concurrency, because of its non-blocking behavior. A very good example is networking applications, because they use a single network connection resource and process data only when it is available, using a thread-safe event loop.

Concurrency + Event Loop — Thread Pool with Thread Safety

Making applications purely concurrent can be very challenging: memory-corruption bugs will be everywhere, or your application will start blocking on every task. If you want maximum performance, you need to combine both!

Let's take a look at the Thread Pool + Event Loop model from the Nginx web server architecture.

The main networking and configuration processing is done by the worker's event loop in a single thread for safety, but when Nginx needs to read a file or process HTTP request headers/body, which are blocking operations, it sends that task to its thread pool for concurrent processing. When the task is done, the result is sent back to the event loop, which handles the result in a thread-safe way.

So with this structure you get both thread safety and concurrency, which lets you use all CPU cores for performance while keeping the non-blocking principle of the single-threaded event loop.

Conclusion

A lot of software is written with pure concurrency or with a pure single-threaded event loop, but combining both inside a single application makes it way easier to write performant software that uses all available CPU resources.
