[C++] MUTEX: Write Your First Concurrent Code

Learn to design concurrent code by implementing a thread-safe queue

Oct 22 · 7 min read

In the last article, we saw what concurrency is and why synchronization is needed. Now it’s time to explore the synchronization primitives offered by the C++ Standard Library.
The first one is std::mutex. Before starting, here is a quick reference card for this article (it will come in handy if you start feeling lost among too many new concepts).

Now let’s start.

What Is a Mutex?

A mutex is the most basic synchronization primitive.
It models MUTual EXclusive access to shared data between multiple threads: you can think of it as a door that only one thread can pass through at a time (under the hood, locking and unlocking also act as memory barriers).


  • Header | #include <mutex>
  • Declaration | std::mutex mutex_name;
  • To acquire the mutex | mutex_name.lock();
    The thread asks for unique ownership of the shared data protected by the mutex. Either it successfully locks the mutex (and no other thread can access the same data until it is released), or it blocks because the mutex is already locked by another thread.
  • To release the mutex | mutex_name.unlock();
    When the resource is no longer needed, the current owner must call unlock() to let other threads access it. When the mutex is released, access is granted to one of the waiting threads (which one is unspecified).
#include <mutex>
#include <vector>

std::mutex door;    // mutex declaration
std::vector<int> v; // shared data

void safeAccess() {
    door.lock();
    /* This is a thread-safe zone: just one thread at a time allowed.
     * Unique ownership of vector v guaranteed. */
    v.push_back(42);
    door.unlock();
}

Let’s understand how to implement the simplest possible thread-safe queue: a queue that can be accessed by multiple threads safely.
It wraps a standard queue (rawQueue) and offers thread-safe ways to retrieve-and-delete the front integer and to push back a new one.
First, let’s understand why these two operations are problematic with multiple threads.

  • retrieve-and-delete
    In order to retrieve and subsequently delete the front, it is necessary to perform 3 operations:
    1. Check if the queue is empty
    2. If it is not, retrieve the reference to the front (rawQueue.front())
    3. Remove the front (rawQueue.pop())
    Between these three steps, other threads can access the queue, reading or modifying it.
    For example, suppose the queue contains 0 and 1, and threads A and B both perform a retrieve-and-delete: A reads the front (0), B interleaves and also reads 0, then both pop. As you can see, “1” is deleted even though it was never retrieved, since thread B retrieves 0 but pops 1.
    Even worse: if rawQueue has just one element, thread B observes a non-empty queue and, immediately after that, thread A pops the last value. Now thread B tries to pop the front of an empty queue, causing undefined behavior! A real horror story.
  • push
    Now let’s focus on pushing a new value with rawQueue.push(): it adds a new element at the end of the container, after its current last element, and then increases the size by one. Can you spot the problems here? What if two threads push a new value at the same time, both observing the same current last element? And what about the moment between the insertion of the new element and the size increment? Another thread can read the wrong size.

We need to make sure that while we perform such tasks, no one else is touching the queue. Let’s use a mutex to protect these multi-phase operations, so that each one appears to other threads as a single atomic operation.
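The original snippet is not reproduced here, so what follows is a minimal sketch of such a queue, assuming std::optional is used to signal an empty queue (the names threadSafe_queue and rawQueue follow the article; the rest is illustrative):

```cpp
#include <mutex>
#include <optional>
#include <queue>

class threadSafe_queue {
    std::queue<int> rawQueue; // the wrapped standard queue
    std::mutex m;             // protects every access to rawQueue
public:
    // The three steps (empty check, front, pop) happen under one lock,
    // so to other threads they appear as a single atomic operation.
    std::optional<int> retrieveAndDelete() {
        m.lock();
        std::optional<int> front;
        if (!rawQueue.empty()) {
            front = rawQueue.front();
            rawQueue.pop();
        }
        m.unlock();
        return front; // empty optional if the queue had no elements
    }

    void push(int value) {
        m.lock();
        rawQueue.push(value);
        m.unlock();
    }
};
```

Returning an optional (instead of a reference plus a separate empty() check) is what makes the empty-queue case safe: the caller never observes the queue between the check and the pop.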


  1. The link between the mutex and the protected resource is just in the programmer’s mind
    We know that mutex m is protecting rawQueue, but it is not explicitly specified.
  2. Lock at an appropriate granularity
    The use of a mutex decreases parallelism. Suppose we use a single mutex to protect both a vector and a string that have no dependency on each other (the value of one doesn’t depend on the other). Thread A locks the mutex, reads the string, and processes some other data before pushing a new value into the vector and unlocking the mutex. Now thread B just needs to modify the string, but when it tries to lock the mutex, it unnecessarily blocks until all the operations on the vector are completed too. With an additional mutex for the string (locked before the read and unlocked immediately after), we solve the problem.
    → Always try to identify the right amount of data to protect with a single mutex.
  3. Hold a lock only for the operations that actually require it
    See above.
  4. Don’t call lock() if you already own the mutex
    With std::mutex this is undefined behavior; typically you will deadlock, waiting forever for yourself (but you are blocked, so..).
    If you really need this, use std::recursive_mutex. A recursive mutex can be acquired repeatedly by the same thread, but must be released as many times as it was acquired.
  5. Use try_lock() or std::timed_mutex if you don’t want to block indefinitely
    try_lock() is a non-blocking method offered by std::mutex. It returns immediately, even if the acquisition failed: true if the mutex was acquired, false if not.
    std::timed_mutex offers two methods with a bounded wait: try_lock_for() and try_lock_until(). Both return once the timeout expires (or earlier), with true or false depending on whether the acquisition succeeded.
  6. Always remember to call unlock() or, when possible, use std::lock_guard (or similar)
    See below.

Lock guard, give me some RAII

We have two major problems with the plain mutex:

  • What happens if we forget to call unlock()? The resource stays unavailable for the rest of the mutex’s lifetime and, if the mutex is destroyed while still locked, the behavior is undefined.
  • What happens if an exception is thrown before the unlock() call? unlock() is never executed and we get all the troubles cited above.

Luckily, this can be solved with std::lock_guard. It guarantees that the mutex is always unlocked, using the RAII (Resource Acquisition Is Initialization) paradigm: the raw mutex is encapsulated inside a lock_guard, which invokes lock() in its constructor and unlock() in its destructor, when it exits its scope. This is safe even in case of exceptions: stack unwinding destroys the lock_guard, calling its destructor and hence unlocking the wrapped mutex.

  • std::lock_guard<std::mutex> lock_guard_name(raw_mutex);
#include <mutex>
#include <vector>

std::mutex door;    // mutex declaration
std::vector<int> v; // shared data

void safePush(int value) {
    std::lock_guard<std::mutex> lg(door);
    /* lg constructor called: equivalent to door.lock();
     * lg allocated on the stack.
     * Unique ownership of vector v guaranteed. */
    v.push_back(value);
} /* lg exits its scope. Destructor called:
   * equivalent to door.unlock(); */

Now, let’s see how our threadSafe_queue can be modified (this time, try to focus on where the mutex is unlocked).
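The original gist is not reproduced here; the following is a minimal sketch of how the queue might look with std::lock_guard. Compared with the manual lock()/unlock() version, the mutex is now released automatically, at every return path and even on exceptions:

```cpp
#include <mutex>
#include <optional>
#include <queue>

class threadSafe_queue {
    std::queue<int> rawQueue;
    std::mutex m; // protects rawQueue
public:
    std::optional<int> retrieveAndDelete() {
        std::lock_guard<std::mutex> lg(m); // equivalent to m.lock()
        if (rawQueue.empty())
            return std::nullopt; // lg destroyed here: m unlocked
        int front = rawQueue.front();
        rawQueue.pop();
        return front; // lg destroyed here too: m unlocked
    }

    void push(int value) {
        std::lock_guard<std::mutex> lg(m);
        rawQueue.push(value);
    } // m unlocked even if push throws (e.g. std::bad_alloc)
};
```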

Unique lock, give me some freedom

Once the mutex is acquired through a std::lock_guard, the only thing you can do is wait for it to be unlocked at scope exit. std::unique_lock extends this behaviour by allowing the mutex to be released and re-acquired (always in this order: it is constructed locked) multiple times, without losing the RAII safety granted by lock_guard.

  • std::unique_lock<std::mutex> unique_lock_name(raw_mutex);
#include <mutex>
#include <vector>

std::mutex door;    // mutex declaration
std::vector<int> v; // shared data

void process() {
    std::unique_lock<std::mutex> ul(door);
    // ul constructor called: equivalent to door.lock();
    // ul allocated on the stack
    // unique ownership of vector v guaranteed
    v.push_back(1);

    ul.unlock(); // release the mutex explicitly

    // execution of operations that don't concern the vector
    // ....

    ul.lock(); // now I need to access the vector again
    v.push_back(2); // unique ownership of vector guaranteed again
} /* ul exits its scope. Destructor called:
   * equivalent to door.unlock() (if still locked). */

When to use this?

  • When you don’t always need to have the resource locked
  • With std::condition_variable (in the next article)
  • To lock a std::shared_mutex in exclusive mode (see below)

Shared mutex + Shared lock, give me some readers

A std::mutex can be owned by just one thread at a time. Nevertheless, this constraint is not always necessary. For example, multiple threads can safely and simultaneously read the same shared data: they are just observing, not touching. But for writing access, only the writing thread may access the data.
Since C++17, std::shared_mutex models these two types of access:

  • Shared access: multiple threads can own the same shared mutex and access the same resource simultaneously. This type of access is requested through a std::shared_lock (the lock guard for shared mutexes). While the mutex is held in shared mode, any exclusive access is blocked.
  • Exclusive access: the resource is accessed by just one thread. This type of access is requested through a std::unique_lock.


  • Header | #include <shared_mutex>
  • Declaration | std::shared_mutex raw_sharedMutex;
  • To lock it in shared mode |
    std::shared_lock<std::shared_mutex> sharedLock_name(raw_sharedMutex);
  • To lock it in exclusive mode |
    std::unique_lock<std::shared_mutex> uniqueLock_name(raw_sharedMutex);
#include <shared_mutex>
#include <vector>

std::shared_mutex door; // shared mutex declaration
std::vector<int> v;     // shared data

int readVectorSize() {
    /* multiple threads can call this function simultaneously;
     * no writing access allowed while sl is held */
    std::shared_lock<std::shared_mutex> sl(door);
    return v.size();
}

void pushElement(int new_element) {
    /* exclusive access to vector guaranteed */
    std::unique_lock<std::shared_mutex> ul(door);
    v.push_back(new_element);
}

Scoped lock, give me more mutexes (and no deadlock)

Introduced in C++17, std::scoped_lock extends std::lock_guard by allowing the acquisition of multiple mutexes at once. Without it, such an operation is tricky, since it can cause deadlock.
A short deadlock story:

Thread A wants to move 200$ from Jack’s bank account (BA) to Becky’s BA as an atomic operation. It starts by locking the mutex protecting Jack’s BA to subtract the money, and then tries to lock Becky’s BA.
At the same time,
thread B wants to move 100$ from Becky’s BA to Jack’s BA. It acquires the lock on Becky’s BA, subtracts the money, and tries to lock Jack’s BA. Both threads block, waiting for each other.

std::scoped_lock locks all the mutexes passed as arguments at construction (and unlocks them all at destruction), using a deadlock-avoidance algorithm and an all-or-nothing policy: if even one acquisition throws an exception, all the already-locked mutexes are unlocked.

  • std::scoped_lock scoped_lock_name(raw_mutex1, raw_mutex2, ...); // template arguments deduced (C++17)


If you feel lost between too many new concepts:

  • Use the card at the beginning of the article (or create your own).
  • Put what you have learnt into practice and try to write some simple code.

If you want me to deepen any topic, let me know (you can find me on instagram too, @ valentina.codes).

See you in the next article, about condition variables, to discover how to synchronize multiple threads!

The Startup

Medium's largest active publication, followed by +538K people. Follow to join our community.


Written by


Computer engineering student @Polito and @GrenobleINP. Former Wireless Network research assistant @LIG. Passionate in what I do.
