Multithreading Programming with Future & Promise

Ben Lau · Published in E-Fever · Jun 7, 2017 · 5 min read

Multithreading programming may not look difficult at first glance. You have to pay attention to your shared data to avoid race conditions and deadlocks, so you learn mutexes and semaphores and use them carefully. The result works perfectly on your machine.

However, one day your program hangs. You spend an hour tracing the problem and find that the order of code execution is not what you expected. So you add a few more condition checks and fix the problem.

After a few weeks of development, the program gets more complicated. Moreover, it begins to crash randomly. This time, even after spending a whole day, you still can’t figure out what is wrong, and you have to admit that it is totally out of control.

Have you heard a similar story? It is not rare to find complaints about random crashes and hangs caused by the misuse of threads. Is multithreading programming really that difficult?

The answer is yes and no. It depends on your software requirements and architecture.

This article introduces a lock-free approach to multithreading programming using QtConcurrent and AsyncFuture, which makes multithreading programming easier.

Let’s take an example. The code below shows an asynchronous ImageReader class. The readImageWorker function is executed on another thread, so it won’t block the UI. The QFuture represents the result of the computation and reports status changes.

An example ImageReader
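A minimal sketch of such a class could look like the following (readImageWorker here simply loads the file with QImage::load; the details are an assumption rather than the exact original code):

#include <QtConcurrent>
#include <QFuture>
#include <QImage>
#include <QString>

// Worker function executed on a thread from the global thread pool.
static QImage readImageWorker(const QString &fileName)
{
    QImage image;
    image.load(fileName);
    return image;
}

class ImageReader
{
public:
    // Returns immediately; the QFuture delivers the image later.
    QFuture<QImage> read(const QString &fileName)
    {
        return QtConcurrent::run(readImageWorker, fileName);
    }
};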

Example of use:
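A possible usage sketch (assuming the caller runs on the GUI thread with an event loop; QFutureWatcher is used to receive the result without blocking):

#include <QFutureWatcher>

ImageReader reader;
QFuture<QImage> future = reader.read("/path/to/image.png");

// QFutureWatcher emits finished() in the caller's thread,
// so the UI thread is never blocked while the image loads.
auto *watcher = new QFutureWatcher<QImage>();
QObject::connect(watcher, &QFutureWatcher<QImage>::finished, [watcher]() {
    QImage image = watcher->result();
    // ... display the image
    watcher->deleteLater();
});
watcher->setFuture(future);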

Multithreading programming with QtConcurrent is pretty easy. A worker function just takes an input and produces an output later, and QtConcurrent handles all of the low-level threading primitives.

But it is limited to the case where the concurrent function does not access data shared with other threads. If it does, it may still need a lock to protect the critical section, which falls back to the old traditional way.

Make it support Image caching

The above example is quite an ideal case, and of course a real-world problem is usually not that simple. Let’s change the requirement: make it support image caching.

The class declaration:
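A sketch of what the declaration might look like (the isCached() query and the QMap cache member are assumptions for illustration; readCache() and read() are the functions discussed below):

#include <QFuture>
#include <QImage>
#include <QMap>
#include <QString>

class ImageReader
{
public:
    // Asynchronous read in a worker thread.
    QFuture<QImage> read(const QString &fileName);

    // Returns true if the image is already cached (name assumed).
    bool isCached(const QString &fileName) const;

    // Synchronous read from the cache.
    QImage readCache(const QString &fileName);

private:
    QMap<QString, QImage> m_cache;
};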

Before getting an image, you have to query whether the cache is available:
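For example (a hedged sketch using the assumed isCached() query from the declaration above):

ImageReader reader;

if (reader.isCached(fileName)) {
    // Synchronous path: the image is already available.
    QImage image = reader.readCache(fileName);
    // ... use the image immediately
} else {
    // Asynchronous path: read it in a worker thread.
    QFuture<QImage> future = reader.read(fileName);
    // ... observe the future as in the previous example
}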

This solution works, but the API is not ideal because it violates the “Tell, don’t ask” principle. The better way is to combine readCache() and read() into a single function that always returns a QFuture object. But there is a problem: QFuture/QtConcurrent can only obtain a result from a thread, and it is quite odd to start a thread when the data is already available. To get rid of this problem, we need a third-party library.

AsyncFuture

AsyncFuture is a C++ library that can convert a signal into a QFuture and use it like a Promise object in JavaScript. It provides a unified interface for asynchronous and concurrent tasks. The library is just a single header file, so it is very easy to bundle into your source tree. A QPM package is also available.
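For example, a signal can be turned into a future like this (a minimal sketch using a QTimer's timeout() signal, just for illustration):

#include "asyncfuture.h"
#include <QTimer>

QTimer *timer = new QTimer();
timer->setSingleShot(true);

// The returned future finishes when timeout() is emitted.
QFuture<void> future = AsyncFuture::observe(timer, &QTimer::timeout).future();
timer->start(1000);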

Project Site: https://github.com/benlau/asyncfuture

Let’s rewrite the above function with AsyncFuture:
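A hedged reconstruction of the idea is shown below. The member names m_cache and m_futurePool are assumptions (m_futurePool would be a QMap<QString, QFuture<QImage>>), and the subscribe() callback runs on the thread that created the observer, here the main thread, so no lock is needed:

#include "asyncfuture.h"

QFuture<QImage> ImageReader::read(const QString &fileName)
{
    // 1. Cache hit: return an already-finished future, no thread needed.
    if (m_cache.contains(fileName)) {
        auto defer = AsyncFuture::deferred<QImage>();
        defer.complete(m_cache[fileName]);
        return defer.future();
    }

    // 2. The same image is already being read: return the running future.
    if (m_futurePool.contains(fileName)) {
        return m_futurePool[fileName];
    }

    // 3. Otherwise start a worker thread and remember its future.
    QFuture<QImage> future = QtConcurrent::run(readImageWorker, fileName);

    // The callback is invoked on the main thread, so the members
    // can be updated without a lock.
    AsyncFuture::observe(future).subscribe([=](QImage image) {
        m_cache[fileName] = image;
        m_futurePool.remove(fileName);
    });

    m_futurePool[fileName] = future;
    return future;
}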

This time it is almost perfect. The deferred object provides an interface to complete or cancel a QFuture manually, which replaces readCache() by returning an already-finished future object.

Moreover, it adds a new feature to avoid duplicate image reads. If you request the same image twice before it is cached, the original design would start two threads, which wastes CPU power. This version solves that by keeping all running futures in a future pool and returning the existing future for a duplicate read.

Make it even more complicated

Currently, it is still a very simple example. Let’s try to make it even more complicated.

Requirements

  1. Add a readScaled(fileName, size) function that returns an image scaled to a specific size
  2. Code reuse is a must
  3. The scaling must be done in another thread to emulate a high CPU usage function
  4. Load the cached image if available
  5. But scaled images do not need to be kept in the cache

The ideal solution is to make use of the result of read() directly. That means you have to create a thread that depends on the result of another thread. That is hard to get working with QtConcurrent alone, and it would probably need a lock. But it can be done easily with AsyncFuture’s future chaining feature.
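A hedged sketch of readScaled() built this way (scaleImageWorker and the way the size is passed along are assumptions for illustration); the pieces are explained step by step below:

// Worker that performs the CPU-heavy scaling (assumed implementation).
static QImage scaleImageWorker(QImage image, QSize size)
{
    return image.scaled(size, Qt::KeepAspectRatio, Qt::SmoothTransformation);
}

QFuture<QImage> ImageReader::readScaled(const QString &fileName, const QSize &size)
{
    // Reuse read(): caching and duplicate requests are already handled there.
    QFuture<QImage> input = read(fileName);

    // When the image arrives, scale it on another worker thread.
    auto callback = [=](QImage result) {
        return QtConcurrent::run(scaleImageWorker, result, size);
    };

    // The output future completes only when the whole chain has finished.
    QFuture<QImage> output = AsyncFuture::observe(input)
        .subscribe(callback)
        .future();
    return output;
}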

First of all, it calls the read() function to obtain an image via a QFuture. It doesn’t need to care about the caching mechanism, as that is already handled by read().

Then it creates a new future object to represent the whole workflow of the chain:

QFuture<QImage> output = AsyncFuture::observe(input)
    .subscribe(callback)
    .future();

A chain begins with the observe() function, followed by an observer function (here subscribe()) that binds the callback to the observed future and creates a new future object representing the result of the callback.

auto callback = [=](QImage result) {
    return QtConcurrent::run(scaleImageWorker, result);
};

You may wonder whether it is alright to run another worker function within the callback. In fact, this is a feature of AsyncFuture: it provides a chainable API that works like a Promise object in JavaScript. If the callback returns a QFuture object, it is added to the chain, and the final output future will depend on the returned future. Therefore, the output future in fact represents the combined result of read(), the callback, and scaleImageWorker(). The flow can be visualised by this diagram:

Diagram: the workflow of readScaled(), showing how a single QFuture represents the result of a multi-step task.

Conclusion

Using QtConcurrent without sharing data between threads can make multithreading programming easier because there is no access lock to manage. But real-world problems are usually more complicated: a task may not be able to complete without interacting with other threads. In that case, it may still need an access lock to protect a critical section, and once you use one, you fall back to the old traditional way and will probably hit the same problems mentioned at the beginning.

This article has presented an alternative solution: use the Concurrent API together with asynchronous callbacks, then chain them into a sequence with a promise-like API. It works by breaking a task down into multiple steps. Whenever a concurrent function needs extra information from another thread, it should simply terminate and pass control back to the main thread, so that it doesn’t need an access lock that could raise issues like deadlocks and race conditions.

The whole workflow can be represented by a single QFuture object, a unified interface for all kinds of asynchronous workflows.

However, that doesn’t mean locks can be eliminated completely. They are still necessary in some scenarios, so choose a solution case by case.

Related Articles

  1. AsyncFuture Cookbook 1 — Calling QtConcurrent::mapped within the run function
