Parallel Programming with Swift: What could possibly go wrong?
Introduction
In the last few articles, we’ve inspected different ways to control concurrency: the low-level primitives provided by the operating system, frameworks provided by Apple, and other ideas such as promises, which are heavily used in JavaScript. Even though some pitfalls have been mentioned before, I realized I didn’t give them enough credit. As a result, and in an effort to be comprehensive, some parts of this article are recaps.
This article is all about what can go wrong if you don’t understand concurrency. Let’s dive in!
Atomic
Atomic follows the same idea as a transaction in a database context: you want to write a value all at once, behaving as one operation. Apps compiled for 32 bit can show quite odd behavior when they use int64_t without making it atomic. Why? Let’s look in detail at what happens:
int64_t x = 0
Thread1:
x = 0xFFFF
Thread2:
x = 0xEEDD
Having a non-atomic operation can result in the first thread starting to write into x. But since we are working on a 32-bit operating system, the 64-bit value has to be written in two halves; think of 0xFFFF as the two parts 0xFF and 0xFF.
When Thread2 decides to write into x at the same time, the operations can end up scheduled in the following order:
Thread1: part1
Thread2: part1
Thread2: part2
Thread1: part2
In the end we would get:
x == 0xEEFF
which is neither 0xFFFF nor 0xEEDD.
Using atomic we create a single transaction which would result in the following behavior:
Thread1: part1
Thread1: part2
Thread2: part1
Thread2: part2
As a result, x contains the value Thread2 set. Swift itself does not have atomics built in. There is a proposal on Swift Evolution to add them, but at the moment you’ll have to implement them yourself.
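A minimal sketch of such a hand-rolled atomic value, using an NSLock so that reads and writes behave as one transaction (the Atomic type below is my own illustration, not a standard library API):

import Foundation

// Illustrative hand-rolled atomic wrapper, not a standard library type.
final class Atomic<Value> {
    private let lock = NSLock()
    private var _value: Value

    init(_ value: Value) {
        self._value = value
    }

    var value: Value {
        get { lock.lock(); defer { lock.unlock() }; return _value }
        set { lock.lock(); defer { lock.unlock() }; _value = newValue }
    }
}

let x = Atomic<Int64>(0)
x.value = 0xFFFF // each write now behaves as a single transaction, even on 32 bit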
Just recently I had to fix a crash which resulted from writing to an Array from two different threads. Remember the error handling in Concurrency with Swift: Operations? It contained an error which is quite easy to overlook. What happens if two operations in a group run in parallel and fail at the same time? They try to write to the error array simultaneously, which results in an “allocate capacity” crash within Swift.Array. To fix it, the array needs to be thread-safe. One option is a Synchronized Array.
But in general, you will have to lock each and every write access.
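A sketch of what that could look like, loosely modeled on the linked Synchronized Array idea (the type name and its API are illustrative):

import Foundation

// Illustrative thread-safe array; every access goes through a serial queue.
final class SynchronizedArray<Element> {
    private var elements: [Element] = []
    private let queue = DispatchQueue(label: "synchronized.array")

    func append(_ element: Element) {
        queue.sync { elements.append(element) } // writes are serialized
    }

    var count: Int {
        queue.sync { elements.count } // reads are serialized too
    }
}

let errors = SynchronizedArray<Error>()
// Operations failing in parallel can now append without crashing.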
Don’t get me wrong, reading can also fail:
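A hedged reconstruction of the pattern, with the events array and the plugin dispatch as stand-ins for the real code:

import Foundation

var events = ["launch", "tap", "scroll"] // shared and mutated from several threads

DispatchQueue.global().async {
    while events.count != 0 {        // the check passes...
        let event = events[0]        // ...but another thread may empty the array
        print("dispatching \(event) to plugins")
        events.remove(at: 0)         // "index out of range" waiting to happen
    }
}

DispatchQueue.global().async {
    events.removeAll()               // concurrent mutation of the same array
}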
In this case, we loop over an array, check that the length is not 0, and after dispatching the element to our plugins, we delete it. It is a very easy way to produce “index out of range” exceptions.
Memory Barriers
CPUs are remarkable pieces of technology. Especially now, with multiple cores and intelligent compilers, we don’t even know on which core our code is running. The hardware also optimizes our memory operations; bookkeeping ensures they appear in the correct order for the current core. Sadly, this can result in one core seeing changes to memory in a different order than they were written. A simple example would be this:
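Sketched here: one thread publishes a value and then clears a flag, while a second thread spins on the flag:

import Foundation

var x = 0
var f = true

Thread.detachNewThread {     // Thread 1: publish the value, then clear the flag
    x = 42
    f = false
}

Thread.detachNewThread {     // Thread 2: wait for the flag, then read the value
    while f { /* spin */ }
    print(x)                 // expected to print 42
}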
You would expect this code to always print 42, since x is set before f is set to false and thus before the loop stops. Yet sometimes the second CPU sees the changes to memory in reverse order: it first finishes the loop, prints the value, and only then sees that x has become 42.
I haven’t seen this on iOS yet, but that doesn’t mean it won’t happen. Especially now, with more and more cores, awareness of this low-level hardware trap is essential.
How do we fix it? Apple provides memory barriers. Basically, they are commands which ensure one memory operation is finished before the next one is executed. This prevents the CPU from reordering our code, at the cost of slightly slower execution, though you shouldn’t notice it outside of high-performance systems.
Using them is quite easy, but be aware: this is an operating-system function, not Swift, so the API is in C.
OSMemoryBarrier() // from <libkern/OSAtomic.h>
The above example with memory barriers would look like this:
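Sketched with the same snippet as before; note that OSMemoryBarrier comes from the C world, and newer SDKs deprecate it in favor of C11 atomics:

import Foundation

var x = 0
var f = true

Thread.detachNewThread {
    x = 42
    OSMemoryBarrier()        // the write to x must finish before f changes
    f = false
}

Thread.detachNewThread {
    while f {
        OSMemoryBarrier()    // force a fresh read of f on every iteration
    }
    OSMemoryBarrier()        // read x only after having seen f == false
    print(x)                 // reliably 42
}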
This way all our memory operations will be in order and we don’t have to worry about unwanted side effects resulting from hardware memory reordering.
Race Conditions
Race Conditions are situations in which the outcome depends on the relative timing of multiple threads. Imagine having two threads. One does a calculation and stores the result in x. The other, started later (maybe from a different thread or, e.g., a user interaction), prints the result to the screen:
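A minimal sketch of the two threads (the calculation is a stand-in):

import Foundation

var x = 0

DispatchQueue.global().async {   // Thread 1: the calculation
    x = (1...100).reduce(0, +)   // stores 5050
}

DispatchQueue.global().async {   // Thread 2: started later, e.g. by the user
    print(x)                     // prints 0 or 5050, depending on timing
}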
Depending on the timing of these threads, it can happen that Thread2 doesn’t print the result of the calculation to the screen. Instead, it prints the prior value, which is undesired behavior.
A different situation would be two threads writing to an array. Let’s say the first writes each word of “Concurrency with Swift:” into the array, while the other writes “What could possibly go wrong?”. Implemented in a rather obvious way:
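Reconstructed as a sketch:

import Foundation

var title = [String]()
let group = DispatchGroup()

DispatchQueue.global().async(group: group) {
    for word in ["Concurrency", "with", "Swift:"] {
        title.append(word)       // unsynchronized write
    }
}

DispatchQueue.global().async(group: group) {
    for word in ["What", "could", "possibly", "go", "wrong?"] {
        title.append(word)       // racing with the first thread
    }
}

group.wait()
print(title.joined(separator: " "))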
we can get the undesired behavior of having the title mixed within the array:
“Concurrency with What could possibly Swift: go wrong?”
Not really what we expected, right? There are multiple ways to solve this:
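One is to guard every write with a lock; a sketch, reusing the title array from above:

let titleLock = NSLock()

func append(words: [String]) {
    titleLock.lock()             // only one thread writes its words at a time
    defer { titleLock.unlock() }
    for word in words {
        title.append(word)
    }
}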
Another way would be to use Dispatch Queues:
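Again as a sketch:

let titleQueue = DispatchQueue(label: "title.writer") // serial by default

titleQueue.async {
    for word in ["Concurrency", "with", "Swift:"] { title.append(word) }
}

titleQueue.async {
    for word in ["What", "could", "possibly", "go", "wrong?"] { title.append(word) }
}
// The serial queue executes one block at a time, so the words no longer interleave.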
Depending on your requirements, one or the other is preferable. In general, I tend to use Dispatch Queues. They are one way to prevent situations such as Deadlocks, which we will look at in detail next.
Deadlocks
In Concurrency with Swift: Basics we’ve talked about different ways to resolve Race Conditions. If we use Locks, Mutexes, or Semaphores, we introduce a different kind of problem to our code base: Deadlocks.
Deadlocks result from a circular wait: one thread waits for resources held by a second thread, while the second thread waits for resources held by the first.
A simple example is a transfer between two bank accounts. The transaction is divided into two parts: first the withdrawal, then the deposit.
The code could look similar to this:
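A sketch; the Account type and its lock layout are my own illustration:

import Foundation

final class Account {
    let lock = NSLock()
    var balance: Int
    init(balance: Int) { self.balance = balance }
}

func transfer(amount: Int, from: Account, to: Account) {
    from.lock.lock()             // thread 1 locks A while thread 2 locks B...
    to.lock.lock()               // ...and each waits forever for the other's lock
    from.balance -= amount       // withdrawal
    to.balance += amount         // deposit
    to.lock.unlock()
    from.lock.unlock()
}

let a = Account(balance: 100)
let b = Account(balance: 100)

DispatchQueue.global().async { transfer(amount: 10, from: a, to: b) }
DispatchQueue.global().async { transfer(amount: 20, from: b, to: a) } // opposite order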
Without noticing, we’ve introduced a dependency between our transactions, which results in a Deadlock: each transfer holds one account’s lock while waiting for the other’s.
A different problem is the dining philosophers problem. It’s stated on Wikipedia as:
“Five silent philosophers sit at a round table with bowls of spaghetti. Forks are placed between each pair of adjacent philosophers.
Each philosopher must alternately think and eat. However, a philosopher can only eat spaghetti when they have both left and right forks. Each fork can be held by only one philosopher and so a philosopher can use the fork only if it is not being used by another philosopher. After an individual philosopher finishes eating, they need to put down both forks so that the forks become available to others. A philosopher can take the fork on their right or the one on their left as they become available, but cannot start eating before getting both forks.
Eating is not limited by the remaining amounts of spaghetti or stomach space; an infinite supply and an infinite demand are assumed.”
You can spend quite some time solving this, but a trivial approach such as:
1. grab a fork on your left, if it is available,
2. wait for one on the right,
2a. if it is available: take it and eat,
2b. if after a specific amount of time, there is no fork, place your left back,
3. back off and start again.
might not work. Step 2b avoids the classic Deadlock of everyone holding a left fork forever, but if all philosophers act in lockstep, they endlessly pick forks up and put them back without ever eating. A sketch of this naive approach follows below.
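Here the forks are modeled as NSLocks; the names are illustrative:

import Foundation

let forks = (0..<5).map { _ in NSLock() }

func philosopher(_ i: Int) {
    let left  = forks[i]
    let right = forks[(i + 1) % 5]
    while true {
        left.lock()                                             // 1. grab the left fork
        if right.lock(before: Date().addingTimeInterval(0.1)) { // 2. wait for the right one
            print("Philosopher \(i) eats")                      // 2a. got both: eat
            right.unlock()
            left.unlock()
        } else {
            left.unlock()                                       // 2b. timeout: put the left back
        }
        // 3. back off and start again; if everyone retries in lockstep, nobody ever eats
    }
}

for i in 0..<5 { Thread.detachNewThread { philosopher(i) } }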
Livelock
A special case of a Deadlock is a Livelock. While a deadlocked thread waits forever for one resource to be freed, in a Livelock multiple threads keep reacting to each other’s state changes. The resources constantly change state, the threads constantly respond, and nobody makes any progress.
In real life, a Livelock can occur in a small alley. Two people want to pass each other. Out of politeness, they step aside, but both pick the same side. So they switch to the other side, but as they both do it, they block each other once again. This can continue indefinitely and thus results in a Livelock. You’ve probably experienced this before.
Heavily Contended Locks
Another problem resulting from locks is heavily contended locks. Imagine a toll gate: if cars arrive at the gate faster than it can process them, a traffic jam occurs. The same happens with locks and threads. If a lock is heavily contended and the synchronized section is slow to execute, many threads queue up without being executed. This can hurt your performance.
Thread Starvation
As mentioned before, threads can have different priorities. This is quite useful, as we can ensure that specific tasks are executed as fast as possible. But what happens if we add only a few tasks to a low-priority thread and a lot to a high-priority one? The low-priority thread will starve, as it gets next to no execution time. The result: its tasks are executed very late, or not at all.
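A sketch of how this can look with GCD quality-of-service classes (the system scheduler mitigates this to some degree, so treat it purely as an illustration):

import Foundation

DispatchQueue.global(qos: .background).async {
    print("background task finally ran")     // may be delayed for a long time
}

for _ in 0..<1_000 {
    DispatchQueue.global(qos: .userInteractive).async {
        _ = (0..<100_000).reduce(0, +)       // busy work hogging the cores
    }
}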
Priority Inversion
The above thread starvation gets interesting as soon as we add locking mechanisms. Imagine a low-priority thread (3) which locks a resource. A high-priority thread (1) wants to access this resource, so it has to wait. Add a third thread (2) with a priority between the two, and we have a recipe for catastrophe: since 2’s priority is higher than 3’s, it executes first. If thread 2 is long-running, it takes all the CPU time 3 would need. Suddenly, because 3 can’t run but still blocks 1, thread 2 effectively executes ahead of 1 and starves it, even though 1 has a higher priority than 2.
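A sketch of this constellation with GCD; note that DispatchSemaphore does not boost the priority of the thread currently holding it:

import Foundation

let resource = DispatchSemaphore(value: 1)

// Thread 3 (low priority) acquires the shared resource first.
DispatchQueue.global(qos: .background).async {
    resource.wait()
    _ = (0..<1_000_000).reduce(0, +)      // slow work while holding the resource
    resource.signal()
}

// Thread 1 (high priority) needs the resource and blocks on thread 3.
DispatchQueue.global(qos: .userInteractive).async {
    resource.wait()                       // stuck until the background thread signals
    print("high-priority work")
    resource.signal()
}

// Thread 2 (medium priority, long-running) steals the CPU time thread 3 would need.
DispatchQueue.global(qos: .userInitiated).async {
    while true { _ = (0..<100_000).reduce(0, +) }
}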
Too many threads
Having talked so much about threads, there is one last thing to mention. You probably won’t run into this, but it can happen. Every thread change is a context switch. Remember how we as developers often complain that switching tasks (or being interrupted by people) makes us inefficient? The same happens to the CPU on a context switch: all preloaded instructions are flushed, and for a short while it can’t do any instruction prediction.
So what happens if we switch threads too often? The CPU can’t predict anything anymore and becomes inefficient. It only works on the most current instruction and has to wait for the next one, which results in further overhead.
As a general guideline, try not to use too many threads:
“As many as necessary, as few as possible.”
Swift Caveat
There is one last caveat to look out for. Even if you did everything correctly and have full control over your synchronization, locks, memory operations and threads, the Swift compiler does not guarantee that your code order is preserved. This can result in your synchronization mechanisms not executing in the order you wrote them.
In other words:
“Swift in and of itself is not 100% thread-safe.”
If you want to be sure about your concurrency (e.g. when using AudioUnits) you might want to go back to Objective-C.
Conclusion
As you can see, concurrency is not an easy topic. Quite a lot can go wrong, but at the same time, it can help immensely. As always, the tools we use are only as good as we are as developers. If writing the code takes 100% of your ability, you will be unable to debug it. So choose your tools wisely.
Apple provides some tools for debugging concurrency, such as Activity Groups and Breadcrumbs. Sadly, they are currently not supported in Swift (though there is a wrapper which at least covers activities).