Grand Central Dispatch (Swift 3)
Grand Central Dispatch (GCD) is Apple’s technology to optimize application support for systems with multi-core processors and other symmetric multiprocessing systems. It offers great flexibility and many options when trying to achieve concurrency, performance, and parallelism.
Grand Central Dispatch was first introduced in iOS 4 and was written in the C programming language. The GCD coding style was quite close to low-level C syntax and had none of the Swift language’s design features. Swift 3 finally changes this, bringing many improvements to the GCD syntax.
But before going through the Swift code, let’s talk a bit about some specific concepts in order to understand GCD better.
The dominating concept in GCD is the dispatch queue. GCD provides dispatch queues to manage tasks (in the form of blocks of code) that are executed in a FIFO (First In, First Out) pattern, guaranteeing that the task added to a queue first is also the first to start executing. Queues can be either serial or concurrent.
When we create a queue as a serial queue, it can only execute one task at a time. All tasks in the same serial queue respect each other and execute serially. Serial queues are good for managing a shared resource: they guarantee serialized access to that resource and prevent race conditions. The main advantage of serial queues is that tasks are executed in a predictable order.
As the name suggests, concurrent queues allow us to execute multiple tasks in parallel. Tasks start in the order in which they are added to the queue, but their execution occurs concurrently and they don’t have to wait for each other to start.
GCD provides three main types of queues:
The first type is custom queues, which we create ourselves and which can be either serial or concurrent.
The second type is the main queue: this is a special serial queue that is “married” to the application’s main thread, and all tasks submitted to it are performed on that thread. We should always be cautious when assigning tasks to the main queue, because it should remain available to serve the user’s interactions and the UI requirements. For the same reason, any changes we want to apply to the UI must always be done on the main thread.
The last type is global queues: these are concurrent queues that are shared by the whole system. A high-priority global queue should theoretically be faster than a comparable concurrent custom queue (different thread priority). A custom concurrent queue can still be useful, for example when debugging your app, as it makes it easier to identify the individual threads in the debugger.
When setting up queues we can specify a Quality of Service (QoS) class property. This indicates the task’s importance and guides GCD in determining the priority to give to the task. Quality of service replaces the old priority attributes.
There are four primary QoS classes, each corresponding to a level of work importance:
- .userInteractive: work that interacts with the user, such as refreshing the user interface or performing animations; it should run practically instantaneously.
- .userInitiated: work that the user has initiated and is actively waiting for, such as opening a saved document; it should complete within a few seconds.
- .utility: long-running work that doesn’t need an immediate result, such as downloading data (often with a progress indicator); it may take seconds to minutes.
- .background: work that is not visible to the user, such as indexing, synchronizing, or backups; it may take minutes or hours.
In addition to the primary QoS classes, there are two special types of QoS:
- .default: falls between .userInitiated and .utility, and is used when no explicit QoS is assigned.
- .unspecified: represents the absence of QoS information and cues the system to infer an environmental QoS.
Adding Tasks to a Queue
To execute a task, it has to be dispatched to an appropriate dispatch queue. Tasks can be dispatched synchronously or asynchronously, singly or in groups. Once a task is in a queue, the queue becomes responsible for executing it as soon as possible, given its constraints and the tasks already in the queue.
When a block object or function is added to a queue, there is no way to know when that code will be executed. Adding blocks or functions asynchronously lets us schedule the execution of the code and continue to do other work from the calling thread. However, there may still be times when we need to add a task synchronously to prevent race conditions or other synchronization issues.
Now let’s write a bit of code and create a new custom dispatch queue as in the following code:
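A minimal sketch of the Swift 3 initializer; the label and QoS values here are just example choices:

```swift
import Dispatch

// Create a custom serial queue; the label and QoS are illustrative values.
let myQueue = DispatchQueue(label: "com.example.myQueue", qos: .utility)

myQueue.async {
    print("running on the custom serial queue")
}
```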
Above we passed the queue label and a quality of service attribute. This is a serial queue.
Now let’s modify the initialisation of the previous queue to create a concurrent queue.
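A sketch of the same initializer with the attributes parameter added (label and QoS remain example values):

```swift
import Dispatch

// The .concurrent attribute turns the custom queue into a concurrent one.
let myQueue = DispatchQueue(label: "com.example.myQueue",
                            qos: .utility,
                            attributes: .concurrent)
```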
When the attributes parameter is present with the “.concurrent” value, all tasks of the specific queue can be executed simultaneously. If we don’t use this parameter, then the queue is a serial one.
If we asynchronously dispatch multiple tasks to the previous two queues (you can easily test this in a playground), we can see the different execution results:
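A sketch of such a test (the loop bounds are arbitrary): on a serial queue the second closure starts only after the first finishes, while on a concurrent queue the output of the two loops interleaves.

```swift
import Dispatch

let myQueue = DispatchQueue(label: "com.example.myQueue",
                            qos: .utility,
                            attributes: .concurrent)

// Two tasks dispatched asynchronously to the same queue.
myQueue.async {
    for i in 0..<5 { print("first task:", i) }
}
myQueue.async {
    for i in 100..<105 { print("second task:", i) }
}
```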
Note that changing the QoS class affects the execution of the tasks as well. However, as long as the queue is initialised as a concurrent one, the parallel execution of the tasks is preserved.
To show how to access the main and the global queues, consider a very common use of GCD: performing work on a global background queue (for example, fetching data from an API) and updating the UI (to show that data) on the main queue as soon as the work is done. Here is the code:
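A sketch of this pattern; fetchDataFromAPI() and updateUI(with:) are hypothetical placeholders for your own networking and UI code:

```swift
import Dispatch

// Hypothetical helpers standing in for real networking and UI code.
func fetchDataFromAPI() -> [String] { return ["some", "data"] }
func updateUI(with data: [String]) { print("updating UI with \(data)") }

// Heavy work on a background global queue, then back to the main queue
// for the UI update.
DispatchQueue.global(qos: .userInitiated).async {
    let data = fetchDataFromAPI()
    DispatchQueue.main.async {
        updateUI(with: data)
    }
}
```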
A bit more…
- Dispatch queues are thread-safe, which means that we can access them from multiple threads simultaneously.
- Do not call the sync function (dispatch_sync in the old syntax) from a task that is executing on the same serial queue that you pass to your function call. Doing so will deadlock the queue. With sync, the function does not return until the block has finished, blocking the current thread. So the new task cannot start until the current task finishes; but the current task is waiting for the new task to finish before continuing. Here is an example:
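A minimal sketch of the deadlock (the queue label is illustrative; calling sync on the main queue from the main thread deadlocks for the same reason):

```swift
import Dispatch

let serialQueue = DispatchQueue(label: "com.example.serial")

serialQueue.async {
    // sync blocks the current task until the closure below finishes,
    // but on a serial queue the closure cannot start until the
    // current task finishes: a deadlock.
    serialQueue.sync {
        print("this line is never reached")
    }
}
```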
- Delaying the Execution: Sometimes the workflow requires us to delay the execution of a task. GCD allows us to do that by calling the asyncAfter(deadline:qos:flags:execute:) function, which accepts four parameters. Because the second and third parameters have default values, we can usually use the shorter version asyncAfter(deadline:) and just pass it the deadline time (the current time, .now(), plus the additional delay).
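A sketch using the short form with a two-second delay (the delay value is arbitrary):

```swift
import Dispatch

// .now() is the current time; adding 2.0 yields a deadline 2 seconds away.
let deadline = DispatchTime.now() + 2.0

DispatchQueue.main.asyncAfter(deadline: deadline) {
    print("executed after a 2-second delay")
}
```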
- Waiting on Groups of Queued Tasks: Dispatch groups are a way to block a thread until one or more tasks finish executing. We can group together multiple tasks and either wait for them to complete or be notified once they are complete.
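A sketch of the notification-based variant (the queue choice and the work inside each task are illustrative):

```swift
import Dispatch

let group = DispatchGroup()
let queue = DispatchQueue.global(qos: .utility)

// Associate each task with the group when dispatching it.
queue.async(group: group) { print("first task done") }
queue.async(group: group) { print("second task done") }

// Called once, on the main queue, after every grouped task has finished.
group.notify(queue: .main) {
    print("all tasks finished")
}
```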
- Dispatch Semaphore: If tasks need to access a finite resource, we can use a DispatchSemaphore to regulate the number of tasks simultaneously accessing that resource. When resources are available, a GCD semaphore is faster than a traditional system semaphore because it does not call down into the kernel.
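A sketch that limits concurrent access to two tasks at a time (the counts are arbitrary):

```swift
import Dispatch

// A semaphore initialised with 2 lets at most two tasks proceed at once.
let semaphore = DispatchSemaphore(value: 2)
let queue = DispatchQueue.global(qos: .utility)

for i in 0..<4 {
    queue.async {
        semaphore.wait()        // decrements the count; blocks when at zero
        print("task \(i) is using the resource")
        semaphore.signal()      // increments the count; wakes a waiting task
    }
}
```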
- Here you can find more details about Dispatch Queues.
Thanks for reading! I hope you find this article useful.
Get in touch on Twitter: stefanofrosoni