Parallel Programming with Swift — Part 1/4

Concurrency, Parallelism and Dispatch Queues

Aaina jain
Swift India
11 min read · Sep 21, 2018



Whenever we hear “parallel programming”, two confusing terms come to mind: concurrency and parallelism. Let’s first see their role in real life:

Concurrency: Our day-to-day multitasking. We often “multitask”, but we really work on one task at a time; what looks like doing several things at once is actually context switching. It has its limits, too: we can handle a hundred small tasks in a day, but we can’t scale that up to a thousand.

Parallelism: If we had more physical resources, two or more tasks could be done at the same time. If I had four arms, this article could be finished in half the time.

Let’s have a look at both terms in the computer world:

Concurrency:

Concurrency means that an application is making progress on more than one task at the same time (concurrently). If the computer only has one CPU, then the application may not make progress on more than one task at exactly the same time, but more than one task is being processed at a time inside the application. It doesn’t completely finish one task before it begins the next.

Concurrency applies whenever we talk about at least two tasks. When an application is capable of executing two tasks virtually at the same time, we call it a concurrent application. Although the tasks appear to run simultaneously, they may not. They take advantage of the operating system’s CPU time-slicing feature, where each task runs part of its work and then goes into a waiting state.

When the first task is in a waiting state, the CPU is assigned to the second task to complete its part of the work. The operating system assigns the CPU and other computing resources (e.g. memory) based on task priority, turn by turn, giving every task a chance to complete. To the end user, all tasks appear to run in parallel.

Complexity in Concurrency

Let’s assume 5 friends have moved into a house and each has a bed to make. Which is the more complex way to structure this?

  • 5 people assembling one bed at the same time, or
  • each person assembling their own bed

Think about how to write instructions for several of your friends on how to assemble a bed together, without holding each other up or having to wait for tools. They would need to coordinate their actions at the right time to assemble the parts of a bed into a finished bed. Those instructions would be really complicated, hard to write and probably hard to read, too.

With each person building their own bed, the instructions are very simple and no one has to wait for other people to finish or for tools to be available.


There is a talk by Rob Pike called “Concurrency is not Parallelism”.

Parallelism

Parallelism doesn’t require two tasks to exist. It physically runs parts of a task, or multiple tasks, at the same time using the multi-core infrastructure of the CPU, by assigning one core to each task or sub-task. Parallelism requires hardware with multiple processing units. On a single-core CPU you may get concurrency, but not parallelism.

Concurrency is the composition of independently executing processes, while parallelism is the simultaneous execution of computations. Parallelism means that an application splits its tasks up into smaller subtasks which can be processed in parallel, for instance on multiple CPUs at the exact same time.

Concurrency is about dealing with lots of things at once; its focus is on structure. Parallelism is about doing lots of things at once; its focus is on execution.

  • An application can be concurrent but not parallel: it makes progress on more than one task over the same period, but no two tasks execute at the same instant.
  • An application can be parallel but not concurrent: it processes multiple sub-tasks of a single task on a multi-core CPU at the same time.
  • An application can be neither parallel nor concurrent: it processes all tasks one at a time, sequentially.
  • An application can be both parallel and concurrent: it processes multiple tasks concurrently on a multi-core CPU at the same time.

One of the greatest improvements in CPU technology is the ability to contain multiple cores and therefore run multiple threads, serving more than one task at any given moment. In iOS, there are two main ways to achieve concurrency: Grand Central Dispatch and OperationQueue.

Both topics are vast, so I am dividing this article into 4 parts:

Concurrency & GCD — Parallel Programming with Swift — Part 1/4

GCD — Parallel Programming with Swift — Part 2/4

Agenda:

Grand Central Dispatch

  1. Synchronous and Asynchronous Execution
  2. Serial and Concurrent Queues
  3. System-Provided Queues
  4. Custom Queues
  5. References

Grand Central Dispatch:

Grand Central Dispatch (GCD) is a queue-based API that lets you execute closures on worker pools in first-in, first-out order. The completion order depends on the duration of each work item.

A dispatch queue executes tasks either serially or concurrently, but always dequeues them in FIFO order. An application submits tasks to a queue in the form of blocks, either synchronously or asynchronously. The dispatch queue executes each block on a thread pool provided by the system. There is no guarantee which thread a submitted task will execute on.

The GCD API saw a few changes in Swift 3: SE-0088 modernized its design and made it more object-oriented.

The framework makes it easy to execute code concurrently on multi-core systems by submitting tasks to dispatch queues managed by the system.

Synchronous and Asynchronous Execution

Each task (work item) can be executed either synchronously or asynchronously. With synchronous execution, the caller waits for the task to finish before starting the next one. With asynchronous execution, the method call returns immediately and the next task can start making progress.

Serial and Concurrent Queues

A dispatch queue can be either serial, so that work items are executed one at a time, or it can be concurrent so that work items are dequeued in order, but run all at once and can finish in any order. Both serial and concurrent queues process work items in first in, first-out (FIFO) order.

  • Serial Queue

Let’s have a look at an example of dispatching a task to the main queue asynchronously.
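A minimal runnable sketch of that experiment (the labels are illustrative; the RunLoop pump at the end is only needed in a command-line program — in an app, the main run loop is already spinning):

```swift
import Foundation

var order: [String] = []

order.append("start")
// Enqueue a block on the main queue; it will not run until the
// main run loop next drains the queue.
DispatchQueue.main.async {
    order.append("main.async block")
}
for i in 1...3 {
    order.append("loop \(i)")
}
order.append("end")

// A command-line program must pump the main run loop itself so the
// enqueued block gets a chance to execute.
RunLoop.main.run(until: Date().addingTimeInterval(0.2))
print(order)
```

The for loop and the final append run first; the block dispatched to the main queue runs only afterwards, when the queue is drained.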

Because the task is dispatched asynchronously, the for loop on the current queue executes first, then the task submitted to the dispatch queue, and then the next task.

While doing this experiment, I came to know that we can’t use DispatchQueue.main.sync from the main queue:

Attempting to synchronously execute a work item on the main queue, from the main queue, results in deadlock.

Do not call the dispatch_sync function from a task that is executing on the same queue that you pass to your function call. Doing so will deadlock the queue. If you need to dispatch to the current queue, do so asynchronously using the dispatch_async function.

— Apple Documentation

Given that the main queue is a serial queue (which means it uses only one thread), the following statement:

DispatchQueue.main.sync {}

will cause the following events:

  1. sync queues the block in the main queue.
  2. sync blocks the thread of the main queue until the block finishes executing.
  3. sync waits forever because the thread where the block is supposed to run is blocked.

The key to understanding this is that DispatchQueue.main.sync does not execute blocks, it only queues them. Execution will happen on a future iteration of the run loop.
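The deadlock only occurs when you sync-dispatch to the queue you are already running on; from a background queue, a sync dispatch to the main queue is legal. A hedged sketch (the RunLoop pumping and the 2-second bail-out are assumptions for a command-line context):

```swift
import Foundation

// DispatchQueue.main.sync { }  // never do this from the main thread: deadlock

var message = ""
let done = DispatchSemaphore(value: 0)

DispatchQueue.global().async {
    // From a background thread, sync-dispatching to the main queue is fine;
    // the block runs on the main thread while this thread waits.
    DispatchQueue.main.sync {
        message = "ran on main thread: \(Thread.isMainThread)"
    }
    done.signal()
}

// Pump the main run loop until the background task signals completion
// (with a bail-out so the demo cannot hang forever).
let bailOut = Date().addingTimeInterval(2)
while done.wait(timeout: .now()) == .timedOut && Date() < bailOut {
    RunLoop.main.run(until: Date().addingTimeInterval(0.05))
}
print(message)
```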

As an optimization, this function invokes the block on the current thread when possible.

  • Concurrent Queue:

A global concurrent queue can be obtained by passing a QoS class to this method:

DispatchQueue.global(qos: .default)

System-Provided Queues

When an app launches, the system automatically creates a special queue called the main queue. Work items enqueued to the main queue execute serially on your app’s main thread. The main queue can be accessed via DispatchQueue.main.

Apart from the main queue, the system provides several global concurrent queues. When sending tasks to a global concurrent queue, you specify a Quality of Service (QoS) class.

Primary QoS:

  • User-interactive: This represents tasks that must complete immediately in order to provide a nice user experience. Use it for UI updates, event handling and small workloads. The total amount of work done in this class during the execution of your app should be small. This should run on the main thread.
  • User-initiated: The user initiates these asynchronous tasks from the UI. Use them when the user is waiting for immediate results and for tasks required to continue user interaction. They execute in the high priority global queue.
  • Utility: This represents long-running tasks. Use it for computations, I/O, networking, continuous data feeds and similar tasks. This class is designed to be energy efficient. Utility tasks typically have a progress bar that is visible to the user. This will get mapped into the low priority global queue. Work to be performed takes a few seconds to a few minutes.
  • Background: This represents tasks that the user is not directly aware of such as prefetching, backup. This will get mapped into the background priority global queue. It’s useful for work that takes significant time, such as minutes or hours.

Special QoS:

  • Default: The priority level of this QoS falls between user-initiated and utility. Work that has no QoS information assigned is treated as default, and the GCD global queue runs at this level.
  • Unspecified: This represents the absence of QoS information and cues the system that an environmental QoS should be inferred. Threads can have an unspecified QoS if they use legacy APIs that may opt the thread out of QoS.
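As a sketch, the global queues for each primary QoS class can be obtained and exercised like this. Completion order is deliberately not checked: QoS is a scheduling hint, not an ordering guarantee.

```swift
import Foundation

// One global queue per primary QoS class, highest to lowest priority.
let classes: [DispatchQoS.QoSClass] = [.userInteractive, .userInitiated,
                                       .default, .utility, .background]

let group = DispatchGroup()
let lock = NSLock()           // protects `finished` across worker threads
var finished: Set<String> = []

for qos in classes {
    DispatchQueue.global(qos: qos).async(group: group) {
        lock.lock()
        finished.insert("\(qos)")
        lock.unlock()
    }
}

// Higher-QoS tasks tend to be scheduled sooner, but all we can assert
// deterministically is that every task ran.
group.wait()
print(finished.sorted())
```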

Global Concurrent Queues:

In the past, GCD provided high-, default-, low-, and background-priority global concurrent queues for prioritizing work. The corresponding QoS classes should now be used in place of these queues.

Custom Queues

GCD provides three types of queues: the main queue, global queues, and custom queues.

There are three init methods available to us when creating our own queues:

  • DispatchQueue(label: "queueName")
  • DispatchQueue(label: "queueName", attributes: {attributes})
  • DispatchQueue(label: "queueName", qos: {QoS class}, attributes: {attributes}, autoreleaseFrequency: {autoreleaseFrequency}, target: {queue})

The first initializer will implicitly create a serial queue.

The attributes parameter in the 2nd and 3rd initializers refers to DispatchQueue.Attributes, an option set with two options: .concurrent, which we can use to create a concurrent queue, and .initiallyInactive, which allows us to create inactive queues. Inactive queues can be modified until they are activated, and will not begin executing enqueued work items until activate() is called. autoreleaseFrequency refers to DispatchQueue.AutoreleaseFrequency.
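A sketch of all three initializers, including an .initiallyInactive queue that only starts running work after activate() (the queue labels are illustrative):

```swift
import Foundation

// A bare label creates a serial queue.
let serial = DispatchQueue(label: "com.example.serial")

// .concurrent lets the queue run work items at the same time.
let concurrent = DispatchQueue(label: "com.example.concurrent",
                               attributes: .concurrent)

// .initiallyInactive: work can be enqueued, but nothing runs until activate().
let inactive = DispatchQueue(label: "com.example.inactive",
                             qos: .utility,
                             attributes: .initiallyInactive)

var ran = false
inactive.async { ran = true }

Thread.sleep(forTimeInterval: 0.1)   // give it a chance (it won't run)
let ranBeforeActivate = ran          // still false: the queue is inactive

inactive.activate()
inactive.sync { }                    // returns only after the async block above
let ranAfterActivate = ran           // true

serial.sync { }                      // the other two queues work immediately
concurrent.sync { }

print(ranBeforeActivate, ranAfterActivate)   // false true
```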

.inherit: Dispatch queues with this autorelease frequency inherit the behavior from their target queue. This is the default behavior for manually created queues.

.workItem: Dispatch queues with this autorelease frequency push and pop an autorelease pool around the execution of every block submitted to them asynchronously. When a queue uses the per-work-item autorelease frequency (either directly or inherited from its target queue), any block submitted asynchronously to the queue (via async(), .barrier, .notify(), etc.) is executed as if surrounded by an individual autorelease pool. Autorelease frequency has no effect on blocks submitted synchronously to a queue (via sync(), .barrier).

.never: Dispatch queues with this autorelease frequency never set up an individual autorelease pool around the execution of a block that is submitted to it asynchronously. This is the behavior of the global concurrent queues.

Let’s see how these queues work synchronously or asynchronously:

  • Serial Queue executing task asynchronously
Serial queue + async

When you use async in GCD, the task runs on a different thread (other than the main thread). Async means: execute the next line immediately, don’t wait for the block to finish, so neither the main thread nor the main queue is blocked. Since it’s a serial queue, tasks execute in the order they are added, one at a time.
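A minimal sketch (the queue label is illustrative; the DispatchGroup is only there so the program can wait for the asynchronous work before printing):

```swift
import Foundation

let queue = DispatchQueue(label: "com.example.serial")  // serial by default
let group = DispatchGroup()
var log: [String] = []

for i in 1...3 {
    queue.async(group: group) {
        // Only the serial queue touches `log` until the wait below.
        log.append("task \(i)")
    }
}

print("all three enqueued; the caller did not wait")

group.wait()   // block until the serial queue has drained
print(log)     // always ["task 1", "task 2", "task 3"] — FIFO, one at a time
```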

  • Serial Queue executing task synchronously
Serial queue + sync

When you use sync in GCD, the task may run on the main thread. Sync runs a block on the given queue and waits for it to complete, which blocks the calling thread or queue. Since the main thread would otherwise sit idle while waiting, as an optimization GCD may execute the dispatched block directly on the main thread. So code dispatched to a background queue may actually execute on the main thread. Since it’s a serial queue, tasks execute in the order they are added (FIFO).
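A sketch of this case (label illustrative). Each sync call blocks the caller, so the "before / task / after" pattern is strictly interleaved; the Thread.isMainThread check shows which thread GCD actually chose:

```swift
import Foundation

let queue = DispatchQueue(label: "com.example.serial")
var log: [String] = []

for i in 1...3 {
    log.append("before \(i)")
    queue.sync {
        // The caller is blocked until this completes; as an optimization,
        // GCD may run the block right on the calling (here: main) thread.
        log.append("task \(i), on main thread: \(Thread.isMainThread)")
    }
    log.append("after \(i)")
}
print(log)
```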

  • Concurrent Queue executing task asynchronously
Concurrent queue + async

When you use async in GCD, the task runs on another thread, and the next line executes without waiting for the block, so the main thread isn’t blocked. In a concurrent queue, tasks are dequeued in the order they were added, but they run on multiple threads attached to the queue, so they are not guaranteed to finish in that order. The order differs from run to run because threads are assigned by the system, and tasks execute in parallel.
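A sketch (label illustrative; the NSLock protects the shared array because several threads append to it at once):

```swift
import Foundation

let queue = DispatchQueue(label: "com.example.concurrent",
                          attributes: .concurrent)
let group = DispatchGroup()
let lock = NSLock()
var finishOrder: [Int] = []

for i in 1...5 {
    queue.async(group: group) {
        // Dequeued in FIFO order, but run on several threads at once,
        // so completion order varies from run to run.
        lock.lock()
        finishOrder.append(i)
        lock.unlock()
    }
}

group.wait()
print(finishOrder)   // some permutation of [1, 2, 3, 4, 5]
```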

  • Concurrent Queue executing task synchronously
Concurrent queue + sync

When you use sync in GCD, the task may run on the main thread, for the same reason as above: while the caller waits, GCD may execute the block directly on the calling thread. Although a concurrent queue normally lets tasks finish out of order, with synchronous submission the caller waits for each block before submitting the next, so tasks both start and finish in the order they are added, even if different threads process them. In effect, it behaves like a serial queue.
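A sketch showing how sync submission serializes even a concurrent queue (label illustrative):

```swift
import Foundation

let queue = DispatchQueue(label: "com.example.concurrent",
                          attributes: .concurrent)
var log: [Int] = []

for i in 1...3 {
    // Even on a concurrent queue, sync makes the caller wait for each
    // block before submitting the next, so execution degenerates to
    // serial order.
    queue.sync {
        log.append(i)
    }
}
print(log)   // always [1, 2, 3]
```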

asyncAfter:

If you want to execute a task on a queue after some delay, provide the delay to asyncAfter() instead of calling sleep().
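A sketch of the difference: the block below is scheduled roughly 0.3 seconds out, and the current thread stays free in the meantime, whereas sleep() would stall it. (The semaphore is only there so this command-line demo doesn’t exit before the block fires.)

```swift
import Foundation

let start = Date()
let done = DispatchSemaphore(value: 0)
var elapsed: TimeInterval = 0

// Schedule the block ~0.3 s from now without blocking this thread.
DispatchQueue.global().asyncAfter(deadline: .now() + 0.3) {
    elapsed = Date().timeIntervalSince(start)
    done.signal()
}

print("scheduled; the current thread is not blocked")
done.wait()                         // demo only: keep the process alive
print("fired after \(elapsed) s")
```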

Thanks for reading the article. If you have any doubts, please add them in the comment section below.

You can catch me at:

Linkedin: Aaina Jain

Twitter: __aainajain
