Concurrency in iOS, Part 1
What is the meaning of concurrency?
Simply put, it refers to performing multiple tasks at the same time. It enables your app to execute multiple operations together, such as fetching data from a server while processing a large amount of data, without blocking the main (user interface) thread.
Multithreading in Swift allows work to be spread across many threads. A CPU core executes one operation at a time; with multithreading, the system rapidly switches the core between threads, so several tasks can make progress at the same time.
The first time you face the concurrency concept, you may also see the term “parallelism”. Let's discuss the difference between concurrency and parallelism.
If we have two tasks to perform:
In concurrency, one processor or core handles the two tasks by switching between them.
In parallelism, two or more processors or cores run the tasks at the same time.
By increasing the number of cores, a single chip could execute more CPU instructions per cycle without increasing its speed, size, or thermal output.
Apple abstracts the cores behind a higher-level concept called a thread: an independent path of execution that the system schedules onto the available cores.
We have two types of threads in iOS :
Main thread: it is responsible for handling user interface updates, responding to user interactions, and executing tasks related to the UI. Blocking the main thread with long-running work leads to a frozen or unresponsive UI and a poor user experience.
Background threads: these are crucial for performing non-UI work, such as network requests, data processing, or complex computations. By executing these tasks on background threads, the main thread remains free to handle UI updates and keep the interface responsive.
Background threads can be managed and controlled using concurrency frameworks such as Grand Central Dispatch (GCD) or OperationQueue.
Before going deep into Grand Central Dispatch (GCD) or OperationQueue, we must know why we use them: above all, to keep the UI responsive. Other reasons to use concurrency include performing independent tasks simultaneously and making full use of the hardware.
Grand Central Dispatch
GCD stands for Grand Central Dispatch. It is a powerful concurrency framework provided by Apple in Swift and Objective-C to simplify concurrent programming.
DispatchQueue
GCD allows you to perform tasks concurrently and manage their execution in an efficient, organized manner. Its main component is DispatchQueue, an abstraction over threads: you submit tasks to a queue, and the system runs them on threads it manages for you.
There are two types of DispatchQueue:
Serial queue: tasks enter and leave the queue in order (first in, first out), one at a time. If we have two tasks, they are performed serially: the first one must finish before the second one starts.
let queue = DispatchQueue(label: "com.queue.serial")
Concurrent queue: tasks still start in the order they were added, but they run at the same time and do not have to finish in order. No task has to wait for the others to complete.
let queue = DispatchQueue(label: "com.queue.concurrent", attributes: .concurrent)
There are two techniques for dispatching work onto a DispatchQueue, be it serial or concurrent:
Synchronous: starting a task will block the calling thread until the task is finished
override func viewDidLoad() {
    super.viewDidLoad()
    queue.sync {
        upload()
    }
}

func upload() {
    for i in 0...9999999 {
        print(i)
    }
}
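One consequence of sync is worth calling out. The following is a minimal sketch of a classic mistake, not something to ship: dispatching synchronously onto the queue you are already running on makes that thread wait for itself, which is a guaranteed deadlock.

```swift
import Foundation

// ⚠️ Deadlock: the main thread blocks waiting for this closure,
// but the closure can only run once the main thread is free.
DispatchQueue.main.sync {
    print("never reached when called from the main thread")
}
```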
Asynchronous: starting a task returns immediately on the calling thread without blocking.
override func viewDidLoad() {
    super.viewDidLoad()
    queue.async { [weak self] in
        guard let self = self else { return }
        self.upload()
    }
}

func upload() {
    for i in 0...9999999 {
        print(i)
    }
}
Synchronous vs. asynchronous and concurrent vs. serial are two separate concepts. Synchronous vs. asynchronous is about when the caller can continue. Concurrent vs. serial is about when the dispatched task can run.
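A minimal sketch of that distinction (the queue labels are illustrative): the same tasks behave differently depending only on the queue type and the dispatch technique.

```swift
import Foundation

let serial = DispatchQueue(label: "com.demo.serial")
let concurrent = DispatchQueue(label: "com.demo.concurrent", attributes: .concurrent)

// Serial + async: the caller returns immediately,
// but "A" is guaranteed to finish before "B" starts.
serial.async { print("A") }
serial.async { print("B") }

// Concurrent + async: the caller returns immediately,
// and "C" and "D" may run at the same time, in any order.
concurrent.async { print("C") }
concurrent.async { print("D") }

// Serial + sync: the caller is blocked until "E" is printed.
serial.sync { print("E") }
```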
A popular mistake
override func viewDidLoad() {
    super.viewDidLoad()
    queue.async { [weak self] in
        guard let self = self else { return }
        self.upload()
    }
}

func upload() {
    var counter = 0
    for i in 0...9999999 {
        counter += i
    }
    label.text = "\(counter)"
}
Here we are accessing a UI element from a background thread, which is not allowed: UIKit is not thread-safe, so this code can crash or behave unpredictably. To solve this, you must access UI elements such as labels on the main thread only.
override func viewDidLoad() {
    super.viewDidLoad()
    queue.async { [weak self] in
        guard let self = self else { return }
        self.upload()
    }
}

func upload() {
    var counter = 0
    for i in 0...9999999 {
        counter += i
    }
    DispatchQueue.main.async { [weak self] in
        guard let self = self else { return }
        self.label.text = "\(counter)"
    }
}
If the tasks on a background queue are very important, you can give that queue a higher priority so its work executes sooner, still without blocking the main thread. This is where QoS comes in.
What is QoS?
QoS stands for quality of service. With QoS you set the priority with which the tasks in a queue are executed.
Highest priorities, in descending order:
//1 userInteractive
let userInteractiveQueue = DispatchQueue(label: "com.queue.serial", qos: .userInteractive)
//2 userInitiated
let userInitiatedQueue = DispatchQueue(label: "com.queue.serial", qos: .userInitiated)
//3 utility
let utilityQueue = DispatchQueue(label: "com.queue.serial", qos: .utility)
Lowest priority:
let backgroundQueue = DispatchQueue(label: "com.queue.serial", qos: .background)
Default priority :
let defaultQueue1 = DispatchQueue(label: "com.queue.serial")
//OR
let defaultQueue2 = DispatchQueue.global()
When you create a custom queue without specifying a target, its work ultimately executes on one of the system's global concurrent queues; and if you don't specify a QoS, it gets the default priority, the same as DispatchQueue.global().
Target queue
Say we have multiple queues in the app. We can redirect the execution of their tasks to one specific queue, called the target queue.
Giving the target queue a QoS affects every queue that targets it: a queue with no QoS of its own inherits the target's QoS, a queue whose QoS is lower than the target's is raised to the target's, and a queue whose QoS is already higher keeps its own.
This point may be a little confusing, so let's look at an actual example.
let targetQueue = DispatchQueue(label: "com.test.targetQueue", qos: .utility)
let queue1 = DispatchQueue(label: "com.test.queue1", target: targetQueue)
let queue2 = DispatchQueue(label: "com.test.queue2", qos: .background, target: targetQueue)
let queue3 = DispatchQueue(label: "com.test.queue3", qos: .userInteractive, target: targetQueue)
targetQueue.async {
    print(DispatchQoS.QoSClass(rawValue: qos_class_self()) ?? .unspecified)
    print(Thread.current)
}
queue1.async {
    print(DispatchQoS.QoSClass(rawValue: qos_class_self()) ?? .unspecified)
    print(Thread.current)
}
queue2.async {
    print(DispatchQoS.QoSClass(rawValue: qos_class_self()) ?? .unspecified)
    print(Thread.current)
}
queue3.async {
    print(DispatchQoS.QoSClass(rawValue: qos_class_self()) ?? .unspecified)
    print(Thread.current)
}
The result: targetQueue and queue1 print the utility class, queue2 prints utility as well (raised from background), and queue3 keeps userInteractive.
This result resolves the confusion.
If I have a movies app or a music app and I want to download 20 films or songs, how much time will they take?
A lot of time, as anyone expects. Why do these downloads take so long? Because they all use a shared resource, the network: its bandwidth gets divided across the 20 downloads. To control access to a shared resource like this, we use a signaling mechanism called DispatchSemaphore.
DispatchSemaphores
DispatchSemaphores are signaling mechanisms, commonly used to control access to a shared resource. They let us control concurrency in our app by limiting execution to “n” tasks at a time.
Internally, a semaphore keeps a counter: wait() decrements it and blocks the caller when no capacity is left, and signal() increments it, letting one waiting task proceed.
Stop..Wait..Signal this is the job of the Semaphores.
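To make the counting concrete before the download example, here is a minimal sketch (the label is illustrative): a semaphore created with value 2 lets two tasks through immediately, and the third blocks in wait() until one of them signals.

```swift
import Foundation

let semaphore = DispatchSemaphore(value: 2)
let queue = DispatchQueue(label: "com.demo.semaphore", attributes: .concurrent)

for task in 1...3 {
    queue.async {
        semaphore.wait()        // take a slot; the 3rd task blocks here
        print("task \(task) started")
        sleep(1)                // simulate work
        semaphore.signal()      // free the slot; a blocked task resumes
    }
}
```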
First, we define the maximum number of downloads allowed to run at once.
let maxCurrent = 3
Then we initialize the semaphore with maxCurrent as its starting value.
let semaphores = DispatchSemaphore(value: maxCurrent)
Then we will create a concurrent queue on which to run the downloads.
let downloadQueue = DispatchQueue(label: "com.downloadQueue.concurrent", attributes: .concurrent)
Let’s perform our task.
for i in 0..<20 {
    downloadQueue.async {
        // 1 Stop and wait for a free slot
        self.semaphores.wait()
        // 2 Perform the task
        self.download(i + 1)
        // 3 When one of the three downloads ends, another can start right away
        self.semaphores.signal()
        DispatchQueue.main.async {
            // UI changes
        }
    }
}

func download(_ int: Int) {
    var counter = 0
    for _ in 0..<10000 {
        counter += 1
    }
}
Inside the downloadQueue block, let's trace the behavior of the semaphore.
First, the semaphore stops the execution and asks: “Is there an empty place among the three places we declared?” If not, it waits until one of the downloads ends.
self.semaphores.wait()
If yes, the download runs, and when it finishes it signals so another one can start.
self.semaphores.signal()
With semaphores, we limited how many tasks can execute at once. What about grouping a number of tasks?
DispatchGroups
DispatchGroups are a way of grouping operations across queues or tasks. It doesn't matter which of them starts or ends first; what matters is that once all of them have ended, the DispatchGroup tells us so, and then we can execute a block of code.
Let's see the functionality of DispatchGroup. First, we create a group.
var group = DispatchGroup()
Before starting any task we must enter it into the group
group.enter()
After it ends it must leave the group
group.leave()
To avoid forgetting to call the leave method at the end of the task's scope, we use defer.
defer{group.leave()}
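A minimal sketch of why defer matters here (fetchCachedData is a hypothetical helper simulating a cache miss): the deferred leave runs on every exit path, so even an early return keeps the group's enter/leave count balanced.

```swift
import Foundation

let group = DispatchGroup()
let queue = DispatchQueue(label: "com.demo.group", attributes: .concurrent)

// Hypothetical helper: simulates a cache miss.
func fetchCachedData() -> Data? { nil }

group.enter()
queue.async {
    defer { group.leave() }    // runs on every exit path, even the early return below
    guard let data = fetchCachedData() else { return }
    print("processing \(data.count) bytes")
}
```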
After all tasks have ended, the DispatchGroup notifies us that the group of operations is done, and we can then execute any code, whether it depends on the group or not.
group.notify(queue: .main) {
    // code
}
This is an example that explains all about DispatchGroup.
let concurrentQueue = DispatchQueue(label: "com.queue.concurrent", attributes: .concurrent)

// enter the group
group.enter()
concurrentQueue.async {
    // leave the group when this task ends
    defer { group.leave() }
    for i in 0...100 {
        print(i)
    }
    print("1 Done")
}

// enter the group
group.enter()
concurrentQueue.async {
    // leave the group when this task ends
    defer { group.leave() }
    for i in 100...200 {
        print(i)
    }
    print("2 Done")
}

// runs only after both tasks have left the group
group.notify(queue: .main) {
    print("All Done")
}
Until this moment, we have not met a scenario that requires canceling a task, but what if we do?
DispatchWorkItem
After a long day of work you get hungry, so on your way home you call a restaurant to order some food, and the order is confirmed. On arrival you find your mom preparing lunch for you, so you call the restaurant back: “Sorry, I don't need the order anymore.” They stop working on the order and reply “OK.” The order is canceled.
This example mirrors the mechanism of DispatchWorkItem and its cancellation mechanism.
DispatchWorkItem: It represents a unit of work that can be executed on a dispatch queue. It encapsulates the task or code block that needs to be executed.
Let's code and see this practically.
First, we will create a workItem.
var workItem : DispatchWorkItem!
Then we implement our task.
workItem = DispatchWorkItem {
    for i in 0...100 {
        print(i)
        sleep(1)
    }
}
After this, we create a queue on which to execute the work item.
let queue = DispatchQueue(label: "com.queue.workItem", attributes: .concurrent)
Then call the queue to execute the task
queue.async(execute: workItem)
After completing the task workItem will notify us that it is completed.
workItem.notify(queue: .main) {
    print("workitem is completed")
}
This code prints the numbers from 0 to 100, pausing one second between each, then prints “workitem is completed”.
To cancel a task, you call the cancel function on the workItem.
queue.asyncAfter(deadline: .now() + 3) {
    self.workItem.cancel()
}
You would expect the task to be canceled after 3 seconds, but surprisingly it keeps running.
Calling cancel() only marks the work item as cancelled (its isCancelled property becomes true); it does not abort a task that has already started executing.
Let’s see how to deal with this situation.
We will check the isCancelled property on every iteration of the loop in our example, or in general at appropriate points inside the work item.
workItem = DispatchWorkItem {
    for i in 0...100 {
        if self.workItem.isCancelled {
            print("Cancelled")
            break
        } else {
            print(i)
            sleep(1)
        }
    }
}
The output: the first few numbers are printed, one per second, and once cancel() is called after 3 seconds the next iteration detects the flag and prints “Cancelled”.
It works…🥳
Conclusion
Concurrency allows for the execution of multiple tasks simultaneously without blocking the main thread.
Multithreading in Swift enables tasks to run concurrently by utilizing multiple cores or processors.
The Main thread handles UI updates, while background threads are used for non-UI operations to maintain a responsive user interface.
Grand Central Dispatch (GCD) provides a concurrency framework with DispatchQueue, allowing for organized and efficient management of concurrent execution.
DispatchSemaphores provide signaling mechanisms to control access to shared resources and manage concurrency.
DispatchGroups allow for grouping operations and executing a block of code after all operations have been completed.
DispatchSemaphores are used to limit concurrent access to shared resources, while DispatchGroups are used to track the completion of multiple operations.
DispatchSemaphores are based on the concepts of stopping and waiting, while DispatchGroups are based on grouping and notification.
DispatchWorkItem allows for encapsulating a task or code block to be executed.
Cancellation of a DispatchWorkItem is done by calling the cancel() method on the work item. To handle cancellation effectively, check the isCancelled property of the work item at appropriate points within the task and take the necessary actions. Proper handling of cancellation ensures graceful termination of the task and prevents unnecessary execution of the remaining code.