Multithreading issues and countermeasures
There are three main issues that can occur in a concurrent environment, where many threads try to access and modify the same section of code or data (the critical section). They can lead to ambiguous, unexpected results and crashes that are very hard to reproduce and debug in production code.
Let's look at the three most common issues in the sections below, along with coding practices and measures to prevent them.
Priority Inversion:
As the name suggests, priority inversion is a scenario in which a low-priority thread gets the lock on a critical section, and a high-priority thread has to wait for the low-priority thread's signal. Now suppose 100 more threads with a medium quality of service are also trying to get access to the shared resource. The high-priority thread then has to wait behind all of them and is starved of CPU time; this is called thread starvation.
It's better to use semaphores only among threads of the same priority.
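A minimal sketch of the scenario above: a low-QoS queue takes a DispatchSemaphore first, and a high-QoS queue has to wait for its signal. Because a semaphore carries no ownership information, the system cannot boost the low-priority thread, which is exactly why the advice is to use semaphores only among threads of the same priority. The timings and QoS classes here are illustrative assumptions, not from the original.

```swift
import Dispatch
import Foundation

let semaphore = DispatchSemaphore(value: 1)
let group = DispatchGroup()

// Low-priority thread takes the lock first and holds it for a while.
DispatchQueue.global(qos: .background).async(group: group) {
    semaphore.wait()
    Thread.sleep(forTimeInterval: 0.1) // long critical section
    semaphore.signal()
}

Thread.sleep(forTimeInterval: 0.01) // let the background task grab the lock

// High-priority thread now blocks on the same semaphore: inversion.
DispatchQueue.global(qos: .userInteractive).async(group: group) {
    semaphore.wait() // waits for the low-priority thread's signal
    semaphore.signal()
}

group.wait()
print("both tasks finished")
```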
Deadlock refers to a condition in which two or more processes or threads are each waiting for another to release a resource, or more than two are waiting for resources in a circular chain.
Deadlock is a common problem in multiprocessing, where many processes share a mutually exclusive resource known as a lock.
Let's look at some examples and common scenarios in which deadlock can occur:
- Suppose there are two resources, A and B, and two threads, one with high priority and one with low priority. Both threads need both resources to finish their tasks.
- Say we take two semaphores to protect resources A and B, so that once one thread accesses a resource, the other cannot enter until the former thread signals the unlock.
- Now Thread 1, with high priority, locks Resource A, and Thread 2 locks Resource B. Thread 2 needs access to Resource A to finish its task, and only then can it release the lock on Resource B. Similarly, Thread 1 holds Resource A and waits for Resource B, which is locked by Thread 2. This is a classic case of deadlock.
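The circular wait above can be sketched with two NSLocks. This version takes the locks in one global order (A before B) on every thread, which breaks the circular wait, so it cannot deadlock; the comments show the inverted ordering that would. The function and label names are illustrative assumptions.

```swift
import Dispatch
import Foundation

// Resources A and B from the scenario above, each guarded by a lock.
let lockA = NSLock()
let lockB = NSLock()
let group = DispatchGroup()

// Every thread acquires A first, then B: one consistent global order,
// so no circular wait is possible.
func useBothResources(id: Int) {
    DispatchQueue.global().async(group: group) {
        lockA.lock() // both threads take A first...
        lockB.lock() // ...then B
        print("thread \(id) holds A and B")
        lockB.unlock()
        lockA.unlock()
    }
}

// If thread 1 took A then B while thread 2 took B then A, each could
// end up waiting on the lock the other holds, forever: the classic deadlock.
useBothResources(id: 1)
useBothResources(id: 2)
group.wait()
```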
DeadLock In Synchronisation:
Never call sync on the same queue you are already on, especially the main queue, because sync returns control to the current queue only after the task is finished: it blocks the queue and waits until the task completes.
See the screenshot below:
Here global().sync blocks the current thread, which is the main thread. Inside it, main.sync blocks the current (global) queue and tries to get access to the main queue, which was already blocked by the outer global().sync call. The main queue is released only after the outer task completes, but that task depends on the inner block, which in turn requires the main thread, so neither can ever make progress.
In short, never use the main queue synchronously, except from inside an async block on a global queue.
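The same trap can be sketched as a script. A private serial queue stands in for the main queue here (my assumption, so the example can run outside an app); the behaviour is the same, since sync-ing onto any serial queue from a block already running on it deadlocks.

```swift
import Dispatch
import Foundation

// A private serial queue stands in for the main queue in this sketch.
let mainLike = DispatchQueue(label: "main.stand-in")
var result = ""

// DEADLOCK: sync onto a queue from a block already running on it.
// mainLike.sync {
//     mainLike.sync { } // waits forever for the outer block to finish
// }

// Fine: the caller is a background thread, not mainLike itself, so the
// serial queue is free to run the inner sync block.
let done = DispatchSemaphore(value: 0)
DispatchQueue.global().async {
    mainLike.sync { // safe: we are not on mainLike here
        result = "ran on serial queue"
    }
    done.signal()
}
done.wait()
print(result)
```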
Race condition — A race condition happens when two or more threads access shared data and change its value at the same time.
Suppose we have a bank account with a balance of 1000, and transactions can be made both at an ATM and online. If the user withdraws 700 with his debit card and at the same time pays 600 online for some goods, the two transactions can interleave and leave the balance in an inconsistent state (for example, both succeed and the account goes negative). To avoid this, synchronisation techniques like semaphores and mutex locks can be used.
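The bank example can be sketched with a DispatchSemaphore used as a mutex: the check-then-subtract sequence becomes atomic, so at most one of the two racing withdrawals can succeed. The Account type and its method names are hypothetical, for illustration only.

```swift
import Dispatch
import Foundation

// Hypothetical account type, used only for illustration.
final class Account {
    private var balance = 1000
    private let lock = DispatchSemaphore(value: 1) // binary semaphore as mutex

    // The semaphore makes the balance check and the subtraction atomic,
    // so two concurrent withdrawals cannot both see the old balance.
    @discardableResult
    func withdraw(_ amount: Int) -> Bool {
        lock.wait()
        defer { lock.signal() }
        guard balance >= amount else { return false }
        balance -= amount
        return true
    }

    var current: Int {
        lock.wait()
        defer { lock.signal() }
        return balance
    }
}

let account = Account()
let group = DispatchGroup()
// The ATM withdrawal and the online payment race each other.
DispatchQueue.global().async(group: group) { account.withdraw(700) }
DispatchQueue.global().async(group: group) { account.withdraw(600) }
group.wait()
// Exactly one withdrawal succeeds; the balance is 300 or 400, never negative.
print(account.current)
```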
How to avoid threading issues in a concurrent or multithreaded environment:
- Better design that avoids sharing resources among multiple threads.
- Using locks such as NSLock or DispatchSemaphore, though these can lead to priority inversion or deadlock if not designed and coded correctly.
- Using serial dispatch queues.
- Using the barrier flag with a concurrent queue. When dispatching a code block to a concurrent queue, you can assign a flag to it indicating that it is a barrier task, meaning that when it is time to execute this task, it should be the only executing item on the specified queue.
Race Condition: A race condition happens when a critical section is modified at the same time by two threads, which leads to a crash or an invalid state of the system.
DeadLock: Deadlock happens when two threads each need a resource held by the other before they can release their own, and neither can proceed because each is waiting for the other.
Priority Inversion: Priority inversion is a phenomenon in which a low-priority thread locks the critical section, and a high-priority thread can execute only after the low-priority thread unlocks it.