Grand Central Dispatch 101

Apple has this tendency to hide the complex CS underneath its interfaces (i.e. Interface Builder and Core Data/SQLite). Grand Central Dispatch is one of those features that has a lot going on underneath the covers.

It was released back when Mac OS X 10.6 and iOS 4 were the hottest systems, and its primary purpose was (and still is) to make systems more efficient by taking thread management away from the developer and moving it closer to the operating system. It allows the system to organize tasks into smaller blocks and send them to queues. These queues then distribute them to threads using a thread pool pattern.

Admittedly, it makes building native apps a hell of a lot easier. It’s wonderful that, for the most part, we can hand smaller tasks to our application and let it manage the queues and threads in use. However, if you’re anything like me, you want to know what’s going on under the hood.

Explain Like I’m 5:

[Comic from xkcd.com]

I would definitely not go into this much detail when talking to a five-year-old, but here’s the gist for fellow GCD newbies:

When you send jobs for the phone to process, it sets them up in different queues. The jobs you submit are sent to available threads in FIFO (First In, First Out) order, which means the first one to get in line will be the first to be processed. To keep your machine fast, you can’t let your line slow down and clog up. Imagine if Whole Foods had one line and one register, and to make it worse, someone ahead of you has about a month’s worth of groceries. Everyone in line would be throwing their organic asparagus water in frustration, and Whole Foods would be backed up all the way to the charcuterie station.

To make your system faster, Apple created Grand Central Dispatch, which uses a series of built-in queues and gives you the ability to create more queues if the load requires it. The main purpose of Grand Central Dispatch is to distribute jobs in the most efficient way possible.

Threads and Queues

Queues can be viewed as an abstraction on top of threads. If we continue with the Whole Foods/grocery store metaphor, queues are the line you wait in before getting to a register, and threads are the registers themselves. The relationship between queues and threads depends on the type of queue in GCD.

At its most basic, there are two types of queues: Serial and Concurrent.

SERIAL

(Mail kimp?) Serial queues are uncommitted but monogamous. Jobs in a serial queue can be sent to any thread for processing, so a job isn’t committed to the thread it’s on. Jobs are not run concurrently, though: another job will not start until the previous one is done.

To create a serial queue:

dispatch_queue_t myQueue =
    dispatch_queue_create("label", DISPATCH_QUEUE_SERIAL);

The main queue is a special type of serial queue. Like other serial queues, the main queue is monogamous, but unlike other serial queues, it is also married to the main thread. The UI runs on the main queue, so if you submit jobs to the main queue/main thread, you need to make sure they don’t slow down the interface.

To access the main queue:

dispatch_get_main_queue();
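
A common pattern, sketched below (dispatch_async is covered later, and doExpensiveWork and statusLabel are hypothetical names), is to do the heavy lifting on another queue and hop back to the main queue only for the UI update:

dispatch_queue_t workQueue =
    dispatch_queue_create("com.example.work", DISPATCH_QUEUE_CONCURRENT);

dispatch_async(workQueue, ^{
    // Heavy lifting happens off the main thread.
    NSString *result = [self doExpensiveWork];   // hypothetical helper

    dispatch_async(dispatch_get_main_queue(), ^{
        // Only the UI update runs on the main queue/main thread.
        self.statusLabel.text = result;          // hypothetical UILabel
    });
});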

CONCURRENT

As the name implies, concurrent queues run jobs concurrently. If serial queues are the monogamous type, concurrent queues are the “you can’t tie me down” type. They aren’t assigned to any particular thread and can run different blocks at the same time. This means that even though jobs are dequeued in FIFO order, there is no guarantee they will complete in that order. Concurrent queues are most useful for jobs that don’t depend on the results of another block.

To create a concurrent queue:

dispatch_queue_t myOtherQueue =
    dispatch_queue_create("label", DISPATCH_QUEUE_CONCURRENT);

Here’s a rabbit hole: there are also built-in global queues with low, high, background, and default priorities. Let’s save those for GCD 201.
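
If you can’t wait for GCD 201, grabbing one of those built-in global queues looks like this (a quick sketch; these are the classic GCD priority constants):

// The system provides concurrent global queues at four priority levels.
dispatch_queue_t high       = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_queue_t standard   = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_queue_t low        = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
dispatch_queue_t background = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);
// The second argument is reserved; pass 0.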

Submitting jobs

There are a few ways to submit jobs to queues, including options for delayed submission (dispatch_after) and for submitting the same block of code multiple times in parallel (dispatch_apply). From my research, I’ve found that the most commonly used methods for submission are dispatch_async, dispatch_sync, and dispatch_once.
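
For the curious, here’s roughly what those first two look like (a sketch, not a full tour):

// dispatch_after: submit a block to run after a delay (two seconds here).
dispatch_time_t when = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC));
dispatch_after(when, dispatch_get_main_queue(), ^{
    NSLog(@"Two seconds later...");
});

// dispatch_apply: submit the same block several times and wait for all of them.
dispatch_queue_t workQueue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(10, workQueue, ^(size_t i) {
    NSLog(@"Iteration %zu", i);
});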

Asynchronous submissions (using dispatch_async) are distinguished by the fact that the call returns immediately after the job has been added to a queue, whether or not the block has been processed and completed.

To submit an asynchronous block:

dispatch_queue_t myQueue =
    dispatch_queue_create("myQueue", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(myQueue, ^{
    // Your block here
});
// Return occurs immediately after submitting to myQueue.

On the other hand, synchronous submissions (using dispatch_sync) do not return until the job is completed.

dispatch_queue_t someQueue =
    dispatch_queue_create("someQueue", DISPATCH_QUEUE_SERIAL);
dispatch_sync(someQueue, ^{
    // Your block here
});
// Return does not occur until the block is done.

A single dispatch (using dispatch_once) is particularly useful; a common metaphor for this method is a train ticket. It only needs to get punched once, and that status holds no matter what thread you’re on. The syntax is similar to the methods above, but you may notice something special about this one.

static dispatch_once_t onceToken = 0;
dispatch_once(&onceToken, ^{
    // Your block of code here
    // Will only run once, no matter how many threads are trying
    // to run this code.
});

Note the use of “static” and the token. To all you five-year-olds reading this, the simplest explanation I can offer is that dispatch_once checks onceToken for a “done, did run” status. The static keyword (read: the variable is created once and sticks around) means onceToken, of type dispatch_once_t, keeps that status for the lifetime of the app. The first time through, dispatch_once runs your block and punches the token; every call after that sees the punch and skips the block. Think of onceToken as the ticket and your block of code as the person holding it: the ticket is what gets punched, but it’s the person who gets through.
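
The canonical place you’ll see this is lazily creating a shared object, something like the sketch below (SharedManager is a hypothetical class):

+ (instancetype)sharedManager {
    static SharedManager *shared = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        // Runs exactly once, even if many threads call sharedManager at the same time.
        shared = [[SharedManager alloc] init];
    });
    return shared;
}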

Conclusion

I’ve only covered the simplest of ideas in GCD — like everything in programming, it is nuanced and complex in its utility. Serial queues aren’t always one job at a time. Mutability can really start messing with jobs in queues. A series of blogs can be dedicated to just run loops in the main queue. There is far more that can be discussed about GCD, but I’ll have to wait til we’re all six. Maybe seven.

