Dispatchers - IO and Default Under the Hood

Sahil Thakar
9 min read · Apr 21, 2024


Hey folks,

We’ve explored the inner workings of Kotlin’s flow and some basic concepts, but we’ve never really dived into coroutine dispatchers. So let’s pull back the curtain and take a closer look at this topic.

First off, let’s clarify what “dispatcher” means in plain English: it’s someone who’s in charge of sending people or vehicles where they need to go, often in emergencies.

In Kotlin coroutines, a dispatcher is the part of the coroutine context that determines which thread (or thread pool) a coroutine runs on. Let’s break it down and see how this impacts our code. 🚦✨
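To see a dispatcher in action, here’s a minimal sketch (assuming kotlinx.coroutines is on the classpath) that prints which thread each coroutine actually lands on:

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    // runBlocking runs on the thread that called it (typically "main").
    println(Thread.currentThread().name)

    // Explicitly dispatching to Default moves the work to a shared
    // pool thread such as "DefaultDispatcher-worker-1".
    launch(Dispatchers.Default) {
        println(Thread.currentThread().name)
    }
}
```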

Dispatchers.Default:-

  • Dispatchers.Default is a pre-defined dispatcher in Kotlin coroutines.
  • If you don't assign a specific dispatcher to a coroutine scope, it defaults to using Dispatchers.Default. This ensures that your coroutines have a standard place to run even when you don't explicitly set one.

Note:-

runBlocking sets its own dispatcher if no other one is set; so, inside it, Dispatchers.Default is not the one that is chosen automatically.

For viewModelScope, it is Dispatchers.Main (more precisely, Dispatchers.Main.immediate).
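You can verify the runBlocking note with a tiny sketch (assuming kotlinx.coroutines): a child launch with no explicit dispatcher inherits runBlocking’s own event loop and stays on the calling thread, rather than moving to Dispatchers.Default:

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    launch { // no dispatcher given: inherits runBlocking's event loop
        // Prints the calling thread's name (e.g. "main"),
        // not a DefaultDispatcher worker thread.
        println(Thread.currentThread().name)
    }
}
```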

  • It’s designed for CPU-intensive tasks. It uses a thread pool with a count equivalent to the number of CPU cores in your machine, with a minimum of 2 threads. This setup is theoretically optimal for efficient thread utilization.
  • So if you have an 8-core machine and you’re using the Dispatchers.Default, you can run up to 8 parallel processes, not more. But how can we confirm that Dispatchers.Default uses a thread count based on your CPU cores?
  • Let’s check the code to understand this better. Let’s say I’ve got a 12-core processor, but that doesn’t guarantee that the Dispatchers.Default will use 12 threads. When it’s initialized, it first checks how many processors are available, and then allocates threads accordingly.
  • Enough chit-chat. Let’s dive into the code! 🔍💻

import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

suspend fun main(): Unit = coroutineScope {
    println(Runtime.getRuntime().availableProcessors())

    launch {
        printCoroutinesTime(Dispatchers.Default)
    }
}

private suspend fun printCoroutinesTime(
    dispatcher: CoroutineDispatcher
) {
    // Measure how long 10 blocking tasks take on the given dispatcher.
    val test = measureTimeMillis {
        coroutineScope {
            repeat(10) {
                launch(dispatcher) {
                    Thread.sleep(1000)
                }
            }
        }
    }
    println("#1 $dispatcher took: $test")
}

Output:-

10
#1 Dispatchers.Default took: 1010

You can see from the output that we have 10 processors available, so we can run 10 tasks in parallel, finishing them all within a second.

But what happens if we try to run more tasks than that?

What’s the result?

import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

suspend fun main(): Unit = coroutineScope {
    println(Runtime.getRuntime().availableProcessors())

    launch {
        printCoroutinesTime(Dispatchers.Default)
    }
}

private suspend fun printCoroutinesTime(
    dispatcher: CoroutineDispatcher
) {
    // 12 tasks on a 10-thread pool: the last 2 must wait for a free thread.
    val test = measureTimeMillis {
        coroutineScope {
            repeat(12) {
                launch(dispatcher) {
                    Thread.sleep(1000)
                }
            }
        }
    }
    println("#1 $dispatcher took: $test")
}

Output:-

10
#1 Dispatchers.Default took: 2009

In this case, the output takes more than a second because the system has to wait for the first 10 tasks to finish before it can process the remaining 2 tasks.

If you take a peek at the Dispatchers.Default source code, you’ll notice that there’s a specified maximum and minimum limit for thread allocation.

This is why the execution time increases when the number of tasks exceeds the number of available threads.

You can’t effectively raise limitedParallelism on Dispatchers.Default beyond the number of CPU cores, and the pool itself has a minimum of 2 threads.

But,

We're going to talk more about the limitedParallelism method in a bit, so no worries!

Again, Dispatchers.Default is meant for CPU-intensive tasks only, not for blocking ones.

Now, let’s say a heavy task comes along and starts hogging all the threads from Dispatchers.Default, starving other coroutines using the same dispatcher. How can we avoid this thread-blocking disaster?

  • No need to stress — Kotlin coroutines have our back! Here’s what we can do to prevent this situation.

limitedParallelism:-

This approach allows you to set a limit on the number of threads allocated to a specific process. It essentially sets a boundary, preventing a single process from monopolizing too many threads.

Let’s take a look at the code to see how it’s done:

import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

suspend fun main(): Unit = coroutineScope {
    println(Runtime.getRuntime().availableProcessors())

    launch {
        // Cap this dispatcher at 6 of the Default pool's threads.
        val dispatcher = Dispatchers.Default
            .limitedParallelism(6)
        printCoroutinesTime(dispatcher)
    }
}

private suspend fun printCoroutinesTime(
    dispatcher: CoroutineDispatcher
) {
    val test = measureTimeMillis {
        coroutineScope {
            repeat(7) {
                launch(dispatcher) {
                    Thread.sleep(1000)
                }
            }
        }
    }
    println("#1 $dispatcher took: $test")
}

Output:-
10
#1 LimitedDispatcher@26f3706c took: 2015

Here’s the example to help you understand — it’s pretty straightforward.

But let me break it down anyway. Say you have 10 CPU cores available, but you set a limit of 6. This means no more than 6 processes can run at once, so the 7th process has to wait until one of the earlier ones finishes. That’s why it ends up taking more than a second to complete. The limited parallelism sets a cap to manage resource use.

Note:-

limitedParallelism is a whole different concept in Dispatchers.Default and Dispatchers.IO. On Dispatchers.Default it helps you set a limit, but the limit should be less than your core count. If you set a limit higher than the core count, the extra headroom is simply ignored and the actual core count applies instead. You can verify this in the Dispatchers.Default source code.

Let’s take a quick peek at that.

  • We’ll use the same example to see how it works:

import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

suspend fun main(): Unit = coroutineScope {
    println(Runtime.getRuntime().availableProcessors())
    launch {
        // A limit above the core count has no extra effect on Default.
        val dispatcher = Dispatchers.Default
            .limitedParallelism(11)
        printCoroutinesTime(dispatcher)
    }
}

private suspend fun printCoroutinesTime(
    dispatcher: CoroutineDispatcher
) {
    val test = measureTimeMillis {
        coroutineScope {
            repeat(11) {
                launch(dispatcher) {
                    Thread.sleep(1000)
                }
            }
        }
    }
    println("#1 $dispatcher took: $test")
}

Output:-

10
#1 LimitedDispatcher@491a53c6 took: 2011

Explanation:-

  • Here, we tried to set a limit that was higher than the available CPU cores, so the effective limit stayed at the core count and the task still took longer than a second. Also, take note that it didn’t return Dispatchers.Default itself, but a LimitedDispatcher. If you check the code for limitedParallelism, you'll see it returns a new CoroutineDispatcher with the specified thread limit.
  • When we apply limitedParallelism to Dispatchers.Default or any other dispatcher, it creates a new dispatcher with the additional limit, but it's still bound by the constraints of the original dispatcher.
  • Feel free to experiment with these examples and try making some changes to see how they affect the outcome. It’s a great way to learn!

There are a few important points to remember about the Default Dispatcher:-

Why should the Dispatchers.Default be used for CPU-intensive tasks and not for blocking operations?

  • The Dispatchers.Default is fine-tuned to handle CPU-bound tasks efficiently. It does a great job with operations that require processing power.
  • The Dispatchers.Default has a set number of threads. If you use it for blocking operations — tasks that might take a long time — those threads get tied up, potentially starving other coroutines that also need to run. This can lead to significant delays.
  • If blocking operations tie up these threads for longer than expected, other coroutines can be delayed badly, leading to timeouts and instability in your app. This is why it’s best to avoid using the Dispatchers.Default for anything that might block threads for a significant period.
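To make the starvation risk concrete, here’s a small sketch (the 500 ms figure is arbitrary; exact timings will vary by machine): once blocking calls occupy every Dispatchers.Default worker, even a trivial task has to wait for a thread to free up:

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    val cores = Runtime.getRuntime().availableProcessors()
    val start = System.currentTimeMillis()
    coroutineScope {
        // Occupy every Default worker thread with a blocking call.
        repeat(cores) {
            launch(Dispatchers.Default) { Thread.sleep(500) }
        }
        // This tiny task is starved: it typically only starts once one
        // of the blocking calls above finishes and frees a thread.
        launch(Dispatchers.Default) {
            println("small task waited ${System.currentTimeMillis() - start} ms")
        }
    }
}
```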

But Father, what if I want to do a blocking I/O operation? You said we can’t do it with Dispatchers.Default, so how can we do that?

Son, that’s where Dispatchers.IO comes into the picture.

Dispatchers.IO:-

  • Dispatchers.IO is designed for handling blocking I/O operations like reading files, making network requests, or accessing databases. Its parallelism limit defaults to 64 threads (or the number of cores, if that’s larger), providing plenty of bandwidth for concurrent I/O tasks.
  • In the code example below, the task takes around 1 second because Dispatchers.IO can support over 50 active threads running simultaneously. This high capacity for parallel execution makes it ideal for I/O operations, ensuring that long-running tasks don't hold up other coroutines.
  • With this setup, you can run multiple blocking I/O tasks without worrying about starving other coroutines or causing performance bottlenecks. It’s a good fit for operations that need to wait for external resources, but keep in mind that it’s not intended for CPU-intensive work.

import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

suspend fun main(): Unit = coroutineScope {
    launch {
        val dispatcher = Dispatchers.IO
        printCoroutinesTime(dispatcher)
    }
}

private suspend fun printCoroutinesTime(
    dispatcher: CoroutineDispatcher
) {
    // 50 blocking tasks fit within IO's 64-thread limit, so they run at once.
    val test = measureTimeMillis {
        coroutineScope {
            repeat(50) {
                launch(dispatcher) {
                    Thread.sleep(1000)
                }
            }
        }
    }
    println("#1 $dispatcher took: $test")
}


Output:-
#1 Dispatchers.IO took: 1013

Explanation:-

As you guys can see, all 50 operations happened within a single second.

When you dig into the internals of the IO Dispatcher, you’ll find that it uses the UnlimitedIoScheduler to allocate threads.

The underlying pool is conceptually unlimited in size, but Dispatchers.IO itself starts with a cap of 64 threads. We’ll revisit this topic later.

Now, here’s something crucial to know:

  • Default and IO Dispatchers share a common thread pool. This optimization allows threads to be reused, and dispatching is often not required. If you’re running a task on the Dispatchers.Default and then switch to the IO Dispatcher, it’s likely that it’ll stay on the same thread. The key difference is that the thread count now applies to the IO Dispatcher’s limit instead of the Dispatchers.Default’s limit. Their limits operate independently, so one won’t starve the other.
  • Let’s explore this with an example to see how it plays out in practice:

import kotlinx.coroutines.*

suspend fun main(): Unit = coroutineScope {
    launch(Dispatchers.Default) {
        println(Thread.currentThread().name)
        withContext(Dispatchers.IO) {
            // Often the same worker thread: the pool is shared.
            println(Thread.currentThread().name)
        }
    }
}

Output:-
DefaultDispatcher-worker-1
DefaultDispatcher-worker-1

Explanation:-

  • As you guys can see, it did stay on the same thread.
  • To see this more clearly, imagine that you use both Dispatchers.Default and Dispatchers.IO to the maximum. As a result, your number of active threads will be the sum of their limits. If you allow 64 threads in Dispatchers.IO and you have 8 cores, you will have 72 active threads in the shared pool. This means we have efficient thread reuse and both dispatchers have strong independence.

The only problem arises when such functions block too many threads. Dispatchers.IO is limited to 64, so one service that massively blocks threads might make all the others wait for their turn. To help us deal with this, we again use limitedParallelism.

IO dispatcher with a custom pool of threads

Dispatchers.IO has very different behavior than Dispatchers.Default. With Dispatchers.IO, limitedParallelism works in both directions: you can set a limit below 64 or above 64, because the underlying pool of threads is unlimited.

Also, a value greater than 64 has nothing to do with Dispatchers.IO’s own limit: it produces a whole new dispatcher with its own limit, independent of Dispatchers.IO.

Let’s look at this via an example:-


import kotlinx.coroutines.*
import kotlin.system.measureTimeMillis

suspend fun main(): Unit = coroutineScope {
    launch {
        // Unlike Default, IO allows raising the limit above 64.
        val dispatcher = Dispatchers.IO
            .limitedParallelism(100)
        printCoroutinesTime(dispatcher)
    }
}

private suspend fun printCoroutinesTime(
    dispatcher: CoroutineDispatcher
) {
    val test = measureTimeMillis {
        coroutineScope {
            repeat(100) {
                launch(dispatcher) {
                    Thread.sleep(1000)
                }
            }
        }
    }
    println("#1 $dispatcher took: $test")
}

Output:-
#1 LimitedDispatcher@32a1006c took: 1017

Explanation:-

  • Here you can see in the output that it created a new LimitedDispatcher with a limit of 100 and performed 100 operations in about a second.

Conceptually, there is an unlimited pool of threads, that is used by Dispatchers.Default and Dispatchers.IO, but each of them has limited access to its threads. When we use limitedParallelism on Dispatchers.IO, we create a new dispatcher with an independent pool of threads (completely independent of Dispatchers.IO limit). If we use limitedParallelism on Dispatchers.Default or any other dispatcher, we create a dispatcher with an additional limit, that is still limited just like the original dispatcher.

  • limitedParallelism used on Dispatchers.Default makes a dispatcher with an additional limit. Using limitedParallelism on Dispatchers.IO makes a dispatcher independent of Dispatchers.IO. However, they all share the same unlimited pool of threads.
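A common way to apply this in practice is a sketch like the following (the service names and limits here are hypothetical, and on older kotlinx.coroutines versions limitedParallelism may need @OptIn(ExperimentalCoroutinesApi::class)): carve out an independent slice per blocking service, so no single service can drain the shared pool for the others:

```kotlin
import kotlinx.coroutines.*

// Hypothetical per-service slices carved out of the shared pool.
val databaseDispatcher = Dispatchers.IO.limitedParallelism(4)
val fileDispatcher = Dispatchers.IO.limitedParallelism(8)

fun main() = runBlocking {
    repeat(20) {
        // Database calls can occupy at most 4 threads at a time...
        launch(databaseDispatcher) { Thread.sleep(100) }
        // ...while file I/O keeps its own independent budget of 8.
        launch(fileDispatcher) { Thread.sleep(100) }
    }
}
```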

There are a few things that you should remember about the IO Dispatcher:-

  • Dispatchers.IO should be used for I/O operations only.
  • Dispatchers.IO should not be used for CPU-intensive operations, because it is intended for blocking operations, and some other process might block all its threads.

Yeah, that’s it for this class.

Did you guys enjoy it?

Try it out, and drop a question in the comments if you guys have any.

Follow me here on Medium for more interesting topics.

Follow me on LinkedIn as well to grow our developer network.

Will see you guys soon with some new amazing topics.


Sahil Thakar

I'm Sahil, an Android developer with 4 years of experience in Java/Kotlin. I've built apps with millions of downloads, like Punch, Woovly and Pratilipi.