
Scala’s Execution Contexts: A Deep Dive

An in-depth look into Scala’s execution contexts, their role in concurrency, and best practices for efficient coding

Anas Anjaria
7 min read · Nov 27, 2023

In programming, as in mathematics, clarity often emerges from applying a concept to multiple scenarios: solving diverse problems illuminates the underlying principles.

Scala’s execution context is one such topic that benefits greatly from exploration through diverse examples.

I have delved deeply into the theoretical aspects of Scala’s execution context in my previous article titled “Scala Execution Context — Simply Explained”.

I highly recommend familiarizing yourself with it before diving into this piece; however, a brief recap follows below.

Here, I aim to elucidate this topic further by providing practical examples that illuminate its nuances.

Recap — Scala Execution Context — Simply Explained

Executor service concept. The concept is borrowed from this article. Created by the author using draw.io.

Whenever a Future is created, its block is appended to a task queue. If the thread pool has an idle thread, the block is executed on that thread; otherwise, it waits until a thread from the pool becomes available.
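
As a minimal sketch of the idea above (the pool size and task bodies are illustrative, not taken from the article), creating an execution context from a single-threaded pool and submitting two Future blocks looks like this:

import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

// A pool with one worker thread backs this execution context.
val singleThreadPool = Executors.newFixedThreadPool(1)
implicit val ec: ExecutionContext = ExecutionContext.fromExecutor(singleThreadPool)

// Each Future body becomes a task in the pool's queue; the lone worker
// picks them up one after the other.
val task1 = Future { println("task 1") }
val task2 = Future { println("task 2") } // waits in the queue until the worker is free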

Example program for demonstration — Keeping it simple

To maintain focus on the execution context, I’ll omit the implementation details involving communication with databases (DB).

Instead, I’ll simulate this behavior, assuming a 500ms retrieval time and subsequent business logic execution.

import java.time.LocalDateTime
import scala.concurrent.{ExecutionContext, Future}

// Sample method mimicking data retrieval from DB
private def fetchDataFromDB(implicit executionContext: ExecutionContext): Future[Int] = Future {
  printWithThreadName("Fetching data from DB")
  Thread.sleep(500) // Simulating retrieval time
  1
}

// Simulated business logic execution
private def applyBusinessLogic(n: Int)(implicit executionContext: ExecutionContext): Future[Int] = Future {
  printWithThreadName("Applying business logic")
  Thread.sleep(500) // Simulating business logic execution time
  n * 2
}

// Sample method for printing information with a thread name.
private def printWithThreadName(message: String): Unit = {
  val threadName = Thread.currentThread.getName
  println(s"${LocalDateTime.now} - [$threadName] - $message")
}

Use cases

Please note that the code blocks used in the following use cases only show the relevant parts, omitting unnecessary detail.

You can find the complete code later in this article.
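
For reference, the “Time taken” lines in the outputs below come from a simple timing harness wrapped around the operations. A minimal sketch of such a harness (a hypothetical helper, not the author’s exact code) could look like this:

import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContext, Future}

// The by-name parameter ensures the futures are created only after the clock starts.
def timed[A](ops: => Seq[Future[A]])(implicit ec: ExecutionContext): Unit = {
  val start = System.nanoTime()
  Await.result(Future.sequence(ops), 10.seconds) // block until all operations complete
  println(f"Time taken: ${(System.nanoTime() - start) / 1e9}%.3f seconds")
}

// e.g. timed(Seq(fetchDataFromDB.flatMap(applyBusinessLogic),
//                fetchDataFromDB.flatMap(applyBusinessLogic)))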

Use case 1 — No parallelism at all

In this scenario, we employ an execution context backed by a single-threaded pool to run two concurrent operations:

// ...
val executorService = Executors.newFixedThreadPool(1,
  new ThreadFactoryBuilder().setNameFormat("app-thread-pool-%d").build())
implicit val executionContext: ExecutionContext = ExecutionContext
  .fromExecutor(executorService)

val op1 = fetchDataFromDB.flatMap(applyBusinessLogic)
val op2 = fetchDataFromDB.flatMap(applyBusinessLogic)

// ...

Output

# Increasing timestamps indicating sequential execution

2023-11-23T12:00:25.351 - [app-thread-pool-0] - Fetching data from DB
2023-11-23T12:00:25.854 - [app-thread-pool-0] - Fetching data from DB
2023-11-23T12:00:26.357 - [app-thread-pool-0] - Applying business logic
2023-11-23T12:00:26.864 - [app-thread-pool-0] - Applying business logic
Time taken: 2.018 seconds

Explanation — Upon execution, the output demonstrates synchronous behavior despite leveraging asynchronous constructs like Future.

This is due to the single-threaded nature of the execution context.

Use case 2 — Execute all operations concurrently

Here, by increasing the thread pool size to 2, we witness improved performance through parallelism:

// ...
val executorService = Executors.newFixedThreadPool(2,
  new ThreadFactoryBuilder().setNameFormat("app-thread-pool-%d").build())
implicit val executionContext: ExecutionContext = ExecutionContext
  .fromExecutor(executorService)

val op1 = fetchDataFromDB.flatMap(applyBusinessLogic)
val op2 = fetchDataFromDB.flatMap(applyBusinessLogic)

// ...

Output

# Started concurrently
2023-11-24T06:44:13.295 - [app-thread-pool-0] - Fetching data from DB
2023-11-24T06:44:13.296 - [app-thread-pool-1] - Fetching data from DB
# Started concurrently
2023-11-24T06:44:13.797 - [app-thread-pool-1] - Applying business logic
2023-11-24T06:44:13.797 - [app-thread-pool-0] - Applying business logic
Time taken: 1.005 seconds

Explanation — The program exploits parallelism, fetching data concurrently, leading to a notable reduction in execution time.

Use case 3 — Use dedicated execution contexts for DB and business logic

In this setup, we segregate execution contexts for database communication and business logic evaluation:

// ...
val dbExecutorService = Executors.newFixedThreadPool(2,
  new ThreadFactoryBuilder().setNameFormat("db-thread-pool-%d").build())
val dbExecutionContext: ExecutionContext = ExecutionContext
  .fromExecutor(dbExecutorService)

val executorService = Executors.newFixedThreadPool(2,
  new ThreadFactoryBuilder().setNameFormat("app-thread-pool-%d").build())
implicit val executionContext: ExecutionContext = ExecutionContext
  .fromExecutor(executorService)

// Explicitly specifying execution context for communication with DB.
val op1 = fetchDataFromDB(dbExecutionContext).flatMap(applyBusinessLogic)
val op2 = fetchDataFromDB(dbExecutionContext).flatMap(applyBusinessLogic)

// ...

If you pay close attention to the setup above, you will notice that I explicitly specified dbExecutionContext for database communication, while all other computation uses the implicit execution context, i.e. executionContext.
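
This works because Future.flatMap itself takes an implicit ExecutionContext on which the continuation is scheduled. With the implicits written out explicitly (for illustration only), the first operation is roughly equivalent to:

val op1 = fetchDataFromDB(dbExecutionContext) // Future body runs on the DB pool
  .flatMap(n => applyBusinessLogic(n)(executionContext))(executionContext)
  // both the continuation and the business-logic Future run on the app pool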

Output

# Pay close attention to the thread pool used for each operation.
2023-11-24T06:59:57.209 - [db-thread-pool-0] - Fetching data from DB
2023-11-24T06:59:57.209 - [db-thread-pool-1] - Fetching data from DB
2023-11-24T06:59:57.712 - [app-thread-pool-1] - Applying business logic
2023-11-24T06:59:57.712 - [app-thread-pool-0] - Applying business logic
Time taken: 1.015 seconds

Explanation — Using dedicated execution contexts enables us to execute each computation in its respective thread pool.

Although this configuration is still much faster than sequential execution, it runs slightly slower than Use case 2, possibly because switching between different execution contexts involves some overhead.

NOTE:

My examples are nowhere near as complex as our production system, so I can't derive a rule of thumb from them.

Context-switching overhead is an educated guess that sounds reasonable to me; depending on the workload, this behavior can differ.

Use case 4 — Increase the number of concurrent operations, using only a single execution context

In this setup, more operations are performed than there are threads available in the pool, all on a single execution context:

val executorService = Executors.newFixedThreadPool(2,
  new ThreadFactoryBuilder().setNameFormat("app-thread-pool-%d").build())
implicit val executionContext: ExecutionContext = ExecutionContext
  .fromExecutor(executorService)

val op1 = fetchDataFromDB.flatMap(applyBusinessLogic)
val op2 = fetchDataFromDB.flatMap(applyBusinessLogic)
val op3 = fetchDataFromDB.flatMap(applyBusinessLogic)

Output

# execute concurrently
2023-11-24T07:22:59.700 - [app-thread-pool-0] - Fetching data from DB
2023-11-24T07:22:59.700 - [app-thread-pool-1] - Fetching data from DB
# execute concurrently
2023-11-24T07:23:00.201 - [app-thread-pool-0] - Fetching data from DB
2023-11-24T07:23:00.201 - [app-thread-pool-1] - Applying business logic
# execute concurrently
2023-11-24T07:23:00.703 - [app-thread-pool-0] - Applying business logic
2023-11-24T07:23:00.705 - [app-thread-pool-1] - Applying business logic
Time taken: 1.508 seconds

Explanation — The resulting output illustrates that due to the limited number of threads, only two operations are executed in parallel.

Notably, all DB-related operations are executed first, followed by the business logic, roughly in the following step order:

val op1 = fetchDataFromDB          // Step 1
  .flatMap(applyBusinessLogic)     // Step 2
val op2 = fetchDataFromDB          // Step 1
  .flatMap(applyBusinessLogic)     // Step 3
val op3 = fetchDataFromDB          // Step 2
  .flatMap(applyBusinessLogic)     // Step 3

The execution order is reflected in the output’s timestamps.

Before running the program, my assumption was that op1 and op2 would be executed first, followed by op3 due to their dependencies, roughly as follows:

val op1 = fetchDataFromDB          // Step 1
  .flatMap(applyBusinessLogic)     // Step 2
val op2 = fetchDataFromDB          // Step 1
  .flatMap(applyBusinessLogic)     // Step 2
val op3 = fetchDataFromDB          // Step 3
  .flatMap(applyBusinessLogic)     // Step 4, because it depends on Step 3

The likely explanation is that Futures are eager: all three fetchDataFromDB blocks are enqueued as soon as op1, op2 and op3 are constructed, whereas each flatMap continuation is only enqueued once its fetch completes. With a first-in, first-out task queue, the third fetch is therefore picked up before the pending business-logic steps, which matches the observed behavior.
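
A quick way to convince yourself of this eagerness (an illustrative snippet, not part of the example program): the body of a Future is submitted to the pool at construction time, without anyone awaiting it.

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// The println is enqueued and runs as soon as the val is defined;
// no Await or onComplete is needed to start it.
val eager: Future[Unit] = Future { println("enqueued at construction") }
Thread.sleep(100) // give the pool a moment before the program exits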

Use case 5 — Increasing concurrent operations with dedicated execution contexts

This setup runs the same three operations as Use case 4, but utilizes dedicated execution contexts for database communication and business logic evaluation:

val dbExecutorService = Executors.newFixedThreadPool(2,
  new ThreadFactoryBuilder().setNameFormat("db-thread-pool-%d").build())
val dbExecutionContext: ExecutionContext = ExecutionContext
  .fromExecutor(dbExecutorService)

val executorService = Executors.newFixedThreadPool(2,
  new ThreadFactoryBuilder().setNameFormat("app-thread-pool-%d").build())
implicit val executionContext: ExecutionContext = ExecutionContext
  .fromExecutor(executorService)

val op1 = fetchDataFromDB(dbExecutionContext).flatMap(applyBusinessLogic)
val op2 = fetchDataFromDB(dbExecutionContext).flatMap(applyBusinessLogic)
val op3 = fetchDataFromDB(dbExecutionContext).flatMap(applyBusinessLogic)

Output

# execute concurrently
2023-11-24T07:42:44.021 - [db-thread-pool-0] - Fetching data from DB
2023-11-24T07:42:44.022 - [db-thread-pool-1] - Fetching data from DB

# execute immediately as the thread from the pool is available
2023-11-24T07:42:44.522 - [db-thread-pool-0] - Fetching data from DB

# execute concurrently
2023-11-24T07:42:44.524 - [app-thread-pool-1] - Applying business logic
2023-11-24T07:42:44.524 - [app-thread-pool-0] - Applying business logic

# execute immediately as the thread from the pool is available
2023-11-24T07:42:45.028 - [app-thread-pool-1] - Applying business logic
Time taken: 1.509 seconds

Explanation — The concurrent nature of dedicated execution contexts allows the fetching of data from the DB and the application of business logic to proceed (closely) concurrently.

Notably, the third operation initiates immediately upon the pool’s availability.

Source code

You can find the complete source code here.

Real-world Scenario: Utilizing Dedicated Execution Contexts

While the examples presented earlier served to illustrate various scenarios, the setup in a real production system tends to be more complex and nuanced.

One crucial practice in managing concurrency is the use of dedicated execution contexts tailored for specific components, ensuring better isolation and improved system performance.

In practical terms, consider the case of a Data Access Object (DAO) layer responsible for database interactions and a service layer that interacts with this DAO.

Ideally, these layers should operate with their dedicated execution contexts instead of sharing them.

This approach aids in better managing concurrent tasks, avoiding interference between different layers’ operations, and simplifying debugging efforts.

Let’s delve into a sample implementation:

DAO Layer

class ProductsDAO(
  ...,
  private implicit val executionContext: ExecutionContext
) {

  def getProductById(id: Int): Future[Product] = ???

  // Additional methods...
}

Service Layer

class ProductsService(
  dao: ProductsDAO,
  private implicit val executionContext: ExecutionContext
) {

  def getProductById(id: Int): Future[Product] = {
    // Service layer logic utilizing DAO
    ???
  }

  // Additional methods...
}

In the main entry point of the application, it’s advisable to define and employ dedicated execution contexts for these classes.

Application Entry Point

object MyApp extends App {
  // Basic setup; frameworks like Guice for dependency injection
  // can enhance this further

  // Define dedicated execution context for the database operations
  val dbExecutionContext = ???

  // Define dedicated execution context for the service layer
  val serviceExecutionContext = ???

  val dao = new ProductsDAO(
    ...,
    dbExecutionContext
  )

  val service: ProductsService = new ProductsService(
    dao,
    serviceExecutionContext
  )

  // Further application logic...
}
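
The two ??? placeholders are intentionally left open. One plausible way to define them, reusing the pattern from the earlier use cases and assuming Guava’s ThreadFactoryBuilder (the pool sizes and names here are illustrative assumptions, not a recommendation), is:

import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext
import com.google.common.util.concurrent.ThreadFactoryBuilder

// Hypothetical pool sizes; tune them to your workload.
val dbExecutionContext: ExecutionContext = ExecutionContext.fromExecutor(
  Executors.newFixedThreadPool(4,
    new ThreadFactoryBuilder().setNameFormat("db-thread-pool-%d").build()))

val serviceExecutionContext: ExecutionContext = ExecutionContext.fromExecutor(
  Executors.newFixedThreadPool(2,
    new ThreadFactoryBuilder().setNameFormat("service-thread-pool-%d").build()))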

The use of dedicated execution contexts in such a manner not only aids in managing concurrent tasks effectively but also facilitates scalability, performance optimization, and ease of maintenance within a complex production system.

Conclusion

Exploring Scala’s execution context through practical examples unveils its behavior in concurrent programming.

By leveraging parallelism and separating concerns, developers can optimize system performance while understanding the nuances of context switching overhead.

Thanks for reading! I hope these examples shed light on the topic. Happy coding!


If you enjoy my content, you can subscribe here and get my future work directly in your inbox

https://medium.com/@anasanjaria/subscribe
