The Future.await(s) for no one

Idan Koch
Nov 17 · 4 min read

Ever wondered what happens to a Scala Future if you don’t wait for it to resolve? This post is a lesson learned from a production issue I ran into a while ago while writing a new service. Long story short: not waiting for a Future to resolve can lead to an OutOfMemoryError. Side note: if you’ve read any of my previous posts, you know I like to keep it short; this post will be no different.

To recreate the issue, let’s build a simplified example. We start by defining a small program that performs a simple, quick operation called performAction. All performAction does is print “Hello world”. performAction is wrapped in a Future and called over and over again.
Furthermore, we define a thread pool with a core size of 2 and a max size of 10 that will handle the Future execution.

import java.util.concurrent._
import scala.concurrent.{ExecutionContext, Future}

val executor = new ThreadPoolExecutor(
  2,                     // corePoolSize
  10,                    // maxPoolSize
  60L, TimeUnit.SECONDS, // keepAliveTime for threads above the core size
  new LinkedBlockingQueue[Runnable]()
)

implicit val ec: ExecutionContext = ExecutionContext.fromExecutor(executor)

def performAction()(implicit ec: ExecutionContext): Future[Unit] =
  Future {
    println("Hello world")
  }

while (true) {
  performAction()
}

When running the code in the REPL, the console starts filling up with “Hello world” prints. After a short period an OutOfMemoryError is thrown. But why?

Console printout example:

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "main"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "pool-1-thread-1"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "pool-1-thread-2"

Side note: you can speed up the process by lowering the Scala REPL memory settings, like so:

env JAVA_OPTS="-Xms100m -Xmx100m -XX:+PrintGCDetails -Xloggc:gclog.log" scala

It seems that only 2 threads are running even though the maximum thread pool size is 10. Adding debug lines confirms this:

println(s"heapFreeSize: ${Runtime.getRuntime.freeMemory}")
println(s"getPoolSize: ${executor.getPoolSize}")
println(s"getQueueSize: ${executor.getQueue.size()}")

Sample output:
getQueueSize: 666698
heapFreeSize: 1092544
getPoolSize: 2
getQueueSize: 667516
heapFreeSize: 990720
getPoolSize: 2
getQueueSize: 667793
heapFreeSize: 990720
getPoolSize: 2
getQueueSize: 668054
heapFreeSize: 886224
getPoolSize: 2

The printout indicates that free memory decreases over time, no big whoop there. The executor queue keeps growing as a result of submitting the Future tasks to it. This is a direct result of defining performAction as asynchronous: the loop doesn’t wait for the print to occur, and new tasks are created faster than the threads in the thread pool can handle them. This explains the OutOfMemoryError, but why are only 2 threads used? The answer is found in the documentation:

When a new task is submitted in method {@link #execute(Runnable)}, and fewer than corePoolSize threads are running, a new thread is created to handle the request, even if other worker threads are idle. If there are more than corePoolSize but less than maximumPoolSize threads running, a new thread will be created only if the queue is full. By setting corePoolSize and maximumPoolSize the same, you create a fixed-size thread pool. By setting maximumPoolSize to an essentially unbounded value such as {@code Integer.MAX_VALUE}, you allow the pool to accommodate an arbitrary number of concurrent tasks. Most typically, core and maximum pool sizes are set only upon construction, but they may also be changed dynamically using {@link #setCorePoolSize} and {@link #setMaximumPoolSize}.
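This rule can be demonstrated directly. In the hedged sketch below (task counts and sleep times are illustrative), the same pool configuration stays at 2 threads with an unbounded LinkedBlockingQueue, but grows past the core size when a SynchronousQueue forces hand-offs:

```scala
import java.util.concurrent._

// Unbounded queue: the pool never grows past corePoolSize,
// because the queue is never "full".
val unbounded = new ThreadPoolExecutor(
  2, 10, 60L, TimeUnit.SECONDS,
  new LinkedBlockingQueue[Runnable]()
)
for (_ <- 1 to 20) unbounded.execute(() => Thread.sleep(50))
val unboundedPool = unbounded.getPoolSize // stays at 2

// SynchronousQueue has no capacity, so a submission that finds no
// idle thread creates a new one, up to maximumPoolSize.
val handoff = new ThreadPoolExecutor(
  2, 10, 60L, TimeUnit.SECONDS,
  new SynchronousQueue[Runnable]()
)
for (_ <- 1 to 10) handoff.execute(() => Thread.sleep(50))
val handoffPool = handoff.getPoolSize // grows toward 10

// Let queued tasks drain so the JVM can exit.
unbounded.shutdown()
handoff.shutdown()
```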

Meaning: with LinkedBlockingQueue’s default capacity (which is Integer.MAX_VALUE), new threads beyond the core size are never created, because you will run out of memory long before the queue is full. Setting the core pool size equal to the max pool size is advised (worst case, you have a few idle threads). Unfortunately, this only delays the problem: after a while even all the threads combined cannot keep up with the pace. Throwing more threads at the problem won’t help either, since your machine has its limits too (each thread created takes up memory). Another solution is to cap the queue size. However, this introduces a new problem: once the queue is full, tasks get rejected and you start seeing a RejectedExecutionException like this:

java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@27605b87 rejected from java.util.concurrent.ThreadPoolExecutor@67b8d45[Running, pool size = 1000, active threads = 1, queued tasks = 0, completed tasks = 1244]
at java.base/java.util.concurrent.ThreadPoolExecutor.reject(
at java.base/java.util.concurrent.ThreadPoolExecutor.execute(
at scala.concurrent.impl.ExecutionContextImpl.execute(ExecutionContextImpl.scala:20)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:68)
at scala.concurrent.impl.Promise$KeptPromise$Kept.onComplete(Promise.scala:368)
at scala.concurrent.impl.Promise$KeptPromise$Kept.onComplete$(Promise.scala:367)
at scala.concurrent.impl.Promise$KeptPromise$Successful.onComplete(Promise.scala:375)
at scala.concurrent.impl.Promise.transform(Promise.scala:29)
at scala.concurrent.impl.Promise.transform$(Promise.scala:27)
at scala.concurrent.impl.Promise$KeptPromise$Successful.transform(Promise.scala:375)
at scala.concurrent.impl.Promise$KeptPromise$
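For reference, a minimal sketch of such a capped configuration (the sizes here are illustrative, not a recommendation):

```scala
import java.util.concurrent._

// Fixed-size pool (core == max) with a bounded queue: once 100 tasks
// are pending, further submissions throw RejectedExecutionException.
val capped = new ThreadPoolExecutor(
  10, 10,               // core == max: fixed-size pool
  0L, TimeUnit.SECONDS, // keepAliveTime is moot when core == max
  new LinkedBlockingQueue[Runnable](100) // bounded queue
)
```

With core equal to max the pool is fixed-size, and the bounded queue turns unbounded memory growth into explicit rejections that the caller must handle.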

Which leads to the obvious solution: waiting for the Future to resolve:

import scala.concurrent.Await
import scala.concurrent.duration._

Await.result(performAction(), 1.second)
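Putting it together, a minimal sketch of the corrected loop (using the global ExecutionContext here for brevity, and a bounded loop so the snippet terminates; the original uses while (true)):

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

implicit val ec: ExecutionContext = ExecutionContext.global

def performAction()(implicit ec: ExecutionContext): Future[Unit] =
  Future { println("Hello world") }

// Blocking on each Future caps pending work at a single task,
// so the executor queue can no longer grow without bound.
for (_ <- 1 to 3)
  Await.result(performAction(), 1.second)
```

Blocking on every task defeats some of the point of Futures, of course; the principle is simply that something must stop producers from outrunning consumers.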

After that, if you still are unable to handle the load on your server, then cheer up: you just wrote a piece of code that is widely used, so just sit back and scale up.

Thanks to Dmitry Komanov
