Gentle Introduction to Async Scala

Dropsource
Published in Dropsource Blog · 6 min read · Jun 13, 2017

Computer processors aren’t getting much faster; we’re simply getting more of them. Because of this trend, parallel processing has become hugely important. An application that can take full advantage of the number of processors on a given machine will prove to be exceptionally scalable. This shift is a big part of why functional programming (FP) has finally made its way into the mainstream industry.

One of FP’s greatest cornerstones is the concept of “immutability”, which means data can’t be changed once it is set. Immutability is critical for parallelization, because when multiple cores fight over the same piece of data, the application can become unpredictable or deadlocked (or both). Scala, when used as intended, takes advantage of immutability. Furthermore, its standard library features a very large toolkit for parallel and asynchronous processing. Right out of the box and with just a few lines of code, a Scala application can instantly take advantage of every core on the machine. Today’s post focuses on a few core concepts of asynchronous Scala programming.

Let’s start with a few analogies:

  • Synchronous Behavior: a restaurant with one table, with only one diner at a time
  • Asynchronous Behavior: a restaurant with many tables, with employees handling multiple diners at once
  • Execution Context: the restaurant’s management, servers, and hosts
  • Promise: An empty dinner plate, upon which a chef will eventually place food
  • Future: A picture of the food once it’s available

Synchronous vs. Asynchronous

Those who come from a background in Java, C, and various other imperative languages may be more accustomed to synchronous programming, whereas users who come from JavaScript are quite familiar with async programming. Both have their respective advantages and disadvantages depending on the use case, but in modern development, async tends to be much more performant.

To understand the difference, it’s important to understand how long certain operations take. In general, there are a few places your computer can access data, namely: CPU caches (L1/L2/L3), RAM, disk, and network. CPU caches are very small but very fast, whereas hard drives are very large but very slow (relatively speaking). A database request will take millions of times longer than simply fetching a value from memory.

In general, computations are broken down into two categories: CPU-bound and IO-bound. CPU-bound tasks needn’t wait for data to be piped in from elsewhere; they can generally rely on fast access to RAM. IO-bound tasks, on the other hand, will spend most of their time waiting for data to be sent or received. Handled synchronously, this means your CPU will sit idle while it waits, which prevents other tasks from making progress. This behavior is called blocking, and is generally bad.
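To make the distinction concrete, here is a minimal sketch of the non-blocking approach: an IO-bound task is wrapped in a Future so the calling thread can keep doing CPU-bound work instead of waiting. The `slowLookup` function and its latency are hypothetical stand-ins for a real network or database call.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// A hypothetical IO-bound task: sleep stands in for network latency
def slowLookup(id: Int): Future[String] = Future {
  Thread.sleep(100)
  s"record-$id"
}

// Kick off the IO; the calling thread is immediately free again
val pending: Future[String] = slowLookup(42)

// Meanwhile, the CPU stays busy with real work instead of waiting
val busyWork: Int = (1 to 1000).sum

// Block only at the very edge of the program (e.g. in a test or main)
val result: String = Await.result(pending, 2.seconds)
```

Blocking with `Await` is acceptable at the outermost edge of a program; the point is that no thread sat idle while the lookup was in flight.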

Every second wasted on waiting is money and energy being spent on nothing. Again, a database request will take millions of times longer than a memory request. That’s not to say you’ll cut costs by a factor of a million, but switching to a non-blocking architecture will keep your CPU as busy as possible, thus wasting less money and energy.

Promises and Futures

In roughly 99% of circumstances, you will only need to work with Futures; however, Futures are read-only “views”, so where does the value actually come from? If it’s read-only and I don’t get to write it, who does? That’s where a Promise comes into play. A Promise is a container that you will write to exactly once, at some point. A Promise is “kept” if a successful value is written to it. Alternatively, a Promise can be “failed” with an exception. I tend to use Promises when working with async Java libraries.

Let’s take a quick look at a Promise example:

import scala.concurrent.{Promise, Future}

// Create a new Promise object. We’ll set the value later.
val someNumber: Promise[Int] = Promise[Int]()

// Now call some async Java library which uses completion listeners
SomeAsyncJavaLib.doTheThing(
  new AsyncListener() {
    def onSuccess(result: Int): Unit =
      // We can now complete the Promise with the value
      someNumber.success(result)

    def onFailure(t: Throwable): Unit =
      // Or, if the async call failed, we can complete the
      // Promise with a failure
      someNumber.failure(t)
  }
)

val someNumberFuture: Future[Int] = someNumber.future

We’ve created a new Promise which will eventually be given a value. We then call a library that uses completion listeners, and in its completion, we fill in the Promise. Finally, we spin off a Future which gives us a read-only view into the Promise.

Now that we’ve converted an async Java library call into a Scala Future, we can start to transform and work with the value. We can do several operations on this Future now, including adding completion listeners, for-comprehensions, and completing other Promises. But to me, the most straightforward way to work with them is to use higher-order transformers. In other words, you can use map, flatMap, filter, and collect to transform a Future.

For example, let’s say I wanted to increment the value in the Future. To do that, we can use a simple `.map(…)` call:

// Transformations like map require an implicit ExecutionContext in scope
import scala.concurrent.ExecutionContext.Implicits.global

val someNumberFuture: Future[Int] = someNumber.future
val someIncrementedFuture: Future[Int] = someNumberFuture.map(_ + 1)

Keep in mind that these transformations will spin off new Futures, so the original Future’s data is still preserved.
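A quick sketch of that point, assuming (for the sake of a runnable example) the Promise has already been kept with the value 5:

```scala
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

val someNumber: Promise[Int] = Promise[Int]()
someNumber.success(5) // pretend the async library delivered 5

val someNumberFuture: Future[Int] = someNumber.future
val someIncrementedFuture: Future[Int] = someNumberFuture.map(_ + 1)

// map produced a brand-new Future; the original still holds 5
val incremented: Int = Await.result(someIncrementedFuture, 1.second) // 6
val original: Int = Await.result(someNumberFuture, 1.second)         // still 5
```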

Now let’s say we want to do a bit of validation on this Future. We only want even numbers in this Future, and if the value is odd, we want it to fail with an exception instead. To do that, we can use `.filter(…)`, which will keep the value if it passes the predicate, or turn it into a failed Future (NoSuchElementException) otherwise:

val someIncrementedFuture: Future[Int] = someNumberFuture.map(_ + 1)
val onlyEvenFuture: Future[Int] = someIncrementedFuture.filter(_ % 2 == 0)
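If you want to handle that failure rather than let it propagate, the standard library’s `.recover(…)` (not covered further in this post) can substitute a fallback value. A minimal sketch, using an odd number so the filter fails:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

val oddFuture: Future[Int] = Future.successful(3)

// filter fails the Future with NoSuchElementException when the predicate is false
val failed: Future[Int] = oddFuture.filter(_ % 2 == 0)

// recover substitutes a fallback value for a matching failure
val recovered: Future[Int] = failed.recover {
  case _: NoSuchElementException => 0
}

val fallback: Int = Await.result(recovered, 1.second) // 0
```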

And finally, let’s say we have some method which expects a number parameter, performs a database lookup, and returns the resulting Future. We can chain our previous Future together with this method using `.flatMap(…)`:

val onlyEvenFuture: Future[Int] = someIncrementedFuture.filter(_ % 2 == 0)

def databaseLookup(number: Int): Future[String] = ???

val stringFuture: Future[String] = onlyEvenFuture.flatMap(databaseLookup)
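The same pipeline can also be written as a for-comprehension, which desugars to exactly these map/filter/flatMap calls. Here is a runnable sketch, with `databaseLookup` stubbed out (the real implementation is left as `???` above):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Stub standing in for the real database call
def databaseLookup(number: Int): Future[String] =
  Future.successful(s"row for $number")

val someNumberFuture: Future[Int] = Future.successful(5)

// Same pipeline as map/filter/flatMap, written as a for-comprehension
val stringFuture: Future[String] = for {
  n           <- someNumberFuture
  incremented  = n + 1
  if incremented % 2 == 0 // desugars to filter: fails the Future if odd
  row         <- databaseLookup(incremented)
} yield row

val row: String = Await.result(stringFuture, 1.second) // "row for 6"
```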

Why does this matter?

In the olden days of monoliths, people would simply scale out their existing architecture horizontally by throwing it behind a load balancer and calling it a day. Although this method of horizontal scaling works very well, without fixing an underlying blocking system, you’ll also scale up your wasted dollars. A single API server which blocks can only handle a few requests at a time. Requests get in each other’s way, because one request’s database calls must finish before the next request can start. Instead, if the system can fire off the DB calls and immediately get to work on the next request, then the CPU stays busy doing actual work. Once a DB call completes, its request can be completed too, all without waiting for other requests in the queue to finish. No throttling, no blocking; just hundreds (or thousands or millions) of concurrent requests.

This is all particularly important when working with the Akka or Play frameworks. Akka (and Play) operate off of dispatchers. A dispatcher is a special execution context which handles the delivering and processing of actor messages. You can generally use this dispatcher for your computations and other app logic, though there are several use-cases where you’d want to use a custom one (a topic for a future blog post). This dispatcher is often shared with your whole application. It’s fast and takes full advantage of the CPU, but if you throw blocking calls into the mix, the whole system is defeated. Blocking the dispatcher means actor messages can’t be delivered, and there’s a high risk of deadlock. You can throw as much work as you want at the dispatcher, so long as the work keeps the CPU busy with real work. Avoid blocking wherever possible, and definitely avoid blocking on a dispatcher (or any fork-join pool for that matter).
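When a blocking call truly can’t be avoided (a legacy JDBC driver, say), one common pattern is to isolate it on a small, dedicated thread pool so the shared dispatcher stays free. This is only a minimal sketch; the pool size and `legacyBlockingCall` are illustrative assumptions, not recommendations:

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContextExecutorService, Future}
import scala.concurrent.{ExecutionContext}
import scala.concurrent.duration._

// A small, dedicated pool reserved for blocking work only
val blockingEc: ExecutionContextExecutorService =
  ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))

// A hypothetical legacy call that blocks its thread
def legacyBlockingCall(): String = { Thread.sleep(50); "done" }

// Run it on the dedicated pool, keeping the shared dispatcher free
val safe: Future[String] = Future(legacyBlockingCall())(blockingEc)

// Graceful shutdown: work already submitted still runs to completion
blockingEc.shutdown()
```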

Stay tuned for future posts from me on Scala development. There’s a very large world of async Scala to cover, including different types of execution contexts, the “blocking” construct, parallelism, and more.

Please feel free to leave comments, feedback, suggestions, and requests!

Sean Cheatham is a Software Engineer at Dropsource. Hailing from world-famous Lansing, NY, Sean is a Syracuse University CS & IT graduate. Sean develops the Dropsource code generator, is passionate about functional programming, and can often be found enjoying a cold IPA. Follow Sean at @SeanCheatham.

For more details on how Dropsource can help your product team build mobile prototypes and ship truly native mobile apps faster and easier, please email us at info@dropsource.com
