Demystifying functional effect systems in Scala

Build your own ZIO/Cats-Effect/Monix…

Dmitry Karlinsky
Jun 25, 2020 · 8 min read

Coauthor: Dima Ryskin (Twitter: @__sapien)

Photo by Héctor J. Rivas on Unsplash

Functional Programming in Scala is gaining popularity, and with it, the use of systems for controlling effects, such as Cats Effect, Monix Task and the most recent addition ZIO.

These effect systems allow describing effects (non-pure operations, such as I/O) as values that can be easily and safely composed into descriptions of whole programs. They also allow describing asynchrony and concurrency using Fibers, which can be thought of as lightweight (green) threads. The description is then executed (or run) by the effect system at the edge of the program (ideally in the main method).

But how do they work? Production grade effect systems are highly optimized for speed and memory efficiency, so understanding their code can be challenging. Here, for example, is the implementation of the main runtime loop in ZIO.

In this post, we’ll attempt to build a toy effect system from the ground up, to gain a better understanding of how such a system might work.

Note: this post assumes some interest in FP on the part of the reader and its goal is demystifying the inner workings of popular FP libraries, not convincing you that FP is superior to other programming styles (even if we tend to think so :)).

Iteration 1: creating and combining effects

Effect systems are usually based on the concept of the IO monad: a construct wrapping a zero-argument function that may execute an effect and produce a value. Being a monad allows multiple such wrapped (or suspended) effects to be composed, to describe sequential execution of the underlying effects.

Note: It is out of our scope to define and explain Monads in general, but understanding this concept is in no way a requirement for following this post.

A basic IO monad might be defined as the following trait, with the effect constructor (also commonly called delay or suspend) on the companion object, for creating actual concrete instances (no implementation shown):
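The article leaves the implementation out; as a minimal sketch (our assumption, not the article's code), the trait can be made concrete by wrapping a zero-argument function directly:

```scala
// A minimal sketch of an IO monad. Representing it as a wrapped
// zero-argument function is one possible choice, not the only one.
final case class IO[+A](unsafeRun: () => A) {
  // Sequence this effect with a dependent one: run this, then run f's result.
  def flatMap[B](f: A => IO[B]): IO[B] =
    IO(() => f(unsafeRun()).unsafeRun())

  def map[B](f: A => B): IO[B] = flatMap(a => IO.effect(f(a)))
}

object IO {
  // The effect constructor: suspend a by-name computation as a value.
  def effect[A](a: => A): IO[A] = IO(() => a)
}
```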

Using flatMap we can construct arbitrarily complex programs out of simple IO wrapped effects (here flatMap is hidden behind the syntactic sugar of our for comprehension):
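Assuming the IO sketched above, such a composed program might read (the println messages are illustrative):

```scala
// Nothing runs yet: `program` is only a description of two sequenced effects.
val program: IO[Unit] =
  for {
    _ <- IO.effect(println("running first effect"))
    _ <- IO.effect(println("running second effect"))
  } yield ()
```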

Introducing TIO

Let’s define a Toy IO monad. We’ll call it TIO.
It will (loosely) follow the style and API of ZIO
(for simplicity we’ll omit ZIO’s typed errors and environment).

We represent our basic operations as case classes, so each instance of TIO is an immutable value describing the effect to run. Such descriptions are sometimes called an Algebra.
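A sketch of that algebra, loosely following ZIO's naming (the exact names are our assumption where the post does not show them):

```scala
sealed trait TIO[+A] {
  // flatMap doesn't run anything: it just builds a bigger description.
  def flatMap[B](f: A => TIO[B]): TIO[B] = TIO.FlatMap(this, f)
  def map[B](f: A => B): TIO[B] = flatMap(a => TIO.succeed(f(a)))
}

object TIO {
  // Algebra: each operation is an immutable value describing an effect.
  case class Effect[+A](a: () => A) extends TIO[A]
  case class FlatMap[A, +B](tio: TIO[A], f: A => TIO[B]) extends TIO[B]

  // Constructors
  def effect[A](a: => A): TIO[A] = Effect(() => a)
  def succeed[A](a: A): TIO[A] = Effect(() => a)
}
```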

We also move the concern of running the effects into our Runtime (also called an interpreter). This allows a clean separation between describing programs as compositions of TIOs and the implementation details of running the composed programs. As Runtime becomes more complex, we'll likely want to make it configurable, or even support multiple runtimes.

Let’s implement the interpreter using simple recursion:
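A sketch, assuming the TIO algebra described above:

```scala
import scala.util.{Failure, Success, Try}

// A directly recursive interpreter: simple, but (as we'll see) not stack safe.
object Runtime {
  def unsafeRunSync[A](tio: TIO[A]): Try[A] = eval(tio)

  private def eval[A](tio: TIO[A]): Try[A] = tio match {
    // Effect: run the underlying function, capturing any exception in a Try.
    case TIO.Effect(a) => Try(a())
    // FlatMap: evaluate the inner TIO first, then feed its result to f.
    case TIO.FlatMap(inner, f) =>
      eval(inner) match {
        case Success(value) => eval(f(value))
        case Failure(e)     => Failure(e)
      }
  }
}
```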

There are just two cases we need to handle:

  • Effect, our constructor: we just run the underlying function a and wrap the result in a Try
  • FlatMap, our operator:
    • first we recursively evaluate tio (which can itself be either an Effect or a FlatMap)
    • if the result is a Success, we apply the function f to the value and recursively evaluate the result of that
    • if the result is a Failure, we simply return it

So, how do we use our TIO?
Let's define a helper trait TIOApp for apps that run a TIO:
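One minimal sketch of such a trait:

```scala
trait TIOApp {
  def run: TIO[Any]

  // Run the program at the edge and rethrow any failure,
  // just like a plain main method would.
  final def main(args: Array[String]): Unit =
    Runtime.unsafeRunSync(run).get
}
```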

Now to run a TIO effect:
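For example (the object name is ours), a program producing the output below:

```scala
object SequencedEffects extends TIOApp {
  def run =
    for {
      _ <- TIO.effect(println("running first effect"))
      _ <- TIO.effect(println("running second effect"))
    } yield ()
}
```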


running first effect
running second effect

To reduce boilerplate, let's introduce a TIO-friendly print function:
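Something along these lines (the name putStrLn follows ZIO's convention):

```scala
// Lift println into TIO, so printing becomes a composable effect value.
def putStrLn(line: => String): TIO[Unit] = TIO.effect(println(line))
```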

Of course, the code lifted into TIO.effect can throw an exception. Let's see what happens then:
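A sketch of such a failing program, matching the output below:

```scala
object FailingEffect extends TIOApp {
  def run =
    for {
      _ <- putStrLn("running first effect")
      _ <- TIO.effect(throw new RuntimeException) // blows up here
      _ <- putStrLn("running second effect")     // never reached
    } yield ()
}
```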


running first effect
Exception in thread "main" java.lang.RuntimeException

See sources for iteration 1

What if we wanted to catch and recover from the error?

Iteration 2: Failing and Recovering from Errors

To support errors, we will introduce a proper way to fail (without lifting a throw statement into an effect) and a new combinator called recover:
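Sketching the additions (the encoding below is our assumption; real ZIO models errors in the type instead):

```scala
// Additions to the TIO companion object:
object TIO {
  // ...existing cases and constructors...

  // A failed effect, carrying the error as a value.
  case class Fail[A](e: Throwable) extends TIO[A]
  // Run tio; if it fails, run the effect produced by f instead.
  case class Recover[A](tio: TIO[A], f: Throwable => TIO[A]) extends TIO[A]

  def fail[A](e: Throwable): TIO[A] = Fail(e)
}

// And the new combinator on the TIO trait:
def recover[B >: A](f: Throwable => TIO[B]): TIO[B] = TIO.Recover(this, f)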

Now to handle the new effects in the interpreter:
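The two new cases in Runtime.eval might look like this (a sketch):

```scala
case TIO.Fail(e) => Failure(e)
case TIO.Recover(inner, f) =>
  eval(inner) match {
    case Failure(e) => eval(f(e)) // run the recovery effect
    case success    => success    // successes pass through untouched
  }
```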

Usage example:
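A sketch producing the output below (object name is ours):

```scala
object RecoverExample extends TIOApp {
  def run =
    for {
      _ <- putStrLn("running first effect")
      _ <- TIO.fail(new RuntimeException)
             .recover(e => putStrLn(s"recovered from failure: $e"))
    } yield ()
}
```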

Which will print:

running first effect
recovered from failure: java.lang.RuntimeException


This already seems pretty powerful, but so far, our toy IO has at least two serious limitations:

  • It is fully synchronous (everything runs in the thread that calls unsafeRunSync).
  • It is not stack safe.

To demonstrate the second point, let's implement a foreach combinator. It will apply an effectful function f: A => TIO[B] to each element of xs: Iterable[A] and produce a TIO[Iterable[B]]:
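One way to sketch it, folding over the elements and chaining flatMaps:

```scala
// Build one big sequential TIO: run f on each element in order,
// accumulating the results in a Vector.
def foreach[A, B](xs: Iterable[A])(f: A => TIO[B]): TIO[Iterable[B]] =
  xs.foldLeft(TIO.succeed(Vector.empty[B]): TIO[Vector[B]]) { (acc, a) =>
    acc.flatMap(soFar => f(a).map(b => soFar :+ b))
  }
```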

Now we run it on a 10,000-element sequence:
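For example (a sketch):

```scala
object Foreach10k extends TIOApp {
  def run = TIO.foreach(1 to 10000)(i => TIO.effect(println(i)))
}
```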

This fails with StackOverflowError:

Exception in thread "main" java.lang.StackOverflowError
at step2.tio.Runtime$.eval(TIO.scala:50)
at step2.tio.Runtime$.eval(TIO.scala:50)

We can think of foreach(xs, f) as equivalent to the following code:

TIO.succeed(Vector.empty)
  .flatMap(soFar => f(xs(0)).map(x => soFar :+ x))
  .flatMap(soFar => f(xs(1)).map(x => soFar :+ x))
  .flatMap(soFar => f(xs(2)).map(x => soFar :+ x))
  // ...
  .flatMap(soFar => f(xs(xs.size - 1)).map(x => soFar :+ x))

This builds a tree-like structure that is nested as deep as the length of xs.
Here’s a diagram visualizing this structure for the first 3 elements of xs:

Our interpreter will then have to recursively traverse this structure, requiring a stack at least as deep as the length of xs, which for a long enough sequence (e.g. 10,000 elements) results in a StackOverflowError.

See sources for iteration 2

Iteration 3: Asynchrony

What is asynchrony?

Fabio Labella, in his excellent talk “How do Fibers Work? A Peek Under the Hood” (which in part inspired this post), defines asynchrony as:

A process that continues its execution in a different place or time than the one it started in

We want to support asynchronous effects: effects that are not bound to the thread that initiated them. This is essential for supporting non-blocking operations, such as I/O.

A common way to express such asynchronous processes is by using callbacks. Let's change our Runtime API to async style by adding a new method, unsafeRunAsync:
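The signature might look like this (a sketch):

```scala
import scala.util.Try

// The new async-style entry point: instead of returning a result,
// it accepts a callback to invoke when the effect completes,
// possibly on a different thread.
def unsafeRunAsync[A](tio: TIO[A])(callback: Try[A] => Unit): Unit
```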

Now unsafeRunSync can be implemented using unsafeRunAsync (for example, using Scala Promise and Future):
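A sketch of that bridge:

```scala
import scala.concurrent.{Await, Promise}
import scala.concurrent.duration.Duration

def unsafeRunSync[A](tio: TIO[A]): Try[A] = {
  val promise = Promise[A]()
  // Complete the promise from the async callback...
  unsafeRunAsync(tio)(result => promise.tryComplete(result))
  // ...and block the calling thread until it is done.
  Await.ready(promise.future, Duration.Inf).value.get
}
```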

We will also introduce a new async effect:
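One way to encode it (a sketch; the register name is ours):

```scala
// The callback-based escape hatch into the async world: `register`
// is given a callback and arranges for it to be invoked later.
case class EffectAsync[+A](register: (Try[A] => Unit) => Unit) extends TIO[A]

def effectAsync[A](register: (Try[A] => Unit) => Unit): TIO[A] =
  EffectAsync(register)
```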

This will be our basic building block for integrating with async processes outside of TIO, such as non-blocking IO, Scala futures, etc.

Let’s reimplement our Runtime using callbacks, adding support for EffectAsync:
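A continuation-passing-style sketch of the interpreter:

```scala
import scala.util.{Failure, Success, Try}

object Runtime {
  def unsafeRunAsync[A](tio: TIO[A])(callback: Try[A] => Unit): Unit =
    eval(tio)(callback)

  // Instead of returning a Try, eval invokes `done` when the
  // (possibly asynchronous) result is ready.
  private def eval[A](tio: TIO[A])(done: Try[A] => Unit): Unit =
    tio match {
      case TIO.Effect(a) => done(Try(a()))
      case TIO.Fail(e)   => done(Failure(e))
      // Hand our callback to the async process; it calls it when finished.
      case TIO.EffectAsync(register) => register(done)
      case TIO.FlatMap(inner, f) =>
        eval(inner) {
          case Success(value) => eval(f(value))(done)
          case Failure(e)     => done(Failure(e))
        }
      case TIO.Recover(inner, f) =>
        eval(inner) {
          case Failure(e) => eval(f(e))(done)
          case success    => done(success)
        }
    }
}
```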

Our handling of EffectAsync is simply to pass our done callback to it, to be invoked when the async process completes.
Now, for example, we can create a non-blocking sleep, based on a simple timer.
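A sketch using java.util.Timer (the "TIO-Timer" thread name matches the output below):

```scala
import java.util.{Timer, TimerTask}
import scala.util.Success

// A single daemon timer thread shared by all sleeps.
private val timer = new Timer("TIO-Timer", true)

def sleep(millis: Long): TIO[Unit] =
  TIO.effectAsync[Unit] { onComplete =>
    // Does not block any thread: the timer fires the callback later.
    timer.schedule(
      new TimerTask { def run(): Unit = onComplete(Success(())) },
      millis)
  }
```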

This allows us to write code like this:
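For example (a sketch; the logging helper that prefixes a timestamp and appends the current thread's name is our assumption, made to match the output below):

```scala
object SleepExample extends TIOApp {
  // Hypothetical helper: log a message with a timestamp and thread name.
  def printThread(msg: String): TIO[Unit] =
    TIO.effect(println(
      s"[${java.time.Instant.now}] $msg on ${Thread.currentThread.getName}"))

  def run =
    for {
      _ <- printThread("running first effect")
      _ <- TIO.sleep(2000)
      _ <- printThread("running second effect")
    } yield ()
}
```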

Which prints:

[2020-06-18T13:35:56.814Z] running first effect on main
[2020-06-18T13:35:58.888Z] running second effect on TIO-Timer

Nice! We’ve got some asynchrony — the two “print” effects ran on different threads.

See sources for iteration 3 so far

The first ran on the main thread, but after sleep the subsequent effects will continue on the timer thread, because the timer completed the EffectAsync by invoking its callback.

This is far from ideal however, as now our subsequent effects run on the timer thread, which might interfere with its scheduling.

Our stack safety problem also remains: Foreach10k will still crash with a StackOverflowError.

What we need is to queue our effects and execute them on worker thread(s).

Let's define a convenient API for queueing work for execution — Executor, implemented using Java executors:
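A sketch of such an API (the method names and the thread-naming scheme are our assumptions, chosen to match the "tio-default-2" style names in the output below):

```scala
import java.util.concurrent.{Executors, ThreadFactory}
import java.util.concurrent.atomic.AtomicInteger

trait Executor {
  // Enqueue a unit of work to run on one of the executor's threads.
  def submit(thunk: => Unit): Unit
}

object Executor {
  // A fixed-size pool with named daemon threads.
  def fixed(threads: Int, prefix: String): Executor = {
    val counter = new AtomicInteger()
    val factory: ThreadFactory = (r: Runnable) => {
      val t = new Thread(r, s"$prefix-${counter.incrementAndGet()}")
      t.setDaemon(true)
      t
    }
    val service = Executors.newFixedThreadPool(threads, factory)
    new Executor {
      def submit(thunk: => Unit): Unit =
        service.submit(new Runnable { def run(): Unit = thunk })
    }
  }
}
```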

Now, in our Runtime, we define a shared Executor and simply submit each eval to its queue:
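In sketch form, eval wraps every interpretation step in a task submitted to the shared executor, so recursion happens through the queue rather than the call stack (the pool size and name are arbitrary choices here):

```scala
object Runtime {
  private val executor = Executor.fixed(8, "tio-default")

  def unsafeRunAsync[A](tio: TIO[A])(callback: Try[A] => Unit): Unit =
    eval(tio)(callback)

  private def eval[A](tio: TIO[A])(done: Try[A] => Unit): Unit =
    executor.submit {
      tio match {
        case TIO.Effect(a) => done(Try(a()))
        case TIO.FlatMap(inner, f) =>
          // These "recursive" calls just enqueue more tasks;
          // the call stack stays shallow.
          eval(inner) {
            case Success(value) => eval(f(value))(done)
            case Failure(e)     => done(Failure(e))
          }
        // ... Fail, Recover and EffectAsync handled as before ...
      }
    }
}
```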

Now our execution is truly asynchronous and our previous example with sleep prints:

[2020-06-18T14:41:05.736Z] running first effect on tio-default-2
[2020-06-18T14:41:07.766Z] running second effect on tio-default-6

And it is stack safe — TIO.foreach(1 to 10000)(i => TIO.effect(println(i))) completes without an issue. The reason is that each effect is now submitted to an executor, i.e. a work queue from which effects are picked up by the executor's threads. When an effect is handled by a thread, it results in zero or more new effects submitted back to the queue, but no stack growth. For example, for the deeply nested FlatMap(FlatMap(FlatMap(..))) tree from our foreach, each iteration will peel off the outer FlatMap and submit its children back to the executor queue.

Being asynchronous, however, doesn't mean we support concurrency, i.e. more than one thing happening at the same time. Our basic monadic combinator flatMap describes sequential execution, so right now that's all we can describe with our effects.

See source for iteration 3

Iteration 4: Concurrency with Fibers

We saw that effects constructed using flatMap combinators naturally describe sequences of operations, running one after another. These sequences are logical threads of execution, though not bound to OS threads. Such logical threads are sometimes called Fibers.

To describe concurrency we need a way to create new fibers and allow one fiber to wait for the result of another.

To support this we will introduce the fork and join combinators.

We want to be able to write code like this:
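A sketch matching the behavior described below (names are ours):

```scala
object ForkJoinExample extends TIOApp {
  // The fiber's work: wait two seconds, print "2", and return 1.
  def task =
    for {
      _ <- TIO.sleep(2000)
      _ <- putStrLn("2")
    } yield 1

  def run =
    for {
      _      <- putStrLn("1")
      fiber1 <- task.fork      // starts running concurrently
      _      <- putStrLn("3")  // does not wait for fiber1
      i      <- fiber1.join    // suspends until fiber1 completes
      _      <- putStrLn(s"fiber1 done: $i")
    } yield ()
}
```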

This should:

  • print 1 immediately
  • start a fiber which will print 2 after 2 seconds
  • print 3 without waiting for the fiber
  • join the fiber and wait for it to finish
  • print 2 after two seconds
  • print fiber1 done: 1 immediately after

Let’s define our algebra for fork and join and the trait for Fiber:
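One possible encoding (a sketch; onDone is an interpreter-facing method that a real library would hide):

```scala
trait Fiber[+A] {
  def join: TIO[A]
  // Used by the interpreter: invoke the callback when the fiber finishes.
  def onDone(callback: Try[A] => Unit): Unit
}

// In TIO's algebra:
case class Fork[A](tio: TIO[A]) extends TIO[Fiber[A]]
case class Join[A](fiber: Fiber[A]) extends TIO[A]

// And on the TIO trait:
def fork: TIO[Fiber[A]] = TIO.Fork(this)
```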

Now how do we handle them in our interpreter?

Fork creates a new fiber, starts it, and passes it to done:

Join subscribes to the fiber's done notification with its current done callback.
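The two new interpreter cases might look like this (a sketch; FiberRuntime is the fiber implementation discussed next):

```scala
case TIO.Fork(tio) =>
  val fiber = new FiberRuntime(tio)
  fiber.start()            // begin evaluating tio concurrently
  done(Success(fiber))     // hand the fiber handle back immediately
case TIO.Join(fiber) =>
  // Resume this computation when the fiber completes.
  fiber.onDone(done)
```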

Now let’s sketch out our Fiber implementation:
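One possible state machine (a sketch): the fiber is either still running, holding the callbacks of any joiners, or done, holding the result. An AtomicReference keeps the transition race-free:

```scala
import java.util.concurrent.atomic.AtomicReference
import scala.util.Try

class FiberRuntime[A](tio: TIO[A]) extends Fiber[A] {
  private sealed trait State
  private case class Running(callbacks: List[Try[A] => Unit]) extends State
  private case class Done(result: Try[A]) extends State

  private val state = new AtomicReference[State](Running(Nil))

  def join: TIO[A] = TIO.Join(this)

  // Register a callback, or fire it immediately if we're already done.
  def onDone(callback: Try[A] => Unit): Unit =
    state.getAndUpdate {
      case Running(cbs) => Running(callback :: cbs)
      case done         => done
    } match {
      case Done(result) => callback(result)
      case _            => ()
    }

  // Invoked by the interpreter when the fiber's TIO finishes:
  // record the result and notify all registered joiners.
  private def fiberDone(result: Try[A]): Unit =
    state.getAndSet(Done(result)) match {
      case Running(callbacks) => callbacks.foreach(cb => cb(result))
      case _                  => ()
    }

  // eval is in scope once FiberRuntime lives inside Runtime.
  def start(): Unit = eval(tio)(fiberDone)
}
```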

How do we start our fiber?

To simplify things, let's move our eval method into FiberRuntime, and make FiberRuntime an inner class of our Runtime:

Let's run our sample code from before:

It prints as expected:

1
3
2
fiber1 done: 1

We’ve achieved concurrency!

To demonstrate our fork and join, let's implement a concurrent version of our foreach combinator:
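A sketch: fork all the effects first, then join all the resulting fibers. A usage example with random delays (our construction, shaped to match the output below) follows:

```scala
def foreachPar[A, B](xs: Iterable[A])(f: A => TIO[B]): TIO[Iterable[B]] =
  foreach(xs)(x => f(x).fork)            // start everything concurrently
    .flatMap(fibers => foreach(fibers)(_.join)) // then collect the results

// Usage: print each number after a random delay.
object ForeachParExample extends TIOApp {
  val random = new scala.util.Random()
  def run =
    for {
      _ <- putStrLn("foreachPar:")
      _ <- TIO.foreachPar(1 to 10) { i =>
             val delay = random.nextInt(1000)
             TIO.sleep(delay).flatMap(_ =>
               TIO.effect(println(s"$i after $delay milliseconds")))
           }
      _ <- putStrLn("foreachPar done")
    } yield ()
}
```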

Which prints:

[2020-06-20T11:24:38.097Z] foreachPar:
7 after 71 milliseconds
10 after 130 milliseconds
6 after 278 milliseconds
2 after 374 milliseconds
5 after 418 milliseconds
4 after 431 milliseconds
1 after 561 milliseconds
3 after 598 milliseconds
9 after 899 milliseconds
8 after 936 milliseconds
[2020-06-20T11:24:39.051Z] foreachPar done

See source for iteration 4


In this post, we implemented a toy effect system, TIO, from the ground up: starting with a basic IO monad with a fully synchronous and non-stack-safe interpreter, and building it up into a fully asynchronous effect system supporting fiber-based concurrency.

Note: this is still a toy effect system, intentionally using the simplest possible techniques for as long as possible. Besides having none of the optimizations present in real effect systems, we fully expect it to have concurrency bugs and edge cases we failed to consider. Concurrency is hard, and if possible you should always use existing constructs from battle-tested libraries written by extremely smart people.

I hope you enjoyed this post.
Please comment with your thoughts and questions.
Follow me on Medium and on Twitter.
Thanks to Dima Ryskin, who came up with the idea and worked with me on this post. Go follow him as well :)


Fabio Labella — How do Fibers Work? A Peek Under the Hood

FIBER SUPERVISION IN ZIO — Wiem Zine El Abidine, Adam Fraser | Scalar 2020

A Poor Man’s Concurrency Monad

What’s Next?

In the next post(s) we may attempt to implement the following capabilities of real effect systems:

  • Locking effects to executors — to support running blocking effects on a separate pool, for example.
  • Interruption support — allow interrupting fibers.
  • Interruption safe resource management — fiber based “try with resources” with guaranteed cleanup on fiber interruption.
  • Supervision — interrupting child fibers when the parent fiber is interrupted (for example in foreachPar)

Wix Engineering

Architecture, scaling, mobile and web development, management and more, this publication aggregates blog posts written by our very own Wix engineers. Visit our official blog here:

Dmitry Karlinsky

Written by

Backend developer at Wix, working on data streaming infrastructure built around Kafka. Love Scala and typed FP. Contributing to ZIO.
