Leveraging The Powers of Functional Code — Part 2

Octavio Bokel
Booking.com Engineering
Aug 3


The Fully Functional Haskell Solution

Part one can be found here: https://medium.com/booking-com-development/leveraging-the-powers-of-functional-code-bfaf32fd33ce

The Solution:

Regarding the Haskell code: don’t worry if you don’t understand everything. I am going to explain its main points by drawing a parallel to the Java implementation.

If you are curious about FP, I cannot recommend this book enough, and the online version is free: http://learnyouahaskell.com/

It is a pleasant read with lots of humor (just the illustrations by themselves make me laugh), explaining complex concepts in a simple and accessible way. It truly can change how you think about code.

First, let’s define the ClosedInterval typeclass. ClosedInterval is a great base typeclass because it can always represent the full range of the type “a”.

Considering that a deep copy is not needed to update values, I decided to split the set method:

setStartClosed :: i -> a -> i

Read the signature as: if i is a Price and a is an Integer (as in the Java interface), then this is a function that receives a Price and an Integer, and returns a Price.

(Ord a) => This is the most important part. It means that I need an instance of Ord for a in scope, and the fact that a has an ordering is carried down the line. I don’t have to manually define a Comparator&lt;Integer&gt; comparatorOfA(), since whoever is getting the start of the interval knows that the behavior of Ord is defined for a. This behavior comes from a typeclass instance and never needs to be passed manually. Most of the basic types already have an instance of Ord.
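Putting the pieces above together, a minimal sketch of what the typeclass could look like (the article’s original code is not reproduced here; the method names beyond setStartClosed and the functional dependency i -> a are my additions, following the prose):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

-- A sketch of the ClosedInterval typeclass: an interval type i over an
-- ordered element type a, with getters and copy-style setters.
-- The fundep (i -> a) just says each interval type fixes its element type.
class Ord a => ClosedInterval i a | i -> a where
  startClosed    :: i -> a
  endClosed      :: i -> a
  setStartClosed :: i -> a -> i
  setEndClosed   :: i -> a -> i
```

Any instance automatically carries the Ord a evidence with it, which is what replaces the manual Comparator of the Java version.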

Finally, a half-open interval can be defined as:

Given that I have an instance of ClosedInterval i a, and an Enum instance for a (an enumeration), I can derive a half-open interval. Most of the basic types are enumerations as well (and instances can be easily derived if needed); Int is an enumeration of all possible fixed-size integers and has pred and succ defined, which give the predecessor and successor of an element. You can read succ . endClosed as succ(endClosed(i)); the mathematical definition is (f . g) x = f (g x). Since HalfOpenInterval needs an instance of ClosedInterval to exist, startClosed and setStartClosed already exist and don’t need to be defined again.
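A self-contained sketch of how this could look (ClosedInterval is repeated so the snippet compiles on its own; the fundeps are my addition):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

-- Repeated from the previous sketch so this snippet stands alone.
class Ord a => ClosedInterval i a | i -> a where
  startClosed    :: i -> a
  endClosed      :: i -> a
  setStartClosed :: i -> a -> i
  setEndClosed   :: i -> a -> i

-- Given ClosedInterval plus Enum, the half-open end is derived with
-- succ/pred, so instances do not have to define anything themselves.
class (ClosedInterval i a, Enum a) => HalfOpenInterval i a | i -> a where
  endOpen :: i -> a
  endOpen = succ . endClosed            -- endOpen i = succ (endClosed i)
  setEndOpen :: i -> a -> i
  setEndOpen i = setEndClosed i . pred  -- store the closed end back
```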

Just for the sake of completeness, an open interval can also be defined (without adding the Enum or Ord dependencies):
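The article’s snippet is not reproduced here; one reading consistent with the instance declarations shown later is that OpenInterval adds no constraints of its own, reusing everything through its superclass. This is a guess at its shape (class hierarchy repeated for self-containment, fundeps mine):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

class Ord a => ClosedInterval i a | i -> a where
  startClosed    :: i -> a
  endClosed      :: i -> a
  setStartClosed :: i -> a -> i
  setEndClosed   :: i -> a -> i

class (ClosedInterval i a, Enum a) => HalfOpenInterval i a | i -> a where
  endOpen :: i -> a
  endOpen = succ . endClosed
  setEndOpen :: i -> a -> i
  setEndOpen i = setEndClosed i . pred

-- The class head mentions neither Enum nor Ord; both arrive through the
-- superclass chain, and all methods have defaults.
class HalfOpenInterval i a => OpenInterval i a | i -> a where
  startOpen :: i -> a
  startOpen = pred . startClosed           -- one step before the closed start
  setStartOpen :: i -> a -> i
  setStartOpen i = setStartClosed i . succ
```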

These class definitions could live in a module, together with other interval-related functions.

Now let’s create the Price data type (simplified from the real Price class; details about the syntax can be found here: http://learnyouahaskell.com/making-our-own-types-and-typeclasses):
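A sketch of what the data type and its ClosedInterval instance might look like. The record field names (amount, checkin, checkout) are my guesses at the simplified class; the typeclass is repeated so the snippet compiles alone:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

class Ord a => ClosedInterval i a | i -> a where
  startClosed    :: i -> a
  endClosed      :: i -> a
  setStartClosed :: i -> a -> i
  setEndClosed   :: i -> a -> i

-- Simplified Price; the fields here are hypothetical.
data Price = Price { amount :: Int, checkin :: Int, checkout :: Int }
  deriving (Eq, Show)

-- The interval is the closed [checkin, checkout] range.
-- Ord Int already exists in the Prelude, so nothing else is needed.
instance ClosedInterval Price Int where
  startClosed = checkin
  endClosed   = checkout
  setStartClosed p s = p { checkin = s }
  setEndClosed   p e = p { checkout = e }
```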

We did not have to provide an instance of Ord; there is already one defined in the Prelude (the base module) for Int. Since we have a ClosedInterval instance for Price, we can get the other types of interval without needing to declare any function:

instance HalfOpenInterval Price Int
instance OpenInterval Price Int

This means that Price can just be passed to any function that expects one of the interval types.
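For example, a generic helper written only against the typeclass accepts a Price with no adaptation. The helper contains is hypothetical, not from the article; the class is repeated minimally (getters only) so the example stands alone:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

class Ord a => ClosedInterval i a | i -> a where
  startClosed :: i -> a
  endClosed   :: i -> a

data Price = Price { amount :: Int, checkin :: Int, checkout :: Int }

instance ClosedInterval Price Int where
  startClosed = checkin
  endClosed   = checkout

-- A hypothetical generic helper: written once against the typeclass,
-- it works for Price or any other instance.
contains :: ClosedInterval i a => a -> i -> Bool
contains x i = startClosed i <= x && x <= endClosed i
```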

Considering there is no mutability, the outcome of a computation is fully expressed by the input and output of a function. The Java version receives the index of the event and, with it, can access the intervals that start and end there, as well as the previous event, while mutating an active set of intervals and a result list. We don’t have the “convenience” of mutability, and this gives us some awesome powers. I will freely use an analogy that makes sense to me: what we need is a function that receives the current state of the computation and the next event, and produces the next state.

What is our event?

It can be defined as:

  • The eventPoint (when the event happens; the interval does not necessarily represent a date, which is why I am using the generic term event point).
  • The list of intervals starting.
  • The list of intervals ending.

My state is:

  • My result list of intervals so far.
  • The active set of intervals.
  • The previous eventPoint.
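The two lists above can be turned into types. The shapes below are my reconstruction (the record names and the step body are guesses, not the article’s code), but they show the state-in, event-in, state-out signature the fold needs:

```haskell
import qualified Data.Set as Set
import Data.List (foldl')

-- An event, following the first bullet list.
data Event i a = Event
  { eventPoint :: a    -- when the event happens
  , starting   :: [i]  -- intervals starting at this point
  , ending     :: [i]  -- intervals ending at this point
  }

-- The folded state, following the second bullet list.
data State i a = State
  { result    :: [(a, a, [i])] -- disjoint segments produced so far
  , active    :: Set.Set i     -- currently active intervals
  , prevPoint :: a             -- the previous eventPoint
  }

-- One fold step: emit the segment between the previous and the current
-- point with the intervals active during it, then update the active set.
processEvent :: Ord i => State i a -> Event i a -> State i a
processEvent (State res act prev) (Event p starts ends) = State res' act' p
  where
    res' | Set.null act = res
         | otherwise    = (prev, p, Set.toList act) : res
    act' = foldl' (flip Set.insert)
                  (foldl' (flip Set.delete) act ends)
                  starts
```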

A brief explanation of fold:

It keeps applying the provided function, combining the current result with the next element of the list (or anything foldable) to produce the next result. The final result is defined after folding all elements of the list. For example, the deletion can be read as: start with the active set, and apply delete for each element of my list. At each step, I have a new set that will be applied to the next element of the list. Readers who want a more in-depth explanation can check http://learnyouahaskell.com/higher-order-functions, complemented by this reading: https://wiki.haskell.org/Foldr_Foldl_Foldl' . The closest thing you have in Java is the stream reduce operation.
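The deletion example from the paragraph above can be written out as follows (assuming Data.Set for the active set):

```haskell
import qualified Data.Set as Set
import Data.List (foldl')

-- Start with the active set and delete each ending interval in turn;
-- every step produces a new set that shares most of its structure.
removeEnding :: Ord i => Set.Set i -> [i] -> Set.Set i
removeEnding = foldl' (flip Set.delete)
```

For instance, removeEnding applied to the set {1, 2, 3} and the list [2, 3] leaves a set containing only 1.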

A quick glimpse of referential transparency and laziness:

There is nothing outside my processEvent function that contributes to the result calculation. Each element of my result tuple can be computed in any order, since I am not destroying or changing the received active set (no intermediary states). This also enables the computation to be fully lazy: Haskell will, by default, defer evaluating a computation until it needs the value, which is only possible because there are no side effects. We call this referential transparency (https://en.wikipedia.org/wiki/Referential_transparency). Laziness can really simplify how you think about and compose larger computations, without worrying whether all the flows will actually use a defined computation, since you “pay for what you use.”
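A tiny illustration of that laziness (my example, not from the article): an infinite list costs nothing until a finite prefix is demanded.

```haskell
-- map over an infinite list; only the first n squares are ever computed
firstSquares :: Int -> [Int]
firstSquares n = take n (map (^ 2) [1 ..])
```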

I still need to prepare the list of events from my input of intervals. The decision not to use the event index came from how my immutable data structures are defined and the ease of composing my list of events. It also helps to further decouple the problem of preparing the data from running the computation.

My map is already sorted since it is a tree map, and the default list is a singly linked list, which is a great structure for functional code. Prepending an element to a list fully reuses the existing list, doing so in constant time while preserving the immutable structure (at the expense of random access and data locality). In the Java code we just merge the key sets, whereas here we are going to merge the maps themselves into a map that, for each event point key, contains the start and end events.
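The constant-time prepend mentioned above, as a sketch: both new lists share the same tail, which is safe precisely because nothing can ever mutate it.

```haskell
xs, ys, zs :: [Int]
xs = [2, 3, 4]
ys = 1 : xs  -- O(1): a new head cell pointing at the existing xs
zs = 0 : xs  -- ys and zs both share xs; no copying happens
```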

We are going to define two generic functions to do these operations. The first is groupByKey (it is not defined in the default Map library; kudos to Java here), and the second is unionAsTuple, which combines two maps into one, keeping both values and falling back to a default value when a key is not defined in one of the maps. Both are very easy to define with the available functional operations, and there is no code overhead in making them completely generic and reusable:
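A sketch of groupByKey matching the description below, assuming Data.Map’s fromListWith (the exact body in the article may differ):

```haskell
import qualified Data.Map as Map

-- Build a Map from a list using a key extractor; values that collide on
-- the same key are collected into a list.
groupByKey :: Ord k => (v -> k) -> [v] -> Map.Map k [v]
groupByKey key = Map.fromListWith (++) . map (\v -> (key v, [v]))
```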

Read as: the list becomes pairs of (key, [value]) using the key extractor, and a Map can be constructed from a list of tuples; on key collision the two lists are concatenated. A perfectly reusable function. Ord k is necessary since Map is a tree map. Operations are usually O(log n) and can reuse most of the original structure.
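And a sketch of unionAsTuple matching the description below (again assuming Data.Map; unionWith resolves the collisions):

```haskell
import qualified Data.Map as Map

-- Pair up the values of two maps as (valueFromA, valueFromB),
-- substituting the fallback wherever a key is missing from one map.
unionAsTuple :: Ord k => v -> Map.Map k v -> Map.Map k v -> Map.Map k (v, v)
unionAsTuple fallback mapA mapB =
  Map.unionWith (\(a, _) (_, b) -> (a, b))
                (Map.map (\v -> (v, fallback)) mapA)
                (Map.map (\v -> (fallback, v)) mapB)
```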

Read as: The values of the first map mapA becomes (v, fallback), and the mapB becomes (fallback, v). In case of key collision, The values are joined by keeping the first element of mapA, with the second mapB. If for a key there is no collision, the outcome still preserves from what map the value came from.

Putting everything together and defining asDisjointIntervalGroups:

We create the maps and join them, with the default value being an empty list (in case there are no start or end events). We fold using our processEvent function, with an initial state of an empty list of intervals, no active intervals, and an undefined previous point.
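A condensed, self-contained reconstruction of the whole pipeline, specialised to plain (start, end) Int pairs instead of the typeclass machinery. The helper bodies and all names are mine, not the article’s; note the undefined previous point in the initial state, which the next paragraph discusses:

```haskell
import qualified Data.Map as Map
import qualified Data.Set as Set
import Data.List (foldl')

type Interval = (Int, Int)  -- closed [start, end]

groupByKey :: Ord k => (v -> k) -> [v] -> Map.Map k [v]
groupByKey key = Map.fromListWith (++) . map (\v -> (key v, [v]))

unionAsTuple :: Ord k => v -> Map.Map k v -> Map.Map k v -> Map.Map k (v, v)
unionAsTuple fallback mapA mapB =
  Map.unionWith (\(a, _) (_, b) -> (a, b))
                (Map.map (\v -> (v, fallback)) mapA)
                (Map.map (\v -> (fallback, v)) mapB)

-- Split overlapping intervals into disjoint segments, each paired with
-- the intervals active on it. End events fire just after the closed end.
asDisjointIntervalGroups :: [Interval] -> [((Int, Int), [Interval])]
asDisjointIntervalGroups intervals = reverse res
  where
    events = Map.toAscList
           $ unionAsTuple [] (groupByKey fst intervals)
                             (groupByKey ((+ 1) . snd) intervals)
    (res, _, _) = foldl' step ([], Set.empty, undefined) events
    step (acc, act, prev) (point, (starts, ends)) =
      ( if Set.null act then acc
          else ((prev, point - 1), Set.toList act) : acc
      , foldl' (flip Set.insert) (foldl' (flip Set.delete) act ends) starts
      , point )
```

For the input [(1,3), (2,5)] this yields the segments [1,1] covered by (1,3), [2,3] covered by both, and [4,5] covered by (2,5).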

While I could define the previous event point more explicitly as Maybe a (very similar to Java’s Optional), it would require a little more code. undefined is a value that, if evaluated, will “explode” (it is not even possible to test whether something is undefined). Leaving it like this makes a statement that previousEventPoint will not be evaluated for the base case provided, and it will never be undefined again afterwards. This is just a personal preference.

About side effects:

Now you are probably wondering: if there are no side effects, is FP useless in the real world? In reality, FP is really about keeping your code as pure (in terms of referential transparency) as possible, pushing the side effects to the “edges” of your program. When a side effect needs to be performed, it is a well-described operation, done using an IO Monad or a similar abstraction that can contain either the result of the side effect or an error. http://learnyouahaskell.com/a-fistful-of-monads.

The easiest way to reason about it coming from Java is to think about the Optional type: it can contain a value or be empty. You can chain many Optional results with flatMap, and the end result will be empty if any of them is empty. The basic composability of the IO Monad is the same, but collapsing to an error if any of the chained operations returns an error. The same way an Optional type makes explicit the possibility of an undefined value, the IO Monad expresses side effects and forces you to acknowledge a possible error.
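A Maybe version of that chaining (my example): each >>= is the analogue of Optional.flatMap, and the first Nothing short-circuits the rest, exactly how a chain of IO actions collapses on the first error.

```haskell
import Text.Read (readMaybe)

-- Halve an even number, fail otherwise.
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

-- Parse, then halve twice; any failing step makes the whole chain Nothing.
parseAndQuarter :: String -> Maybe Int
parseAndQuarter s = readMaybe s >>= half >>= half
```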

Depending on your IO implementation, you get non-blocking behavior for “free”. The Monad is controlling the flow of the application: you describe the computation lazily and run it at the very end, making it extremely composable. A Scala implementation of the IO monad: https://typelevel.org/cats-effect/

The Performance of the functional code:

I was reluctant to include this section, since performance is not the objective of this article: I don’t want to optimize for raw performance, but for quality of code. The true metrics I care about have no simple way to be measured; they include:

  • Code readability, expressiveness and maintainability.
  • Job Satisfaction (I am happier when I can write something I perceive as better code, experience might vary per developer).

I am keeping this section as a simple observation over some benchmark results, without diving any further for now. I did some runs of both implementations with different input sizes. My Haskell code performs really close to the Java implementation in terms of processing time. This is really impressive considering the O(log n) operations of some of the immutable collections used. The benchmark analysis might become a third part in the future.


If you came this far, congratulations and thank you for your attention.

So, what are the powers of functional code shown in this article?

Learning a new paradigm will always give you new alternatives to solve problems.

Type classes are a great way to extend functionality in a horizontal way, which is even more important considering that microservices are becoming more common. Having a shared library for a protocol makes perfect sense, and extending these classes in a vertical way to take the needs of all clients into account is not feasible. It is even harder to do so if the classes are defined from some language-agnostic protocol (like protobuf in our case) and the classes are generated for multiple languages.

Some problems can be solved much better with a functional approach (like the groupByKey example). The resulting code usually requires little to no effort to make fully generic and reusable.

Immutability, while pragmatically speaking not always an option, brings a lot in terms of readability, even more so when combined with writing referentially transparent code. There is no need to look at the implementation or read docs so often, because the method signature and name already tell you a lot. Passing mutable object references binds your code in a way that an undesired modification might happen several methods down the stack. One great example is the keySet method used in the Java example: it is not obvious or trivial to reason that it is still bound to the original Map, and it can easily be modified by mistake.

Understanding how functional code works also makes it easier to use libraries built around this paradigm. Using Java streams becomes much more natural, and allows you to write shorter and more expressive code in many cases.

It can help you achieve greater throughput in I/O-bound applications. In some reactive non-blocking frameworks your application is described as something similar to a stream (providing many more operators), and understanding FP helps a lot in using them. One example of such a framework: https://projectreactor.io/

Hopefully I have shown that FP is a great tool worth learning: knowledge brings you the freedom to solve problems with a completely new paradigm. FP also allows you to better utilize libraries and frameworks that revolve around it. It is a foundational programming paradigm, and I truly think most developers should learn at least its basics. If you are already using a JVM stack: while Java has some support for FP, there are JVM languages with much better support for it that can be integrated with existing Java code, like Scala, Kotlin or Clojure.

and we have barely scratched the surface …

Special thanks to Akash Khandelwal, Dunya Kirkali, Eugene Koontz, Tripti Chhajer and Sierra Hatcher for their help, ideas and article review.