Purely Functional Data Structures and JMH

Recently, I started contributing to an open source project called Dogs, which is a subproject of Typelevel/cats. Dogs aims to bring purely functional data structures to the Scala ecosystem, similar to what Scalaz does. However, Cats and Dogs are being conceived with the goal of providing a lightweight, modular, and extensible library that is approachable and powerful.

At this point in time, Dogs is very immature, but it is growing on top of very strong foundations. I decided to join efforts with its main contributor so I can spend my time doing something useful while learning about the open source world, since this is my very first time contributing to a project like this.

Purely Functional Data Structures

Purely functional data structures are a concept that is quite new to me and, if we look back, quite new in general. Chris Okasaki published his book on this topic in 1996, grouping many of these concepts together. Before that time, it was hard to find a centralized source for all this knowledge.

The opportunity to work on Dogs allows me to embrace Scala, my programming language of choice outside the .NET Framework. I found that I have learned more Scala working on this project than in all my previous experiences together, and that is why I am so excited about it.

Writing purely functional data structures has been fun; after all, I have learned and studied a lot in order to keep up. The mind shift required to write code in a functional fashion takes some time to mature, but eventually you start modeling your ideas in a different way.

The first data structure I implemented was a generic Balanced Binary Search Tree (AVL) with its basic operations as described by Okasaki.

The second data structure I wrote was a Discrete Interval Encoding Tree (Diet). Diet is a little more complicated than the AVL tree and also took me longer to implement. Some people suggested that I look at Scalaz’s implementation, called Diev (it uses vectors), but I ignored the suggestions and ventured to implement it on my own.
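The idea behind a Diet is to store runs of consecutive values as intervals instead of as individual elements. The following plain-Scala sketch (not the actual Dogs implementation, just an illustration of the encoding) shows how a set of integers collapses into maximal intervals:

```scala
// Group a sorted list of values into maximal runs of consecutive elements,
// the same compression a Diet performs structurally.
def toIntervals(xs: List[Int]): List[(Int, Int)] =
  xs.sorted.foldLeft(List.empty[(Int, Int)]) {
    case ((lo, hi) :: rest, x) if x == hi + 1 => (lo, x) :: rest
    case (acc, x)                             => (x, x) :: acc
  }.reverse

println(toIntervals(List(1, 2, 3, 6, 7, 9)))
// List((1,3), (6,7), (9,9))
```

The fewer holes the input has, the fewer intervals (and tree nodes) are needed, which is the intuition behind the best and worst case scenarios benchmarked later.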

After a few days, I opened a pull request with my changes against the Dogs master branch. After some healthy discussions, my code was merged. However, there was no proof that Diet works better or worse than Scalaz’s Diev, even though, functionality-wise, it fulfills all the requirements.

Diet Benchmarks with JMH

JMH is a Java harness for building, running, and analyzing nano/micro/milli/macro benchmarks written in Java and other languages targeting the JVM.

This tool is completely new to me, but it is worth taking a look at, and at the same time we can benchmark how our Diet compares to Scalaz’s Diev.

Because we are using Scala, we are going to reference JMH through sbt.

In the project/plugins.sbt file we need to include:

addSbtPlugin("pl.project13.scala" % "sbt-jmh" % "0.2.3")

This references a plugin that we can enable in the specific subproject we want.

In the project where we want to do the benchmarking, we add this line to the build.sbt file.

enablePlugins(JmhPlugin)

Once this is done, we can start writing our benchmark tests.

In order to start with something easy, I decided to benchmark the worst and best case scenarios that can be present in a Discrete Interval Encoding Tree.

The worst case scenario is when, doing an in-order iteration of the elements of the tree, consecutive elements are separated by a hole of exactly one (1) space. We say one space because Diet is generic, and the step (or space) between each element is described by discrete operations represented by this Scala trait.

/**
 * Represents discrete operations that can be performed on A
 */
trait Enum[A] extends Order[A] {
  /**
   * Returns the successor of x.
   */
  def succ(x: A): A

  /**
   * Returns the predecessor of x.
   */
  def pred(x: A): A

  /**
   * Returns true if x and y are consecutive.
   */
  def adj(x: A, y: A): Boolean = succ(x) == y
}
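As a concrete illustration, here is what an instance for Int might look like. This is a minimal sketch: the simplified trait below drops the Order[A] parent so the snippet stands alone, and IntEnum is a hypothetical name, not something defined in Dogs.

```scala
// Simplified stand-in for the Enum trait above (Order[A] omitted for brevity)
trait Enum[A] {
  def succ(x: A): A
  def pred(x: A): A
  def adj(x: A, y: A): Boolean = succ(x) == y
}

// Hypothetical instance for the Integer domain
object IntEnum extends Enum[Int] {
  def succ(x: Int): Int = x + 1
  def pred(x: Int): Int = x - 1
}

// adj tells us when two elements are consecutive, i.e. the hole between them is empty
println(IntEnum.adj(1, 2)) // true: 2 is the successor of 1
println(IntEnum.adj(1, 3)) // false: there is a hole (2) between them
```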

Worst Case

The worst case scenario can be benchmarked by generating even numbers; in the Integer domain they are separated by holes of size one.
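A generator for this data set could look like the following. This is only a sketch of what DietDataGen.getWorstCaseData might do; the bounds are an assumption and the actual implementation in Dogs may differ.

```scala
object DietDataGen {
  // Even numbers in [0, 2000): consecutive elements are separated by a hole of
  // exactly one value, so no two inserted elements can ever merge into a range.
  def getWorstCaseData: Seq[Int] = (0 until 2000 by 2).toSeq
}
```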

Let’s take a look at the code that benchmarks this case.

@State(Scope.Benchmark)
class DietBench {
  var items: Seq[Int] = Seq[Int]()

  @Setup
  def setup: Unit = {
    items = DietDataGen.getWorstCaseData
  }

  @Benchmark
  def dogsDietAdd: Unit = {
    var diet = dogs.Diet.empty[Int]
    items.foreach { i =>
      diet = diet + i
    }
  }

  @Benchmark
  def scalazDievAdd: Unit = {
    var diev = Diev.empty[Int]
    items.foreach { i =>
      diev = diev + i
    }
  }
}

As we can see, the benchmarking code is fairly simple in this particular case. We have two methods, each of which adds the worst case items to Diet and Diev respectively. They are marked with the @Benchmark annotation so JMH can work with them.

In order to run this initial benchmark we use sbt in the terminal as follows, where bench is the name of the subproject that contains our tests.

sbt bench/jmh:run

The current project is the local Dogs repository. JMH will do its magic, which will take some time since it performs a lot of warm-up and iterations on the JVM so the tests are reliable.

The output of this initial test is:

Benchmark                 Mode  Cnt     Score    Error  Units
DietBench.dogsDietAdd    thrpt  200   175.308 ±  2.859  ops/s
DietBench.scalazDievAdd  thrpt  200  4428.201 ± 26.171  ops/s

Our Diet in Dogs is slower than the one found in Scalaz when adding items one by one from the worst case data set. Interestingly, Diet is not balanced, which eliminates the overhead of rebalancing the tree after each insertion (a price we might pay later when searching), yet it still performs badly.

Best Case

The best case is when all added elements are contained within Range(min, max), so there are no holes and the tree has only one node.

This time, we are going to generate ranges so that the union of all of them creates only one range, as we just described.

Generating these ranges can be done with the following method.

def getBestCaseData: scala.IndexedSeq[scala.Range] =
  for (x <- scala.Range(1, 1000) if x % 10 == 0)
    yield scala.Range(x, x + 10)
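We can sanity-check that these ranges really cover one contiguous interval with no holes. This is a quick plain-Scala check of the generated data, independent of Diet itself:

```scala
// Generate the same ranges as above and flatten them into individual values
val ranges = for (x <- scala.Range(1, 1000) if x % 10 == 0)
  yield scala.Range(x, x + 10)

val union = ranges.flatten.sorted
// A single gap-free interval satisfies size == last - head + 1
println(s"[${union.head}, ${union.last}], size = ${union.size}")
// prints [10, 999], size = 990
```

Since the union is one hole-free interval, every inserted range merges into the single existing node of the tree.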

We then run the same command as before to run the benchmarks:

sbt bench/jmh:run
Benchmark                         Mode  Cnt      Score     Error  Units
DietAddBench.dogsDietAddRange    thrpt  200  83327.184 ± 1380.656  ops/s
DietAddBench.scalazDievAddRange  thrpt  200  39994.062 ±  577.651  ops/s

This time we beat Scalaz, yet the difference is not that big.

Search

Let’s now see how search behaves, and also check whether our previous theory was correct. We are going to add 1000 numbers and then randomly search for them.
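Stripped of the JMH annotations and the Diet/Diev specifics, the access pattern of this benchmark is just a loop of randomly ordered lookups. A plain-Scala sketch of that pattern, using a Set as a stand-in structure (the real benchmark would call the contains operation of Diet and Diev instead):

```scala
import scala.util.Random

// Insert 1000 numbers into a structure, then look every one of them up
// in a random order -- the same access pattern the search benchmark uses.
val items = (0 until 1000).toVector
val structure: Set[Int] = items.toSet // stand-in for Diet/Diev

val lookups = Random.shuffle(items)
val found = lookups.count(structure.contains)
println(found) // 1000: every inserted element is found
```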

We run one more time:

sbt bench/jmh:run

The results come back after a 13 min wait.

Benchmark                       Mode  Cnt     Score   Error  Units
DietAddBench.dogsDietSearch    thrpt  200  1514.527 ± 27.925  ops/s
DietAddBench.scalazDievSearch  thrpt  200   375.785 ±  1.287  ops/s

Scalaz loses again. Our advantage may come from the lack of balancing in our Diet, a problem I was planning to look at quite soon.

Conclusions

I had never considered benchmarking as a way to compare my code against other existing libraries. Most of the time, my main focus has been on functionality and algorithmic complexity. On the other hand, benchmarks can help us recognize pieces of our code that require improvement, identify bottlenecks, and establish metrics for our code.

The JMH tool is quite extensive, and learning it all could be overwhelming. Start with something simple and build more complex tests from there.

JMH might help you avoid premature optimizations, as happened with the balancing feature of our Diet. Balancing Diet seems logical, but after benchmarking it, we see that even without balancing it outruns Scalaz’s Diev. Please, be careful with upfront optimizations.

Incorporate benchmarking as part of the development process so it grows with your code and your tests. Don’t wait until the end to find that you have a performance problem. Fail fast so you can fix it faster.