Scientist is a library for refactoring critical paths. GitHub created it when they moved the system that controls member access to repositories to a new architecture (https://githubengineering.com/scientist/). In complex systems, writing tests alone might not be enough. This is where Scientist can help: you build the new architecture next to the existing one and use Scientist to compare the results of both.
In essence, you create the new functionality next to the old one, wrap both in Scientist so that it runs the new and the old code, and deploy your application. This gives you an overview of the results and of whether the new functionality produces the same output as the old one.
We came across a similar situation when we wanted to change the data model in our application. We wanted to move to a new way of storing information in PostgreSQL, and doing so would touch most of our core functionality. We decided to use Scientist to test the results of retrieving data from our new data model against the logic for retrieving data from our old data model.
In the next part I’ll show how Scientist can be used with a simple example.
I created a small project in Java that mimics a database: it retrieves data and returns a Product.
A product is retrieved by its ID, and we want to retrieve products through two code paths (V1 and V2).
Since we’re using Java we can use the Java wrapper of Scientist — Scientist4J.
We setup Scientist by declaring an Experiment (sounds really cool, I know).
Experiment<Product> experiment = new Experiment<>("product", true);
We now declare that we want to run an Experiment whose return value is of type Product. What we have to do next is add the two code paths that each return a Product.
Supplier<Product> currentCodePath = () -> productRest.getProductsV1(1);
Supplier<Product> newCodePath = () -> productRest.getProductsV2(1);
Product experimentResult = experiment.runAsync(currentCodePath, newCodePath);
I declared the Suppliers like this so I could pass a value to the methods.
The return value of this call is the result of the currentCodePath, which is also the result we can send back to the calling code.
Since the currentCodePath is still our existing production code, its result is what ends up in experimentResult; the result of the newCodePath is “thrown away” after metrics about it have been recorded.
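To make that behavior concrete, here is a simplified sketch of the idea: run both paths, compare the results, count a mismatch, and always hand the control’s result back to the caller. This is only an illustration of the concept, not Scientist4J’s actual implementation (which publishes Dropwizard metrics rather than keeping a counter):

```java
import java.util.Objects;
import java.util.function.Supplier;

// Simplified sketch of the idea behind Scientist (NOT Scientist4J's real code):
// run control and candidate, compare, record a mismatch, return the control.
public class MiniExperiment<T> {
    private int mismatches = 0;

    public T run(Supplier<T> control, Supplier<T> candidate) {
        T controlResult = control.get();
        try {
            T candidateResult = candidate.get();
            if (!Objects.equals(controlResult, candidateResult)) {
                mismatches++; // the real library publishes a mismatch metric here
            }
        } catch (Exception e) {
            mismatches++; // a failing candidate must never break the caller
        }
        return controlResult; // the caller only ever sees the control's result
    }

    public int getMismatches() {
        return mismatches;
    }
}
```

The key design point is visible in the last line: no matter what the candidate does, the caller receives the control’s result, so the experiment is safe to run in production.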
All we care about at this stage are the metrics. They give us insight into the results of the new code path, so we can check whether it works according to our expectations (meaning the new code path gives us exactly the same result as the existing one). I created a unit test where I run these separate code paths 100 times.
This gives us the following Metrics:
At the top we see the Mismatches: the values that aren’t equal after running both code paths. I added some cases where the return values differ, to give a good overview of what happens on a mismatch.
Below that we see the Counters, which measure the number of calls, the number of exceptions, and the number of mismatches.
The Timers give us insight into how long each path takes to run: one timer for the candidate value (the new code path) and one for the control value (the current code path). This gives a clear overview of how much time the calls take.
The full unit test, which mimics the behavior of a client calling an endpoint 100 times:
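The actual test lives in the linked example project; a minimal self-contained sketch of the same idea might look like the following, with plain stand-in methods instead of the project’s real productRest and a direct comparison in place of experiment.runAsync (which would record these mismatches as metrics instead):

```java
import java.util.function.Supplier;

// Hypothetical stand-in for the test in the example project: call both code
// paths 100 times with ids 1..100 and count how often they disagree.
public class ExperimentLoop {

    // Stand-ins for productRest.getProductsV1/V2 (the real versions return a
    // Product; plain Strings keep this sketch self-contained).
    static String getProductV1(int id) { return "product-" + id; }
    static String getProductV2(int id) { return id == 2 ? "different" : "product-" + id; }

    public static int countMismatches() {
        int mismatches = 0;
        for (int id = 1; id <= 100; id++) {
            final int i = id;
            Supplier<String> control = () -> getProductV1(i);
            Supplier<String> candidate = () -> getProductV2(i);
            // In the real test this pair would be passed to the Experiment.
            if (!control.get().equals(candidate.get())) {
                mismatches++;
            }
        }
        return mismatches;
    }
}
```

With V2 diverging only for ID 2, a loop like this reports exactly one mismatch out of 100 calls, which matches what the counters in the metrics show.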
Here’s an overview of the “database” code that returns the values. As you can see, in V2 I return a different result when the ID is 2, to show how creating a new code path might introduce a regression.
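The repository code itself is in the linked project; a hypothetical sketch of such a mocked “database” class, with V2 deliberately diverging for ID 2, could look like this (class and method names are assumptions for illustration):

```java
// Hypothetical sketch of the mocked "database": V1 and V2 agree for every id
// except 2, where V2 deliberately returns a different product to simulate a
// regression introduced by the new code path.
public class ProductRest {

    public record Product(int id, String name) {}

    public Product getProductsV1(int id) {
        return new Product(id, "Product " + id);
    }

    public Product getProductsV2(int id) {
        if (id == 2) {
            return new Product(id, "Unexpected product"); // injected mismatch
        }
        return new Product(id, "Product " + id);
    }
}
```

Because Product is a record, equality is value-based, which is exactly what the experiment’s comparison relies on when deciding whether the two paths match.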
For the full example project you can visit: https://github.com/MBlokhuijzen/ScientistExample