It was released under the GitHub account of the company I work for, Riskified, as its core was developed as part of a task I was given there. I took it on as a passion project to generalize it well enough to be useful to others and release it as a gem; I believe it might serve other Ruby developers out there looking to optimize the runtime of their code.
In this post, I will explain what Timeasure does, what parts it is made of and how I designed it. Along the way I will analyze some bits of the source code and shed some light on a few interesting metaprogramming techniques that are pretty awesome to be familiar with.
I also built a simple website on Heroku to describe and demonstrate Timeasure’s abilities and interface — you may have a look here (don’t miss the live demo part!). This post, however, will tend to the slightly more technical aspects of the project.
(Please note: when I refer to Timeasure’s source code in this post, I will do so via a link on the word “code” in parentheses.)
A Need Is Born
At Riskified, we deliver instant responses to our customers at high throughput. We need to be able to ensure that X% of our transactions finish within a certain time frame. Knowing whether they do is relatively easy: working with NewRelic, one can get quite an accurate picture of the runtime of specific transactions.
The problems begin when you are not happy with the runtime you are getting: that means you need to roll up your sleeves and optimize your code. But then, how do you know where to put your efforts? For that, NewRelic’s APM supplies only partial help, such as telling you how much time calls to external APIs took within a certain transaction; alas, often your own code is the bottleneck.
NewRelic does provide its MethodTracer, but for some needs this solution can feel a bit rigid: it is highly coupled to NewRelic and does not allow flexibility in handling the tracked information. Furthermore, it requires declaring the tracked methods at the bottom of the class in which they reside, which felt strange to me. All in all, it makes sense that tracking methods’ runtime should be something Ruby developers can do easily, and at its core this need does not have much to do with NewRelic specifically. So we decided to implement this ability on our own.
At this point I had a good idea of what I wanted to achieve: an easy-to-use interface on classes and modules that would capture the runtime of each call to a tracked method that I declare, within a scoped transaction. At the end of the transaction, I want to be able to grab the accumulated data easily and send it to some analytics tool. In Riskified’s case, we chose to send it to NewRelic’s Insights, since it is an excellent real-time analytics tool for custom data; its graphs are based on easily typed queries and they look quite beautiful. But this part is entirely configurable, and if you fancy sending this data somewhere else, be it your logger, database or some other analytics service, that’s up to you.
Timeasure is here to help you with gathering measurements regarding those “main offenders”: those methods that make your code run slower than it could have, often not for a very good reason.
Profiling in Production & The ruby-prof Gem
It is worth mentioning that I believe there is great benefit in monitoring code runtime in production, rather than in development or staging environments. Real data and real-life situations are often unpredictable; having constant monitoring on certain parts of a system (and being able to throw in monitoring for specific changing areas easily!) can help greatly in detecting and analyzing problems and undesired behavior. For this reason, the highly-capable ruby-prof is not really suitable for the job as it seems to add significant runtime to the whole process. Also, I find that ruby-prof’s interface is console-bound: it supplies terrific printers to summarize a profiling session, but it does not give an easy and intuitive way to extract the collected data into a variable that can be sent to an external analytics tool. For these reasons, Timeasure was designed to work well in production environments.
Crafting The Interface
As an intro to what Timeasure grew out to be, let’s go back to the basics of measuring method runtime and take it from there. The easiest way to capture the runtime of a method is to record the time before and after the body of the method. We can then report it to some profiler (more on this in the next part of this mini-series!) and return the method’s return value. To demonstrate, for a given method in a class like this:
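Something along these lines (Foo, bar and the sleep call are illustrative stand-ins for real code):

```ruby
class Foo
  def bar
    sleep(0.01)        # stands in for real work whose runtime we care about
    :bar_return_value
  end
end
```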
We could carry it out like this:
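One possible (if clumsy) version, with a hypothetical Profiler module standing in for wherever you want to send measurements:

```ruby
# A hypothetical stand-in for the reporting target.
module Profiler
  def self.report(payload)
    (@reports ||= []) << payload
  end
end

class Foo
  def bar
    t1 = Time.now                    # capture start time
    return_value = begin
      sleep(0.01)                    # the original method body
      :bar_return_value
    end
    t2 = Time.now                    # capture end time
    Profiler.report(method: :bar, runtime: t2 - t1)
    return_value                     # hand back the original result
  end
end
```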
Obviously, this is quite intrusive — we added quite a lot of code for something that is essentially a side effect of the original method.
We could wrap the majority of it in a helper method and make the implementation a little more terse:
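A sketch of that helper-method approach (Profiler is again a hypothetical stand-in):

```ruby
# A hypothetical stand-in for the reporting target.
module Profiler
  def self.report(payload)
    (@reports ||= []) << payload
  end
end

class Foo
  def bar
    measure(:bar) do
      sleep(0.01)        # the original method body
      :bar_return_value
    end
  end

  private

  # Times the given block, reports the measurement,
  # and returns the block's return value.
  def measure(method_name)
    t1 = Time.now
    return_value = yield
    t2 = Time.now
    Profiler.report(method: method_name, runtime: t2 - t1)
    return_value
  end
end
```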
This is a good solution, but while it is indeed shorter, it still forces the body of the method to be aware of the fact that we wish to measure its runtime. Furthermore, it indents our code for a reason that is peripheral to the original method’s purpose, creating what might be called “unnecessary git noise”. It also adds the risk of breaking stuff: if the core functionality of the method didn’t change, perhaps it is better not to touch the code itself but to achieve the desired side effects through some framework. This calls for a slightly more sophisticated solution. Let’s reach it by imagining the “dream interface” — what if we could do this instead:
This interface feels great: it separates the concern of declaring which methods to track from the methods themselves, and the tracked methods remain untouched and unindented! Of course, this implementation requires some more powerful abilities from Timeasure. Let’s dive right in.
The Powers of Module#prepend
In order to make this pleasing interface available, we need to “catch” calls to the tracked methods before the actual method gets called. As Paolo Perrotta points out in Metaprogramming Ruby 2, this is a known challenge in Ruby — it’s called method wrapping. In the past it was achieved by a technique called alias_method_chain, which was sharpened up by the Rails development team over several years. Perrotta dedicates a whole chapter to this topic (“The Rise and Fall of alias_method_chain”), and mentions that there is something clumsy and fragile about this technique. Luckily, since Ruby 2.0 there exists Module#prepend to solve these method wrapping problems.
Module#prepend offers a simple-yet-innovative intervention in an object’s ancestors chain: it prepends a module so that it comes first in the chain. To understand this, consider the normal ancestors chain for our Foo class:
2.5.0 :022 > Foo.ancestors
=> [Foo, Object, Kernel, BasicObject]
Foo itself is first, and so it will be the first to try to supply a response to any incoming message. But if we prepend some module named Baz to it, we could achieve this:
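For example (bar’s body and return value are illustrative):

```ruby
class Foo
  def bar
    :bar_return_value
  end
end

module Baz
  def bar
    puts "I am wrapping bar before it actually happens!"
    return_value = super  # climbs the ancestors chain to the original Foo#bar
    puts "I am wrapping bar after it happens!"
    return_value          # keep the original return value intact
  end
end

Foo.prepend(Baz)
```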
Foo.new.bar now gives this output:
2.5.0 :042 > Foo.new.bar
I am wrapping bar before it actually happens!
I am wrapping bar after it happens!
We can see that the puts commands from Baz#bar both occurred, but that the return value of the call was the original return value of Foo#bar. Have a look at Foo‘s ancestors chain now:
2.5.0 :018 > Foo.ancestors
=> [Baz, Foo, Object, Kernel, BasicObject]
What was obvious before is not so obvious now: Foo is no longer the first place for the method lookup to begin for method calls upon instances of Foo; Baz is. And when Baz#bar calls super, the original Foo is right there with the original bar method. Quite impressive on Ruby’s side, you’d have to admit.
This actually resonates quite strongly with the problem of how to achieve the desired interface for Timeasure: instead of the puts commands in Baz#bar, we could capture t1 and t2 (that is, the time before and after the actual method execution), then report both to the profiler and eventually return the method’s return value transparently to the caller. The last remaining bit for this to actually work is being able to define methods on the prepended module dynamically, through class macros.
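A sketch of that idea with plain Module#prepend (Profiler is a hypothetical stand-in for your reporting target; bar’s body is illustrative):

```ruby
class Foo
  def bar
    :bar_return_value
  end
end

# A hypothetical stand-in for the reporting target.
module Profiler
  def self.report(payload)
    (@reports ||= []) << payload
  end
end

module Baz
  def bar
    t1 = Time.now
    return_value = super            # runs the original Foo#bar
    t2 = Time.now
    Profiler.report(method: :bar, runtime: t2 - t1)
    return_value                    # transparent to the caller
  end
end

Foo.prepend(Baz)
```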
Interceptors: Some External Inspiration
At this point I toyed around with how to make the class macros declaration actually work while casting the minimal hassle upon the using developer. I looked around a bit and found this great blog post by Hashrocket that is concerned with a very similar problem to the one I was trying to solve.
If you choose to skip this recommended post, let me emphasize the key points of Hashrocket’s solution for you. Each class or module could include Timeasure, which is a module. Ruby supplies Module#included as a hook for defining actions to run after a module is included in a class. In our case, Timeasure.included would do two things, essentially:
- It would equip the including class with the class macros (e.g. tracked_class_methods).
- It would prepend an interceptor module to the including class, to catch the calls to instances of the including class first, just as was demonstrated in the previous section.
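These two steps can be sketched roughly like so (simplified; the actual gem differs in details such as how the interceptor modules are named and stored):

```ruby
module Timeasure
  module ClassMethods
    # The class macros would be defined here.
  end

  def self.included(base_class)
    # Goal 1: the include-and-extend idiom makes ClassMethods'
    # methods available at the class level as class macros.
    base_class.extend(ClassMethods)

    # Goal 2: prepend fresh interceptor modules so that calls to
    # tracked methods reach them before the class itself.
    base_class.const_set(:InstanceInterceptor, Module.new)
    base_class.const_set(:ClassInterceptor, Module.new)
    base_class.prepend(base_class::InstanceInterceptor)
    base_class.singleton_class.prepend(base_class::ClassInterceptor)
  end
end

class Foo
  include Timeasure
end
```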
To achieve the first goal, we employ the Rails idiom of include-and-extend; see it here in Timeasure’s source code: the base_class (the including class or module) gets ClassMethods extended onto it, thus making ClassMethods’ methods available at the class level. These would be our class macros.
For the second goal, we prepare two interceptors (code), one for instance methods and another for class methods, and prepend these to the including class and to its singleton class, respectively. (For more about class methods and singleton classes, see my previous story.)
Then, in a given class, we could include Timeasure and use tracked_class_methods as demonstrated earlier:
With the right implementation, this would allow Foo‘s ancestors chain to look like this:
2.5.0 :034 > Foo.ancestors
=> [Timeasure::FooInstanceInterceptor, Foo, Timeasure, Object, Kernel, BasicObject]
And for the interceptor’s instance methods to include the tracked methods:
2.5.0 :036 > Foo.ancestors.first.instance_methods
When these class macros get called, they add new methods to the matching interceptor (remember, each class holds its own instance interceptor and class interceptor) with the same names as the declared tracked methods (code). The new methods that get dynamically added to the interceptors are not concerned with the content of the original methods at all; this is why it is possible to declare tracked methods at the top of the class. All that those new methods do is call Timeasure.measure, passing super in a code block (code). The name of the called method and its class are passed as arguments automatically, so the profiler can keep track of which measurement belongs to which method.
Timeasure.measure would keep track of the times and call yield, which here simply calls super, climbing up to the original method in the original class (code). The result of super is stored in a temporary variable, after which Timeasure.measure creates a Timeasure::Measurement object with the tracked data (code) and reports it to the profiler (code). Eventually, it returns the original method’s return value (code). That’s it — quite a journey! Methods run transparently, their code remains untouched, and time is tracked. Success.
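Putting all of these pieces together, here is a condensed, self-contained sketch of the mechanism. It is deliberately simplified (MiniTimeasure is an illustrative name; class-method tracking, the Measurement class and the reporting configuration are omitted) and is not the gem’s actual source:

```ruby
module MiniTimeasure
  module ClassMethods
    # The class macro: for each declared method name, define a
    # same-named method on the interceptor module. These methods
    # know nothing about the original methods' contents.
    def tracked_instance_methods(*method_names)
      interceptor = self::InstanceInterceptor
      klass_name  = name
      method_names.each do |method_name|
        interceptor.define_method(method_name) do |*args, &block|
          # Pass super in a block so measure can run the original method.
          MiniTimeasure.measure(method_name, klass_name) do
            super(*args, &block)
          end
        end
      end
    end
  end

  def self.included(base_class)
    base_class.extend(ClassMethods)                      # class macros
    base_class.const_set(:InstanceInterceptor, Module.new)
    base_class.prepend(base_class::InstanceInterceptor)  # catch calls first
  end

  def self.measure(method_name, klass_name)
    t1 = Time.now
    return_value = yield   # climbs up to the original method
    t2 = Time.now
    measurements << { klass: klass_name, method: method_name, runtime: t2 - t1 }
    return_value           # transparent to the caller
  end

  def self.measurements
    @measurements ||= []
  end
end

class Foo
  include MiniTimeasure
  tracked_instance_methods :bar

  def bar
    :bar_return_value  # untouched original method
  end
end
```

Foo.new.bar still returns its original value, while MiniTimeasure.measurements accumulates one entry per tracked call.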
Here is a top-level summary of Timeasure’s work:
Thank you for reading all the way down here! I hope you found it interesting and perhaps you’ve learned some new things along the way. In the next part of this mini-series I will expand a bit about Timeasure’s profiler and maybe describe its configuration mechanism and how to use the tracked data. If you have any questions, feel free to use the comments.
I encourage you to try out the Timeasure gem yourself! If you find any issues or have a feature request, you can open an issue on GitHub or contact me directly.
If you like my project, please give it a ⭐star⭐ on GitHub.
If you found this story interesting or useful, please support it by clapping it👏 .