Is Spring Cache abstraction fast enough for you?

Nikita Salnikov-Tarnovski
2 min read · Jan 29, 2019


Recently I have spent most of my time tuning the performance of one particular Plumbr codebase. It is quite a tight loop: reading data from Kafka, performing several computations and then writing data to a file. After several rounds of optimisation, an unexpected code path started appearing in the profiler output.

The method in question was unexpected in the profile because it had already been optimised :) Originally it went to the database asking for some relatively stable data. As that was clearly suboptimal, the method invocation had been wrapped in a cache using Spring Cache. So it was a surprise to see it contributing a substantial portion of the tight loop I was optimising. I started digging, and this blog post was born.

Let us take a look at a minimal example: https://github.com/iNikem/spring-cache-jmh. We have a trivial method there:

@Cacheable("time")
public long annotationBased(String dummy) {
    return System.currentTimeMillis();
}

It does some work, which here is just asking for the current time, and it is annotated with Spring's @Cacheable annotation. This makes Spring wrap the method in a proxy and cache the result of each invocation, using the method's input parameters as the cache key. A very straightforward and convenient optimisation: you see a slow method, slam one annotation on it, configure your cache provider (ehcache.xml in my case) and you can pat yourself on the back.
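For context, here is a minimal sketch of the wiring that makes @Cacheable work. The benchmark project configures Ehcache through ehcache.xml; in this sketch a simple in-memory ConcurrentMapCacheManager stands in, just to show the moving parts:

import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.concurrent.ConcurrentMapCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching // tells Spring to look for @Cacheable methods and proxy them
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        // one named cache, "time", matching @Cacheable("time") above
        return new ConcurrentMapCacheManager("time");
    }
}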

Compare it with the same end result, but using the cache manually:

public long manual(String dummy) {
    // look up the cached value for this key first
    Cache.ValueWrapper valueWrapper = cache.get(dummy);
    long result;
    if (valueWrapper == null) {
        // cache miss: do the actual work and store the result
        result = System.currentTimeMillis();
        cache.put(dummy, result);
    } else {
        // cache hit: unwrap the stored value
        result = (long) valueWrapper.get();
    }
    return result;
}
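The cache field used here is a plain org.springframework.cache.Cache. A minimal sketch of how it could be obtained, assuming the same CacheManager as above (the holder class name is hypothetical):

import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;

// hypothetical holder for the cache reference used by manual() above
public class TimeService {

    private final Cache cache;

    public TimeService(CacheManager cacheManager) {
        // look the named cache up once and reuse it on every call
        this.cache = cacheManager.getCache("time");
    }
}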

There is much more going on: the actual useful work is almost lost in the accidental complexity of infrastructure-related boilerplate. Why would anyone prefer manual work to Spring's magic? The answer lies in this JMH benchmark and its results:

Benchmark                       Mode  Cnt    Score    Error  Units
CacheBenchmark.annotationBased  avgt    5  245.960 ± 27.749  ns/op
CacheBenchmark.manual           avgt    5   16.696 ±  0.496  ns/op
CacheBenchmark.nocache          avgt    5   44.586 ±  9.091  ns/op

As you can see, a custom solution tailored to the specific problem runs about 15 times faster than the general-purpose one. That does not mean "Spring is slow!"; it means "Spring does much more work to support a wide range of general-purpose use cases". Note that we are still talking about a few hundred nanoseconds, which is negligible in most scenarios. But in the rare cases when your actual profiling data shows that the Spring Cache abstraction adds too much overhead, don't be afraid to get your hands dirty and roll out a custom solution, specifically tailored to your needs.
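For reference, the benchmark methods themselves have roughly this shape (a sketch only; the real benchmark, including the Spring context setup, lives in the repository linked above, and TimeService is the hypothetical bean holding the two methods shown earlier):

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class CacheBenchmark {

    // hypothetical bean exposing annotationBased() and manual()
    private TimeService service;

    @Setup
    public void setup() {
        // in the real benchmark the Spring application context is started here
        // and the proxied bean is fetched from it
    }

    @Benchmark
    public long annotationBased() {
        return service.annotationBased("dummy");
    }

    @Benchmark
    public long manual() {
        return service.manual("dummy");
    }
}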

And one last note, about the nocache line above. In this particular case the actual work our method does is so small that adding caching actually slows it down. A perfect, albeit synthetic, example of premature optimisation: don't optimise anything until actual measurements prove the need for it. And then don't forget to measure again after the optimisation :)
