Rock, Paper, Scissors
September 10, 2018
I am trying to find a good solution for serializing my in-memory cache, to give it the persistent state Android requires in the event of forced stops, crashes, and low-memory events. I use a frugal data structure in memory and persist it to my “disk cache”, reconstituting it as needed.
For this I wanted the leanest, fastest solution I could find. I decided to try five different ways to get my cache back into memory: SharedPreferences, BinaryPreferences, Paper, FlatBuffers, and Room. I concede that raw SQLite would be more efficient than Room, but Room is part of the modern Android stack often enough, and performs well enough, that I don’t need to resort to the boilerplate. My cache consists of an int and a primitive int array.
Here is a brief summary of each solution to start:
1. SharedPreferences: built in, and keeps an in-memory cache itself. It is synchronized, and quite usable with the commit() method and a singleton, injected, named instance.
2. BinaryPreferences: a library built on top of SharedPreferences to be faster and lighter. It has nice extras like encryption and inter-process support.
3. Paper: a library that uses random file access on flash storage (very fast) and saves each object for a given key in a separate file. Every write/read operation writes/reads the whole file. The Kryo library handles object-graph serialization and provides data-compatibility support.
4. FlatBuffers: Google’s fast and memory-efficient serialization solution. It is a great alternative to JSON but requires me to serialize its buffer bytes manually. I used internal storage in this example because it is available and private.
5. Room: an ORM layer on top of SQLite that enforces off-main-thread access.
While only Room enforces the use of a background thread, Paper recommends it, and it is of course the right way to perform non-UI work. To keep the comparison fair and isolate each case, I used the same threading for all five and triggered each run from an onClick event, eliminating GC pauses or any other spurious or meaningful interaction with the OS that might interfere between runs. After each load I assert that the contents are correct.
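The per-run measurement can be sketched roughly like this. Everything here is a stand-in, not the actual benchmark code: loadCache represents whichever of the five implementations is under test, and Cache mirrors the int-plus-int-array shape described above.

```java
// Hypothetical timing harness: times one load, then verifies the contents,
// mirroring the "assert after each load" methodology from the article.
import java.util.Arrays;
import java.util.function.Supplier;

public class LoadBenchmark {
    // The cache shape from the article: an int plus a primitive int array.
    static class Cache {
        final int version;
        final int[] ids;
        Cache(int version, int[] ids) { this.version = version; this.ids = ids; }
    }

    static long timeLoadMs(Supplier<Cache> loadCache, Cache expected) {
        long start = System.nanoTime();
        Cache loaded = loadCache.get();                  // one of the five implementations
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Assert after each load that the contents are correct.
        if (loaded.version != expected.version || !Arrays.equals(loaded.ids, expected.ids)) {
            throw new AssertionError("cache contents wrong after load");
        }
        return elapsedMs;
    }

    public static void main(String[] args) {
        Cache expected = new Cache(3, new int[]{1, 2, 3});
        long ms = timeLoadMs(() -> new Cache(3, new int[]{1, 2, 3}), expected);
        System.out.println("load ok in " + ms + " ms");
    }
}
```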
I was surprised to find this outcome, at first:
Further experiments included running from onCreate sequentially, taking the average of 5–60 runs, triggering onClick programmatically in a loop, and running each one manually from its button.
I initially used Java and ExecutorService, then refactored to Kotlin and coroutines. After this change, the crazy one-off spikes I had been seeing disappeared.
What I discovered was that each of these must have some in-memory caching mechanism, because the first run of any of them could take far longer than its average time after a warm-up save. After about three button clicks, they all settle at around 5 ms.
My solution, if you recall, hinges on that first run: I already have a memory cache and am using persistence only to back it, so I needed the best performer the first time.
To my delight there was a clear winner. Over the course of my runs I saw every implementation but one exceed 100 ms, especially on the first click. Paper was the only cache that never got that slow.
In addition, Paper took just four lines to get up and running. While the other solutions are brilliant, some required me either to convert my primitive array to a String or to write to internal storage myself. FlatBuffers in particular had a small learning curve for its IDL and for building and using the flatc compiler.
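For the solutions that only store primitives and Strings, the primitive-array conversion mentioned above looks roughly like this. This is a minimal sketch, not the code from the project; the comma delimiter and the class name are arbitrary choices.

```java
// Sketch: round-tripping a primitive int array through a String,
// as needed for a String-only store such as SharedPreferences.
import java.util.Arrays;
import java.util.stream.Collectors;

public class IntArrayCodec {
    static String encode(int[] values) {
        return Arrays.stream(values)
                .mapToObj(Integer::toString)
                .collect(Collectors.joining(","));
    }

    static int[] decode(String encoded) {
        if (encoded.isEmpty()) return new int[0];
        return Arrays.stream(encoded.split(","))
                .mapToInt(Integer::parseInt)
                .toArray();
    }

    public static void main(String[] args) {
        int[] original = {7, 42, -3};
        String stored = encode(original);                            // "7,42,-3"
        System.out.println(Arrays.equals(original, decode(stored))); // true
    }
}
```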
I think any of these is a good choice in general; you can see that they are equivalent once they are past the initial hurdle:
The source code is here.
*Note: the code does not run below API 26 (Android O) because I made use of java.nio methods. I could, of course, have made it backward compatible.
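For reference, the kind of java.nio round-trip involved in writing the cache to a file yourself could be sketched like this. The class and field names are illustrative only; on Android, the java.nio.file APIs used here are what require API 26.

```java
// Sketch: persisting an int and a primitive int array with java.nio.
// On Android, java.nio.file.Files requires API level 26 (O).
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Path;

public class NioCacheFile {
    static void save(Path file, int version, int[] ids) throws IOException {
        // 4 bytes for the version, 4 for the length, then 4 per element.
        ByteBuffer buf = ByteBuffer.allocate(8 + 4 * ids.length);
        buf.putInt(version).putInt(ids.length);
        for (int id : ids) buf.putInt(id);
        Files.write(file, buf.array());
    }

    static int[] load(Path file) throws IOException {
        ByteBuffer buf = ByteBuffer.wrap(Files.readAllBytes(file));
        int version = buf.getInt();          // read back in the same order as written
        int[] ids = new int[buf.getInt()];
        for (int i = 0; i < ids.length; i++) ids[i] = buf.getInt();
        return ids;                          // version would be checked as needed
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("cache", ".bin");
        save(file, 3, new int[]{1, 2, 3});
        System.out.println(java.util.Arrays.toString(load(file))); // [1, 2, 3]
    }
}
```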