Hacking Redis To Save Money

Throwing redis at all my problems

paydro
7 min read · Jan 4, 2016

There’s a solution to a scaling problem at 8tracks that I think is unique and worth a story. While it’s being phased out in favor of something better (features are added, product wants something else, yada yada yada), it was wonderful given the constraints of the time. It deals with redis (v2.6 and v2.8), set intersections, masters, slaves, and expiration. Awww yeah. Let’s get started.

Meet 8tracks’ Explore Page:

Tags at the top, playlists at the bottom — http://8tracks.com/explore

The Explore Page is how our users find what to listen to. It has several main components:

  1. The search bar at the top. Internally we call it autocomplete.
  2. The tag cloud underneath the search bar. Internally we call it explore, not to be confused with the capitalized Explore Page.
  3. The list of playlists or results after the tag cloud. Internally we call this browse.

The feature I want to discuss is browse. When a user searches with autocomplete or clicks on tags in explore, the browse section of the page is updated with playlists matching the tags selected.

Results for selecting “The Black Keys” and “blues”

Then the user selects a playlist and rocks. the. eff. out. Yeah!

Well, back when I started in Sept 2011, this was implemented using solr (which replaced the previous solution using sphinx). Basically, we used search engines to run faceted searches and displayed the results.

At the time, scaling out the solr server wasn’t fun. To be honest, I didn’t know how to scale it out. As the service grew, solr wasn’t cutting it for us: the server couldn’t keep up with our queries, and even with the large amount of solr documentation, we couldn’t figure out why. We could upgrade the server with more memory and CPU, but that would only work for a limited time. We needed another solution, one that wasn’t a black box, so we could actually debug it. We’re a ruby shop and no one on the team knew java, so debugging solr was extremely painful.

Luckily for 8tracks, I was hootin’ and hollerin’ about redis. And, with redis sets, implementing faceted searches was cake.

All we needed to do was load up a redis server with sets named after our tags (e.g., blues, The Black Keys, rock, etc.) and fill them with playlist ids. When the user selects a tag or two, run set intersections to find the resulting list of playlists. Boom! Faceted search! No problem. (If you need implementation details a google search for “redis faceted search” works quite well.)
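Here’s a minimal sketch of the idea with the ruby redis gem (the key names and playlist ids are made up for illustration, not our actual schema):

```ruby
require "redis"

redis = Redis.new

# Index a few playlists: one set per tag, members are playlist ids.
redis.sadd("tag:blues", [1, 2, 3])
redis.sadd("tag:the-black-keys", [2, 3, 7])

# Faceted search: intersect the selected tags and store the result
# under its own key so we can read it back (and paginate) later.
redis.sinterstore("browse:blues+the-black-keys", "tag:blues", "tag:the-black-keys")

redis.smembers("browse:blues+the-black-keys") # => ["2", "3"]
```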

But programming the feature was the easy part. It’s managing the data within the context of redis that’s hard. Let’s break the data down into two types:

  1. Indexed data: data imported from the primary database (MySQL in our case). We import it with a simple script that selects all playlists and their respective tags and inserts them into redis sets. The keys for these sets are based on the tags associated with each playlist. For instance, given playlist_id 1 with the tags chill, instrumental, and beats, the import creates three keys (one per tag) and adds playlist_id 1 to each of those sets. This continues until we’ve indexed all of the playlists on 8tracks. Updates happen whenever a DJ on the site updates their playlist. As far as redis is concerned, this data is long-lived. That makes more sense when you look at our second data type.
  2. Intersection data: data produced by redis set intersections. This is the faceted search implementation I talked about earlier. It doesn’t need to be kept around forever and behaves more like cached data. At 8tracks we keep it around for an hour, which was deemed enough time for the site to still feel “fresh” and “new” while keeping the number of intersections on the server down to a minimum. Implementation-wise, the keys have an expiration. We don’t mind if intersection data is missing; intersections are plenty fast, so we can rerun one at will. We do need to keep it cached for a while, though, otherwise users paginating through results will see weird results whenever the underlying data changes. (There’s a rough sketch of both data types right after this list.)
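To make that concrete, here’s a rough sketch of both data types using the ruby redis gem. The Playlist model, the key names, and the browse_key helper are all illustrative, not our production code:

```ruby
require "redis"

redis = Redis.new
INTERSECTION_TTL = 3600 # one hour: long enough to feel fresh, short enough to stay small

# 1. Indexed data: walk the primary database and (re)build the tag sets.
#    Playlist here stands in for an ActiveRecord-style model.
Playlist.find_each do |playlist|
  playlist.tags.each do |tag|
    redis.sadd("tag:#{tag.name}", playlist.id)
  end
end

# 2. Intersection data: compute on demand and cache for an hour.
#    TTL is negative when the key is missing (or has no expiry yet);
#    intersection keys always get a TTL, so negative means "rebuild".
def browse_key(redis, tag_names)
  key = "browse:#{tag_names.sort.join('+')}"
  if redis.ttl(key) < 0
    redis.sinterstore(key, *tag_names.map { |t| "tag:#{t}" })
    redis.expire(key, INTERSECTION_TTL)
  end
  key
end

# Pagination reads from the cached key until it expires an hour later.
key = browse_key(redis, ["chill", "instrumental"])
page_one = redis.sort(key, limit: [0, 20], order: "desc")
```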

With this in mind we can discuss the setup for redis. Due to the indexed data and how long it took to move data from the primary database to redis, the first thing I thought of was a single redis box with the maxmemory config set to roughly 60–65% of the server’s memory. There were two reasons for this:

  1. Indexed data took long enough to import that having a backup made recovery much faster. If the server dies, we can boot another one, load the backup to return the service to normal, and then run the import script again to make sure the data is fresh.
  2. I set maxmemory to 60–65% of server memory because of how redis snapshots work. Redis forks the main process and writes the data to disk in the child process, relying on linux’s copy-on-write optimization: the child shares memory pages with the parent, but every page the parent modifies while the snapshot runs has to be copied. Keeping maxmemory at 60–65% leaves the snapshot enough headroom to finish writing to disk while the server handles write requests from the application, without blowing past the memory limits of the server. We don’t want the Out of Memory Killer to come out. This matters because we send a significant amount of writes to the server, which means more memory used while redis is snapshotting. Users can select up to five tags and we will try to return results, and we have ~1 million different tags in our system. Imagine what your average 18–24 year old (our most popular user segment) will do with that! (A sample config follows this list.)
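For the curious, the relevant redis.conf bits for that single box looked roughly like this (the exact numbers are illustrative for a ~15GB machine, not copied from our config):

```
# maxmemory at roughly 60-65% of system RAM, leaving headroom for the BGSAVE fork
maxmemory 9gb

# snapshot to disk so a dead box can be restored quickly from the last dump
save 900 1
save 300 10
save 60 10000
```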

I initially set up a server with only 15GB of memory (i.e., ~9.75GB for redis). Measurements of application performance with statsd and graphite indicated the most popular intersections (indie + chill, chill + electronic, sleep + study, etc.) were being re-intersected far more often than the expected once an hour; the intersection keys were disappearing well before their one-hour TTL because the box kept running out of memory.

So I booted another box. A bigger box. A more costly box. And while that box burned cash to keep us afloat, I sat down to re-architect the problem.

Returning to the data again, I realized I didn’t have to keep both data types on the same machine if I used a master and slaves. Instead, I created a master redis server that held only the indexed data. The application sent all indexed data to the master machine. This machine was configured like the redis setup I mentioned above: snapshotting enabled and maxmemory at 60–65% of server memory.

I then set up a second server with much more memory and made it a slave of the master. I made sure to set slave-read-only to no so that I could write to this machine. See what I’m getting at? Now I could intersect data on this machine to my heart’s content! Since it was a slave of the master it had all of the indexed data, but since I didn’t care about backing this server up, I could use nearly all of the machine’s memory and disable redis snapshots.
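In redis.conf terms the split looked roughly like this (the IP address is a placeholder):

```
# master (indexed data only): snapshots on, maxmemory at ~60-65% of RAM
maxmemory 9gb
save 900 1

# slave (intersection data): replicate the indexed data from the master,
# accept local writes for intersections, and skip snapshots entirely
slaveof 10.0.0.1 6379
slave-read-only no
save ""
```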

Now I removed the costly, larger, all-in-one redis server and replaced it with my newfangled master-slave servers.

SUCCESS!!?…

Almost. The slave server eventually returned stale data. Browsing the latest playlists on 8tracks turned up week-old playlists. That’s weird…

Also, the slave was completely full on memory, and it filled up way too fast for my liking.

I booted that costly, larger, all-in-one redis server again and went back to the drawing board.

I dug around the code for a day or so and found out the intersection data keys weren’t being expired on the slave server! A slave doesn’t actively expire keys the way a master does; it waits for the master to send a DEL when a key expires there. Since our intersection keys only ever existed on the slave, the master never expired them, and they just piled up. Balls.

I was so close! This setup would have let us scale the intersection data horizontally if we needed to, and scale the much slower-growing indexed data vertically. At least for a good long while anyway, and long enough for me to focus on other areas of the service.

I was very close to giving up on this solution until the CTO suggested we hack redis. I was shocked! How could I touch such a beautiful piece of software?! How dare he suggest such blasphemy?!

Then I realized we were losing money by not hacking redis. Fair enough. So I did it with this and this.

Now the solution started falling into place. I set up one 15GB server as the master and another 30GB server as the slave. The master had snapshots and S3 backup scripts, while the slave behaved like a cache server. The master was configured with maxmemory-policy set to volatile-ttl to ensure that keys without TTLs weren’t removed (remember, this server held the indexed data, which has no expiration). The slave got the same maxmemory-policy: like the master, we didn’t want it to remove keys without TTLs (the indexed data), and we wanted every key with a TTL (the intersection data) to be available for eviction when the server ran out of memory.
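Roughly, the two configs looked like this (sizes and the IP are illustrative):

```
# master, 15GB box: indexed data only, snapshots + S3 backups
maxmemory 9gb
maxmemory-policy volatile-ttl   # keys without a TTL (indexed data) are never evicted
save 900 1

# slave, 30GB box: indexed data plus the intersection "cache"
slaveof 10.0.0.1 6379
slave-read-only no
maxmemory 28gb
maxmemory-policy volatile-ttl   # only keys with a TTL (intersections) are eviction candidates
save ""
```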

Indexed data and intersection data were now separated well enough that I could react quickly and keep a decent quality of service when things broke down.

As time went on and new features were developed, I added two more slaves to the mix. One was specifically designated as the replacement for when the master server went down. The other was a slave of the slave. That’s right, a slave of a slave. That way, if the slave where the intersection data lived died, we’d have a hot backup ready to swap in. (A rough config sketch for these two spares follows.)
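Nothing special was needed for the chained replication; the spares just point at whichever box they’re shadowing (IPs are placeholders):

```
# spare for the master: replicates the indexed data
slaveof 10.0.0.1 6379

# slave of the slave: spare for the intersection box
slaveof 10.0.0.2 6379
slave-read-only no
maxmemory-policy volatile-ttl
save ""
```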

We’ve since moved away from this setup in favor of other technologies because we wanted other features out of the system. We moved to elasticsearch for a while and then finally switched to algolia.com (we love algolia!!). But for a while, the seas were calm.

And now you can use the same redis fork if you want to! Apply those commits (or check out those branches) and run the normal install (make, make install).

You’re welcome to figure out how to make it work in redis v3.0+. I don’t think it would be difficult as the redis codebase is very easy to navigate.
