3 Ways We Use Redis to Make Gaming Awesome

Saar Berkovich
Dec 17, 2018 · 7 min read

At Snipe, we use Redis as a Swiss Army knife. We first deployed Redis as an experiment to deal with growing caching needs but ended up keeping it to do a whole lot of other things. It has become a pillar in our software design and architecture.

Redis is versatile. It is arguably the most versatile data store out there (with Elasticsearch as an honorable mention), and on top of being multi-purpose, it is fast.

To demonstrate just how versatile Redis is, I decided to write about three different problems we solved effectively with Redis, each relying on a different, advanced capability of Redis: Lua Scripting, Modules, and Pub/Sub.


Lua scripting in Redis

Lua is a lightweight scripting language that was created to be embedded in other programs as a means of extensibility for third-party developers. If you’ve ever played World of Warcraft, or dabbled in modding for WoW (I am guilty of both), you’re probably already familiar with Lua. While helping gamers beat C’Thun is a noble cause, the goal of Lua scripting in Redis is to execute logic on data inside of Redis.

While you could easily grab the data from Redis and execute the logic in your own program, being able to execute logic directly on Redis provides several benefits. The key benefit, which enables the use case we are about to discuss, is atomicity: a Lua script runs as a single, uninterrupted operation, so no other command can touch the data mid-script.

Rate limiting in distributed systems using Redis

Our work at Snipe largely relies on gathering video game data from game developers through their official third-party developer APIs. Most of these APIs enforce a hard limit on the number of requests a third-party developer is allowed to make within a given window of time (known as a rate limit). According to the game developers, the rate limits are in place to protect their systems from DoS attacks (intentional or not).

For third-party developers, adhering to these rate limits can be a lot like fighting C’Thun in vanilla WoW (it sucks). It’s quite simple if you can afford to run a single, single-threaded application. However, in a distributed system, or at scale, things get messy. In our case, we have a fleet of worker programs tasked with collecting data for analysis. In addition, our user-facing fronts, such as the mobile app and smaller endeavors, need to be able to perform API requests on demand, for things like updating player profiles (stats) and assessing who players are up against in live games.

We chose to solve this problem by handling rate limiting at system scope, at a single atomic point, implemented on Redis (in WoW terms, we promoted a new raid officer).

We implemented a Lua script that keeps track of the rate limit by incrementing an expiring counter (which is really just a key in Redis), similar to the token bucket algorithm. The script is called each time we want to make a request to an API, and it returns the number of seconds to wait before the request can be made (0 if the request can be made immediately).
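A minimal sketch of such a script, under my own assumptions (the key names, window logic, and the pure-Python mirror are illustrative, not Snipe’s production code):

```python
import math

# Sketch of the rate-limiting Lua script. KEYS[1] is the counter key,
# ARGV[1] the maximum number of requests, ARGV[2] the window in milliseconds.
# Run via EVAL/EVALSHA, the whole script executes atomically inside Redis.
RATE_LIMIT_LUA = """
local current = redis.call('INCR', KEYS[1])
if current == 1 then
    -- first request in this window: start the expiry clock
    redis.call('PEXPIRE', KEYS[1], ARGV[2])
end
if current > tonumber(ARGV[1]) then
    -- over the limit: return whole seconds left in the current window
    return math.ceil(redis.call('PTTL', KEYS[1]) / 1000)
end
return 0
"""

def seconds_to_wait(counter: int, limit: int, ttl_ms: int) -> int:
    """Pure-Python mirror of the script's decision, handy for unit tests:
    0 if under the limit, otherwise the whole seconds left in the window."""
    if counter <= limit:
        return 0
    return math.ceil(ttl_ms / 1000)
```

Because INCR, PEXPIRE, and the limit check all happen inside one script, two workers can never both sneak in the "last" allowed request.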

The script uses the Redis key KEYS[1] as a counter to ensure that we make no more than ARGV[1] requests within a period of ARGV[2] milliseconds. On the client, all that is left is to call the script before each request and sleep for the returned number of seconds.
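The client-side logic reduces to a short loop. In this sketch, `acquire_slot` is a hypothetical stand-in for invoking the rate-limiting script (e.g. via EVALSHA):

```python
import time

def rate_limited_request(make_request, acquire_slot):
    """Ask the Redis-side limiter for a wait time, sleep if needed, then
    perform the actual API call. `acquire_slot` is assumed to run the Lua
    script and return the number of seconds to wait (0 means go ahead)."""
    while True:
        wait = acquire_slot()
        if wait <= 0:
            return make_request()
        time.sleep(wait)
```

Every worker in the fleet runs this same loop against the same counter key, so the limit holds system-wide without any coordination between workers.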

This is basically all we need to adhere to rate limits, though some partners, like Riot Games, require more specific treatment. We also cut our allowed rate limit by 3%-5% to compensate for things like imperfect clock synchronization, and to prevent concurrency control issues.


Redis as a JSON store

Redis Modules allow developers to extend Redis with custom commands and data structures. Unlike a Lua script, a module is implemented as a C shared library, allowing for greater flexibility and performance at the cost of development time. The ReJSON module, developed at Redis Labs, adds JSON as a data type in Redis, along with a set of commands to interact with JSON documents, or parts of them, using a JSONPath-like syntax.

While other data stores, like MongoDB and Elasticsearch, handle JSON more robustly than ReJSON, they carry an overhead (in both performance and maintenance) that in many cases is excessive.

In our case, League of Legends game data relies on a handful of big JSONs, known as static data. Static data is essential for in-game data processing in LoL, as it contains data on champion abilities, summoner spells, mappings of internal numeric item ids to human-friendly names, and more.

In addition, these JSONs change fairly often (as often as once every two weeks). Keeping this data in a centralized location means it can be accessed in its most up-to-date form from anywhere in our system. Keeping it in Redis with ReJSON means that a query from a remote server in the same VPC returns a response in 3ms or less (where other JSON stores would take at least 30ms).

In our case, a script runs a couple of times a day and checks whether a new LoL patch has been released (a new patch usually means modified static data). If so, the script fetches the new data from Riot’s API and updates ours by performing a ReJSON JSON.SET command. Whenever any program in our network needs this data, it retrieves it with a JSON.GET command. An example of something we need to do pretty much everywhere is converting a champion id to a champion name/key:
> JSON.GET static_championFull ["keys"]["150"]
"\"Gnar\""
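In application code, the two sides of this cycle amount to a pair of small helpers. This is a sketch assuming redis-py’s generic `execute_command`; the helper names are mine, while the key name and commands come from the example above:

```python
import json

def update_static_data(r, static_data: dict) -> None:
    """Store the whole static-data document at the root path."""
    r.execute_command("JSON.SET", "static_championFull", ".",
                      json.dumps(static_data))

def champion_name(r, champion_id: str) -> str:
    """Fetch only the requested sub-path instead of the whole document."""
    raw = r.execute_command("JSON.GET", "static_championFull",
                            f'["keys"]["{champion_id}"]')
    # JSON.GET returns the sub-document serialized as JSON.
    return json.loads(raw)
```

The point of the path argument is that only the one champion entry crosses the wire, not the whole multi-megabyte document.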

The official list of Redis Modules can be found here. Additional honorable mentions (also developed at Redis Labs) include Redis-ML and RedisGraph; we are currently experimenting with the latter in our real-time matchmaking service (still in development). There is also a module that does rate limiting (we found it a bit too sophisticated for our needs).


Redis Pub/Sub

On top of its data storage capabilities, Redis also has a built-in Publish/Subscribe engine, which enables it to act as a message broker.

In short, clients can SUBSCRIBE to a channel; when other clients PUBLISH a message to that channel, it is delivered to all of the subscribers. Clients can also PSUBSCRIBE to glob patterns, so when a publisher publishes a message to the hummus.mushrooms channel, it will be delivered to the subscribers of channels such as hummus.* and hummus.mushrooms.
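The delivery semantics can be mirrored by a toy in-process broker (purely illustrative; this is not how Redis itself is implemented, and Python’s `fnmatch` globs only approximate Redis patterns):

```python
import fnmatch
from collections import defaultdict

class ToyPubSub:
    """In-process mirror of Redis Pub/Sub delivery semantics."""

    def __init__(self):
        self.channels = defaultdict(list)   # exact-name subscribers
        self.patterns = defaultdict(list)   # glob-pattern subscribers

    def subscribe(self, channel, callback):
        self.channels[channel].append(callback)

    def psubscribe(self, pattern, callback):
        self.patterns[pattern].append(callback)

    def publish(self, channel, message):
        delivered = 0
        for cb in self.channels.get(channel, []):
            cb(message)
            delivered += 1
        for pattern, callbacks in self.patterns.items():
            if fnmatch.fnmatch(channel, pattern):
                for cb in callbacks:
                    cb(message)
                    delivered += 1
        # Like Redis's PUBLISH, return the number of receivers.
        return delivered
```

Note that a message to hummus.mushrooms reaches both the exact subscriber and the hummus.* pattern subscriber, and that publishing to a channel nobody listens to simply returns 0.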

Our real-time, AI-based matchmaking queue (currently under development for Fortnite) utilizes WebSockets to implement a duplex connection with clients. The WS protocol integrates very well with modern web technology stacks; however, since it is stateful by design (unlike HTTP), the socket stays bound to a specific server instance throughout its lifecycle. This poses a problem at scale: only the server instance the client happened to connect to (usually through a load balancer and/or a reverse proxy) can communicate with that client. We solved this problem by implementing an endpoint-agnostic messaging architecture using Redis Pub/Sub.

When a player wishes to join the queue, the client connects to a WS server (one of N instances, through a network load balancer). That WS server sends the stats and details of said player to our matchmaking service and SUBSCRIBEs to a channel designated to receive updates for that specific client. When the matchmaking service wants to send an update to the client (say, a match with a player of similar play style and skill was found), it PUBLISHes a message to that channel, triggering a callback on the WS server, which in turn notifies the client.

Being able to publish to a channel by name is key here — it provides the matching service with a “static address” to send updates to in a fire-and-forget manner. The channel is conveniently named after a unique ID given to the client.

To tolerate (possible) failures of WS server nodes, we take an extra step before publishing messages: the matchmaking service runs a PUBSUB NUMSUB command, which returns the number of subscribers to the given channel(s). If the command returns 0, error handling comes into effect, as there isn’t anybody out there (sorry, Pink).
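That guard can be sketched as follows, assuming redis-py (whose `pubsub_numsub` wraps PUBSUB NUMSUB); the channel naming scheme is my illustration:

```python
def publish_update(r, client_id: str, payload: str) -> bool:
    """Publish a matchmaking update only if a WS server is actually
    listening on the client's channel. Returns False (so error handling
    can kick in) when nobody is subscribed."""
    channel = f"updates.{client_id}"
    # pubsub_numsub returns a list of (channel, subscriber_count) pairs.
    (_, listeners), = r.pubsub_numsub(channel)
    if listeners == 0:
        return False
    r.publish(channel, payload)
    return True
```

The check is advisory rather than transactional (a WS node could still die between the check and the publish), which is why it feeds error handling instead of replacing it.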

Several in-depth articles have been written on using a Pub/Sub architecture to scale WebSocket servers; for further reading I recommend this Hackernoon post and this Heroku tutorial.


Final Note

To tie in with the intro, the purpose of this post is to illustrate how Redis can be used to solve distinctly different software engineering problems. Some of these problems can be solved just as effectively by other tools; for instance, for someone in need of a robust message broker, I would probably suggest checking out RabbitMQ before Redis. Having said that, being able to use an existing Redis cluster to solve a problem has an undeniable advantage over deploying a new tool, which means setting up, maintaining, and (sometimes) learning a new system.

Redis is developed by Salvatore Sanfilippo and hundreds of developers from various organizations and backgrounds. Given that most other data stores are developed inside enterprises, it is no wonder Redis is as versatile as it is: open source is driven by innovation, while engineering departments are driven by sales.

Snipe.gg

Snipe leverages data to learn a gamer’s preferences and connect them to the teammates, streamers, and games they’ll love.

Saar Berkovich

Software Engineer, fascinated by all things technology; passionate about video games, music, science, traveling, and building stuff.
