Game Recommendation at Casumo

Kristian D'Amato
CasumoTech

--

Recommendation, recommendation, recommendation. You can almost hear ex-Microsoft chief Steve Ballmer shaking his fists to the rhythm of those words. Recommendation is a hot topic, all the more so because it bridges UX and machine learning, and it most likely brings in more revenue than any other foray of machine learning into user experience. All the big players do it. Amazon does it. Spotify does it. Netflix too.

Studies show that 44% of consumers say they will likely become repeat customers if they receive a personalised experience, and 40% say they have purchased something more expensive after receiving recommendations. And lest we be accused of leaning on fluffy user feedback, these figures are supported by hard data: the food & beverage industry reports a 400% increase in click-to-open rate with predictive content personalisation, and the sports and recreation industry a 270% increase. Personalisation has been touted as the “next big thing” in user experience, and while recommendation is only one gun pointing in the direction of personalisation, it packs quite a punch.

Casino Game Recommendation

At Casumo we’re in the online casino business. Games, unlike books, movies or clothes, are not consumable. Few movie-watchers watch a movie twice, even if they like it. They will want something similar, sure, but they will want it as soon as they’ve finished the first. In that sense movies are “consumable”. Games, on the other hand, are eternally playable. If a player likes a game, why distract them with something else?

The upshot is that online casinos are very different from movie streaming or retail. In fact, games are closer to music than to movies. If I like an artist’s tunes, I will want to hear more of them. And a mention of music in an article about recommender systems cannot leave out the biggest of music apps, Spotify. Personally, I think Spotify’s recommendations are the one feature that glues the product together; without them it would just be a massive, impersonal library. I find myself switching between “free” listening, where I let the app play its suggestions with minimal interaction on my side (liking this or that song, skipping the ones that don’t rock my musical boat), and full-on “controlled” sessions where I actively search for and play albums I care about. Whether I do one or the other is entirely mood-dependent.

And I believe online gaming is somewhat similar. Players will always have preferences. Whether they’re thematic in nature (Netflix users: think “gritty” and “dark”), or more of an irrational liking based on past experience, preferences will be present, and possibly deterministic to a greater degree than even in music selection.

Recommender systems for games will have to take that into account: a strongly deterministic preference for a small set of games. Perhaps a better restatement is that in online gaming recommendation plays a more suggestive, complementary role than in other industries. Individual game sessions typically account for a large share of a player’s logged-in time, and it’s quite common for a user to stick to a single game for days. In short, casino games are “stickier” than movies or even music. Yet variety has a beneficial effect, and recommendation will play a key role in providing exactly that, when players have exhausted a game or when they’re exploring.

Why

When it gets down to the nitty-gritty, how, specifically, would recommendation help us? Talk of variety is somewhat vague, but it can easily be quantified, and we had some metrics prepared for just that. For instance, we knew that players played an average of about 1.8–2.2 games on their first day after registration. Could this figure be pushed up, if ever so slightly?

Then there were other concerns, less amenable to quantification. Our games browser, for instance, required painstaking curation of games lists; if that work could be partly automated, it would be a boon to our CRM team. Even more importantly, some games were only discoverable via direct search, rendering them nearly unreachable except to the most intrepid of players. The problem with this, aside from the obvious, was that unpopular games stayed unpopular and popular games stayed popular. And then there was the fact that we had zero machine learning in the client-facing part of the product. We needed to fix all this.

Baby Steps

For a first iteration we wanted to start small. Later, once we felt confident it was working, we could hook up all the bells and whistles.

There are many approaches to recommendation. This cannot be overemphasised: there are many, many possible approaches. Most fall under one of two categories, collaborative filtering or content-based recommendation, and there are hybrid systems that combine both. Collaborative filtering uses behavioural similarity to come up with a good prediction: if everyone who liked game X also liked game Y, then a new customer who likes X is likely to enjoy Y. Content-based recommendation uses descriptors to look for similar games: which games share a set of features with game X?
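To make the distinction concrete, here’s a toy sketch of both ideas in Python. Every player, title and tag below is invented for illustration; nothing here reflects Casumo’s actual catalogue or code.

```python
from collections import Counter
from itertools import combinations

# Toy play history (entirely made-up players and game titles).
history = {
    "alice": {"Starburst", "Book of Dead", "Gonzo's Quest"},
    "bob":   {"Starburst", "Book of Dead"},
    "carol": {"Blackjack", "Roulette"},
}

# Collaborative filtering, crudely: two games are related if the same
# players tend to play both, regardless of what the games actually are.
co_played = Counter()
for games in history.values():
    for pair in combinations(sorted(games), 2):
        co_played[pair] += 1
print(co_played.most_common(3))

# Content-based, crudely: two games are related if their hand-curated
# descriptors overlap, regardless of who plays them.
tags = {
    "Starburst":    {"slot", "space", "low-volatility"},
    "Book of Dead": {"slot", "egypt", "high-volatility"},
    "Blackjack":    {"table", "cards"},
}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of two games' tag sets."""
    return len(tags[a] & tags[b]) / len(tags[a] | tags[b])

print(similarity("Starburst", "Book of Dead"))
```

Note how the first needs only behaviour and the second only metadata; that difference is exactly what decided our approach below.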

So our first tack was to look at historical data to answer the general question: “If a player likes game X, which other games will she like?” The question as posed is clearly in the collaborative-filtering camp. We had excluded content-based recommendation at the outset because our game metadata needed pruning; a task for the future, perhaps. In any case, we didn’t concern ourselves too much with labels and simply worked on a prototype.

How

All right, so we had a treasure trove of player session data. What to do with it? Easy, once you interpret “like” as “play”. Rewritten, the question becomes: “If a player plays game X, which other games will she play?” And historical data speaks volumes about that. Of course, it does so in generalities: players will always have personal tendencies that are not reflected in group statistics. But it’s a start.

Grabbing our session data, we worked out play probabilities for each player-game pair. From there it’s a short step to what I called coprobabilities: products of these probabilities. These numbers have two nice properties: first, a coprobability is zero if either of the individual probabilities is zero; second, the higher the individual probabilities, the higher the coprobability. In effect, coprobabilities are a measure of the vague notion “how often are two games played by the same individual”. Averaged over the entire player base, they tell us about general behaviour. The average man’s preferences, so to speak.
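As a minimal sketch, here’s how that computation might look in pandas, assuming the session logs have already been rolled up into per-player, per-game session counts. The data, column names and library choice are all illustrative; this isn’t our production code.

```python
import pandas as pd

# Hypothetical session log, aggregated to per-(player, game) session counts.
sessions = pd.DataFrame({
    "player": ["a", "a", "b", "b", "b", "c"],
    "game":   ["X", "Y", "X", "Y", "Z", "Z"],
    "count":  [  8,   2,   5,   4,   1,   6],
})

# Play probability: the share of a player's sessions spent in each game.
sessions["prob"] = (
    sessions["count"] / sessions.groupby("player")["count"].transform("sum")
)

# Coprobability for one player: the product of two games' play probabilities.
pairs = sessions.merge(sessions, on="player", suffixes=("_x", "_y"))
pairs = pairs[pairs["game_x"] < pairs["game_y"]]  # each unordered pair once
pairs["coprob"] = pairs["prob_x"] * pairs["prob_y"]

# Average over ALL players (pairs a player never co-plays contribute zero),
# giving the population-level affinity between each pair of games.
n_players = sessions["player"].nunique()
affinity = pairs.groupby(["game_x", "game_y"])["coprob"].sum() / n_players
print(affinity.sort_values(ascending=False))
```

The division by the total number of players, rather than by the number of players who happened to play both games, is what keeps rarely co-played pairs close to zero.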

Edge cases needed taking care of, naturally: players with small session counts, for instance, or newly introduced games with unstable session statistics. I won’t bore you with the details, but our first implementation made use of a graph database (read: Neo4j), which implicitly takes care of some of these cases. In future, though, we’ll want to explore matrix methods, because at the heart of it this is one huge matrix operation.
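To give a flavour of that matrix view: if you pack the play probabilities into a players-by-games matrix P, the entire coprobability table falls out of a single product. A hedged numpy sketch, with invented dimensions and random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical players-by-games matrix of play probabilities;
# row i holds player i's probability of playing each game, summing to 1.
P = rng.dirichlet(np.ones(200), size=10_000)  # 10,000 players, 200 games

# The whole coprobability table in one shot:
# C[j, k] = (1 / n_players) * sum_i P[i, j] * P[i, k],  i.e.  C = P.T @ P / n
C = P.T @ P / P.shape[0]

# To recommend for game j, rank the other games by C[j, :].
j = 0
ranked = np.argsort(C[j])[::-1]
print(ranked[ranked != j][:5])
```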

In Vivo

It worked. A rough look at the numbers spat out by the algorithm confirmed the sanity of the approach. Games tended to bunch together (slot games with slot games, table games with table games), and there were other affinities that made qualitative sense: some thematic associations, some popular games that always ranked high in coprobability. Then there were a few surprising associations, not too discouragingly outrageous, but ones that would provide an element of surprise and improve game reachability. We left things as they were and shipped the product.
