How Wildlife Leverages Data to Produce Tailor-Made Offer Recommendations

Bruno Federowski
Wildlife Studios Tech Blog
5 min read · Mar 6, 2020

Wildlife’s user base is not only huge but also diverse, ranging from casual players who log in once or twice in their lifetime to hardcore VIPs who spend thousands of dollars to one-up rival clans. Making sure our games cater to the needs of each of them is a challenge that drives several of our design choices, from matchmaking and metagame tuning to deciding what feature to introduce next.

Experienced product managers (PMs) can tap into their knowledge of our games, clients and competition, as well as data-driven analysis from our team of product data scientists, to help guide some of those decisions. Over time, however, we have found that we can outsource several of those responsibilities to algorithms, freeing our PMs to focus on open-ended questions that require more human input.

While they decide whether Wildlife should spend months revamping player-versus-player (PvP) play or designing several new single-player missions, they can be assured that models will automatically generate offers tailored for each user every holiday.

It wasn’t always like this. Just a few years ago, whether a user saw a $100 or a $1 offer depended only on a spreadsheet that a PM came up with based on their intuition. How we moved from such artisanal processes to a service-like platform that we can potentially plug any of our games into illustrates both the power of product-oriented algorithms and how data science and product go hand in hand at Wildlife.

Personalizing offers

It is easy enough for a player to open an in-game store and purchase a pack of currency in any of our games, but transactions like that only account for a fraction of all in-game purchases. Users often prefer to wait until we provide them with discounts or bonuses in one of several offers, be it a daily deal or a holiday-themed offer. In some of our games, offers are the main source of revenue, so it is no surprise that we invest a great deal of effort in making sure each user gets the biggest bang for their buck.

Our first attempts were intuition-based. Has a player ever made a purchase in our game? Then they are probably more willing to spend again than someone who hasn’t. Is their typical purchase very expensive? Maybe we should try offering them a discount on an even bigger deal. Starting from hypotheses such as those, a PM would segment players based on several in-game variables and then choose a specific kind of offer for each segment. We would then A/B test that segmentation and iterate on it.
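As a toy illustration, a segmentation of that sort boils down to a handful of hand-picked rules. Here is a minimal sketch, where the field names, thresholds and offer IDs are all hypothetical:

```python
# Hypothetical intuition-based segmentation: hand-picked rules map a
# player's stats to one offer. Fields, cutoffs and offer IDs are
# illustrative, not Wildlife's actual values.

def assign_offer(player: dict) -> str:
    """Map a player to an offer using hand-written rules."""
    if player["lifetime_purchases"] == 0:
        # Never spent: nudge with a cheap starter pack.
        return "starter_pack_0.99"
    if player["avg_purchase_usd"] >= 50:
        # Big spender: discount an even larger bundle.
        return "mega_bundle_20pct_off"
    # Everyone else sees the default holiday deal.
    return "holiday_deal_standard"
```

Each rule encodes one of those hypotheses directly, which is also why this approach plateaus: every new hypothesis means another hand-tuned branch and another A/B test.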

It didn’t take long for data scientists to suggest a more systematic strategy. After convincing the PMs of two of our biggest games (Sniper 3D and Castle Crush), they set to work on an initial model. In a matter of weeks, Wildlife was reaping the benefits of the new framework.

As it worked then, the offers framework was somewhat rudimentary and game-specific. Before every holiday, a subset of the players would receive a random selection from a discrete set of n potential offers with different discounts and price points. The data scientist would use that data to train several models to predict the probability that a given user makes a purchase when shown each offer, and then select the offer that yielded the highest expected value, measured as the probability of purchase multiplied by the price point.
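In code terms, that selection rule is just an argmax over per-offer expected values. Here is a minimal sketch, assuming one fitted scikit-learn-style classifier per offer; all names are illustrative:

```python
import numpy as np

def best_offer(user_features, offers):
    """Return the name of the offer with the highest expected value.

    Expected value = P(purchase | offer) * price point. `offers` maps an
    offer name to a (model, price) pair, where each model is a fitted
    binary classifier exposing predict_proba. All names are illustrative.
    """
    x = np.asarray(user_features, dtype=float).reshape(1, -1)
    expected_values = {
        name: model.predict_proba(x)[0, 1] * price
        for name, (model, price) in offers.items()
    }
    return max(expected_values, key=expected_values.get)
```

Because each model was trained on randomly assigned offers, those purchase probabilities reflect how users respond to each offer rather than which users happened to see it.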

Whenever we decided to run a new offer, the models were retrained by manually running a Python notebook on data pulled from our data warehouse through SparkSQL. Offer recommendations were written at the level of individual users as a PostgreSQL table that would be uploaded to the game’s backend servers.
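A simplified version of that workflow, with hypothetical table, column and connection names, might look like this:

```python
from pyspark.sql import SparkSession
import pandas as pd
import sqlalchemy

# Pull the randomized-offer results from the warehouse through SparkSQL.
# Table and column names are illustrative assumptions.
spark = SparkSession.builder.appName("offer_training").getOrCreate()
training_data = spark.sql("""
    SELECT user_id, offer_id, purchased, total_sessions, days_since_install
    FROM warehouse.offer_experiments
""").toPandas()

# Train one purchase-probability model per offer and score every user
# (e.g. with the best_offer helper sketched earlier). Placeholder here:
recommendations = pd.DataFrame(
    {"user_id": training_data["user_id"].unique(), "offer_id": "pack_4.99"}
)

# Write one recommendation per user as a PostgreSQL table, later uploaded
# to the game's backend servers.
engine = sqlalchemy.create_engine("postgresql://offers:<password>@host/offers")
recommendations.to_sql(
    "offer_recommendations", engine, if_exists="replace", index=False
)
```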

That approach had several limitations. For one, uploading such a heavy table had to be done in batches so as not to overload the game’s backend servers, which meant offers could run only sporadically. And because our data pipelines run daily, we couldn’t generate predictions for users who installed the game after the table was deployed. Yet the results were outstanding, with model-based offers outperforming PM-designed offers by as much as 15 percent.

Heads began turning. Soon, we had established a squad of backend and machine learning engineers, data scientists and a product owner fully dedicated to building out the offer optimization approach.

Taking the next step

The formation of that squad underscores how data science projects tend to develop at Wildlife: individual data scientists are granted autonomy and encouraged to undertake any project or proof of concept they suspect could generate substantial value. We try to keep the number of steps between conceptualization and deployment as small as possible, with little bureaucracy or centralized decision-making.

Once we have proved that a project has value, we may decide to take a more formal approach going forward. Developing a first version of an offers framework is one thing, but ensuring that it can be applied to any of our games on a daily basis requires resources from other areas of the company, communication between teams, and better organization and prioritization. That’s where the squad comes in.

User-level features are now stored in a DynamoDB feature store, with some registered in batches and some in near real time. Our models, once contained in a simple Python notebook with little in the way of versioning, became scripts whose iterations are automatically logged through MLflow. All game developers have to do is make sure that a game’s backend servers send a given user’s ID in a request to an API, hosted in a Kubernetes cluster with auto-scaling. The API then fetches that user’s features, sends them to the predictive object and, in a matter of milliseconds, shoots back a personalized offer recommendation.
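A stripped-down sketch of that serving path, with hypothetical endpoint, table and model names:

```python
import boto3
import mlflow.pyfunc
import pandas as pd
from fastapi import FastAPI

app = FastAPI()

# Feature store and model registry handles. All resource names are
# illustrative assumptions, not our actual infrastructure.
features_table = boto3.resource("dynamodb").Table("user_features")
model = mlflow.pyfunc.load_model("models:/offer_recommender/Production")

@app.get("/offers/{user_id}")
def recommend(user_id: str) -> dict:
    # Look up the user's latest features in the DynamoDB feature store.
    item = features_table.get_item(Key={"user_id": user_id}).get("Item")
    if item is None:
        # Brand-new user with no features yet: fall back to a default.
        return {"user_id": user_id, "offer": "default_offer"}
    # Score the features and return a personalized offer.
    prediction = model.predict(pd.DataFrame([item]))
    return {"user_id": user_id, "offer": str(prediction[0])}
```

Because the endpoint sits behind auto-scaling, a spike in requests during a holiday event spins up more replicas instead of degrading latency.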

That means we can now run offers at whatever frequency we want, potentially use real-time features and generate recommendations for new users. Better yet, we can use that same framework to run whatever predictive projects we come up with in the future.

Building the capability to generate live in-game predictions across all our games has been a valuable pursuit that could open many doors going forward. Just as some data scientists took it upon themselves to first develop algorithmic offer recommendations, we now have people looking into whether that same framework can be used to recommend specific pictures to users of our coloring apps, or to predict whether a user will convert and calibrate how frequently we show them ads accordingly.

Wildlife believes it should create conditions that allow data scientists to move fast, innovate and continuously build on each other’s work. As we continue to expand our capabilities as both a developer and a publisher, that conviction has only grown stronger.
