Personalizing the content of our trivia games

An introduction to recommender systems and the challenges of personalizing the player experience.

Published in etermax technology
Feb 9, 2023

By , Data Scientist at etermax

Most digital platform users are exposed to recommender systems for content personalization, consciously or unconsciously.

Recommender systems encompass the array of tools used to segment users and offer them product or content recommendations based on their preferences. Nowadays, content personalization ranges from the ordering of social media feeds like Instagram and TikTok [1,2] to Spotify editorial playlists whose content varies slightly from user to user [3]. Using content recommender systems reduces the information overload users are exposed to, offering a smaller but much more personalized selection of content.

In this article, we will introduce recommender algorithms, go over the type of information needed to build a recommender system, and briefly describe how we can use them in games like Trivia Crack to organize the content we serve. We will use the application of these systems in Trivia Crack Adventure as an example.

Data, data, data

Recommender systems use Machine Learning algorithms that identify possible preferences based on user feedback on a variety of items or content pieces. For games with themed questions such as Trivia Crack Adventure, we are interested in finding the most relevant themes or topics for each user based on their experience playing the game.

The most basic and indispensable ingredient for setting a recommender system in motion is the availability of user data. What allows us to identify users’ preferences is not state-of-the-art algorithms or powerful distributed processing infrastructure, but data. There are different ways to gather preference signals, or feedback, from users. Depending on the type of signal, they can be classified into:

Explicit Feedback: Obtained when users directly express their preference for a content item or piece. This type of feedback is common in services where users can like or dislike items, or rate the ones they have consumed on a scale. However, a great number of users don’t give explicit feedback unless they feel highly satisfied or unsatisfied. For this reason, explicit feedback is neither massively available nor the first choice for many apps.

Implicit Feedback: It arises from user actions and interactions. It includes signals such as the history of seen and consumed items, time spent interacting with a piece of content, and the number of times the user chose items from a certain category, among others. This type of feedback does not require additional participation from users as it’s built from a careful selection of the available signals.

Our trivia games are no exception to the rule, and the usage rate of the like features is very low. However, we still have plenty of information on the interactions between users and trivia questions. To identify and quantify users’ affinity with topics, we came up with an affinity score that considers the following factors for each user-topic pair:

  • Number of answered questions
  • Success rate
  • Average difficulty of answered questions
  • Number of trivia questions per topic
  • Popularity of the questions

This score allows us to infer users’ preferences based on their knowledge, which gives us plenty of opportunities to personalize their experience.
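As a rough illustration of how such a score could be computed, the sketch below combines the factors listed above into a single number per user-topic pair. The column names, normalizations, and weights are hypothetical placeholders; the exact formula used in production is not disclosed here.

```python
import pandas as pd

def affinity_score(stats: pd.DataFrame) -> pd.Series:
    """Combine per user-topic statistics into one affinity value.

    `stats` has one row per (user_id, topic_id) pair; all column names
    and weights below are illustrative, not the production formula.
    """
    # Normalize count-like signals to [0, 1] so no single factor dominates.
    answered = stats["answered_questions"] / stats["answered_questions"].max()
    difficulty = stats["avg_difficulty"] / stats["avg_difficulty"].max()
    coverage = stats["topic_question_count"] / stats["topic_question_count"].max()
    success = stats["success_rate"]                # already in [0, 1]
    popularity = stats["avg_question_popularity"]  # already in [0, 1]

    # Weighted sum with made-up weights.
    return (0.35 * answered + 0.25 * success + 0.20 * difficulty
            + 0.10 * coverage + 0.10 * popularity)
```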

Problem: recommendation of trivia topics for trivia users

Generating recommendations using collaborative filtering

There are two major approaches to generating recommendations, which can also be combined: content-based filtering and collaborative filtering.

Content-based filtering relies on building item profiles containing their main characteristics, as well as user profiles based on the items that received positive feedback. This information is used to recommend items similar to those a user previously liked. This kind of recommender system is a highly effective way to understand user preferences and make recommendations based on their affinity to other items. However, it’s not the best approach if our goal is to recommend items or content that’s very different from what the user has consumed. In games like Trivia Crack, users answer random questions, and we have to identify content that might interest them even before they answer questions belonging to those topics. Because of this, we did not choose a content-based filter for the first iteration of our recommender system.

Collaborative filtering, on the other hand, stems from the premise that we can identify the items a user might like based on the preferences of other users. A user’s previous preferences are collected, and recommendations are made based on items that other users with the same interests have chosen. In trivia games, we can identify possible topics of interest for a new user on the basis of topics liked by similar users. For example, if a user showed affinity for questions about Ancient Egypt, we could confidently recommend topics that many other users with affinity for Ancient Egypt have liked as well. At the same time, we can recommend topics that only some users who showed interest in Ancient Egypt liked, such as Cats or Flags.

Topic recommendations with collaborative filtering

One of the main advantages of collaborative filtering is that it allows us to anticipate users’ preferences for items that differ greatly from those they usually consume, facilitating — and sometimes forcing — guided exploration. An additional benefit is that we don’t need to have previous knowledge of the users’ preferences.

In practice, signals of affinity are not always in the form of likes or dislikes. For our topic recommender, feedback comes in the form of the affinity score described above. Generally, most users only interact with a small selection of items, so the user-item interaction matrix tends to be mostly empty. Mathematically speaking, the interaction matrix takes the form of a large sparse matrix whose dimensions are proportional to the number of users and items.
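For concreteness, here is a minimal sketch of how such a sparse user-topic matrix could be assembled with SciPy; the identifiers and scores are toy values, not real data.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy (user, topic, affinity) triples; in practice these would come from
# the affinity score computed for each user-topic pair.
user_ids = np.array([0, 0, 1, 2, 2, 2])
topic_ids = np.array([3, 7, 3, 1, 4, 7])
scores = np.array([0.8, 0.2, 0.5, 0.9, 0.4, 0.7])

n_users, n_topics = user_ids.max() + 1, topic_ids.max() + 1
interactions = csr_matrix((scores, (user_ids, topic_ids)),
                          shape=(n_users, n_topics))

# Most entries are zero: each user only interacts with a few topics.
print(f"density: {interactions.nnz / (n_users * n_topics):.2%}")
```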

There are many collaborative filtering algorithms for obtaining recommendations from an interaction matrix. Model-based algorithms represent complex user-item interactions through smaller matrices. Using matrix factorization methods [4], the information contained in the interaction matrix is compressed into smaller matrices that preserve its most important characteristics. This way, affinity patterns of hundreds of thousands of users and items can be represented at a reduced computational cost.
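The sketch below shows one possible factorization of the interaction matrix into low-rank user and topic factors using truncated SVD; this is not necessarily the method used in production, and the matrix here is random toy data.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD

# Toy interaction matrix: 10,000 users x 300 topics, ~2% of entries non-zero.
rng = np.random.default_rng(42)
interactions = sparse_random(10_000, 300, density=0.02,
                             random_state=42, data_rvs=rng.random).tocsr()

# Compress the sparse matrix into two small factor matrices.
svd = TruncatedSVD(n_components=32, random_state=42)
user_factors = svd.fit_transform(interactions)    # shape: (10_000, 32)
topic_factors = svd.components_                   # shape: (32, 300)

# The product fills in estimated affinities for unseen user-topic pairs.
predicted = user_factors @ topic_factors

# Top-5 topic recommendations for one user, skipping already-seen topics.
user = 0
seen = set(interactions[user].indices)
ranked = np.argsort(-predicted[user])
top_topics = [t for t in ranked if t not in seen][:5]
```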

We seek to make content recommendations for the entire base of active users in our applications, as long as it’s possible to identify topics these users like. The number of items is in the hundreds, so putting a recommender system into production is not computationally expensive.

Validating the recommendations

Given the lack of explicit feedback from users about the content they are keen on, evaluating the performance of a content recommender system is difficult. To measure precision and recall, we would have to rely on the affinity score we defined above, running the risk of overfitting our recommendations to that criterion. However, the use of collaborative filtering allows us to verify, at least at a first pass, that the recommended content makes sense thematically.
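As an illustration of that kind of offline check, a simple precision@k can be computed against held-out topics labeled as relevant via the same affinity score; the function and numbers below are hypothetical.

```python
def precision_at_k(recommended: list[int], relevant: set[int], k: int = 5) -> float:
    """Fraction of the top-k recommended topics that are in the relevant set."""
    top_k = recommended[:k]
    return sum(1 for topic in top_k if topic in relevant) / k

# Hypothetical example for a single user: "relevant" topics are those the
# user's affinity score ranks highly in a held-out period.
recommended = [12, 4, 87, 33, 5]   # model output, best first
relevant = {4, 33, 90}
print(precision_at_k(recommended, relevant))   # 0.4
```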

There are plenty of questions that can’t be answered by running tests with the training data. Do users enjoy answering questions that belong to similar topics? Are we improving user experience in the medium to long term? How are we impacting the economy of the game and the business? We can verify that content personalization enhances user experience and doesn’t impact the business negatively by running A/B tests [5]. Luckily, our applications have a healthy number of users, which allowed us to run A/B tests before implementing personalization features on a large scale. Serving personalized topics in Trivia Crack Adventure resulted in an increase of around 6% in the day-7 retention rate, without negatively impacting revenue metrics.
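For readers who want to reproduce this kind of check, a two-proportion z-test is a standard way to confirm that a retention lift is statistically significant. The user counts below are made up; only the roughly 6% relative lift mirrors the figure quoted above.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Day-7 retained users and total exposed users per variant (test, control).
retained = np.array([21_200, 20_000])
exposed = np.array([100_000, 100_000])

stat, p_value = proportions_ztest(retained, exposed)
lift = (retained[0] / exposed[0]) / (retained[1] / exposed[1]) - 1
print(f"relative lift: {lift:.1%}, p-value: {p_value:.4f}")
```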

Upcoming challenges

On e-commerce and streaming platforms, users search for and choose the content or items they wish to consume. In most game modes of etermax trivia games, users have to answer random questions. It’s a competition after all! Users thus become passive consumers and only react to the content they are served. Identifying and interpreting implicit feedback from users is key to building better models of affinity with the content.

In addition, methods based on collaborative filtering present some difficulties when it comes to making predictions for new users. This is known as the cold start problem. In the gaming industry, retention rates are relatively low, so it’s extremely important to make a good first impression and do it quickly. Given the great variety of question topics in Trivia Crack and Trivia Crack Adventure, it’s necessary to optimize the initial experience to make sure users are exposed to a broad range of topics.

Another challenge is taking personalization to the next level. How can we personalize the set of questions a user answers during a trivia game? Extrapolating recommendations from a couple hundred topics to hundreds of thousands of questions would bring many technical challenges. At a game level, it’s also necessary to strike a balance between thematic personalization and the difficulty level of the trivia questions to make sure our games remain challenging.

We hope to share how we faced these challenges and our progress in content personalization throughout this year.
