Recommendation System: Harnessing Machine Learning for Enhanced Book Recommendations

Hardo Triwahyu
15 min read · Sep 10, 2023

TL;DR: This is my learning project (for the PACMANN class) on implementing the Funk SVD model to build a book recommendation system from a Kaggle book dataset, with the help of the Surprise package. The resulting RMSE is ~3.4 on a 0–10 rating scale, which is not great, but still better than the baseline (global mean) model. To see the full code, you can visit the GitHub repo.

1. Introduction

1.1. Problem Background

The last decade has seen an exponential rise in digital platforms dedicated to books. E-readers, online bookstores, and digital libraries have made it possible for readers to access millions of titles from the comfort of their homes. Gone are the days when one had to physically visit a bookstore or library to get a new book. Now, with a few taps on a screen, a reader can dive into a new world, learn a new skill, or explore a new topic.

However, this vast availability also brings with it a paradox of choice. When presented with endless options, making a decision becomes overwhelming. Readers often find themselves spending more time browsing through catalogs than actually reading.

Every reader has faced the dilemma of choosing their next read. With genres ranging from fiction to non-fiction, fantasy to historical, and thriller to romance, the choices are endless. Add to this the recommendations from friends, reviews from critics, and bestseller lists, and the decision becomes even more complex.

Moreover, the reading experience is deeply personal. A book that resonates with one person might not appeal to another. Hence, relying solely on generic recommendations or bestseller lists might not always result in a satisfying reading experience.

This is where personalized book recommendations come into play. By analyzing a reader’s past reading habits, preferences, and ratings, recommendation systems can suggest books that are tailored to the reader’s tastes. Instead of sifting through endless lists, readers get a curated list of books that they are more likely to enjoy.

In the context of our project, we aim to harness the power of machine learning to develop a recommendation system that provides personalized book suggestions. By leveraging the Surprise package and the SVD (funk) model, we hope to create a system that not only enhances the reading experience but also encourages readers to explore more books, thus fostering a deeper love for reading.

1.2. Business Challenge

In the competitive landscape of online book platforms, retaining readers and ensuring they make frequent purchases or engagements is paramount. With the vast number of books available, readers can easily get lost in the sea of choices. The overwhelming options can lead to decision fatigue, where a potential reader might abandon the platform without making a selection or, worse, might turn to another platform that offers a more streamlined, personalized experience.

Tailored book suggestions are no longer a luxury; they are a necessity. Personalized recommendations can significantly reduce the time a reader spends searching for their next book, leading to quicker decisions and, consequently, more frequent purchases or engagements. Moreover, a well-tailored suggestion can introduce a reader to a new author, genre, or series that they might not have discovered on their own, leading to further engagements and a deeper trust in the platform’s recommendations.

Recommendation systems have proven to be game-changers in the digital content industry. From streaming services like Netflix and Spotify to e-commerce giants like Amazon, personalized recommendations drive user engagement and sales.

For online book platforms, the role of recommendation systems is even more crucial. Books are a commitment of time. Unlike a song or a video, which might last a few minutes, a book requires hours, if not days, of a reader’s time. Hence, making the right recommendation is essential. A poor recommendation can lead to a reader abandoning a book midway, leading to dissatisfaction with the platform’s suggestions and, potentially, with the platform itself.

On the other hand, a good recommendation can lead to:

  • Increased Sales: A satisfied reader is more likely to make additional purchases based on the platform’s suggestions.
  • Higher Engagement: Readers who trust the platform’s recommendations are more likely to engage with other features, such as reviews, ratings, and discussions.
  • Loyalty: Over time, consistent and accurate recommendations foster loyalty. Readers will return to the platform, knowing they will find books tailored to their tastes.
  • Word of Mouth: Satisfied readers are more likely to recommend the platform to friends and family, leading to organic growth.

In short, as the digital book industry continues to grow, the platforms that can offer the most personalized and accurate recommendations will stand out from the competition. Addressing this business challenge is not just about enhancing the user experience; it’s about ensuring the platform’s long-term success and growth.

1.3. Solution Overview

With so many digital books out there, recommendation systems help us find the best ones. These systems, powered by intricate algorithms and vast datasets, aim to simplify the reader’s journey from indecision to discovery. Let’s delve deeper into the proposed solution for our book recommendation challenge.

Concept of Recommendation Systems

Source: Matrix Factorization with Funk SVD

At its core, a recommendation system is a subclass of information filtering systems that seeks to predict the preference or rating a user would give to an item. These systems operate by analyzing patterns and relationships in historical data to provide users with personalized content suggestions. The ultimate goal is to present users with items (in our case, books) that they are likely to be interested in, even if they haven’t explicitly expressed an interest in them.

There are various approaches to building recommendation systems, such as:

  • Content-Based Filtering: This method recommends items by comparing the content of the items and a user profile, with content being described in terms of several descriptors that are inherent to the item (e.g., a book’s author, genre, or synopsis).
  • Collaborative Filtering: This method makes automatic predictions about the preference of a user by collecting preferences from many users (collaborating). The underlying assumption is that if a user A agrees with user B on an issue, A is more likely to have B’s opinion on a different issue.
Source: Collaborative based Recommendation system Using SVD (Medium.com)

Diving deeper into the recommendation system’s landscape, we identify two primary approaches:

  • Non-personalized: Recommendations based on general popularity.
  • Personalized: Recommendations tailored to individual preferences, often using methods like collaborative filtering.

Use of SVD (Funk) Model and Surprise Package

For our book recommendation system, we propose using the Singular Value Decomposition (SVD) model, particularly the Funk SVD variant. Traditional SVD is a matrix factorization technique that decomposes a matrix into three other matrices. In the context of recommendation systems, it can be used to discover latent features underlying the interactions between users and items.

Funk SVD, on the other hand, is a variant that is optimized for sparse datasets, which are common in recommendation systems where not every user has rated every item. It works by minimizing the squared difference between the known ratings in the dataset and the predictions it makes.

To implement and evaluate the SVD model, we will utilize the Surprise package. Surprise (Simple Python Recommendation System Engine) is a Python library specifically designed for building and analyzing recommendation systems. It offers various tools and algorithms to make the process efficient and user-friendly.
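
As a concrete illustration, here is a minimal sketch of loading book ratings into Surprise and fitting its SVD implementation. The file name and column names are assumptions based on the Kaggle ratings file, not the project’s exact code.

```python
import pandas as pd
from surprise import Dataset, Reader, SVD

# Load the ratings file (file name and column names are illustrative).
ratings_df = pd.read_csv("Ratings.csv")

# Surprise expects (user, item, rating) columns plus the rating scale (0-10 here).
reader = Reader(rating_scale=(0, 10))
data = Dataset.load_from_df(
    ratings_df[["User-ID", "ISBN", "Book-Rating"]], reader
)

# Fit a Funk-style SVD model with default hyperparameters on all available ratings.
algo = SVD()
algo.fit(data.build_full_trainset())
```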

Leveraging User Data and Book Ratings

The backbone of any recommendation system is the data it relies on. For our solution, we will harness the power of user data and book ratings. By analyzing the patterns in which users rate different books, the system can discern preferences and tastes. For instance, if a user has highly rated several science fiction novels, the system can infer a preference for that genre and recommend other titles within it.

In summary, our solution will predict the rating a user would assign to books they haven’t yet read and then recommend the titles with the highest predicted ratings.

2. Methodology

2.1. Framing Type

The core input of our recommendation system is the book ratings data. We’ll look at how users rate books to understand what they like. For example, if someone rates many science fiction books highly, our system will suggest more of those.

Recommendation systems can be framed in various ways depending on the nature of the problem:

  • Classification: Categorizes items into predefined classes. For instance, classifying books as ‘liked’ or ‘not liked’ based on user interactions.
  • Regression: Predicts a continuous value. For example, forecasting the rating a user might give to a particular book.
  • Ranking: Orders items based on some criteria, like predicting the sequence in which a user might prefer a list of books.

Given that ratings are continuous values (e.g., 0 to 10), our problem naturally aligns with the regression approach: we aim to predict the exact rating a user would assign to a book. This framing provides a more nuanced understanding of user preferences than a simple liked/not-liked classification.

2.2. Model Type

When it comes to recommendation models, there are two primary approaches:

  • Non-personalized Recommendations: These are generic suggestions based on overall popularity or general trends. Every user receives the same recommendations.
  • Personalized Recommendations: These are tailored to individual users based on their past interactions, preferences, and behaviors.

Given our goal to provide readers with book recommendations that resonate with their unique tastes, we’ll be focusing on the personalized recommendation model.

2.3. Funk SVD Mechanics

Singular Value Decomposition (SVD) is a matrix factorization technique that breaks down a matrix into three other matrices. It’s commonly used in recommendation systems to uncover hidden patterns in user-item interactions. However, traditional SVD struggles when dealing with sparse matrices, which are common in recommendation systems where not every user has rated every item.

Enter Funk SVD, named after Simon Funk, who popularized it during the Netflix Prize challenge. Unlike traditional SVD, Funk SVD is designed to handle sparsity by focusing on only the observed ratings and predicting missing values. Here’s a deeper dive:

  • Latent Factors: These are underlying patterns or characteristics inferred from the data. In books, latent factors might include genres or themes.
  • User-Item Interactions: Users provide explicit data about their preferences when they rate books. Funk SVD uses this data to determine latent factors.
  • Predicting Ratings: By understanding latent factors and their relation to user preferences, Funk SVD predicts how a user might rate an unseen book.

Major Differences and Steps in Funk SVD:

  • Not Truly SVD: Despite its name, Funk SVD isn’t a form of Singular Value Decomposition. Traditional SVD decomposes the entire user-item matrix, while Funk SVD focuses only on the observed values.
  • Gradient Descent: Funk SVD uses gradient descent to minimize the error for only the observed values, making it more suitable for sparse datasets.
  • Regularization: To prevent overfitting, especially given the sparsity of data, regularization terms are often added to the error term during the optimization process.

Funk SVD predicts the rating of user u on item i as:

\hat{r}_{ui} = \mu + b_u + b_i + q_i^T p_u

where \mu is the global mean rating, b_u and b_i are the user and item biases, and p_u and q_i are the user and item latent-factor vectors. To find the optimal parameters, we turn to the well-known optimization algorithm, gradient descent: the goal is to minimize the regularized squared error over the observed ratings,

\sum_{(u,i) \in K} (r_{ui} - \hat{r}_{ui})^2 + \lambda (b_u^2 + b_i^2 + \|p_u\|^2 + \|q_i\|^2),

and each parameter is repeatedly nudged in the direction of the negative gradient (the derivative of the cost function with respect to that parameter). Writing the prediction error as e_{ui} = r_{ui} - \hat{r}_{ui} and the learning rate as \gamma, the update rules take the form:

b_u \leftarrow b_u + \gamma (e_{ui} - \lambda b_u)
b_i \leftarrow b_i + \gamma (e_{ui} - \lambda b_i)
p_u \leftarrow p_u + \gamma (e_{ui} \cdot q_i - \lambda p_u)
q_i \leftarrow q_i + \gamma (e_{ui} \cdot p_u - \lambda q_i)
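
The same update rules can be written in a few lines of NumPy. The sketch below uses toy data and made-up dimensions purely to illustrate the mechanics; the actual project relies on Surprise’s SVD implementation of these updates.

```python
import numpy as np

# Toy observed ratings as (user_index, item_index, rating) triples.
ratings = [(0, 0, 8.0), (0, 1, 3.0), (1, 0, 7.0), (2, 1, 5.0)]
n_users, n_items, n_factors = 3, 2, 5
lr, reg, n_epochs = 0.005, 0.02, 20           # gamma, lambda, iterations

rng = np.random.default_rng(42)
mu = np.mean([r for _, _, r in ratings])      # global mean rating
bu = np.zeros(n_users)                        # user biases
bi = np.zeros(n_items)                        # item biases
P = rng.normal(0, 0.1, (n_users, n_factors))  # user latent factors
Q = rng.normal(0, 0.1, (n_items, n_factors))  # item latent factors

for _ in range(n_epochs):
    for u, i, r in ratings:                   # only the observed ratings
        err = r - (mu + bu[u] + bi[i] + Q[i] @ P[u])
        bu[u] += lr * (err - reg * bu[u])
        bi[i] += lr * (err - reg * bi[i])
        P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                      Q[i] + lr * (err * P[u] - reg * Q[i]))

# Predict how user 0 would rate item 1.
print(mu + bu[0] + bi[1] + Q[1] @ P[0])
```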

In essence, while both traditional SVD and Funk SVD aim to capture latent factors from user-item interactions, Funk SVD is specifically optimized for the challenges posed by recommendation system datasets.

2.4. Evaluation Metrics

Metrics play a pivotal role in gauging the effectiveness of recommendation systems. They provide a quantifiable measure of how well the system’s predictions align with actual user behaviors. For our regression-based recommendation system, two metrics stand out:

  • RMSE (Root Mean Square Error): This metric provides the square root of the average of the squared differences between the predicted and actual ratings. It’s widely used because it penalizes large errors more than smaller ones, ensuring that our model’s predictions are as close as possible to the actual ratings.
  • MAE (Mean Absolute Error): MAE calculates the average of the absolute differences between the predicted and actual ratings. It’s straightforward and gives a direct interpretation of how much, on average, the predictions deviate from the actual values.

Both RMSE and MAE are crucial for our system, as they’ll help us understand its accuracy and refine it for better performance.
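
To make the two metrics concrete, here is a tiny, self-contained example on made-up ratings from the 0–10 scale:

```python
import numpy as np

# Hypothetical actual vs. predicted ratings on the 0-10 scale.
actual    = np.array([8.0, 5.0, 0.0, 7.0])
predicted = np.array([6.5, 5.5, 2.0, 7.5])

rmse = np.sqrt(np.mean((actual - predicted) ** 2))  # penalizes large errors more
mae  = np.mean(np.abs(actual - predicted))          # average absolute deviation
print(f"RMSE = {rmse:.3f}, MAE = {mae:.3f}")
```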

2.5. Modelling Workflow

Building an effective recommendation system involves a series of structured steps to ensure that the model is both accurate and scalable. Here’s a breakdown of our proposed workflow:

  1. Data Preparation and Preprocessing: Before diving into modeling, it’s essential to ensure that the data is clean and ready for analysis. This involves removing unused columns and handling missing values, which can be done through various methods such as imputation or removing records with missing values.
  2. Dataset Splitting: To evaluate the model’s performance accurately, the dataset is divided into training and testing sets. The training set is used to train the model, while the testing set is reserved to evaluate its performance.
  3. Model Training and Evaluation: Once the data is prepared, the next step is to train the Funk SVD model. After training, the model’s predictions on the test set are evaluated using RMSE and MAE. This performance is then compared to a baseline model, which could be a simple non-personalized recommendation system, to gauge the effectiveness of our personalized approach.
  4. Hyperparameter Tuning: Machine learning models often have hyperparameters that can be adjusted for optimal performance. This step involves an iterative process of tweaking these hyperparameters, training the model with the adjusted parameters, and then evaluating its performance.
  5. Final Model Training: After identifying the best hyperparameters, the model is trained one final time using these optimal settings to ensure the best performance.
  6. Decision Process: Once the model is trained, it can predict ratings for books for a specific user. However, to make actionable recommendations, there’s a need for a decision mechanism. This typically involves ranking the books based on the predicted ratings and then recommending the top-rated ones to the user.
  7. Model Deployment: The final step is to make the model available for real-time recommendations. Using tools like Streamlit, the model can be deployed as a web application, allowing users to receive book recommendations on-demand.

This structured workflow ensures a systematic approach to building, evaluating, and deploying the recommendation system, maximizing its accuracy and utility for end-users.
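
To ground steps 1–3, here is a hedged end-to-end sketch using Surprise. The file name, column names, split ratio, and the manual global-mean baseline are illustrative assumptions rather than the project’s exact code.

```python
import numpy as np
import pandas as pd
from surprise import Dataset, Reader, SVD, accuracy
from surprise.model_selection import train_test_split

# 1. Data preparation: keep only the needed columns and drop missing values.
df = pd.read_csv("Ratings.csv")[["User-ID", "ISBN", "Book-Rating"]].dropna()

# 2. Dataset splitting: hold out 20% of the ratings for testing.
data = Dataset.load_from_df(df, Reader(rating_scale=(0, 10)))
trainset, testset = train_test_split(data, test_size=0.2, random_state=42)

# 3. Model training and evaluation against a global-mean baseline.
algo = SVD()
algo.fit(trainset)
accuracy.rmse(algo.test(testset))             # Funk SVD test RMSE

global_mean = trainset.global_mean            # baseline: always predict the mean
baseline_rmse = np.sqrt(np.mean([(r - global_mean) ** 2 for _, _, r in testset]))
print(f"Baseline RMSE: {baseline_rmse:.4f}")
```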

3. Results and Discussion

3.1. Hyperparameter Tuning

In machine learning, there are two types of parameters: learnable and non-learnable. Learnable parameters are adjusted during the training process to minimize the error, such as weights in neural networks. On the other hand, non-learnable parameters, like the hyperparameters discussed here, are not learned from the data and need to be set before training. They are often determined through experimentation and tuning processes.

Hyperparameters play a pivotal role in determining the performance of machine learning models. Because they shape the learning process itself, they can significantly impact the model’s accuracy, speed, and complexity. For the SVD model, several hyperparameters need to be optimized for best results.

Given the large dataset size (1,149,780 rating records), we employed Randomized Search CV (cross validation) for hyperparameter tuning. This technique is preferred over a full grid search when dealing with large datasets, as it samples a fixed number of hyperparameter combinations from the provided range, making the tuning process more efficient.

Source: Scikit-learn

Here’s a breakdown of the hyperparameters experimented with in this trial; a code sketch of the search follows the list:

  • lr_all: This is the learning rate for all parameters. A smaller value makes the optimization more robust, but requires more epochs. Values considered are: 0.005, 0.002, 0.001, 0.0005
  • n_factors: The number of latent factors, or the size of the embeddings. It determines the dimensionality of the user and item embeddings. Values considered are: 20, 50, 75, 125, 150, 250
  • reg_all: Regularization term for all parameters. Regularization helps prevent overfitting by adding a penalty to the loss function. Values considered are: 0.005, 0.01, 0.015, 0.02, 0.03, 0.05
  • n_epochs: The number of iterations over the entire dataset. It determines how many times the model will adjust its weights to minimize the error. Values considered are: 10, 20, 30, 50, 70, 100
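
A randomized search over these ranges with Surprise might look like the sketch below; the number of sampled combinations (n_iter), the number of CV folds, and n_jobs are assumptions, not the project’s exact settings.

```python
import pandas as pd
from surprise import Dataset, Reader, SVD
from surprise.model_selection import RandomizedSearchCV

# Rebuild the Surprise dataset (same assumptions as in the earlier sketches).
df = pd.read_csv("Ratings.csv")[["User-ID", "ISBN", "Book-Rating"]].dropna()
data = Dataset.load_from_df(df, Reader(rating_scale=(0, 10)))

# Hyperparameter ranges listed above.
param_distributions = {
    "lr_all":    [0.005, 0.002, 0.001, 0.0005],
    "n_factors": [20, 50, 75, 125, 150, 250],
    "reg_all":   [0.005, 0.01, 0.015, 0.02, 0.03, 0.05],
    "n_epochs":  [10, 20, 30, 50, 70, 100],
}

search = RandomizedSearchCV(
    SVD, param_distributions, measures=["rmse", "mae"],
    n_iter=20, cv=3, random_state=42, n_jobs=-1,
)
search.fit(data)

print(search.best_score["rmse"])   # best cross-validated RMSE
print(search.best_params["rmse"])  # hyperparameters that achieved it
```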

3.2. Results and Discussion

After an extensive hyperparameter tuning process that spanned approximately 113 minutes, we identified the optimal set of hyperparameters for our Funk SVD model:

  • Learning Rate (lr_all): 0.0005
  • Number of Latent Factors (n_factors): 50
  • Regularization Term (reg_all): 0.01
  • Number of Iterations (n_epochs): 50
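
With these values, the final model can be re-instantiated and retrained, roughly as follows (trainset is the training split from the workflow sketch above):

```python
from surprise import SVD

# Retrain Funk SVD with the tuned hyperparameters (trainset from the earlier sketch).
best_svd = SVD(lr_all=0.0005, n_factors=50, reg_all=0.01, n_epochs=50)
best_svd.fit(trainset)
```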

Performance Metrics:

  • Baseline Model: Achieved an RMSE of 3.853195.
  • Tuned Funk SVD Model: Improved the RMSE to 3.455943.

When we further evaluated the model’s performance on the training set versus the test set, we noted:

Funk SVD: The RMSE during hyperparameter tuning (cross-validation) was 3.455943, and on the test set, it was 3.44983.

An illustrative prediction from the model is as follows:

For a user with ID 9 and a book with ID 10, the model estimated a rating of approximately 2.85. The prediction was flagged as possible, meaning nothing prevented the model from producing an estimate.
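
In Surprise, producing such a single prediction looks roughly like this; best_svd is the tuned model from the sketch above, and the raw IDs simply mirror the example:

```python
# Predict the rating user 9 would give to book 10 (raw IDs as stored in the data).
pred = best_svd.predict(uid=9, iid=10)
print(pred.est)                        # estimated rating, e.g. ~2.85
print(pred.details["was_impossible"])  # False -> nothing blocked the estimate
```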

The optimal hyperparameters, while yielding a notable improvement over the baseline, resulted in an RMSE of approximately 3.46 which, for a recommendation system on a 0–10 rating scale, is only moderately satisfactory. Let’s dive deeper into the implications of each chosen hyperparameter and the overall performance:

  • Learning Rate (0.0005): A smaller learning rate ensures that the model updates its weights gradually, preventing drastic changes that could lead to overshooting the optimal solution. However, the trade-off is that convergence might be slower, and there’s a risk of getting stuck in local minima. A more adaptive learning rate or other optimization algorithms might be explored in future iterations to address this.
  • Number of Latent Factors (50): While 50 latent factors capture a decent amount of patterns in user-item interactions, it’s a balance between model simplicity and its ability to generalize. It’s possible that certain intricate patterns or niche preferences aren’t captured fully. Experimenting with a different number of factors, or even integrating other data sources, could provide a richer representation of user preferences.
  • Regularization Term (0.01): Regularization helps in preventing overfitting, especially in models with a large number of parameters. The chosen value ensures that the model doesn’t become too reliant on any single feature. However, there’s always a balance to strike. Too much regularization might make the model too generic, while too little could lead to overfitting. This parameter, in conjunction with others, might need further fine-tuning.
  • Number of Iterations (50 epochs): While 50 epochs seem to be a reasonable number for training, there’s a possibility that the model hasn’t fully converged or, conversely, that it’s overtrained. A deeper dive into the training curve, observing the error rates as epochs increase, could provide insights into the optimal number of iterations.

The current performance, with an RMSE of ~3.5, indicates there’s room for improvement. While the model does a decent job, it’s essential to acknowledge that achieving a lower RMSE would lead to more accurate and satisfactory recommendations. Future work should involve more extensive hyperparameter tuning, exploring other recommendation algorithms, and possibly integrating additional data sources or features to enhance the recommendation quality.

For the last step, we saved the trained model as a pickle file and then loaded it for deployment in a simple Streamlit app. The app asks for a user ID and then recommends the top 5 books for that user.
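
A hedged sketch of that last step is shown below. The pickle file name, the candidate ISBN list, and the Streamlit layout are assumptions for illustration, not the deployed app’s exact code.

```python
import pickle
import streamlit as st

# Load the trained Funk SVD model saved earlier (file name is illustrative).
with open("svd_model.pkl", "rb") as f:
    model = pickle.load(f)

# Placeholder: in practice, this would be the books the user has not rated yet.
candidate_isbns = ["0439136350", "043935806X", "0440234743"]

st.title("Book Recommender")
user_id = st.number_input("Enter your user ID", min_value=0, step=1)

if st.button("Recommend"):
    # Predict a rating for every candidate book and keep the top 5.
    preds = [(isbn, model.predict(user_id, isbn).est) for isbn in candidate_isbns]
    for isbn, est in sorted(preds, key=lambda p: p[1], reverse=True)[:5]:
        st.write(f"{isbn}: predicted rating {est:.2f}")
```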

3.3. Performance Measurement

The success of a recommendation system is closely tied to its performance metrics, which should align with business objectives. Our primary goal is to enhance user engagement by accurately predicting their book preferences.

  • Business Objectives and Metrics: The system’s aim is to suggest books that users will genuinely enjoy. Given our earlier results, where the RMSE was around 3.455943, there’s a clear indication that while the model is on the right track, there’s room for improvement. An accurate system would have a lower RMSE, indicating closer predicted ratings to actual user ratings.
  • Continuous Monitoring: As user preferences evolve and the platform’s book collection grows, the recommendation system must adapt. Regularly monitoring RMSE ensures the system remains effective. Any significant deviation in these metrics signals a need for model refinement.

In summary, while our model shows promise, its RMSE suggests further optimization is needed. Continuous monitoring and feedback will be crucial for long-term success.

4. Conclusion and Recommendations

4.1. Conclusion

In this learning project, we embarked on the journey of creating a book recommendation system to address the challenge of overwhelming book choices on digital platforms. Through the application of the Funk SVD model and a systematic methodology, we identified optimal hyperparameters that significantly influenced the model’s performance. While the achieved RMSE of around 3.455943 indicates a decent predictive capability, it also underscores the need for further optimization. The significance of the chosen hyperparameters, particularly the learning rate, number of latent factors, regularization term, and number of iterations, played a pivotal role in the model’s current performance.

4.2. Future Works Recommendations

Given the constraints of this being a learning project, with limited computational power and time, the results achieved are commendable. However, for a more comprehensive and potentially more accurate recommendation system, several avenues can be explored in the future:

  • Incorporate Additional User Data: The current model solely relies on user ratings. Integrating other user data, such as age or location, could provide richer context and enhance recommendation accuracy.
  • Advanced Models and Techniques: With more computational power, exploring more complex models or ensemble techniques might yield better results.
  • Feature Engineering: Delving deeper into the data to create new features or transform existing ones can often lead to improved model performance.
  • Feedback Loop: Implementing a feedback mechanism where users can provide direct input on the accuracy of recommendations can offer invaluable insights for model refinement.

