Matrix Factorization: Pictures + Code (PyTorch) — Part 1

Daniel Lam
4 min read · Mar 20, 2023


TL;DR:
Problem: Given a dataset of users, movies, and ratings, can we build a model that predicts movie ratings for users?
Dataset: ml-latest-small.zip from https://grouplens.org/datasets/movielens/
Solution (2 parts):
1) Basic matrix factorization
2) Advanced matrix factorization (bias terms, offset, weight initialization, sigmoid_range) https://medium.com/@datadote/matrix-factorization-advanced-pictures-code-part-2-3072450879c1
Code: “01_matrix_fact_simple.ipynb” https://github.com/Datadote/matrix-factorization-pytorch

Steps:
1) Describe the problem and explore dataset
2) Preprocess dataset for training and validation
3) Create matrix factorization model
4) Train model
5) Check results
6) Next steps

1) Describe the problem and explore dataset

Problem: Given a dataset of users, movies, and ratings, can we build a model that predicts movie ratings for users?
Dataset: ml-latest-small.zip from https://grouplens.org/datasets/movielens/
The data consists of users, movies, ratings, timestamps, titles, and genres.

import pandas as pd

DATA_DIR = './data/ml-latest-small/'
dfm = pd.read_csv(DATA_DIR+'movies.csv')
df = pd.read_csv(DATA_DIR+'ratings.csv')
df = df.merge(dfm, on='movieId', how='left')
df = df.sort_values(['userId', 'timestamp'], ascending=[True, True]).reset_index(drop=True)
df.head(3)

2) Preprocess dataset for training and validation

i) Convert columns into categorical codes with defaultdict(LabelEncoder)
This remaps each column's values to the contiguous range [0, n_unique - 1]. For example, the raw max movieId is 193609, even though there are far fewer unique movieIds; after encoding, the max index drops from 193609 to 9723. This remapping is important for reducing the memory used by the model's embedding tables.

from collections import defaultdict
from sklearn.preprocessing import LabelEncoder

d = defaultdict(LabelEncoder)
cols_cat = ['userId', 'movieId']
for c in cols_cat:
    d[c].fit(df[c].unique())
    df[c] = d[c].transform(df[c])
df.head(3)
After label encoding, userId and movieId hold different values than before.
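As a standalone toy illustration (ids chosen for the example, not taken from the notebook), LabelEncoder maps sparse raw ids to a dense contiguous range, and inverse_transform recovers the originals:

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
raw_ids = [3, 50, 193609, 3, 50]        # sparse raw movieIds
dense = le.fit_transform(raw_ids)       # classes are sorted, then indexed from 0
original = le.inverse_transform(dense)  # recover the raw ids
print(dense.tolist())     # [0, 1, 2, 0, 1]
print(original.tolist())  # [3, 50, 193609, 3, 50]
```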

ii) Split data into train/validation. Create MovieDatasets + dataloaders
Each user has a minimum of 20 ratings. The most recent 5 ratings per user are held out for validation; the remaining data is used for training.

df_train = df.groupby('userId').head(-5).reset_index(drop=True)
df_val = df.groupby('userId').tail(5).reset_index(drop=True)
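A quick sketch of how `groupby().head(-k)` / `tail(k)` split per-user histories. This uses toy data (not the MovieLens frame) and holds out the last 2 rows per user instead of 5, for brevity:

```python
import pandas as pd

toy = pd.DataFrame({
    'userId': [0, 0, 0, 0, 1, 1, 1],
    'rating': [5., 4., 3., 2., 1., 2., 3.],
})
# head(-2): everything except each user's last 2 rows
train = toy.groupby('userId').head(-2).reset_index(drop=True)
# tail(2): each user's last 2 rows (the most recent, if pre-sorted by timestamp)
val = toy.groupby('userId').tail(2).reset_index(drop=True)
print(len(train), len(val))  # 3 4
```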

class MovieDataset(Dataset):
    def __init__(self, df):
        super().__init__()
        self.df = df[['userId', 'movieId', 'rating']]
        self.x_user_movie = list(zip(df.userId.values, df.movieId.values))
        self.y_rating = self.df.rating.values

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        return self.x_user_movie[idx], self.y_rating[idx]

BS = 8192
ds_train = MovieDataset(df_train)
ds_val = MovieDataset(df_val)
dl_train = DataLoader(ds_train, batch_size=BS, shuffle=True, num_workers=4)
dl_val = DataLoader(ds_val, batch_size=BS, shuffle=False, num_workers=4)  # no need to shuffle validation

xb, yb = next(iter(dl_train))
print(xb)
print(yb)

3) Create Matrix Factorization model

i) Collaborative filtering idea
The dataset can be represented as a table of users and movies. Can we find similar users and predict ratings for movies they have not yet seen?
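To make the factorization idea concrete, here is a hypothetical sketch: if each user and each movie is represented by a small vector, a predicted rating is just their dot product, and the whole ratings table is the product of the two factor matrices (the numbers below are made up for illustration):

```python
import numpy as np

# Hypothetical 2-d taste vectors (say, action vs. romance affinity)
users = np.array([[0.9, 0.1],    # user 0 leans action
                  [0.2, 0.8]])   # user 1 leans romance
movies = np.array([[1.0, 0.0],   # movie 0 is pure action
                   [0.1, 0.9]])  # movie 1 is mostly romance

# Full predicted ratings table, shape (n_users, n_movies)
ratings = users @ movies.T
print(ratings.round(2))  # user 0 scores movie 0 highest; user 1 scores movie 1 highest
```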

ii) PyTorch implementation
Each user and movie (item) index is passed through an nn.Embedding layer, which produces its vector representation. The user_emb and item_emb vectors are then multiplied element-wise and summed, which is equivalent to a dot product.

Basic Matrix Factorization with Vector Dimensions
class MF(nn.Module):
    """ Simple matrix factorization model """
    def __init__(self, num_users, num_items, emb_dim):
        super().__init__()
        self.user_emb = nn.Embedding(num_embeddings=num_users, embedding_dim=emb_dim)
        self.item_emb = nn.Embedding(num_embeddings=num_items, embedding_dim=emb_dim)

    def forward(self, user, item):
        user_emb = self.user_emb(user)
        item_emb = self.item_emb(item)
        element_product = (user_emb * item_emb).sum(1)
        return element_product

device = 'cuda' if torch.cuda.is_available() else 'cpu'
n_users = df.userId.nunique()
n_items = df.movieId.nunique()
mdl = MF(n_users, n_items, emb_dim=32)
mdl.to(device)
print(mdl)
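A quick sanity check on the model's shapes: feed a small batch of user/item indices and confirm we get one scalar prediction per pair. The model class is repeated here (with made-up sizes) so the snippet runs standalone:

```python
import torch
import torch.nn as nn

class MF(nn.Module):
    """ Same simple matrix factorization model as above """
    def __init__(self, num_users, num_items, emb_dim):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, emb_dim)
        self.item_emb = nn.Embedding(num_items, emb_dim)
    def forward(self, user, item):
        return (self.user_emb(user) * self.item_emb(item)).sum(1)

mdl = MF(num_users=10, num_items=20, emb_dim=32)
users = torch.tensor([0, 1, 2])   # batch of 3 user indices
items = torch.tensor([5, 6, 7])   # batch of 3 item indices
preds = mdl(users, items)
print(preds.shape)  # torch.Size([3]) - one rating per (user, item) pair
```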

4) Train model

The model is trained with the AdamW optimizer and mean squared error (MSE) loss.

LR = 0.2
NUM_EPOCHS = 10

opt = optim.AdamW(mdl.parameters(), lr=LR)
loss_fn = nn.MSELoss()
epoch_train_losses, epoch_val_losses = [], []

for i in range(NUM_EPOCHS):
    train_losses, val_losses = [], []
    mdl.train()
    for xb, yb in dl_train:
        xUser = xb[0].to(device, dtype=torch.long)
        xItem = xb[1].to(device, dtype=torch.long)
        yRatings = yb.to(device, dtype=torch.float)
        preds = mdl(xUser, xItem)
        loss = loss_fn(preds, yRatings)
        train_losses.append(loss.item())
        opt.zero_grad()
        loss.backward()
        opt.step()
    mdl.eval()
    with torch.no_grad():  # no gradients needed for validation
        for xb, yb in dl_val:
            xUser = xb[0].to(device, dtype=torch.long)
            xItem = xb[1].to(device, dtype=torch.long)
            yRatings = yb.to(device, dtype=torch.float)
            preds = mdl(xUser, xItem)
            loss = loss_fn(preds, yRatings)
            val_losses.append(loss.item())
    # Log epoch losses
    epoch_train_loss = np.mean(train_losses)
    epoch_val_loss = np.mean(val_losses)
    epoch_train_losses.append(epoch_train_loss)
    epoch_val_losses.append(epoch_val_loss)
    print(f'Epoch: {i}, Train Loss: {epoch_train_loss:0.1f}, Val Loss: {epoch_val_loss:0.1f}')

5) Check results

Let’s do some sanity checks. The model’s predicted ratings span [-8.3, 9.8], which falls outside the actual rating range of [0.5, 5]. Still, some predicted ratings look close to the actual ratings.

user_emb_min_w = mdl.user_emb.weight.min().item()
user_emb_max_w = mdl.user_emb.weight.max().item()
item_emb_min_w = mdl.item_emb.weight.min().item()
item_emb_max_w = mdl.item_emb.weight.max().item()
print(f'Emb user min/max w: {user_emb_min_w:0.3f} / {user_emb_max_w:0.3f}')
print(f'Emb item min/max w: {item_emb_min_w:0.3f} / {item_emb_max_w:0.3f}')
print(f'Preds min/max: {preds.min().item():0.2f} / {preds.max().item():0.2f}')
print(f'Rating min/max: {yRatings.min().item():0.2f} / {yRatings.max().item():0.2f}')
print(preds.detach().cpu().numpy()[:6])
print(yRatings.detach().cpu().numpy()[:6])
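Since raw dot products are unbounded, one quick post-hoc fix is to clamp predictions into the valid rating range. This is only a sketch of a band-aid; Part 2 handles the problem properly inside the model with sigmoid_range:

```python
import torch

preds = torch.tensor([-8.3, 2.7, 4.9, 9.8])  # example out-of-range predictions
clamped = preds.clamp(min=0.5, max=5.0)      # force into the valid rating range
print([round(v, 1) for v in clamped.tolist()])  # [0.5, 2.7, 4.9, 5.0]
```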

6) Next steps

In part 2, we improve this matrix factorization model by adding user and item bias terms, an offset, weight initialization, and sigmoid_range: https://medium.com/@datadote/matrix-factorization-advanced-pictures-code-part-2-3072450879c1


Daniel Lam

Machine learning. Pictures + code | MS EE | linkedin.com/in/dnylam/ | Creator of leetracer.com/screener - "LeetCode with Spaced Repetition"