In Depth: Parameter tuning for KNN

Mohtadi Ben Fraj
4 min read · Dec 25, 2017


In this post, we will explore the most important parameters of the sklearn KNeighborsClassifier and how they impact our model in terms of overfitting and underfitting.

This classifier implements a k-nearest neighbors vote.

We will use the Titanic data from Kaggle. We will perform as little feature engineering as possible, since feature engineering is not the focus of this post.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

Load train data

# Get the Titanic train csv file as a DataFrame
train = pd.read_csv("input/train.csv")
print(train.shape)
> (891, 12)

Check for missing values

# Checking for missing data
NAs = pd.concat([train.isnull().sum()], axis=1, keys=['Train'])
NAs[NAs.sum(axis=1) > 0]
> Age 177
Cabin 687
Embarked 2

We will remove the ‘Cabin’, ‘Name’ and ‘Ticket’ columns, as they would require some processing to extract useful features.

# At this point we will drop the Cabin feature since it is missing a lot of the data
train.pop('Cabin')
train.pop('Name')
train.pop('Ticket')
train.shape
> (891, 9)

Fill the missing ‘Age’ values with the mean value

# Filling missing Age values with the mean
train['Age'] = train['Age'].fillna(train['Age'].mean())

Fill the missing ‘Embarked’ values with the most frequent value

# Filling missing Embarked values with the most common value
train['Embarked'] = train['Embarked'].fillna(train['Embarked'].mode()[0])

‘Pclass’ is a categorical feature so we convert its values to strings

train['Pclass'] = train['Pclass'].apply(str)

Let’s perform a basic one-hot encoding of the categorical features

# Getting dummies from all other categorical vars
for col in train.dtypes[train.dtypes == 'object'].index:
    for_dummy = train.pop(col)
    train = pd.concat([train, pd.get_dummies(for_dummy, prefix=col)], axis=1)

labels = train.pop('Survived')

For testing, we split our data into 75% train and 25% test

from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(train, labels, test_size=0.25)

Let’s first fit a KNN classifier with default parameters to get a baseline idea of the performance

from sklearn.neighbors import KNeighborsClassifier

model = KNeighborsClassifier()
model.fit(x_train, y_train)
> KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
           metric_params=None, n_jobs=1, n_neighbors=5, p=2,
           weights='uniform')
y_pred = model.predict(x_test)

We will use AUC (Area Under the ROC Curve) as the evaluation metric. Our target value is binary, so this is a binary classification problem, and AUC is a good evaluation metric for this type of problem.

from sklearn.metrics import roc_curve, auc

false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred)
roc_auc = auc(false_positive_rate, true_positive_rate)
roc_auc
> 0.51680529621456472
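
As a side note (this is not in the original post), roc_curve is computed here from hard 0/1 predictions; a hedged alternative is to score the predicted probabilities, which KNeighborsClassifier exposes through predict_proba. A minimal sketch, assuming the model fitted above:

# Sketch (not in the original post): AUC computed from predicted probabilities
# rather than hard class predictions, which gives the ROC curve more thresholds.
from sklearn.metrics import roc_auc_score

y_proba = model.predict_proba(x_test)[:, 1]  # probability of the positive class
roc_auc_score(y_test, y_proba)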

n_neighbors

n_neighbors represents the number of neighbors to use for kneighbors queries

neighbors = list(range(1, 30))
train_results = []
test_results = []
for n in neighbors:
    model = KNeighborsClassifier(n_neighbors=n)
    model.fit(x_train, y_train)
    train_pred = model.predict(x_train)
    false_positive_rate, true_positive_rate, thresholds = roc_curve(y_train, train_pred)
    roc_auc = auc(false_positive_rate, true_positive_rate)
    train_results.append(roc_auc)
    y_pred = model.predict(x_test)
    false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred)
    roc_auc = auc(false_positive_rate, true_positive_rate)
    test_results.append(roc_auc)

from matplotlib.legend_handler import HandlerLine2D

line1, = plt.plot(neighbors, train_results, 'b', label="Train AUC")
line2, = plt.plot(neighbors, test_results, 'r', label="Test AUC")
plt.legend(handler_map={line1: HandlerLine2D(numpoints=2)})
plt.ylabel('AUC score')
plt.xlabel('n_neighbors')
plt.show()

Using n_neighbors=1 means each training sample uses itself as its reference, which is a clear case of overfitting. For our data, increasing the number of neighbors improves the test score.

p in L_p distance

This is the power parameter for the Minkowski metric. When p=1, this is equivalent to using the Manhattan distance (l1), and when p=2, the Euclidean distance (l2). For arbitrary p, the Minkowski distance (l_p) is used.
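
To make the formula concrete, here is a small illustrative sketch (not from the original post) of the Minkowski distance, (Σ|x_i − y_i|^p)^(1/p), showing how it reduces to the l1 and l2 distances:

# Illustrative sketch: the Minkowski distance for different values of p
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, 3.0])

def minkowski(a, b, p):
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

print(minkowski(x, y, 1))  # 5.0 == Manhattan (l1) distance
print(minkowski(x, y, 2))  # ~3.606 == Euclidean (l2) distance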

distances = [1, 2, 3, 4, 5]
train_results = []
test_results = []
for p in distances:
    model = KNeighborsClassifier(p=p)
    model.fit(x_train, y_train)
    train_pred = model.predict(x_train)
    false_positive_rate, true_positive_rate, thresholds = roc_curve(y_train, train_pred)
    roc_auc = auc(false_positive_rate, true_positive_rate)
    train_results.append(roc_auc)
    y_pred = model.predict(x_test)
    false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred)
    roc_auc = auc(false_positive_rate, true_positive_rate)
    test_results.append(roc_auc)

from matplotlib.legend_handler import HandlerLine2D

line1, = plt.plot(distances, train_results, 'b', label="Train AUC")
line2, = plt.plot(distances, test_results, 'r', label="Test AUC")
plt.legend(handler_map={line1: HandlerLine2D(numpoints=2)})
plt.ylabel('AUC score')
plt.xlabel('p')
plt.show()

In practice, the choice is usually between l1 and l2, but it is interesting to see the results for higher Minkowski distances. For our data, l1 seems to work better than l2 and the other l_p distances.
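
As a possible next step (a sketch not in the original post), both parameters can be tuned jointly with cross-validation using GridSearchCV; the grid values below simply mirror the ranges explored above:

# Sketch: joint search over n_neighbors and p with 5-fold cross-validation,
# scored by AUC
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

param_grid = {'n_neighbors': list(range(1, 30)), 'p': [1, 2, 3, 4, 5]}
grid = GridSearchCV(KNeighborsClassifier(), param_grid, scoring='roc_auc', cv=5)
grid.fit(x_train, y_train)
print(grid.best_params_, grid.best_score_)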

The inDepth series investigates how model parameters affect performance in terms of overfitting and underfitting. You can also check out parameter tuning for tree-based models like Decision Tree, Random Forest and Gradient Boosting.
