K-Nearest Neighbors

Srishti Sawla
4 min read · Jun 8, 2018

K-Nearest Neighbors is one of the simplest algorithms used for classification.

KNN is a non-parametric algorithm (meaning it makes no underlying assumptions about the distribution of the data) belonging to the supervised learning family. The KNN algorithm can also be used for regression problems; the only difference is that the prediction is the average of the nearest neighbors' values rather than a vote among them.
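The vote-versus-average distinction can be seen in a minimal sketch. The function below (a hypothetical helper, not from any library) finds the K nearest training points by Euclidean distance and either takes a majority vote (classification) or averages their target values (regression):

```python
from collections import Counter
import math

def knn_predict(train, query, k, mode="classify"):
    """Predict for one query point from labeled training data.

    train: list of (features, label) pairs; features are tuples of floats.
    mode="classify" -> majority vote; mode="regress" -> mean of neighbor values.
    """
    # Sort training points by Euclidean distance to the query, keep the k closest.
    neighbors = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    labels = [label for _, label in neighbors]
    if mode == "classify":
        # Most common class among the k neighbors wins.
        return Counter(labels).most_common(1)[0][0]
    # Regression: average the neighbors' numeric targets.
    return sum(labels) / k

train = [((0, 0), "a"), ((0, 1), "a"), ((5, 5), "b"), ((6, 5), "b")]
print(knn_predict(train, (1, 1), k=3))          # classification by vote
reg = [((0.0,), 1.0), ((1.0,), 2.0), ((2.0,), 3.0)]
print(knn_predict(reg, (0.5,), k=2, mode="regress"))  # regression by average
```

Note there is no training phase at all: the "model" is just the stored data, which is why KNN is often called a lazy learner.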

Intuition behind the algorithm:

In the K-NN algorithm, the output is a class membership: an object is assigned the class most common among its K nearest neighbors, where K is a positive integer. Thus if K = 1, the object is simply assigned the class of its single nearest neighbor.

Two questions arise when trying to understand K-NN:

1. How do we decide the value of K?

2. Which neighbors count as "nearest", i.e. which distance metrics can be used?
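On the second question, the most common choices are the Euclidean (straight-line) and Manhattan (city-block) distances, both special cases of the Minkowski distance. A small sketch of all three:

```python
import math

def euclidean(p, q):
    # L2 distance: square root of the summed squared coordinate differences.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan(p, q):
    # L1 distance: sum of absolute coordinate differences.
    return sum(abs(a - b) for a, b in zip(p, q))

def minkowski(p, q, power):
    # Generalizes both: power=1 gives Manhattan, power=2 gives Euclidean.
    return sum(abs(a - b) ** power for a, b in zip(p, q)) ** (1 / power)

print(euclidean((0, 0), (3, 4)))   # 5.0
print(manhattan((0, 0), (3, 4)))   # 7
```

Because these metrics sum raw coordinate differences, features on larger numeric scales dominate the distance, so features are usually normalized before applying KNN.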

How do we decide the value of K?

The following are the different decision boundaries separating the two classes for different values of K.
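One common way to pick K in practice (a sketch, not a prescription from the original figures) is to evaluate several candidate values on held-out data and keep the one with the best validation accuracy. The data and helper names below are illustrative:

```python
from collections import Counter
import math

def knn_classify(train, query, k):
    # Majority vote among the k nearest (Euclidean) training points.
    neighbors = sorted(train, key=lambda pair: math.dist(pair[0], query))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

def accuracy_for_k(train, validation, k):
    # Fraction of validation points the k-NN rule labels correctly.
    hits = sum(knn_classify(train, x, k) == y for x, y in validation)
    return hits / len(validation)

# Toy data: try a few candidate K values and keep the best on held-out points.
train = [((0, 0), "a"), ((1, 0), "a"), ((0, 1), "a"),
         ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b")]
validation = [((1, 1), "a"), ((6, 6), "b")]
scores = {k: accuracy_for_k(train, validation, k) for k in (1, 3, 5)}
best_k = max(scores, key=scores.get)
print(scores, best_k)
```

Small K gives flexible, jagged boundaries that can overfit noise; large K gives smoother boundaries that can blur the class border, so this trade-off is what the validation search is balancing. An odd K is often preferred for binary problems to avoid tied votes.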

