Learning Similarity with Siamese Neural Networks

Enosh Shrestha
2 min read · Oct 23, 2019


Siamese Neural Networks (SNNs) are a type of neural network that contains multiple instances of the same model, sharing the same architecture and weights. This architecture shows its strength when it has to learn from limited data and we don’t have a complete dataset, as in zero-shot and one-shot learning tasks.

Fig 1: Traditional Neural Networks

Traditionally, a neural network learns to predict over a fixed set of classes. This poses a problem when we need to add new classes to or remove classes from the data: we have to update the neural network and retrain it on the whole dataset. Deep neural networks also need a large volume of data to train on. SNNs, on the other hand, learn a similarity function. Thus, we can train one to tell whether two images belong to the same class (which we will do here). This enables us to classify new classes of data without retraining the network.

We will realize the model in Fig 2 using Keras to classify images in the Fashion MNIST dataset.

Fig 2: SNN model

First we need to generate random pairs for training and testing. The image below shows examples of training pairs. The model should output 1 if the two images belong to the same class and 0 otherwise.

Fig 3: Training pairs
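Pair generation can be sketched as follows. This is a minimal, self-contained version, not the notebook's exact code; the function name `make_pairs` and the alternating positive/negative sampling scheme are my assumptions.

```python
import numpy as np

def make_pairs(images, labels, num_pairs, seed=0):
    """Build random same-class (target 1) and different-class (target 0) pairs.

    Note: this is an illustrative sketch, not the notebook's exact code.
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    # Index the dataset by class so we can sample within / across classes
    by_class = {c: np.where(labels == c)[0] for c in classes}
    left, right, targets = [], [], []
    for _ in range(num_pairs // 2):
        # Positive pair: two distinct images from the same class -> target 1
        c = rng.choice(classes)
        i, j = rng.choice(by_class[c], size=2, replace=False)
        left.append(images[i]); right.append(images[j]); targets.append(1)
        # Negative pair: one image from each of two different classes -> target 0
        c1, c2 = rng.choice(classes, size=2, replace=False)
        left.append(images[rng.choice(by_class[c1])])
        right.append(images[rng.choice(by_class[c2])])
        targets.append(0)
    return np.stack(left), np.stack(right), np.array(targets, dtype="float32")
```

For Fashion MNIST, `images` and `labels` would come from `tf.keras.datasets.fashion_mnist.load_data()`.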

The model definition is as follows:
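A Keras definition in the spirit of Fig 2 might look like this. The embedding layer sizes and the L1-distance head are assumptions on my part (a common choice for image-similarity SNNs), since the notebook's exact architecture is not reproduced here; the key point is that a single `base` network is applied to both inputs, so the weights are shared.

```python
import tensorflow as tf
from tensorflow.keras import layers, Input, Model

def build_snn(input_shape=(28, 28, 1)):
    # One embedding network whose weights are shared by both branches.
    # Layer sizes here are illustrative, not the notebook's exact values.
    base = tf.keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
    ])
    left = Input(shape=input_shape)
    right = Input(shape=input_shape)
    # Element-wise L1 distance between the two embeddings
    dist = layers.Lambda(lambda t: tf.abs(t[0] - t[1]))([base(left), base(right)])
    # Sigmoid head: outputs ~1 for same class, ~0 for different classes
    score = layers.Dense(1, activation="sigmoid")(dist)
    model = Model(inputs=[left, right], outputs=score)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Training then takes the two pair arrays as a two-element input list: `model.fit([left, right], targets, ...)`.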

We trained the model to minimize the binary cross-entropy loss, which is defined as:
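In standard notation, for N training pairs with labels y_i ∈ {0, 1} and predicted similarity ŷ_i (the sigmoid output), binary cross-entropy is:

```latex
\mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N} \left[\, y_i \log \hat{y}_i + (1 - y_i) \log\!\left(1 - \hat{y}_i\right) \right]
```

The first term penalizes low scores on same-class pairs; the second penalizes high scores on different-class pairs.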

Training this model on ~900 pairs of training data, I got a 76% accuracy on the test data.

Github: https://github.com/eroj333/learning-cv-ml/blob/master/SNN/Learning%20Similarity%20Function.ipynb

Next: https://medium.com/@enoshshr/triplet-loss-and-siamese-neural-networks-5d363fdeba9b
