Discovering SOM, an Unsupervised Neural Network

Gisely Alves · Published in Neuronio · Dec 27, 2018

The Portuguese version of this article is available as Descobrindo SOM, uma Rede Neural com aprendizado não supervisionado.

The Self Organizing Map (SOM) is an unsupervised neural network technique. SOM is useful when the dataset has many attributes, because it produces a low-dimensional (most often two-dimensional) output. The output is a discretised representation of the input space, called a map.

One interesting thing about SOM is that it is based on competitive learning: the neurons (or nodes) compete to decide which one will respond to (be activated by) a given input, and that neuron is called the winner. SOM can be implemented with lateral inhibition connections, which give the winner neuron the capacity to reduce the activity of its neighbours through negative feedback. Another concept SOM builds on is the topographic map: the information kept from an input is represented by a group of neighbouring neurons that interact through short connections, and each output neuron of a topographic map corresponds to a feature of the input data.

Example of lateral inhibition in our own neurons producing an optical illusion

How does SOM work?

Each point in the input space has a corresponding point in the output space. In Kohonen networks, a kind of SOM, there is a single two-dimensional layer, and the input points are fully connected to the neurons on this layer.

At the start of the self-organization process, the weights are initialized with random values. Then, for the competition step, every neuron computes the discriminant function below over the input features, and the neuron with the smallest value is the winner:

d_j(x) = \sum_{i=1}^{D} (x_i - w_{ji})^2

D = dimension of the inputs; x = the inputs; w = the weights

This function shows which neuron's weight vector is most similar to the input vector.
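As a sketch of this winner search (my own NumPy illustration, not code from the article; the 5×5 grid and the input vector are made-up values):

```python
import numpy as np

# Hypothetical 5x5 map of weight vectors for 3-dimensional inputs (D = 3)
rng = np.random.default_rng(0)
weights = rng.random((5, 5, 3))

def find_winner(x, weights):
    """Return the grid coordinates of the neuron whose weight vector
    minimises the discriminant (squared Euclidean distance to x)."""
    d = np.sum((weights - x) ** 2, axis=-1)  # one distance per neuron
    return np.unravel_index(np.argmin(d), d.shape)

x = np.array([0.2, 0.5, 0.9])
winner = find_winner(x, weights)
```

Because the discriminant is a squared Euclidean distance, the winner is simply the neuron whose weights are closest to the input.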

When a neuron fires, its neighbours are excited more strongly than faraway neurons. This process is called the topological neighbourhood, and it is calculated as below:

T_{j,I(x)} = \exp\left(-\frac{S_{j,I(x)}^2}{2\sigma^2}\right)

where S_{j,I(x)} is the lateral distance between neuron j and the winner, I(x) is the index of the winner neuron, and σ is the neighbourhood radius, which decreases with time. The topological neighbourhood value tends to zero as the distance to the winner increases.
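A minimal sketch of this Gaussian neighbourhood (my own illustration; the σ values are made up to show the effect of shrinking the radius over time):

```python
import numpy as np

def neighbourhood(lateral_dist, sigma):
    """Gaussian topological neighbourhood: equals 1 at the winner
    (lateral_dist = 0) and decays towards zero with distance."""
    return np.exp(-lateral_dist ** 2 / (2 * sigma ** 2))

# With a smaller sigma (later in training), the same neighbour
# receives a much weaker excitation.
early = neighbourhood(2.0, sigma=3.0)
late = neighbourhood(2.0, sigma=1.0)
```

Shrinking σ over time makes the map settle: early on, whole regions move together; later, only the winner's closest neighbours are adjusted.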

With t being the number of epochs and η(t) the learning rate at that time, the weights are updated with this formula:

\Delta w_{ji} = \eta(t) \, T_{j,I(x)}(t) \, (x_i - w_{ji})

As we can see, the weights are moved according to the topological neighbourhood, so distant neurons receive smaller updates. This produces an effect of the winner neuron pulling the other neurons towards the input.
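One full update step can be sketched like this (a toy NumPy implementation of the rule just described, not MiniSom's code; the grid size, η, and σ are illustrative):

```python
import numpy as np

def som_update(weights, x, winner, eta, sigma):
    """Move every weight vector towards input x, scaled by the learning
    rate eta and the Gaussian neighbourhood around the winner."""
    rows, cols = np.indices(weights.shape[:2])
    # squared lateral (grid) distance of each neuron to the winner
    dist2 = (rows - winner[0]) ** 2 + (cols - winner[1]) ** 2
    h = np.exp(-dist2 / (2 * sigma ** 2))  # topological neighbourhood
    return weights + eta * h[..., None] * (x - weights)

rng = np.random.default_rng(1)
weights = rng.random((5, 5, 3))
x = np.array([1.0, 0.0, 0.0])
new_w = som_update(weights, x, winner=(2, 2), eta=0.5, sigma=1.0)
```

At the winner the neighbourhood equals 1, so its weights move a full η-sized step towards x, while corner neurons barely move: exactly the "pulling" effect described above.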

The SOM algorithm stops when the feature map stops changing.

Hands On!

In this article we'll code a SOM that learns colors, using the MiniSom library, a simple implementation of Self Organizing Maps.

Training a SOM network with this library is very simple: with only four lines it was possible to train our model and produce the output below, showing that the network learned the colors.
