AN ANALOG CMOS IMPLEMENTATION OF A KOHONEN NETWORK WITH LEARNING CAPABILITY

With the increasing demand for AI-based embedded systems, developing specialized hardware to implement a neural network or algorithm has become a popular research direction. One early paper in this domain is “An Analog CMOS Implementation of a Kohonen Network with Learning Capability” [1], published in 1994 in VLSI for Neural Networks and Artificial Intelligence (Springer).

For those who are new to the Kohonen network (also referred to as a Self-Organizing Map), here is the basic algorithm.

1. Initialize the weights Wij randomly and initialize the learning rate α

2. Present one input vector X from the input space

3. Determine which cell stores the Wij most similar to X, i.e., select the cell with the minimum squared Euclidean distance between X and Wij

4. Declare that unit the “winner”

5. For all cells within a specific neighborhood of the winner (the term used by the author is “activity bubble”), update the weights with the following equation

Wij(new) = Wij(old) + α(X − Wij(old))    (1)

6. Update the learning rate α
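The six steps above can be sketched in Python. The map size, input dimension, neighborhood radius, and learning-rate decay schedule below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, dim = 10, 3            # assumed: 1-D map of 10 cells, 3-D inputs
W = rng.random((n_cells, dim))  # step 1: initialize weights randomly
alpha = 0.5                     # step 1: initial learning rate (assumed)

def train_step(x, W, alpha, radius=1):
    # steps 3-4: the winner is the cell with the minimum
    # squared Euclidean distance to the input x
    d2 = np.sum((W - x) ** 2, axis=1)
    winner = int(np.argmin(d2))
    # step 5: update every cell inside the "activity bubble"
    lo, hi = max(0, winner - radius), min(len(W), winner + radius + 1)
    W[lo:hi] += alpha * (x - W[lo:hi])   # eq. (1)
    return winner

for k in range(100):
    x = rng.random(dim)          # step 2: present an input vector
    train_step(x, W, alpha)
    alpha *= 0.99                # step 6: decay the learning rate
```

Since each update in eq. (1) is a convex combination of the old weight and the input (for 0 < α ≤ 1), the weights stay inside the range spanned by the inputs.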

Fig. 1. Structure of one Kohonen Cell [1]

The above figure shows a typical Kohonen cell. The synapse is the most crucial part, as it stores Wij. The author states that one of the most challenging aspects of hardware development is that nominally identical synapse cells cannot be fabricated identically (he mentions device parameters differ by 20–30% between cells). If, say, a quantity is represented by a voltage and two cells use slightly different references, then their computed outputs are scaled differently.

In a Kohonen network, however, this does not pose a problem: signals may be scaled to different references as long as those references remain internal to each cell, and all information exchanged between cells is referenced to a common ground.

Fig. 2. A Chip Network with 3 Cells (Can be scaled to any number of cells) [1]

Fig. 2 depicts the typical schematic of one block of the chip.

The basic elements of the schematic are the cells, each of which includes a synapse.

Fig. 3. Synapse Circuit [1]

Fig. 3 shows the circuit of a synapse in the chip. The two important tasks of this circuit are to store the weights and to update them according to eq. (1).

Memory — An integrator acts as a low-leakage capacitive memory, holding stored values for several minutes with a reasonable loss of charge. When switch S1 is closed, writing to memory is done by integrating the input current and Im. The memory is latched by opening S1. The switches are implemented as MOS transistors.

Learning — A temporary memory and switches S2 and S3 are required to carry out the full algorithm. Given the input current Ix and the current value of Im, the next value of Im (the value written to memory) can be determined by rewriting eq. (1) as:

Im(k+1) = αIx(k) + (1 − α)Im(k)    (2)

The memory is updated by holding the stored Im(k) in the temporary memory and switching between Im(k) and Ix(k) using S3.
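Eq. (2) can be checked numerically: with a constant input current, the stored current converges geometrically toward the input. The α value, units, and step count below are illustrative, not taken from the paper.

```python
# Discrete-time weight update of eq. (2):
#   Im(k+1) = alpha*Ix(k) + (1 - alpha)*Im(k)
alpha = 0.25
Ix = 1.0      # constant input current (arbitrary units, assumed)
Im = 0.0      # initial stored current (assumed)

for k in range(40):
    Im = alpha * Ix + (1 - alpha) * Im   # eq. (2)

# the remaining error |Im - Ix| shrinks by a factor (1 - alpha) each step
```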

Output — The Wij stored in the synapse and the input are used to compute the cell’s output, the squared Euclidean distance between X and Wi.
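As a minimal sketch (the vectors below are made up for illustration), each synapse contributes one squared-difference term and the cell output is their sum:

```python
x = [0.2, 0.7, 0.1]    # input vector X (illustrative)
w = [0.0, 0.5, 0.5]    # weights Wi stored in the cell's synapses (illustrative)

# each synapse j contributes (x_j - w_j)^2; the cell sums them
d2 = sum((xj - wj) ** 2 for xj, wj in zip(x, w))
```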

Limitations of the Circuit:

  1. Charge injection (from the MOS switches)
  2. Charge leakage (limiting how long the capacitive memory can hold a weight)

References —

[1] O. Landolt, “An Analog CMOS Implementation of a Kohonen Network with Learning Capability,” VLSI for Neural Networks and Artificial Intelligence, pp. 25–34, 1994.
