Back-Propagation Learning Algorithms for Analog VLSI Implementation

Neural networks have been implemented in VLSI in both digital and analog form, and the choice between the two depends strongly on the application. Analog VLSI neural networks (AVNNs) are best suited to biological models and to exploiting innovative circuit solutions.

Advantages of AVNN:

  1. real-time processing
  2. low power consumption
  3. compact circuits

Disadvantages:

  1. Imperfections and non-idealities of analog circuits.

To obtain an efficient AVNN, the synaptic multipliers are implemented with analog circuits operating in the strong- or weak-inversion region, even though such circuits exhibit strong non-linearities.

BACK-PROPAGATION ALGORITHM:

A network learns an input-output mapping from a training set. Learning is iterative: at every iteration step, the synaptic weights are adjusted so as to minimize the output error.

Implementation of BP Algorithm on VLSI:

Implementing the BP algorithm in VLSI is not an easy task, as it requires high computational accuracy, particularly during the learning phase. Two approaches have been used to address this problem:

  1. Attain good performance by searching for a near-optimum value of the learning rate.
  2. Analyze the features of analog VLSI circuits to derive a constraint-driven formulation of the BP algorithm.

In both cases, the precision requirements of the analog implementation are relaxed. This makes it possible to use simple circuits, at the expense of a slower learning procedure.
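As a rough illustration of that trade-off (a sketch, not from the paper), the snippet below quantizes each weight update to a finite resolution, standing in for the limited precision of an analog weight cell; small gradient steps are rounded away, so more iterations are needed to reach the same error. The `resolution` parameter is a hypothetical stand-in.

```python
import numpy as np

def quantized_update(w, grad, eta=0.1, resolution=0.01):
    """Gradient step whose magnitude is quantized to a fixed
    resolution, mimicking a limited-precision analog weight
    update. `resolution` is illustrative, not from the paper."""
    dw = -eta * grad
    # Round each update to the nearest multiple of the resolution;
    # updates smaller than resolution/2 vanish entirely.
    dw_q = resolution * np.round(dw / resolution)
    return w + dw_q
```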

An AVNN system can be divided hierarchically into several levels: system, module, macrocell, basic cell, and device. The design process usually follows a top-down flow through this hierarchy. In an efficient implementation, the non-linearities of the network strongly affect system performance; the non-linearities of the neuron and synapse circuits are therefore no longer viewed as imperfections but as specific features of the devices.[1] The authors focus their attention on behavioral simulation.

CONSTRAINT-DRIVEN DESIGN APPROACH:

The authors derive the constraint-driven formulation of the back-propagation equations as follows.

The sum-squared error is computed over all patterns p:

E(w̄) = ½ Σp Σj (tpj − opj)²

where w̄ is the vector of synaptic weights, with wji the weight from neuron i to neuron j;

O(·) is the output function, so opj is the actual output of neuron j for pattern p;

T(·) is the target output function, so tpj is the corresponding target output.

At each iteration step, the weights are moved in the direction opposite to the gradient of the error:

Δpwji = −η ∂Ep/∂wji = η 𝛿pj opi

where η is the learning rate (step size),

𝛿pj is the error contribution from neuron j after the presentation of pattern p, and

opi is the output of neuron i.

On this basis, the updated formulation of the BP algorithm, with a momentum term, is:

Δwji(n+1) = η 𝛿pj opi + α Δwji(n)

where α is the momentum factor and n indexes the iteration step.
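As a minimal numerical sketch of this update rule (the paper's realization is an analog circuit, and the η and α values here are illustrative):

```python
import numpy as np

def bp_update(w, delta, o_prev, dw_prev, eta=0.1, alpha=0.9):
    """One BP weight update with momentum.

    w        (j, i) weight matrix; w[j, i] is the weight from neuron i to j
    delta    error contributions 𝛿_pj, shape (j,)
    o_prev   outputs o_pi of the previous layer, shape (i,)
    dw_prev  previous weight change, used by the momentum term
    eta      learning rate; alpha: momentum factor (both illustrative)
    """
    dw = eta * np.outer(delta, o_prev) + alpha * dw_prev
    return w + dw, dw
```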

The circuit that updates the synaptic weights, stores them, and multiplies them with the input pattern can be characterized as follows (a behavioural sketch is given after the list):

  1. the weight factor of the feedforward multiplication is controlled by a voltage;
  2. this voltage is updated dynamically and stored on a capacitor.
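Since the weight voltage sits on a capacitor, it leaks between updates. The sketch below models this with an exponential decay whose time constant tau_leak is purely hypothetical; real leakage rates depend on the fabrication technology.

```python
import numpy as np

def stored_weight_voltage(v0, dv_updates, dt=1e-3, tau_leak=1.0):
    """Behavioural model of a weight voltage stored on a capacitor.

    v0:         initial stored voltage
    dv_updates: voltage increments applied at successive update steps
    dt:         time between updates in seconds (illustrative)
    tau_leak:   leakage time constant in seconds (hypothetical)

    Between updates the stored voltage decays exponentially, so the
    weight must be refreshed or re-learned periodically.
    """
    v = v0
    trace = []
    for dv in dv_updates:
        v = v * np.exp(-dt / tau_leak) + dv  # leak, then apply update
        trace.append(v)
    return trace
```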

CIRCUIT BEHAVIOUR:

As shown in Fig 1, the circuit consists of two complementary transconductance amplifiers, each made of a differential stage and a current-mirror stage. There is one current-mirror stage per synaptic row. The synaptic inputs are the chip input voltages Vin, and the signal-ground voltage Vref is set to 2.5 V.

The transconductance of T1 and T2 depends on the current in the bias transistors M1 and M2. This current, Ibji, flows through Mb1 or Mb2 in the bias stage and is controlled by the voltage Vbji stored dynamically on the gate capacitance.

Fig 1. The Synaptic Multiplier
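The following behavioural sketch (a simplification, not the paper's model) captures how the multiplier depends on the bias current: the transconductance Gm is computed from Ibji using the standard long-channel expressions, and the output current follows Iji = Gmji·(Vini − Vref) for small differential inputs. The parameters n and beta are illustrative device constants.

```python
import numpy as np

UT = 0.026  # thermal voltage at room temperature (V)

def gm_weak(i_bias, n=1.5):
    """Differential-pair transconductance in weak inversion:
    linear in the bias current (n is an illustrative slope factor)."""
    return i_bias / (2.0 * n * UT)

def gm_strong(i_bias, beta=1e-4):
    """Differential-pair transconductance in strong inversion:
    square-root dependence on the bias current (beta illustrative)."""
    return np.sqrt(beta * i_bias)

def synapse_current(v_in, i_bias, v_ref=2.5, weak=True):
    """Small-signal output current of the synaptic multiplier,
    I_ji = Gm_ji * (V_in,i - V_ref); large differential inputs
    would saturate the pair, which this sketch ignores."""
    gm = gm_weak(i_bias) if weak else gm_strong(i_bias)
    return gm * (v_in - v_ref)
```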

The operation performed by neuron j and the N synapses connected to it is summarized in Fig 2. In each synaptic multiplier, the input voltage Vini is multiplied by the weight value Gmji to produce the output current Iji. These currents are summed to form the input of the neuron.

Fig 2. Macro model of jth neuron and N synapses.
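A macro model of the whole row can then be sketched as below; the tanh-shaped activation and the current-to-voltage gain r_out are assumptions made for illustration, not values taken from the paper.

```python
import numpy as np

def neuron_macromodel(v_in, gm, v_ref=2.5, r_out=1e5):
    """Macro model of neuron j and its N synapses (cf. Fig 2):
    each synapse contributes I_ji = Gm_ji * (V_in,i - V_ref);
    the currents sum on the neuron input node, and an assumed
    tanh-shaped activation produces the neuron output."""
    v_in = np.asarray(v_in, dtype=float)
    gm = np.asarray(gm, dtype=float)
    i_sum = np.sum(gm * (v_in - v_ref))  # summed synaptic currents
    return np.tanh(r_out * i_sum)        # squashed neuron output
```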

Conclusion:

The authors proposed and evaluated a CMOS analog circuit implementation of the back-propagation algorithm. The approach can also be applied to non-linear networks.

References:

[1] M. Valle, D. Caviglia, and G. Bisio, "Back-Propagation Learning Algorithms for Analog VLSI Implementations," VLSI for Neural Networks and Artificial Intelligence, pp. 35–44, 1994.
