Spike Time Dependent Plasticity

Lahiru Sampath · Published in Analytics Vidhya · 4 min read · Dec 12, 2019

What is Spike Time Dependent Plasticity?

Back-propagation with gradient descent is the underlying learning rule of artificial neural networks (ANNs). This rule relies on a loss function and tries to minimize the overall loss of the network by using the derivative of the loss as feedback. Even though the intuition for ANNs came from the natural neural networks in our brain, there are many differences between the two, and the learning rules are one of them. The underlying learning rule of natural neural networks is very different from back-propagation.

Neural networks trained with back-propagation are mostly focused on a fixed target. Once the network is trained for one particular task, further training overwrites the old weights, and the network forgets the old patterns it had learned. In ANNs, this phenomenon is called catastrophic forgetting. Natural neural networks are not vulnerable to catastrophic forgetting; they are capable of learning continuously from new inputs, mostly throughout the lifetime. To bring ANNs to this level, a dynamic learning rule is required.

Natural neural networks have a simple learning rule in which the weight dynamics of one particular connection/synapse depend only on the activities of the connected neurons, not on the activity of the whole network.

Neural synapse

Hebbian Rule

The Hebbian rule is one of the simplest and oldest learning rules claimed to be present in natural neural networks. Suppose there is a connection from neuron A to neuron B. In the brain, this connection, neuron A (the sender), and neuron B (the receiver) are called the synapse, the presynaptic neuron, and the postsynaptic neuron, respectively. According to the Hebbian learning rule, if two neurons fire together, the synaptic weight between them is increased.

In mathematics, the Hebbian rule is expressed as follows:

wij[n+1] = wij[n] + η · xi[n] · yj[n]

Here, wij[n] is the weight of the synapse from presynaptic neuron i to postsynaptic neuron j at time step n, η is the learning rate coefficient, and xi[n] and yj[n] are the outputs of neurons i and j at the nth time step.
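As a minimal sketch of this update in Python (the rate-coded activities and the learning rate value are illustrative assumptions, not from the article):

```python
import numpy as np

def hebbian_update(w, x_pre, y_post, eta=0.01):
    """One Hebbian step: strengthen each synapse in proportion
    to the product of pre- and postsynaptic activity.

    w      : (n_pre, n_post) weight matrix, w[i, j] = wij
    x_pre  : (n_pre,) presynaptic outputs xi[n]
    y_post : (n_post,) postsynaptic outputs yj[n]
    eta    : learning rate (illustrative value)
    """
    # Outer product gives xi[n] * yj[n] for every synapse (i, j).
    return w + eta * np.outer(x_pre, y_post)

# Example: three presynaptic neurons driving two postsynaptic neurons.
w = np.zeros((3, 2))
w = hebbian_update(w, x_pre=np.array([1.0, 0.0, 1.0]),
                   y_post=np.array([0.0, 1.0]))
```

Notice that for non-negative activities the update is never negative, which is exactly the first problem discussed below.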

However, some properties of the Hebbian rule make it unsuitable as a biological learning rule.

The most dominant factor is the lack of a proper way to decrease synaptic weight. When two neurons fire together, the synaptic weight can only increase. Even though some studies have proposed extended versions of the Hebbian rule with mechanisms to reduce synaptic weight, even those are not biologically plausible. Another problem of the Hebbian rule is that it does not consider the temporal difference between the presynaptic spike and the postsynaptic spike. It is not possible for both neurons to fire at exactly the same time; there is always a temporal difference between the two, and this rule does not treat it as an important factor.

Spike Time Dependent Plasticity

The STDP rule emerged as an answer to those problems. Here, the synaptic weight update depends on the sign and size of the temporal difference between the pre- and postsynaptic spikes. According to STDP, repeated firing of presynaptic spikes shortly before postsynaptic spikes leads to a long-term increase in synaptic weight, while repeated firing of postsynaptic spikes before the presynaptic spikes leads to a decrease in synaptic weight.

In its common pairwise form, the weight update with respect to the temporal difference in the STDP rule can be written as:

Δw = F(w) · e^(−Δt/τ+)   if Δt > 0 (pre fires before post)
Δw = −F(w) · e^(Δt/τ−)   if Δt < 0 (post fires before pre)

Here, Δt = tj − ti is the temporal difference between the postsynaptic and presynaptic spikes, τ+ and τ− are the time constants of the exponential learning windows, and F(w) describes the dependence of the update on the current weight of the synapse.
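A sketch of this pairwise rule in Python (the time constants, amplitudes, and the soft-bound form of F(w) are illustrative assumptions):

```python
import numpy as np

def stdp_delta_w(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                 tau_plus=20.0, tau_minus=20.0, w_max=1.0):
    """Pairwise STDP update for one (pre, post) spike pair.

    dt > 0 : pre fired before post -> potentiation (weight grows)
    dt < 0 : post fired before pre -> depression  (weight shrinks)
    All constants are illustrative; spike times are in milliseconds.
    """
    dt = t_post - t_pre  # temporal difference between post and pre spikes
    if dt > 0:
        # F(w) as a soft bound: strong synapses potentiate less.
        return a_plus * (w_max - w) * np.exp(-dt / tau_plus)
    # Depression, likewise scaled by the current weight.
    return -a_minus * w * np.exp(dt / tau_minus)

# Pre spike at 10 ms, post at 15 ms -> positive weight change.
print(stdp_delta_w(w=0.5, t_pre=10.0, t_post=15.0))
# Post spike at 10 ms, pre at 15 ms -> negative weight change.
print(stdp_delta_w(w=0.5, t_pre=15.0, t_post=10.0))
```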

Some biological studies suggest that if the temporal difference is too small, the change in synaptic weight will be positive (an increase of weight) regardless of the order of the pre and post spikes. As a result, the connection strength between frequently co-firing neurons is always enhanced.

However, implementing this STDP rule is considered a computationally expensive task, because to update a synaptic weight we have to keep track of the spike times of every neuron in order to calculate the temporal differences. 'Synaptic traces' are a much more efficient approach used by researchers to achieve the same effect as STDP. Read more on this topic if you wish to use STDP as the learning rule of your neural network.
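As a minimal sketch of the trace idea (all constants are assumptions): each neuron keeps an exponentially decaying trace that is reset to 1 whenever it spikes, so an update at spike time only reads the current trace values instead of searching the full spike history.

```python
import numpy as np

def run_trace_stdp(pre_spikes, post_spikes, steps, w=0.5,
                   a_plus=0.01, a_minus=0.012, tau=20.0, dt=1.0):
    """Trace-based STDP over a discrete-time simulation.

    pre_spikes / post_spikes : sets of time steps at which each neuron fires.
    Each trace answers 'how recently did this neuron fire?' without
    storing any spike times. All constants are illustrative.
    """
    trace_pre, trace_post = 0.0, 0.0
    decay = np.exp(-dt / tau)  # per-step exponential decay of the traces
    for t in range(steps):
        trace_pre *= decay
        trace_post *= decay
        if t in pre_spikes:
            trace_pre = 1.0
            # Pre spike after a recent post spike -> depression.
            w -= a_minus * trace_post
        if t in post_spikes:
            trace_post = 1.0
            # Post spike after a recent pre spike -> potentiation.
            w += a_plus * trace_pre
    return w

# Pre fires at t=10, post at t=15: the pre trace is still large,
# so the weight increases, reproducing the STDP potentiation window.
print(run_trace_stdp(pre_spikes={10}, post_spikes={15}, steps=30))
```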

Hope this is helpful.

Thank you.
