Spiking Neuron

Swayanshu Shanti Pragnya
Published in Analytics Vidhya
4 min read · Aug 30, 2021

In this video, I explain the role of neurons in the human brain and illustrate the performance differences between artificial and biological neurons. Thanks to the paper by Lei Deng et al. (https://www.sciencedirect.com/science...) for examining the performance of spiking neural networks and artificial neural networks.

Spiking neurons are inspired by the biological brain. A biological neuron is divided into three essentially distinct parts known as dendrites, soma, and axon. Dendrites, in general, function as the ‘input method,’ collecting signals from other neurons and transmitting them to the soma. The soma is the ‘central processing unit’ that performs a critical non-linear processing step: An output signal is generated if the total input arriving at the soma exceeds a certain threshold. The axon, the ‘output,’ takes over the output signal and sends it to other neurons. A synapse is a connection between two neurons [1][5].

Spiking neuron models are mathematical characterizations of the attributes of specific cells in the nervous system that generate sharp electrical potentials across their cell membrane that last about one millisecond, known as action potentials or spikes. Spiking neurons are considered a major information processing unit of the nervous system because spikes are transmitted along the axon and synapses from the sending neuron to many other neurons [1].

Figure 1 Ramón y Cajal’s drawing of a single neuron [2]

In Figure 1, Part A, there is a clear distinction between the dendrites, soma, and axon. The inset shows a neuronal action potential (schematic): a short voltage pulse with a duration of about 1–2 ms and an amplitude of about 100 mV. In Part B, a presynaptic neuron j sends signals to a postsynaptic neuron i. Dashed circles mark the synapses. The axons at the lower right lead to other neurons (schematic figure) [2].

Figure 2 The membrane potential is represented on the y-axis, and time in milliseconds on the x-axis [3].

Generally, spiking neurons imitate biological neurons, so they produce spikes (action potentials) that we can track as a voltage. We are interested in d(Voltage)/dt: when the voltage crosses a threshold, a spike (action potential) occurs. After that, the membrane potential resets and the neuron enters a refractory period, during which further input does not trigger a spike. If the input never drives the membrane potential above the threshold, we will not see any spike at all.

Let's understand the difference between an artificial neural network (ANN) and a spiking neural network (SNN):

Figure 3 (a) ANNs and (b) SNNs [4]

Figure 3(a) depicts the model of a typical artificial neuron, where x, y, w, and b are the input activation, output activation, synaptic weight, and bias, respectively, and j indexes the input neurons. ϕ(·) is a nonlinear activation function, e.g. ϕ(x) = ReLU(x) = max(x, 0). Neurons in ANNs communicate with each other using activations coded in high-precision, continuous values and only propagate information in the spatial domain (i.e. layer by layer). From this formulation, it can be seen that the multiply-and-accumulate (MAC) of inputs and weights is the major operation in ANNs. Part (b) shows a typical spiking neuron, which has a similar structure but different behavior compared to the ANN neuron. By contrast, spiking neurons communicate through spike trains coded as binary events rather than the continuous activations of ANNs. The dendrites integrate the input spikes, and the soma then applies a nonlinear transformation to produce the output spike train [4]. This behavior is usually modeled by the popular LIF model.
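Using the notation above, the two neuron models can be written as follows (a standard rendering consistent with the description, not copied from the article):

```latex
% Artificial neuron: weighted sum (MAC) followed by a nonlinearity
y = \phi\Big(\sum_j w_j x_j + b\Big)

% Leaky integrate-and-fire (LIF) dynamics of a spiking neuron:
% the membrane potential V(t) integrates weighted input spikes s_j(t)
% and leaks toward rest; a spike fires when V crosses the threshold.
\tau \frac{dV(t)}{dt} = -\big(V(t) - V_{\mathrm{rest}}\big) + \sum_j w_j\, s_j(t),
\qquad V(t) \ge V_{\mathrm{th}} \;\Rightarrow\; \text{spike, } V(t) \to V_{\mathrm{reset}}
```

The first equation makes the MAC structure of the ANN neuron explicit; the second shows why the spiking neuron's state depends on time as well as on its inputs.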

SNNs represent information in spike patterns, and each spiking neuron exhibits rich dynamic behavior. Specifically, besides the information propagation in the spatial domain, the current state is tightly affected by the past history in the temporal domain. Therefore, SNNs usually have more temporal versatility but lower precision compared to ANNs, which rely mainly on spatial propagation and continuous activations. Since a spike only fires when the membrane potential exceeds a threshold, the spike signals are typically sparse, and computation can be event-driven (enabled only when a spike input arrives). Furthermore, because a spike is binary, i.e. 0 or 1, the costly multiplication between input and weight can be removed [4].
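The multiplication-free property is easy to illustrate: with continuous activations, every input requires a multiply-and-accumulate, whereas with binary spikes the weight is simply added whenever its input fired. The function names and values below are illustrative, not from the paper:

```python
# Sketch of why binary spikes remove multiplications.

def ann_mac(inputs, weights):
    """Dense MAC: one multiplication per input activation."""
    return sum(x * w for x, w in zip(inputs, weights))

def snn_accumulate(spikes, weights):
    """Event-driven accumulation: add w only where a spike (1) arrived."""
    return sum(w for s, w in zip(spikes, weights) if s == 1)

weights = [0.5, -1.2, 0.8, 0.3]
spikes = [1, 0, 1, 0]          # sparse binary spike vector

# The two paths agree whenever the activations happen to be binary,
# but the SNN path never performs a multiplication.
print(ann_mac(spikes, weights) == snn_accumulate(spikes, weights))
```

In hardware, this replacement of MACs by conditional additions (together with spike sparsity) is a major source of the energy savings claimed for SNNs.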

For the above reasons, SNNs can usually achieve lower power consumption than computation-intensive ANNs.

References:

[1] Gerstner W, Kistler WM (2002). Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge, U.K.: Cambridge University Press. ISBN 0-511-07817-X. OCLC 57417395.

[2] Ramón y Cajal, S. Histologie du système nerveux de l'homme et des vertébrés. A. Maloine, Paris.

[3] M. Bouvier et al., "Spiking Neural Networks Hardware Implementations and Challenges: A Survey," Neural and Evolutionary Computing, arXiv.org, p. 35, 2020.

[4] L. Deng, Y. Wu, X. Hu et al., Rethinking the performance comparison between SNNs and ANNs. Neural Networks (2019), doi: https://doi.org/10.1016/j.neunet.2019.09.005.

[5] https://neuronaldynamics.epfl.ch/online/Ch1.S1.html
