Spiking Neural Network Architectures

NeuroCortex.AI
16 min read · Dec 12, 2023

--

This is the third part of the five-part series on spiking neural networks.

Part 1 here: What is Spiking Neural Network? | by NeuroCortex.AI | Oct, 2023 | Medium

Part 2 here: What makes Spiking neural network tick? | by NeuroCortex.AI | Nov, 2023 | Medium

Artistic visualization of a neuron and interconnections

The mathematics of spiking neural networks (SNNs) involves modeling the dynamics of individual neurons and the interactions between neurons. SNNs are computational models inspired by the way biological neurons communicate through discrete electrical events called spikes or action potentials. The mathematics of SNNs typically includes the following key components:

  1. Neuron Models:
  • Leaky Integrate-and-Fire (LIF) Model: Describes the integration of incoming currents by a neuron, leading to a spike when a threshold is reached.
  • Izhikevich Model: A computationally efficient model capable of reproducing various spiking patterns observed in real neurons.
  • Hodgkin-Huxley Model: A biophysically detailed model that considers the dynamics of ion channels in the neuronal membrane.

2. Synaptic Interactions:

  • Synaptic connections are modeled to transmit signals between neurons.
  • Synaptic weights determine the strength of connections and influence the post-synaptic neuron’s membrane potential.

3. Spiking Dynamics:

  • Neurons generate spikes when their membrane potential exceeds a certain threshold.
  • Spike timing, frequency, and patterns encode information in SNNs.

4. Network Topology:

  • The arrangement of neurons and their connections form the network topology.
  • Different network architectures, such as feedforward, recurrent, or sparsely connected networks, can be considered.

5. Mathematical Equations:

  • The behavior of neurons is often described by ordinary differential equations or difference equations, capturing the dynamics of membrane potential, synaptic interactions, and spike generation.

6. Learning Rules:

  • Synaptic plasticity rules govern how synaptic weights change over time, enabling learning in the network.
  • Spike-timing-dependent plasticity (STDP) is a common learning mechanism based on the relative timing of pre- and post-synaptic spikes.

7. Simulation and Analysis:

  • Numerical simulations are used to study the behavior of SNNs over time.
  • Analytical tools, such as phase-plane analysis, are employed to understand the dynamics of the network.
Artificial Neural Network and Neuronal Dynamics

SNN architectures:

Spiking Neural Networks (SNNs) were developed in computational neuroscience to replicate the behavior of biological neurons. One outcome of this work is the Leaky Integrate-and-Fire (LIF) model, which characterizes neuronal activity as the integration of incoming spikes combined with a slow leak (leakage) of charge to the environment.
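
In continuous time, the LIF dynamics are commonly written as follows (a minimal sketch; here $V$ is the membrane potential, $\tau_m$ the membrane time constant, $R$ the membrane resistance, and $I(t)$ the input current, with notation varying between texts):

$$\tau_m \frac{dV(t)}{dt} = -\bigl(V(t) - V_{\text{rest}}\bigr) + R\, I(t)$$

A spike is emitted whenever $V(t)$ reaches the threshold $V_{\text{th}}$, after which the potential is reset to $V_{\text{reset}}$ and, in many formulations, held there for a short refractory period.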

Spiking Neural Networks do not have to follow a strict layered structure. Beyond the input and output layers, they need not be organized into crisp hidden layers; instead, they can use more complex structures, such as loops or multi-directional connections, to convey data between neurons. Because of this complexity, they require distinct types of training and learning algorithms: techniques such as back-propagation must be modified to accommodate spiking behavior.

A typical Spiking Neural Network architecture looks like this:

Typical Spiking Neural Networks architecture

Let’s sum up the general idea of Spiking Neural Networks:

  1. The value of each neuron is analogous to the membrane potential of a biological neuron
  2. A neuron’s value fluctuates according to its mathematical model; for instance, a spike arriving from an upstream neuron can increase or decrease it
  3. As soon as a neuron’s value crosses a certain threshold, it transmits a single impulse to every downstream neuron connected to it, and its value then drops below its resting average
  4. The neuron thus has a refractory phase comparable to a biological neuron’s, after which its value gradually reverts to the average

Despite being unique in concept, an SNN is still a neural network, so SNN architectures can be divided into three groups:

  1. Feedforward Neural Network is the classical NN architecture that is widely used across all industries. In such an architecture the data is transmitted strictly in one direction — from inputs to outputs, there are no cycles, and processing can take place over many hidden layers. The majority of modern ANN architectures are feedforward;
  2. Recurrent Neural Network (RNN) is a more advanced architecture. In RNNs, connections between neurons form a directed graph along a temporal sequence, which allows the network to exhibit temporal dynamic behavior. A recurrent SNN is dynamical and has high computational power;
  3. In a Hybrid Neural Network, some neurons have feedforward connections whereas others have recurrent connections; moreover, connections between these two groups may themselves be either feedforward or recurrent. Two types of hybrid networks can be used as an SNN architecture:
  • Synfire chain: a multilayer network in which impulse activity propagates as a synchronous wave of spike trains from one layer to the next and back;
  • Reservoir computing: can be used to build a Reservoir SNN with a recurrent reservoir and output neurons.
Comparing Convolution Neural Network and Spiking Neural Network
Akida NSoC Architecture
Akida Neuron Fabric

The “Akida” device has an on-chip processor complex for system and data management and is also used to tell the neuron fabric (more on that in a moment) to be in training or inference modes. This is a matter of setting the thresholds in the neuron fabric. The real key is the data to spike converter, however, especially in areas like computer vision where pixel data needs to be transformed into spikes. This is not a computationally expensive problem from an efficiency perspective, but it does add some compiler and software footwork. There are audio, pixel, and fintech converters for now with their own dedicated place on-chip.

The CIFAR-10 benchmark on which they rate their performance and efficiency isn’t common, but the chart below highlights more than just an apples-to-apples comparison. It shows that the other neuromorphic architecture here, TrueNorth, has great performance and efficiency but is expensive given the process technology; Intel’s “Loihi” is not shown, but it is likely in the same camp. The real story for Brainchip is the price for relative accuracy, and for many inference shops a 10% difference can be a big deal, according to Beachler.

The CIFAR-10 benchmark used to rate performance and efficiency

Brainchip’s “Akida” chip is aimed at both datacenter training and inference but realistically, as the company’s head of product Bob Beachler tells us, inference at the edge is the prime target. This includes vision systems in particular but also financial tech applications where users cannot tolerate intermittent connectivity or latency from the cloud.

The company’s Akida Development Environment will be released in Q3 of this year, with the FPGA-based accelerator available at the end of 2018. By 2019 the company wants to roll out its SoC samples, with the Akida Acceleration Card available at the end of 2019 with an undisclosed number of chips per board.

Spike Based Neural Codes

Artificial spiking neural networks are designed to do neural computation. This necessitates that neural spiking is given meaning: the variables important to the computation must be defined in terms of the spikes with which spiking neurons communicate. A variety of neuronal information encodings have been proposed based on biological knowledge:

  • Binary Coding:

Binary coding is an all-or-nothing encoding in which a neuron is either active or inactive within a specific time interval, firing one or more spikes throughout that time frame. The finding that physiological neurons tend to activate when they receive input (a sensory stimulus such as light or external electrical inputs) encouraged this encoding.

This binary abstraction can be applied to individual neurons, which are then portrayed as binary units that can take only two on/off values. It can also be applied to the interpretation of spike trains from existing spiking neural networks, where a binary reading of the output spike trains is used for spike-train classification.
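
As a rough illustration (the tensor shapes and values are made up for demonstration, using PyTorch since the later examples in this article rely on it), a binary reading of a spike train simply asks whether each neuron fired at all within the time window:

import torch

# toy spike train with shape [num_steps, num_neurons]; 1.0 marks a spike
spk_train = (torch.rand(100, 5) < 0.05).float()

# binary code: a neuron counts as "active" if it fired at least once in the window
binary_code = (spk_train.sum(dim=0) > 0).float()
print(binary_code)   # a vector of 0s and 1s, one entry per neuron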

  • Rate Coding:

Only the rate of spikes in an interval is employed as a metric for the information communicated in rate coding, which is an abstraction from the timed nature of spikes. The fact that physiological neurons fire more frequently for stronger (sensory or artificial) stimuli motivates rate encoding.

It can be used at the single-neuron level or in the interpretation of spike trains once more. In the first scenario, neurons are directly described as rate neurons, which convert real-valued input numbers “rates” into an output “rate” at each time step. In technical contexts and cognitive research, rate coding has been the concept behind conventional artificial “sigmoidal” neurons.
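
With snnTorch (used later in this article), rate coding of analog values into spike trains can be sketched roughly as follows; spikegen.rate treats each input value as a firing probability per time step, and the values here are purely illustrative:

import torch
from snntorch import spikegen

# analog input in [0, 1]: larger values should produce more spikes
x = torch.tensor([0.1, 0.5, 0.9])

# rate coding: Bernoulli spike at each step with probability equal to the input
spike_train = spikegen.rate(x, num_steps=100)   # shape: [100, 3]

print(spike_train.sum(dim=0))   # spike counts roughly proportional to the input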

  • Fully Temporal Codes

The encoding of a fully temporal code is dependent on the precise timing of all spikes. Evidence from neuroscience suggests that spike-timing can be incredibly precise and repeatable. Timings are related to a certain (internal or external) event in a fully temporal code (such as the onset of a stimulus or spike of a reference neuron).

  • Latency Coding

The timing of spikes is used in latency coding, but not the number of spikes. The latency between a specific (internal or external) event and the first spike is used to encode information. This is based on the finding that significant sensory events cause upstream neurons to spike earlier.

This encoding has been employed in both unsupervised and supervised learning approaches, such as SpikeProp and the Chronotron, among others. Closely related is rank-order coding, in which information about a stimulus is encoded in the order in which the neurons within a group generate their first spikes.
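
Again with snnTorch as a rough sketch (the parameter values are illustrative, not canonical), latency coding maps each input value to the time of a single spike, with stronger inputs firing earlier:

import torch
from snntorch import spikegen

x = torch.tensor([0.1, 0.5, 0.9])

# latency coding: one spike per input; larger inputs spike earlier in the window
spike_train = spikegen.latency(x, num_steps=100, tau=5, threshold=0.01,
                               normalize=True, linear=True)

# index of the first (and only) spike per neuron: the strongest input fires first
print(spike_train.argmax(dim=0))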

SNN Architecture

Architecture of Spiking Neural Network

Here is a simple implementation of a spiking neuron using the snnTorch library:

$ pip install snntorch

import torch
import snntorch as snn
from snntorch import spikeplot as splt
from snntorch import spikegen
from matplotlib import pyplot as plt

def leaky_integrate_and_fire(mem, x, w, beta, threshold=1):
    spk = (mem > threshold)                       # if membrane exceeds threshold, spk=1, else 0
    mem = beta * mem + w * x - spk * threshold    # decay, integrate the input, reset by subtraction
    return spk, mem

# set neuronal parameters
delta_t = torch.tensor(1e-3)    # simulation time step (1 ms)
tau = torch.tensor(5e-3)        # membrane time constant (5 ms)
beta = torch.exp(-delta_t / tau)

print(f"The decay rate is: {beta:.3f}")

num_steps = 250

# initialize inputs/outputs + small step current input
x = torch.cat((torch.zeros(10), torch.ones(240) * 0.5), 0)
mem = torch.zeros(1)
spk_out = torch.zeros(1)
mem_rec = []
spk_rec = []

# neuron parameters
w = 0.4
beta = 0.819

# neuron simulation
for step in range(num_steps):
    spk, mem = leaky_integrate_and_fire(mem, x[step], w=w, beta=beta)
    mem_rec.append(mem)
    spk_rec.append(spk)

# convert lists to tensors
mem_rec = torch.stack(mem_rec)
spk_rec = torch.stack(spk_rec)
Visualization of input current and Membrane potential growth over time steps. The last graph shows the action potential firing in regular intervals
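
A rough way to reproduce such a figure with matplotlib, continuing the snippet above (the styling choices are arbitrary):

fig, ax = plt.subplots(3, 1, figsize=(8, 6), sharex=True)

ax[0].plot(x)                                   # step input current
ax[0].set_ylabel("Input current")

ax[1].plot(mem_rec.detach())                    # membrane potential over time
ax[1].axhline(y=1, linestyle="--")              # firing threshold
ax[1].set_ylabel("Membrane potential")

spike_times = torch.nonzero(spk_rec.flatten()).flatten().numpy()
ax[2].eventplot(spike_times)                    # output spike raster
ax[2].set_ylabel("Spikes")
ax[2].set_xlabel("Time step")

plt.show()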

In an SNN architecture, spiking neurons and the synapses linking them are described by configurable scalar weights. The first stage in building an SNN is encoding the analogue input data into spike trains using either a rate-based technique, some form of temporal coding, or population coding.

A biological neuron in the brain (and a simulated spiking neuron) gets synaptic inputs from other neurons in the neural network, as previously explained. Both action potential production and network dynamics are present in biological brain networks.

Spiking Neural Network Learning

Learning Rules in SNN’s

Learning is achieved in practically all ANNs, spiking or non-spiking, by altering scalar-valued synaptic weights. Spiking allows for the replication of a form of bio-plausible learning rule that is not possible in non-spiking networks. Many variations of this learning rule have been uncovered by neuroscientists under the umbrella term spike-timing-dependent plasticity (STDP).

Its main feature is that the weight (synaptic efficacy) connecting a pre- and post-synaptic neuron is altered based on their relative spike times within tens-of-milliseconds time intervals. The weight adjustment relies on information that is both local to the synapse and local in time. The next subsections cover both unsupervised and supervised learning techniques in SNNs.

  • Unsupervised Learning

Data is delivered without a label, and the network receives no feedback on its performance. Detecting and reacting to statistical correlations in the data is a common task. Hebbian learning and its spiking generalizations, such as STDP, are a good example of this. The identification of correlations can be a goal in itself, but it can also be used to cluster or classify data later on.

STDP is defined as a process that strengthens a synaptic weight if the post-synaptic neuron fires soon after the pre-synaptic neuron, and weakens it if the post-synaptic neuron fires before the pre-synaptic one. This conventional form of STDP, however, is merely one of the numerous physiological forms of STDP.
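
A pair-based STDP weight update is often sketched like this (a minimal illustration, not tied to any particular library; the learning rates and time constants are placeholders):

import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate if the post-synaptic spike follows the
    pre-synaptic spike, depress if it precedes it. Times are in ms."""
    dt = t_post - t_pre
    if dt > 0:                                    # pre before post -> strengthen
        w += a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:                                  # post before pre -> weaken
        w -= a_minus * math.exp(dt / tau_minus)
    return min(max(w, w_min), w_max)              # keep the weight bounded

# example: a post-synaptic spike 5 ms after the pre-synaptic spike strengthens the synapse
print(stdp_update(0.5, t_pre=10.0, t_post=15.0))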

  • Supervised Learning

In supervised learning, data (the input) is accompanied by labels (the targets), and the learning device’s purpose is to correlate (classes of) inputs with the target outputs (a mapping or regression between inputs and outputs). An error signal is computed between the target and the actual output and utilized to update the network’s weights.

Supervised learning allows us to use the targets to directly update parameters, whereas reinforcement learning only provides a generic error signal (“reward”) that reflects how well the system is performing. In practice, the line between supervised and reinforcement learning is often blurred.

How to train a SNN?

Spiking neural networks (SNNs) are third-generation artificial neural networks whose computational units mimic biological neural networks more closely than conventional ANNs do. The most significant characteristics of spiking neural networks are energy efficiency and biological plausibility. SNNs incorporate the concept of time into their operating model and perform computation in a time-dependent, spiking manner. Commonly utilized learning methods for spiking neural networks fall into unsupervised and supervised learning. A few notable advantages of spiking neural networks over conventional artificial neural networks are brain similarity, computational power, and energy efficiency through event-driven neural processing.

Spike coding methods, spiking neural network architectures, and SNN simulators have been developed for implementing SNN models. Functions carried out by spiking neural networks include data/pattern classification, estimation, prediction, signal processing, and robotic control applications. In neuroscience, researchers have investigated simulating brain-scale SNNs to study brain function. Spiking neural networks are applied in many fields, and future work will focus on incorporating reinforcement learning into SNNs as well as deploying them on edge devices for the Internet of Things (IoT) and in reservoir computing.

Unfortunately, as of today, there are no effective, interpretable supervised learning methods that can be used to train an SNN. The key concept of SNN operation does not allow the use of the classical learning methods that suit other NNs, and scientists are still searching for an optimal method. This is why training an SNN can be a tough task. Nevertheless, you can apply the following methods to SNN training:

  • Unsupervised Learning
    ◦ Spike-timing-dependent plasticity (STDP)
    ◦ Growing Spiking Neural Networks
    ◦ Artola, Bröcher, Singer (ABS) rule
    ◦ Bienenstock, Cooper, Munro (BCM) rule
    ◦ Relationship between BCM and STDP rules
  • Supervised Learning
    ◦ SpikeProp
    ◦ Remote Supervised Method (ReSuMe)
    ◦ FreqProp
    ◦ Local error-driven associative biologically realistic algorithm (LEABRA)
    ◦ Supervised Hebbian Learning
  • Reinforcement Learning
    ◦ Spiking Actor-Critic method
    ◦ Reinforcement learning through reward-modulated STDP

Training the SNN

You have seen how to model the neuron for our network, studied its response, and analyzed the encoding methods to get our desired input. Now comes the most important process: the training of the network. How does the human brain learn, and what does it even mean?

Learning is the process of changing, in a desirable way, the weights that connect neurons. Biological neurons, though, use synapses to communicate information: the weight corresponds to the connection strength of a synapse, which can be altered over time by processes that cause synaptic plasticity. Synaptic plasticity is the ability of a synapse to change its strength.

Spike-Timing-Dependent Plasticity (STDP)

“Neurons that fire together, wire together.” This phrase describes the well-known Hebbian learning principle and the spike-timing-dependent plasticity (STDP) learning method that we will discuss. STDP is an unsupervised method based on the following principle.

A neuron adapts the weight of a pre-synaptic input (the connection from an upstream neuron) according to how the timing of that input spike relates to the timing of its own output spike.

A mathematical expression will help us understand.

Spike-Timing-Dependent-Plasticity (STDP) models or how to understand memory.
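
A common pair-based form of the STDP window (one of several variants in the literature; $A_+$, $A_-$, $\tau_+$, $\tau_-$ are positive constants and $\Delta t = t_{\text{post}} - t_{\text{pre}}$) is:

$$\Delta w = \begin{cases} A_+ \, e^{-\Delta t/\tau_+}, & \Delta t > 0 \\ -A_- \, e^{\Delta t/\tau_-}, & \Delta t < 0 \end{cases}$$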

Let’s think of an input weight w1. If the spike coming through w1 arrives after the neuron has emitted its own spike, the weight decreases, because the input spike had no effect on the output one.

On the other hand, a spike arriving shortly before the neuron fires strongly affects the timing of the neuron’s spike, so its weight increases to temporally connect the two neurons through this synapse (weight w1).

Through this process, patterns emerge in the connections between neurons, and it has been shown that learning is achieved.

SpikeProp

SpikeProp is one of the first learning methods developed for SNNs. It can be thought of as a supervised STDP-like learning method.

The core principle is the same: it changes the synaptic weight based on spike timing. The difference from STDP is that while STDP measures the difference between pre-synaptic and post-synaptic spike timing, here we focus only on the resulting (post-synaptic) spikes of the neuron and their timing.

Since it is a supervised learning method, we need a target value, which is the desired firing time of the output spike. The loss function and the proportional weight change of the synapse are dependent on the difference in the resulting spike timing and the desired spike timing.

In other words, we try to change the input synapse weight in such a way that the timing of the output spike matches the desired timing for the resulting spike.

If we take a look at the equation of the loss function, we notice its similarity with the STDP weight update rule.

The fact that we consider the difference of the output with the desired spike timing, allows us to: a) create classes to classify data and b) use the loss function with a backpropagation-like rule to update the input weight of the neurons.
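
In the SpikeProp formulation, the loss is typically the squared difference between actual and desired firing times of the output neurons (a sketch, with $t_j^{a}$ the actual and $t_j^{d}$ the desired spike time of output neuron $j$):

$$E = \frac{1}{2} \sum_{j} \left( t_j^{a} - t_j^{d} \right)^2$$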


The term δj is a key term in the weight update and plays the role of the error term in the update rule.

The adaptation of the delta term can give us a backpropagation rule for SpikeProp with multiple layers.

This method illustrates the power of SNNs. With SpikeProp we can code a single neuron to classify multiple classes of a given problem. For example, if we set the desired output timing for the first class at 50 ms and for the second at 75 ms, a single neuron can distinguish multiple classes. The drawback is that the neuron only has access to the first 50 ms of information when deciding whether an input belongs to the first class.
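
As a toy illustration of that idea (purely hypothetical numbers and helper names, not part of SpikeProp itself), the class decision can be made by comparing the observed first-spike time against the desired firing times:

desired_times = {0: 50.0, 1: 75.0}   # desired output spike times in ms, as in the example above

def classify(first_spike_time):
    # hypothetical decision rule: pick the class whose target time is closest
    return min(desired_times, key=lambda c: abs(desired_times[c] - first_spike_time))

print(classify(52.0))  # -> 0 (close to the 50 ms target)
print(classify(80.0))  # -> 1 (close to the 75 ms target)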

Implementation in Python

The steadily increasing interest in Spiking Neural Networks has led to many attempts at developing SNN libraries for Python. To mention only a few, Norse, PySNN, and snnTorch have done an amazing job of simplifying the process of deep learning with spiking neural networks. Note that they also come with complete documentation and tutorials.

Now, let’s see how we could create our own classifier for the well-known MNIST dataset. We will use the snnTorch library by Jason Eshraghian for our purpose, since it makes it easy to understand the network’s architecture. snnTorch can be thought of as a library extending PyTorch.
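
A minimal sketch of such a classifier is shown below; the layer sizes, decay rate, and number of time steps are illustrative choices rather than tuned values, and data loading with torchvision is omitted for brevity:

import torch
import torch.nn as nn
import snntorch as snn
from snntorch import surrogate, utils
import snntorch.functional as SF

beta = 0.9        # membrane decay rate of the LIF neurons
num_steps = 25    # number of simulation time steps per image

# two fully connected layers, each followed by a leaky integrate-and-fire layer
net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    snn.Leaky(beta=beta, spike_grad=surrogate.fast_sigmoid(), init_hidden=True),
    nn.Linear(128, 10),
    snn.Leaky(beta=beta, spike_grad=surrogate.fast_sigmoid(),
              init_hidden=True, output=True),
)

def forward_pass(net, data, num_steps):
    """Present the static image for num_steps steps and record the output spikes."""
    utils.reset(net)                     # reset hidden membrane states between samples
    spk_rec = []
    for _ in range(num_steps):
        spk_out, mem_out = net(data)     # one simulation step
        spk_rec.append(spk_out)
    return torch.stack(spk_rec)          # shape: [num_steps, batch_size, 10]

# rate-coded cross-entropy loss: the correct class should fire the most
loss_fn = SF.ce_rate_loss()
optimizer = torch.optim.Adam(net.parameters(), lr=5e-4)

# inside the training loop (data, targets come from a torchvision MNIST DataLoader):
#   spk_rec = forward_pass(net, data, num_steps)
#   loss = loss_fn(spk_rec, targets)
#   optimizer.zero_grad(); loss.backward(); optimizer.step()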

How to build a Spiking Neural Network?

Sure, working with SNNs is a challenging task. Still, there are some tools you might find interesting and useful:

  • If you want software that helps simulate Spiking Neural Networks and is mainly used by biologists, you might want to check:
    ◦ GENESIS
    ◦ Neuron
    ◦ Brian
    ◦ NEST

If you want software that can be used to solve practical rather than purely theoretical problems, you should check the options below.

Either way, if you simply want to get a feel for the area, you should probably use either TensorFlow or SpykeTorch. Still, please be aware that working with SNNs locally without specialized hardware is very computationally expensive.

Tensorflow

You can definitely create an SNN using TensorFlow, but because the deep learning framework was not initially created to work with SNNs, you will have to write a lot of code yourself. Publicly available notebooks featuring basic SNN simulations can help you get started.

SpykeTorch

  • SpykeTorch is a Python simulator of convolutional spiking neural networks built on the PyTorch ecosystem. Helpfully, it was developed from the start to work with SNNs, so you will be able to use a high-level API to do your task effectively.
  • Despite the incomplete documentation, the simulator has a great tutorial for a smooth start.

Conclusion:

Spiking Neural Networks (SNNs) represent a unique and promising architecture in the field of artificial neural networks, drawing inspiration from the way biological neurons communicate through spikes or action potentials. As we conclude our discussion on SNN architecture, several key points emerge:

  1. Biological Inspiration: SNNs are designed to mimic the neural processes observed in the brain more closely than traditional artificial neural networks. By incorporating the concept of temporal dynamics and spike-based communication, SNNs aim to capture the efficiency and parallelism seen in biological systems.
  2. Event-Driven Processing: Unlike traditional neural networks that operate in a continuous manner, SNNs are event-driven. Neurons in SNNs generate spikes in response to input stimuli, leading to a more power-efficient and asynchronous form of computation. This event-driven nature makes SNNs suitable for tasks with temporal dependencies and dynamic input patterns.
  3. Sparse and Efficient Representation: SNNs often exhibit sparse activation patterns, meaning that only a subset of neurons is active at any given time. This sparsity contributes to computational efficiency, reduces power consumption, and facilitates the processing of large-scale neural networks.
  4. Temporal Information Processing: SNNs excel at handling temporal information due to their ability to encode time into the neural dynamics. This makes them well-suited for tasks involving sequential data, such as speech recognition, video analysis, and time-series prediction.
  5. Learning Mechanisms: Training SNNs involves unique challenges compared to traditional neural networks. Spike-timing-dependent plasticity (STDP) is a common learning rule used in SNNs, where the timing of spikes plays a crucial role in adjusting synaptic weights. This biologically inspired learning mechanism enables SNNs to adapt to changing input patterns and learn from temporal relationships.

In conclusion, Spiking Neural Networks represent a fascinating approach to neural computation, taking inspiration from the intricacies of biological neural systems. While there are challenges to overcome, the potential benefits in terms of efficiency, temporal processing, and neuromorphic applications make SNNs an exciting area for continued exploration and advancement in the field of artificial intelligence.
