INTEGRATING NEUROMORPHIC COMPUTING WITH ARTIFICIAL INTELLIGENCE, PART 1: SNN ALGORITHMS

DeepSouth AI
4 min read · Jan 17, 2024

Neuromorphic computing is integrated into artificial intelligence systems through various methods, including Spiking Neural Networks (SNNs), genetic algorithms, and reservoir computing. While each method offers unique applications and benefits, SNNs are often considered the primary approach for building and training neuromorphic systems due to their effectiveness, broad applicability, and similarity to how the human brain learns.

SPIKING NEURAL NETWORKS

Commonly abbreviated as SNN, Spiking Neural Networks are a class of artificial neural networks whose neurons communicate through discrete spikes rather than continuous activations. Training an SNN means adjusting the states and parameters of its artificial neurons and synapses until the network settles into a new state of homeostasis, which is how it learns new behaviors. SNNs capitalize on the inherent plasticity of neural networks. Plasticity, in this context, refers to the capability of a neural network to rapidly adapt its predictions in response to new information, a critical feature for the adaptability and robustness of AI systems. Some widely used algorithms for training SNNs include:

  • Spike-Timing-Dependent Plasticity (STDP)
  • Backpropagation-based direct training schemes
  • Supervised temporal learning
  • ANN-to-SNN conversion strategies
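
All of these methods operate on networks of spiking neurons, whose internal state evolves over time. As a point of reference before looking at each algorithm, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron in Python; the LIF model is one common choice for SNN neurons, and the time constant, threshold, and input values below are illustrative assumptions rather than values from any particular system.

    import numpy as np

    def lif_simulate(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
        """Simulate a single leaky integrate-and-fire (LIF) neuron.

        input_current: 1-D array of input drive per time step.
        Returns the membrane-potential trace and a binary spike train.
        """
        v = v_reset
        v_trace, spikes = [], []
        for i_t in input_current:
            # Leaky integration: the membrane potential decays toward zero
            # while being pushed up by the input current.
            v += (dt / tau) * (-v + i_t)
            if v >= v_thresh:
                spikes.append(1)   # threshold crossed: emit a spike
                v = v_reset        # reset the membrane potential
            else:
                spikes.append(0)
            v_trace.append(v)
        return np.array(v_trace), np.array(spikes)

    # A constant drive above threshold produces a regular spike train.
    voltage, spike_train = lif_simulate(np.full(100, 1.5))
    print("Spikes emitted:", spike_train.sum())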

Spike-Timing-Dependent Plasticity (STDP)

Spike-Timing-Dependent Plasticity is a bio-plausible, unsupervised learning mechanism that leverages the temporal difference between pre- and post-synaptic neuronal spikes to instantaneously modulate neural synaptic weights. This method is both simple and rapid, effectively capturing the temporal correlations of spikes between adjacent neural layers.

STDP finds particular use in applications where timing is crucial, such as in pattern recognition for forecasting systems, visual recognition in augmented reality, auditory processing, gesture recognition, and systems requiring memory formation and recall, like chat engines.

The general learning equation for STDP, which may vary depending on the neural network architecture and specific learning rule, is typically expressed as:

ΔWij = η ⋅ (Apos ⋅ f(Δt) + Aneg ⋅ g(Δt))

Where:

  • ΔWij​ is the change in synaptic weight from neuron i to neuron j.
  • η is the learning rate, dictating the magnitude of weight updates.
  • Apos​ and Aneg​ are the positive and negative amplitude factors.
  • f(Δt) and g(Δt) are functions defining the potentiation and depression based on the time difference (Δt) between the pre- and post-synaptic spikes.
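
As a concrete illustration of the rule above, the sketch below implements a pairwise STDP update in Python, assuming exponential potentiation and depression windows (f(Δt) and g(Δt) decaying with the spike interval); the learning rate, amplitude factors, and time constant are illustrative assumptions.

    import numpy as np

    def stdp_delta_w(delta_t, eta=0.01, a_pos=1.0, a_neg=-0.5, tau=20.0):
        """Pairwise STDP weight change for one pre/post spike pair.

        delta_t = t_post - t_pre (e.g. in ms). If the presynaptic spike
        precedes the postsynaptic spike (delta_t > 0) the synapse is
        potentiated; if it follows (delta_t < 0) the synapse is depressed.
        """
        if delta_t > 0:
            # Potentiation window f(Δt): strongest for closely paired spikes.
            return eta * a_pos * np.exp(-delta_t / tau)
        # Depression window g(Δt): strongest for closely paired spikes.
        return eta * a_neg * np.exp(delta_t / tau)

    print(stdp_delta_w(+5.0))   # pre before post -> small positive change
    print(stdp_delta_w(-5.0))   # post before pre -> small negative change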

Backpropagation-based direct training schemes

These methods train neural networks by adjusting the weights based on the error obtained from forward propagation. This error, or loss, is then propagated backward through the network layers, allowing for iterative weight adjustments that progressively lower error rates, enhancing the model’s reliability.

Backpropagation is particularly versatile: it applies across a wide range of network architectures and tasks, and it requires little beyond labeled training data and a differentiable network to run.

These training schemes excel in problems involving labeled data and the optimization of differentiable objective functions. Consequently, they are extensively employed in classification and regression tasks, image and signal processing, natural language processing (NLP), generative modeling, pattern recognition, recommendation systems, anomaly detection, and time-series analysis.

The learning equation for backpropagation involves calculating the gradient of a loss function with respect to the neural network’s weights and updating these weights to minimize the loss:

ΔWij = −η ⋅ (∂L / ∂Wij)

Where:

  • ΔWij​ is the change in synaptic weight from neuron i to neuron j.
  • η is the learning rate, dictating the magnitude of weight updates.
  • L is the loss function, measuring the discrepancy between predicted outputs and actual targets.
  • ∂L / ∂Wij is the partial derivative of the loss with respect to the weight Wij, i.e. the gradient component for that weight.

The weight update equation is as follows:

ΔWij = −η ⋅ (∂L / ∂Wij) = η ⋅ δj ⋅ ai

Where:

  • δj​ is the error term for neuron j, derived from the activation function’s derivative and the error at the output layer.
  • ai​ represents the activation of neuron i from the preceding layer.
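
The sketch below turns the update ΔWij = η ⋅ δj ⋅ ai into a small numerical example in Python, using a single sigmoid output layer and a squared-error loss; the layer sizes, learning rate, and target values are illustrative assumptions, and direct SNN training additionally replaces the exact spike derivative with a surrogate gradient.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    a_i = rng.random(3)            # activations of the preceding layer (3 neurons)
    W = rng.random((2, 3))         # synaptic weights W[j, i] (2 output neurons)
    target = np.array([0.0, 1.0])  # desired outputs
    eta = 0.1                      # learning rate

    # Forward pass through the output layer.
    y_j = sigmoid(W @ a_i)

    # Error term δj for a squared-error loss: (target - output) scaled by the
    # derivative of the sigmoid activation.
    delta_j = (target - y_j) * y_j * (1.0 - y_j)

    # Weight update ΔWij = η · δj · ai, computed for every (i, j) pair at once.
    delta_W = eta * np.outer(delta_j, a_i)
    W += delta_W
    print("Weight changes:\n", delta_W)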

Supervised temporal learning

This model utilizes labeled temporal data, which consists of sequences of observations recorded over time intervals. ‘Supervised training’ implies that the model is trained with input-output pairs, where the desired output at specific times is known. The training aims to equip the model with the ability to forecast outcomes or make classifications at future time points by recognizing patterns in the historical data.

The importance of the temporal order in data makes this model particularly suitable for applications like time series prediction, speech recognition, natural language processing (NLP), vision systems, and predictive modeling in healthcare.

A typical supervised temporal learning setup minimizes a loss function that accounts for the sequential nature of the data, with gradient-based optimization algorithms used to iteratively update the model’s parameters. While the specific learning equation may vary with the model type, a general form is:

L(θ) = ∑ (t = 1 to T) loss(yt, ŷt)

Where:

  • ŷt is the predicted output at time t,
  • yt is the actual target output at time t,
  • T is the number of time steps in the sequence,
  • “loss” represents an appropriate loss function, such as mean squared error or cross-entropy, which quantifies the discrepancy between the predicted outputs and the actual targets.
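
To make the summed loss concrete, the sketch below accumulates loss(yt, ŷt) over every step of a toy recurrent model in Python; the recurrent architecture, random parameters, and squared-error loss are illustrative assumptions, and in practice the gradient of this total would be computed with a method such as backpropagation through time.

    import numpy as np

    def sequence_loss(inputs, targets, W_in, W_rec, W_out):
        """Sum a squared-error loss over every time step of a sequence."""
        h = np.zeros(W_rec.shape[0])   # hidden state carried across time steps
        total = 0.0
        for x_t, y_t in zip(inputs, targets):
            h = np.tanh(W_in @ x_t + W_rec @ h)    # simple recurrent update
            y_hat_t = W_out @ h                    # prediction ŷt at time t
            total += np.sum((y_t - y_hat_t) ** 2)  # loss(yt, ŷt)
        return total

    # Example: a random 10-step sequence with 2-D inputs and 1-D targets.
    rng = np.random.default_rng(1)
    inputs = rng.random((10, 2))
    targets = rng.random((10, 1))
    W_in = rng.random((4, 2))
    W_rec = rng.random((4, 4))
    W_out = rng.random((1, 4))
    print("Sequence loss L(θ):", sequence_loss(inputs, targets, W_in, W_rec, W_out))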

ANN-to-SNN conversion strategies

ANN to SNN conversion is a technique that involves initially training an Artificial Neural Network (ANN) and subsequently transferring its learned weights to a Spiking Neural Network (SNN). This indirect training strategy requires the computational elements of the ANN to correspond to those in the SNN. Training the ANN involves certain constraints to ensure compatibility with the SNN structure, such as the elimination of neuron biases and batch normalization layers.

Common strategies employed in ANN-to-SNN conversion encompass a variety of approaches, including spike encoding, temporal encoding, setting spiking thresholds, temporal pooling, modeling axonal delays, adjusting learning rates, injecting noise, implementing quantization techniques, and developing hybrid models. ANN-to-SNN conversion strategies are especially useful in scenarios where SNNs offer distinct advantages over traditional ANNs, such as their ability to process information in a way that more closely mimics biological brain processes.
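
A highly simplified rate-based conversion sketch in Python is shown below: the weights of a trained, bias-free ReLU layer are reused unchanged in a layer of integrate-and-fire neurons, and the firing rates over a simulation window approximate the ANN’s activations. The layer sizes, threshold, number of time steps, and the assumption that activations are normalized below the firing threshold are illustrative choices, not a prescribed recipe.

    import numpy as np

    def ann_layer(x, W):
        """Trained ANN layer: ReLU activation, no bias (conversion-friendly)."""
        return np.maximum(0.0, W @ x)

    def converted_snn_layer(x, W, n_steps=1000, v_thresh=1.0):
        """Integrate-and-fire layer that reuses the ANN weights W unchanged.

        The constant input is integrated at every time step, and the firing
        rate (spikes per step) approximates the ANN's ReLU output, provided
        the activations stay below the threshold (weight/activation
        normalization handles this in practice).
        """
        v = np.zeros(W.shape[0])
        spikes = np.zeros(W.shape[0])
        drive = W @ x                    # same weighted input as the ANN layer
        for _ in range(n_steps):
            v += drive                   # integrate the constant drive
            fired = v >= v_thresh
            spikes += fired
            v[fired] -= v_thresh         # "reset by subtraction" keeps residual charge
        return spikes / n_steps          # output firing rate per time step

    # Example with weights scaled so ANN activations stay in [0, 1].
    rng = np.random.default_rng(2)
    W = rng.uniform(-0.2, 0.2, size=(3, 4))
    x = rng.random(4)
    print("ANN activations :", ann_layer(x, W))
    print("SNN firing rates:", converted_snn_layer(x, W))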

DeepSouth AI

DeepSouth AI is an artificial intelligence system that utilizes neuromorphic computing.