First steps in spiking neural networks

Let’s write a simple spiking neural network for a classic machine learning task, using Gaussian receptive fields and only NumPy and Pandas

Andrey Urusov
23 min read · Jun 25, 2023
How it works

Introduction

Today, artificial intelligence (AI) is becoming increasingly common in our lives. Every day we hear about new achievements of neural networks, which can already write scripts and create TV shows on their own. However, not everyone knows about spiking neural networks (SNNs), an alternative to classical neural networks. SNNs are becoming more popular due to their advantages, such as energy efficiency, scalability and the ability to interact with a dynamic, asynchronous environment, areas where some classical neural networks struggle. In this article, we will not delve into the reasons for these problems. The goal of this article is to briefly introduce the reader to the topic of SNNs and provide a detailed explanation of how to create a simple spiking neural network in Python using only NumPy and a little bit of Pandas. This article will be useful for those who are just beginning to take an interest in SNNs.

The Basics

First, I want to briefly describe the main features of SNNs before moving on to writing the code.

One approach to developing training methods for spiking neural networks is to use models based on biological principles of brain function. Such models allow the specific features of spiking neural networks to be taken into account and efficient local learning methods to be developed, one of which is synaptic plasticity (an alternative to the backpropagation method used in classical neural networks). Several neuron models are used in spiking neural networks: one of the simplest and most popular is the leaky integrate-and-fire (LIF) model; the more advanced Izhikevich model and the more detailed Hodgkin-Huxley model (the most realistic model of a biological neuron) are rarely used for computational experiments with spiking neural networks, especially large ones, due to their high computational complexity. The main feature of SNNs is the use of spikes as the primary means of transmitting information between neurons; a spike's only attribute is the time of its generation, and this sets SNNs apart from traditional neural networks. The functioning of a neuron in this paradigm is limited to the generation and reception of spikes. The moment a neuron generates a spike depends on the spikes received from other neurons through special connections (synapses). The second feature is the asynchronicity of SNNs: the functioning of different neurons is not explicitly synchronized with each other. Thanks to these two features, all three problems of traditional neural networks mentioned above are addressed.

I will briefly revisit some points later, so don’t worry if something is not clear yet.

Now I will explain in more detail what I am using in my neural network model. I propose an original solution to one of the most well-known machine learning problems — Iris Species — using a spiking neural network. I will use one of the most popular models of spiking neurons — the leaky integrate-and-fire (LIF) neuron model. To transform the data, I will use phase encoding with Gaussian receptive fields. The notebook will provide a general understanding of how spiking neural networks work overall, and one of the many ways to encode input information for use in SNNs. I will not delve too deeply into the theoretical part and will try to convey the essence of the approach as concisely as possible. The list of sources prepared by me at the end of the article will help those who want to delve deeper into the topic.

The presented code and transformed dataset are available separately on Kaggle (code, dataset) and GitHub (code & dataset).

Now, let’s get started!

The simplest SNN model

In general, our simplest network will have the following architecture:

The general scheme of our network

The dataset contains three types of flowers: Setosa, Versicolor and Virginica. Each sample is described by 4 features: “Sepal Length”, “Sepal Width”, “Petal Length” and “Petal Width”. The goal of iris flower classification is to predict the flower type based on these features. We will encode the values of each feature using 10 presynaptic neurons, for a total of 40 neurons. In the output layer, we will have 3 postsynaptic neurons, one for each type of flower.

But how does it work?
Neurons are connected to each other by synapses with certain weights, and tuning these weights is our main task, which we will solve below. The main idea of the model: presynaptic neurons (input information) generate spikes that stimulate postsynaptic neurons -> the membrane potential (Vm) of the postsynaptic neurons rises -> the postsynaptic neurons generate their own spikes in response (output information). The postsynaptic neuron that fires first within a given input period (10 ms) determines the flower type predicted by our spiking network.
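To make the decision rule concrete, here is a minimal NumPy sketch of the "first spike wins" readout (the spike times below are hypothetical and purely for illustration; the full pipeline is built step by step later in the article):

import numpy as np

# hypothetical spike times (ms) of the 3 postsynaptic neurons within one 10 ms period;
# np.nan means the neuron did not fire during the period
spike_times = np.array([np.nan, 6.4, 8.1])

if np.all(np.isnan(spike_times)):
    predicted_class = None                             # no postsynaptic neuron fired
else:
    predicted_class = int(np.nanargmin(spike_times))   # the earliest spike wins

print(predicted_class)   # -> 1, i.e. the second postsynaptic neuron (e.g. Versicolor)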

Let’s look at a small visualization example of one training/testing period, corresponding to one set of input data (in the case of the Iris dataset, one of 150). It shows how the membrane potential of postsynaptic neurons depends on the incoming spikes from presynaptic neurons, weighted by their synapses. If the membrane potential reaches a threshold value, the postsynaptic neuron fires a spike:

Example of a one training/testing period

Now let’s go over it again, but in a bit more detail, and then we can start writing the code.

The change in the postsynaptic neuron’s membrane potential depends on the timing of spikes from presynaptic neurons and the weight of the synapse between them. If there are no spikes coming from the presynaptic neurons, the postsynaptic neuron’s membrane potential tends towards the resting potential (in our case, this is 0). The spike itself is an elementary event in the universe of SNN.

When the membrane potential of a postsynaptic neuron reaches the threshold value, the postsynaptic neuron fires a spike. All of this happens within each training and testing period, which we take to be 10 ms: the training period is the time during which all active presynaptic neurons corresponding to the values of the four input features generate their spikes (there are a total of 150 such periods in the dataset). A postsynaptic neuron can either generate a spike or not. The postsynaptic neuron that fires before any of the other postsynaptic neurons is considered to have fired during that period, while the others have not. After generating a spike, the postsynaptic neuron’s potential falls back to the resting potential and does not change again until the end of the current training period (it cannot fire more than once on a single set of input data).
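As a small illustration of this behaviour, here is a toy LIF simulation of a single postsynaptic neuron over one 10 ms period. The input spike times and weights are hypothetical and the parameter values are chosen only for readability; the article's actual implementation is the LIF_SNN function further below:

import numpy as np

# toy LIF dynamics of one postsynaptic neuron over a single 10 ms period
dt, tau, r = 0.01, 2.5, 5        # time step (ms), membrane time constant, input gain
v_rest, v_th = 0.0, 0.25         # resting potential and firing threshold
steps = int(10 / dt)             # 1000 steps of 0.01 ms

current = np.zeros(steps)        # weighted presynaptic input; two hypothetical spikes
current[300] = 10                # summed synaptic weight arriving at 3.0 ms
current[450] = 8                 # summed synaptic weight arriving at 4.5 ms

v = np.zeros(steps)
fired_at = None

for step in range(steps - 1):
    if fired_at is None and v[step] > v_th:
        fired_at = step * dt     # first threshold crossing: remember the spike time
        v[step] = v_rest         # reset and stay at rest until the end of the period
    if fired_at is not None:
        v[step + 1] = v_rest
    else:
        # leak towards the resting potential plus the weighted input spikes
        v[step + 1] = v[step] + dt / tau * (-v[step] + r * current[step])

print(fired_at)                  # with these toy inputs the neuron fires at about 4.5 ms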

Great! We’ve covered all the basics. Now let’s move on to the first part, which is encoding the original dataset to obtain the spike timing of the presynaptic neurons, or rather their latency. I would like to note that this ready-made dataset is available here.

Gaussian receptive fields

Import all the necessary packages for work:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
import warnings
warnings.filterwarnings("ignore")

Load the original dataset from any convenient source:

URL = '/kaggle/input/iris/Iris.csv'
df = pd.read_csv(URL)
df = df.iloc[:,1:]

Look at the first few lines of the dataset:

df.head()

Make a copy of the dataframe and delete the ‘Species’ column, leaving only the quantitative part of the dataset:

df_ = df.drop(columns=['Species']).copy()

Build a data histogram:

df_.plot.hist(alpha = 0.4, figsize = (12, 4))
plt.legend(title = "Dataset cilumns:" ,bbox_to_anchor = (1.0, 0.6),
loc = 'upper left')
plt.title('Iris dataset', fontsize = 20)
plt.xlabel('Input value', fontsize = 15)
plt.show()

The data is distributed fairly evenly, which is why we will encode each feature with 10 presynaptic neurons of the same width, resulting in a total of 10 x 4 = 40 presynaptic neurons.
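Before writing the full function, here is a minimal sketch of the encoding idea for a single hypothetical value x of one feature: each of the 10 Gaussians returns an excitation between 0 and 1, and the latency will later be taken as 1 minus that excitation (the range and sigma below are made up for illustration):

import numpy as np

# minimal sketch: encode one hypothetical value x with 10 Gaussian receptive fields
v_min, v_max, sigma, x = 2.0, 4.4, 0.1, 3.1
centers = np.linspace(v_min, v_max, 10)                            # evenly spaced Gaussian centres
excitation = np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))      # heights already normalised to 1
latency = np.where(excitation > 0.1, 1 - excitation, np.nan)       # neurons excited below 0.1 stay silent

print(np.round(excitation, 3))
print(np.round(latency, 3))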

Let’s write a function that generates 10 Gaussians for each input feature so that:

  • the means of the Gaussians are evenly distributed across the range of each feature (“Sepal Length”, “Sepal Width”, “Petal Length” and “Petal Width”), including the boundary values;
  • the height of each Gaussian is 1, the maximum excitation value of the presynaptic neuron, from which we will later calculate the spike-generation latency of that presynaptic neuron.

def Gaus_neuron(df, n, step, s):

    # df - feature columns, n - number of Gaussians (presynaptic neurons) per feature,
    # step - sampling step of the value axis, s - list of sigmas (one per feature)
    neurons_list = list()
    x_axis_list = list()
    t = 0

    for col in df.columns:

        vol = df[col].values
        min_ = np.min(vol)
        max_ = np.max(vol)
        x_axis = np.arange(min_, max_, step)
        x_axis[0] = min_
        x_axis[-1] = max_
        x_axis_list.append(np.round(x_axis, 10))
        neurons = np.zeros((n, len(x_axis)))

        for i in range(n):

            # centres are evenly spaced over the feature range, heights normalised to 1
            loc = (max_ - min_) * (i / (n - 1)) + min_
            neurons[i] = norm.pdf(x_axis, loc, s[t])
            neurons[i] = neurons[i] / np.max(neurons[i])

        neurons_list.append(neurons)
        t += 1

    return neurons_list, x_axis_list

Select the parameters for uniform coverage of the range of possible values of each feature by Gaussians and apply the function written above:

sigm = [0.1, 0.1, 0.2, 0.1]
d = Gaus_neuron(df_, 10, 0.001, sigm)

Now visualize Gaussians for our dataset for each input feature:

fig, (ax1, ax2, ax3, ax4) = plt.subplots(4)

fig.set_figheight(8)
fig.set_figwidth(10)

k = 0

for ax in [ax1, ax2, ax3, ax4]:

    ax.set(ylabel = f'{df_.columns[k]} \n\n Excitation of Neuron')

    for i in range(len(d[0][k])):

        ax.plot(d[1][k], d[0][k][i], label = i + 1)

    k += 1

plt.legend(title = "Presynaptic neuron number \n in each input column",
           bbox_to_anchor = (1.05, 3.25), loc = 'upper left')
plt.suptitle(' \n\n Gaussian receptive fields for Iris dataset',
             fontsize = 15)
ax.set_xlabel(' Presynaptic neurons and\n input range of value feature',
              fontsize = 12, labelpad = 15)

plt.show()

Now let’s examine the encoding logic in detail using the first five data points of the “SepalWidthCm” feature as an example: we will draw dotted vertical segments for the first five values of the “SepalWidthCm” feature and locate their intersections with the Gaussian presynaptic neurons. These points will be marked with red dots:

x_input = 5
fig, ax = plt.subplots(1)

fig.set_figheight(5)
fig.set_figwidth(15)

ax.set(ylabel = df_.columns[1])

for i in range(len(d[0][1])):
    ax.plot(d[1][1], d[0][1][i])

for n in range(x_input):

    plt.plot(np.tile(df_['SepalWidthCm'][n], (10, 1)),
             d[0][1][np.tile(d[1][1] == df_['SepalWidthCm'][n], (10, 1))],
             'ro', markersize = 4)

    plt.vlines(x = df_['SepalWidthCm'][n], ymin = -0.1, ymax = 1.1,
               colors = 'purple', ls = '--', lw = 1, label = df_['SepalWidthCm'][n])

    plt.text(df_['SepalWidthCm'][n] * 0.997, 1.12, n + 1, size = 10)

plt.legend(title = "First five input:", bbox_to_anchor = (1.0, 0.7),
           loc = 'upper left')

plt.suptitle('Gaussian receptive fields for Iris dataset. \n \
A detailed description of the idea using the example of the first five value "SepalWidthCm"',
             fontsize = 15)

ax.set_xlabel('Input value X ∈ [x_min, x_max] of column', fontsize = 12, labelpad = 15)
ax.set_ylabel('Excitation of a Neuron ∈ [0,1]', fontsize = 12, labelpad = 15)

plt.show()

Now output the numerical values of the intersection points of the input of each value with each Gaussian (presynaptic neuron):

np.set_printoptions(formatter = {'float_kind': '{:f}'.format})
five_x = np.zeros((5, 10))

for n in range(x_input):
    five_x[n, :] = d[0][1][np.tile(d[1][1] == df_['SepalWidthCm'][n], (10, 1))]

five_x

Let’s find the latency of each presynaptic neuron as 1 - (excitation of the presynaptic neuron), provided that the excitation of the neuron is greater than 0.1; otherwise we consider the presynaptic neuron to be unexcited at this iteration:

five_x = np.where(five_x > 0.1, 1 - five_x, np.nan)
five_x[five_x == 0] = 0.0001
five_x

Now let’s visualize the process of spike occurrence taking into account the latency. Black dots on the graph — the moment the spike is emitted by the presynaptic neuron:

fig, ax = plt.subplots(5, figsize = (10, 8))

for i in range(5):
    ax[i].scatter(x = five_x[i], y = np.arange(1, 10 + 1), s = 10, color = 'black')
    ax[i].hlines(xmin = 0, xmax = 1, y = np.arange(1, 11, 1),
                 colors = 'purple', ls = '--', lw = 0.25)
    ax[i].yaxis.set_ticks(np.arange(0, 11, 1))
    ax[i].set_ylabel(f'x{i+1} = {df_.iloc[i,1]}\n (period {i+1}) \n\n № \npresynaptic neuron',
                     fontsize = 7)
    ax[i].set_xlim(0, 1)
    ax[i].set_ylim(0, 10 * 1.05)
    ax[i].tick_params(labelsize = 7)

ax[i].set_xlabel('Spike Latency')
plt.suptitle(' \n\nInput after applying latency coding \nusing the Gaussian receptive fields method',
             fontsize = 12)
plt.show()

Now we will do this for all values of all features:

def Lat_Spike(df, d, n):

    # df - feature columns, d - (gaussians, x-axes) from Gaus_neuron, n - neurons per feature
    for i in range(len(df.columns)):

        k = len(df.iloc[:, i])
        st1 = np.tile(d[1][i], (k, 1))
        st2 = df.iloc[:, i].values.reshape(-1, 1)
        ind = (st1 == st2)                     # position of each input value on the x-axis
        exc = np.tile(d[0][i], (k, 1)).reshape(k, n, len(d[0][i][0]))[
            np.repeat(ind, n, axis = 0).reshape(k, n, len(ind[0]))].reshape(k, n)
        # latency = 1 - excitation; neurons excited below 0.1 stay silent (NaN)
        lat_neuron = np.transpose(np.where(exc > 0.1, 1 - exc, np.nan))

        if i == 0:
            lat_neuron_total = lat_neuron
        else:
            lat_neuron_total = np.concatenate((lat_neuron_total, lat_neuron), axis = 0)

    lat_neuron_total[lat_neuron_total == 0] = 0.0001

    return lat_neuron_total

fin = Lat_Spike(df_, d, 10)

Visualize the moments of spike generation by presynaptic neurons for the first value of each of the four features:

fig, ax = plt.subplots(4, figsize = (10, 6))

for i in range(4):

    ax[i].scatter(x = fin[i * 10:10 * (1 + i), 0], y = np.arange(1, 10 + 1), s = 10, color = 'r')
    ax[i].hlines(xmin = 0, xmax = 1, y = np.arange(1, 11, 1),
                 colors = 'purple', ls = '--', lw = 0.25)
    ax[i].yaxis.set_ticks(np.arange(0, 11, 1))
    ax[i].set_ylabel(f'col_{i + 1}: {(df_.columns)[i]} \n x1 = {df_.iloc[0, i]} \n (period {1})\n\n № \npresynaptic neuron', fontsize = 6)
    ax[i].set_xlim(0, 1)
    ax[i].set_ylim(0, 10 * 1.05)
    ax[i].tick_params(labelsize = 7)

ax[i].set_xlabel('Spike Latency')
plt.suptitle('\nFirst input in each column \n after applying latency coding using the Gaussian receptive fields method', fontsize = 10)
plt.show()

All right, now let's present the results as a DataFrame: rows correspond to presynaptic neurons, columns to the index of the input data set:

Final_df = pd.DataFrame(fin)
Final_df

Let’s visualize the obtained latencies for the first three learning periods. Taking one period to be 10 ms, we scale the latency of each presynaptic neuron accordingly (the total time interval for the three periods is 30 ms):

fig, ax = plt.subplots(1, figsize = (10, 6))
h = 3

for i in range(h):
    ax.scatter(x = (i + Final_df.iloc[:, i].values) * 10, y = np.arange(1, 41), s = 6, color = 'black')

    plt.vlines(x = (i) * 10, ymin = 0, ymax = 40,
               colors = 'purple', ls = '--', lw = 0.5)
    ax.tick_params(labelsize = 7)

ax.yaxis.set_ticks(np.arange(1, 41, 1))
ax.xaxis.set_ticks(np.arange(0, (h + 1) * 10, 10))
ax.set_xlabel('time (ms)')
ax.set_ylabel('№ presynaptic neuron')
plt.suptitle(' \n\nSpikes of presynaptic neurons for first 30 ms', fontsize = 10)
plt.gca().invert_yaxis()
plt.show()

Great! We’ve encoded our dataset successfully!

Now let’s move on to training our LIF neuron!

LIF neuron model

Part one. Subsample size of 60: the first 20 values for each flower type.

The data in the original dataset are ordered by iris type: out of 150 records, the first 50 belong to Iris-setosa, records 50-100 to Iris-versicolor and records 100-150 to Iris-virginica. I wrote a function that selects from the full set the subsample we need, containing an equal number of instances of each type:

def model_data(ind, ind_type, lat_ne, start, end):

    train_stack = np.vstack((lat_ne[ind_type[ind, 0] + start:ind_type[ind, 0] + end],
                             lat_ne[ind_type[ind, 1] + start:ind_type[ind, 1] + end],
                             lat_ne[ind_type[ind, 2] + start:ind_type[ind, 2] + end]))
    train_stack = np.where(train_stack > 0, train_stack, 0)

    return train_stack

The part of the original DataFrame that is currently being used in encoded form. Part I

At the first stage, we need to somehow increase the weights of active synapses, forming a set of weights for each postsynaptic neuron for further correction using STDP (Spike-timing-dependent plasticity) at the second stage.

How can we do this?
There can be many options; here are the basic principles of weight adjustment that I will use for the first part of the training set:
1. All weights are initially equal to 0.1
2. For each postsynaptic neuron, in the training periods corresponding to its flower type, we increase the weights of all active presynaptic neurons by a constant that is the same for all, adjusted for the latency of the specific presynaptic neuron: less latency -> the presynaptic neuron generated a spike earlier -> it receives a greater weight increase. The general formula is: delta_weight = +Const * (1 - latency)
3. For each postsynaptic neuron, in the training periods not corresponding to its flower type, we impose a penalty on all active presynaptic neurons whose weight is above the default (weight > 0.1). That is, we “penalize” presynaptic neurons that fire for other types of flowers, reducing their weights -> reducing their contribution to the stimulation of the membrane potential of postsynaptic neurons of other flower types. As in point 2, the size of the penalty is equal to the same constant, adjusted for the latency of the active neuron in this training period. The general formula is: delta_weight = -Const * (1 - latency)
4. The “penalty” from point 3 is evenly distributed among the non-active presynaptic neurons of this period whose weight is above the default (weight > 0.1). The general formula is: delta_weight = -Const * (1 - latency) / N, where latency is the latency of the fired presynaptic neuron and N is the number of “silent” presynaptic neurons with a weight greater than the default
5. Adjusted weights cannot fall below the default value: if a weight decreases, 0.1 is the lower limit. We do not have “inhibitory” presynaptic neurons with negative connection weights
6. In each iteration (each period), connections with default weights are not adjusted

Phew! If I confused you a little, the picture below will help you understand:

An example of initial modification of default weights of a mini-network consisting of four presynaptic neurons and one postsynaptic neuron over two training periods

Let’s write the code:

lat_ne = np.transpose(Final_df.values)
ind_type = np.array(([0, 50, 100], [50, 100, 0], [100, 0, 50]))
list_weight = np.zeros((3, 40))

for ind in range(3):

    # the first 20 periods are the "own" flower type of postsynaptic neuron `ind`,
    # the remaining 40 are the other two types
    train_stack = model_data(ind, ind_type, lat_ne, 0, 20)
    tr_ar = np.where(np.transpose(train_stack) > 0, 2 * (1 - np.transpose(train_stack)), 0)
    tr_ar[:, 20:] = tr_ar[:, 20:] * (-1)
    tr_ar = pd.DataFrame(tr_ar)
    tr_ar[20] = tr_ar.iloc[:, :20].sum(axis = 1) + 0.1   # rewards from "own" periods plus the default 0.1
    tst_ar = np.float64(np.transpose(np.array(tr_ar.iloc[:, 20:])))

    for i in range(1, len(tst_ar)):

        # distribute the penalty of this "foreign" period over silent, already-trained synapses
        tst_ar[0][((np.round(tst_ar[0], 4) > 0.1) & (tst_ar[i] == 0))] += - np.float64(
            np.sum(tst_ar[i][np.round(tst_ar[0], 4) > 0.1]) / len(tst_ar[0][((
            np.round(tst_ar[0], 4) > 0.1) & (tst_ar[i] == 0))]))
        # penalise active trained synapses and keep all weights at or above the default
        tst_ar[0][np.round(tst_ar[0], 4) > 0.1] += tst_ar[i][np.round(tst_ar[0], 4) > 0.1]
        tst_ar[0][tst_ar[0] < 0.1] = 0.1

    list_weight[ind, :] = tst_ar[0]

list_weight

Three sets of weights of postsynaptic neurons: Setosa, Versicolor and Virginica after first part of train. Each set of length 40 is the weights of each presynaptic neuron

We have obtained our first set of weights! Let’s write a function for our neuron that takes into account the synaptic weights and spike timings of presynaptic neurons. Then we will see how the membrane potential of each postsynaptic neuron behaves.

But first, let’s introduce the formula for changing the membrane potential of a postsynaptic neuron. Simply put: a spike with the weight of its synapse arrives at the postsynaptic neuron -> the membrane potential of the postsynaptic neuron increases; if no spikes arrive, the potential decays towards its minimum level [Vmin] (in our case, this is 0). This logic is captured by the update rule used in the code below:

V(t + dt) = V(t) + (dt / tau) * (-V(t) + R * I(t)),

where I(t) is the weighted sum of presynaptic spikes arriving at time t and V is reset to 0 after the neuron fires. This formula is a slightly modified and simplified version of the formula from [12], but all the logic of the classic LIF neuron is preserved. In my version the resting potential is absent because I assume it to be equal to 0. Input parameters such as tau, dt and Vmin can be chosen to approximate a biological neuron model; that is not the goal here, so the parameters are chosen to be as clear and functional as possible. Now let’s implement this logic in code:

def LIF_SNN(n, l, data, weight, v_spike):

    # n - number of postsynaptic neurons, l - number of periods (input samples),
    # data - presynaptic spike latencies, weight - synaptic weights, v_spike - firing threshold
    V_min = 0
    V_spike = v_spike
    r = 5                       # input gain
    tau = 2.5                   # membrane time constant
    dt = 0.01                   # simulation step (ms)
    t_max = 10                  # duration of one period (ms)
    time_stamps = t_max / dt
    time_relax = 10
    v = np.zeros((n, l, int(time_stamps)))
    t_post = np.zeros((n, l))
    t_post_ = np.zeros((n, int(l / 3)))
    v[:, :, 0] = V_min

    for n in range(n):
        for u in range(l):

            t = 0
            # convert latencies to spike steps and sum the weights of the
            # presynaptic neurons that fire at each step
            f0 = (np.round(data[u][np.newaxis].T, 3) * 1000).astype(int)
            f1 = np.tile(np.arange(1000), (40, 1))
            f2 = np.where(((f1 == f0) & (f0 > 0)), 1, 0)
            f2 = f2 * weight[n][np.newaxis].T
            spike_list = np.sum(f2.copy(), axis = 0)

            for step in range(int(time_stamps) - 1):
                if v[n, u, step] > V_spike:
                    # the neuron fires: remember the spike time, reset the potential
                    # and keep it silent until the end of the period
                    t_post[n, u] = step
                    v[n, u, step] = 0
                    t = time_relax / dt
                elif t > 0:
                    v[n, u, step] = 0
                    t = t - 1

                # leaky integrate-and-fire update of the membrane potential
                v[n, u, step + 1] = v[n, u, step] + dt / tau * (-v[n, u, step] + r * spike_list[step])

        # spike times of postsynaptic neuron n on the periods of its "own" flower type
        t_post_[n, :] = t_post[n, n * int(l / 3):n * int(l / 3) + int(l / 3)]

    return v, t_post_, t_post

Function for visualizing spike moments of postsynaptic neurons:

def spike_plot(spike_times, one_per, n, cur_type):

    fig, (ax1, ax2, ax3) = plt.subplots(3, figsize = (25, 10))

    if one_per:
        k, t, a = 1, n, 0
        cur = cur_type
    else:
        k, t, a = len(spike_times[0]), 0, 1
        cur = 1

    spike_times[spike_times == 0] = np.nan
    di = {0: 'blue', 1: 'red', 2: 'black'}
    di_t = {0: 'Iris-setosa', 1: 'Iris-versicolor', 2: 'Iris-virginica'}
    p = 0

    for ax in [ax1, ax2, ax3]:
        for i in range(k * t, k + t):
            ax.vlines(x = spike_times[p, i] / 100 + i * a * 10, ymin = 0.0, ymax = 1.1,
                      colors = di[p], ls = '-', lw = 3)
        ax.set_ylabel(f'Neuron {p + 1} \n {di_t[p]}', fontsize = 15)

        if one_per:
            ax.axvspan(0, int(k * 10), color = di[cur - 1], alpha = 0.05, label = di_t[cur - 1])
            ax.margins(0)
        else:
            ax.axvspan(0, int(k * 10 / 3), color = di[0], alpha = 0.05, label = di_t[0])
            ax.axvspan(int(k * 10 / 3), int(k * 10 * 2 / 3), color = di[1], alpha = 0.05, label = di_t[1])
            ax.axvspan(int(k * 10 * 2 / 3), int(k * 10 * 3 / 3), color = di[2], alpha = 0.05, label = di_t[2])
            ax.set_xlim(0, k * 10)
            ax.margins(0)

        p += 1

    if one_per:
        plt.suptitle(f' \n\n Moment of spike of postsynaptic neurons for train period {n}', fontsize = 20)
        plt.legend(title = " Part of a type set:", bbox_to_anchor = (1, 1.9), loc = 'upper left',
                   fontsize = 15, title_fontsize = 15)
    else:
        plt.suptitle(' \n\n Moment of spike of postsynaptic neurons on the used part of the dataset', fontsize = 20)
        plt.legend(title = " Part of a type set:", bbox_to_anchor = (1, 2.1), loc = 'upper left',
                   fontsize = 15, title_fontsize = 15)

    plt.xlabel('Time (ms)', fontsize = 15)
    plt.show()

Function for visualizing membrane potential of each postsynaptic neuron:

def v_plot(v):

    fig, (ax1, ax2, ax3) = plt.subplots(3, figsize = (25, 10))
    k = len(v[0, :, :])
    di = {0: 'blue', 1: 'red', 2: 'black'}
    di_t = {0: 'Iris-setosa', 1: 'Iris-versicolor', 2: 'Iris-virginica'}
    p = 0

    for ax in [ax1, ax2, ax3]:
        for i in range(k):
            ax.plot(np.arange(i * 10, (i + 1) * 10, 0.01), v[p, i, :], di[p], linewidth = 1)
        ax.set_ylabel(f' Neuron {p + 1} \n {di_t[p]} \nV (mV)', fontsize = 15)

        ax.axvspan(0, int(k * 10 / 3), color = di[0], alpha = 0.05, label = di_t[0])
        ax.axvspan(int(k * 10 / 3), int(k * 10 * 2 / 3), color = di[1], alpha = 0.05, label = di_t[1])
        ax.axvspan(int(k * 10 * 2 / 3), int(k * 10 * 3 / 3), color = di[2], alpha = 0.05, label = di_t[2])
        ax.margins(0)

        p += 1

    plt.legend(title = " Part of a type set:", bbox_to_anchor = (1, 2), loc = 'upper left', fontsize = 15, title_fontsize = 15)
    plt.xlabel('Time (ms)', fontsize = 15)
    plt.suptitle(' \n Activity of postsynaptic neurons on the used part of the dataset \n (Membrane potential)', fontsize = 20)

Accuracy function. If multiple postsynaptic neurons generate spikes during one period, the postsynaptic neuron that generated the spike first is considered to have fired:

def accuracy_snn(spike_time, start, end, df, ind_type, ind):

    type_dict = {'Iris-setosa': 1, 'Iris-versicolor': 2, 'Iris-virginica': 3}
    target_type_total = np.array(df.replace({'Species': type_dict}).iloc[:, -1])
    target_type = np.vstack((target_type_total[ind_type[ind, 0] + start:ind_type[ind, 0] + end],
                             target_type_total[ind_type[ind, 1] + start:ind_type[ind, 1] + end],
                             target_type_total[ind_type[ind, 2] + start:ind_type[ind, 2] + end])).flatten()

    spike_time_ = np.where(spike_time > 0, np.array(([1], [2], [3])), np.nan)
    final_test = np.full([len(spike_time[0])], np.nan).astype(int)

    for i in range(len(spike_time[0])):
        try:
            final_test[i] = spike_time_[:, i][spike_time[:, i] == np.min(spike_time[:, i][spike_time[:, i] > 0])][0]
        except:
            final_test[i] = 0

    ac = np.sum(np.where(final_test == target_type, 1, 0)) / len(target_type)

    return final_test, target_type, print('accur.:', np.round(ac * 100, 2), '%')

We adjusted and increased the weights on the first 20 instances of each type for each postsynaptic neuron, resulting in three sets of weights. Let’s examine the membrane potential profile of each postsynaptic neuron with these obtained weights on the same first part of the training set. At this stage, we will not limit the membrane potential to a threshold level, choosing it to be equal to 100:

train_stack = model_data(0, ind_type, lat_ne, 0, 20)
res = LIF_SNN(3, 60, train_stack, list_weight, 100)
v = res[0]

v_plot(v)

Overall, it looks good, with each postsynaptic neuron’s activity area clearly visible. The membrane potential profile of the first neuron looks the best, while neurons 2 and 3 are more responsive to “foreign” spikes that should not significantly change their potentials — this could lead to incorrect classification. Let’s look at the spike times and accuracy at this stage with a threshold voltage value of 0.25:

res = LIF_SNN(3, 60, train_stack, list_weight, 0.25)
spike_time = res[2]
spike_plot(spike_time, False, False, False)
accuracy_snn(spike_time, 0, 20, df, ind_type, 0)[2]

The accuracy is really good! Let’s examine a few periods where one of the postsynaptic neurons has false activations. We’ll try to understand what’s happening and how it affects accuracy.

Let’s look at the last false spike of the first postsynaptic neuron, which occurs in period 46 of the first part of training:

spike_plot(spike_time, True, 46, 3)

Together with the first incorrect spike, the second postsynaptic neuron spikes falsely, while the necessary third neuron remains silent. In this case, the second neuron will be recognized as having fired because it generated a spike earlier than the first — this is an error that reduces accuracy.

Let’s examine the second false spike of the third postsynaptic neuron, which occurs in period 24 of the first part of training:

spike_plot(spike_time, True, 24, 2)

Together with the third spike in this period, the “correct” second postsynaptic neuron fires. In this case, because the second neuron fired earlier than the false third neuron, it will be considered as having fired in the current period and no error will occur.

Overall, things are going well, so let’s smoothly transition to the second part of training.

Part two. Subsample size of 60: the second 20 values for each flower type.

The part of the original DataFrame that is currently being used in encoded form. Part II

At this stage, we are training on the next set of input data using local STDP learning. Before we proceed, let’s see what the result and accuracy would be if we applied the current weights to the second training set:

train_stack = model_data(0, ind_type, lat_ne, 20, 40)
res = LIF_SNN(3, 60, train_stack, list_weight, 100)
v = res[0]

v_plot(v)
res = LIF_SNN(3, 60, train_stack, list_weight, 0.25)
spike_time = res[2]
spike_plot(spike_time, False, False, False)
accuracy_snn(spike_time, 20, 40, df, ind_type, 0)[2]

Overall, the profile is similar to that of the first set. The accuracy has slightly decreased, an expected result.

Now it’s time to adjust the weights using classical STDP for the second part of the training set. A few words about the formula and meaning of the used STDP approach. In short: presynaptic neurons generate spikes -> the membrane potential of the postsynaptic neuron increases and reaches a threshold value -> the postsynaptic neuron generates a spike -> if the period has not ended, presynaptic neurons can continue to generate spikes that are no longer needed -> it is necessary to strengthen the weights of presynaptic neurons that fired before the postsynaptic neuron generated a spike and decrease the weights of those that fired after.

The logic of input for the used version of STDP

To apply STDP, we will record the spike timings of each postsynaptic neuron at a threshold value of 0.25, obtaining values for t_post. We already have the values for t_pre (latency), which we calculated earlier. We will calculate the weight change for each presynaptic neuron of each postsynaptic neuron using the following formulas (a classic exponential STDP window, written here to match the code below, with time measured in simulation steps and a time constant of 1000 steps, i.e. one 10 ms period):

delta_weight = A+ * exp((t_pre - t_post) / 1000), if t_pre <= t_post
delta_weight = -A- * exp((t_post - t_pre) / 1000), if t_pre > t_post

Depending on the presynaptic neuron’s latency, its weight will therefore be adjusted either upward or downward. The constants A+ and A- are selected individually for each task: if the training set is large, a small value can be chosen, and vice versa for a small training set (as we have now). A+ and A- are usually expressed in terms of each other (in the code below, A- = 1.1 * A+):

res = LIF_SNN(3, 60, train_stack, list_weight, 0.25)
t_post = res[1]
A_p = 0.8
A_m = A_p * 1.1

for n in range(3):
    for u in range(20):

        t1 = np.round(train_stack[u + 10 * n] * 1000)
        t2 = t1.copy()

        t2[((t1 <= t_post[n, u]) & (t1 > 0))] = A_p * np.exp((t1[((t1 <= t_post[n, u]) & (t1 > 0))] - t_post[n, u]) / 1000)
        t2[((t1 > t_post[n, u]) & (t1 > 0))] = - A_m * np.exp((t_post[n, u] - t1[((t1 > t_post[n, u]) & (t1 > 0))]) / 1000)

        list_weight[n, :] += t2

list_weight[list_weight < 0] = 0
list_weight

Three sets of weights of postsynaptic neurons: Setosa, Versicolor and Virginica after second part of train

We have adjusted the weights, let’s see how the accuracy of the model has now changed on the second set of training instances:

res = LIF_SNN(3, 60, train_stack, list_weight, 0.25)
spike_time = res[2]
spike_plot(spike_time, False, False, False)
accuracy_snn(spike_time, 20, 40, df, ind_type, 0)[2]

Great! It has increased to its previous level. Now let’s check the classification accuracy on the entire training set (all of the first 40 instances of each class) using these weights:

The part of the original DataFrame that is currently being used in encoded form. All train part

Let’s write the code:

train_stack = model_data(0, ind_type, lat_ne, 0, 40)
res = LIF_SNN(3, 120, train_stack, list_weight, 100)
v = res[0]

v_plot(v)
res = LIF_SNN(3, 120, train_stack, list_weight, 0.25)
spike_time = res[2]
spike_plot(spike_time, False, False, False)
accuracy_snn(spike_time, 0, 40, df, ind_type, 0)[2]

Overall, it’s very good that the quality almost didn’t decrease when the sample size was doubled. Although there are periods in which postsynaptic neurons sometimes fire at the wrong times, we still achieve high accuracy because the postsynaptic neuron that generated a spike earlier within one period is recognized as having fired.

Now let’s run the network on the test set (the last 10 instances of each class):

The part of the original DataFrame that is currently being used in encoded form. Test part

Let’s write the code:

train_stack = model_data(0, ind_type, lat_ne, 40, 50)
res = LIF_SNN(3, 30, train_stack, list_weight, 100)
v = res[0]
res = LIF_SNN(3, 30, train_stack, list_weight, 0.25)
spike_time = res[2]

v_plot(v)
spike_plot(spike_time, False, False, False)
accuracy_snn(spike_time, 40, 50, df, ind_type, 0)[2]

Accuracy is 100%!

There seems to be one false firing of the second postsynaptic neuron in period 27, let’s examine it more closely:

spike_plot(spike_time, True, 27, 3)

In this case, because the third postsynaptic neuron fired before the false second one (8.50 ms vs 8.66 ms), it is considered to have fired in the current period and no error occurs.

Congratulations! We wrote a simple spiking neural network from scratch and learned how to encode data using Gaussian receptive fields, using only NumPy and a little bit of Pandas.

References:

[1] Alexander Sboev, Danila Vlasov, Roman Rybka, Alexey Serenko, “Spiking neural network reinforcement learning method based on Abstract temporal coding and STDP”, Procedia Computer Science Volume 145, 2018, Pages 458–463

[2] Stefan Schliebs, Nikola Kasabov, “Evolving spiking neural networks: A Survey”, Evolving Systems, June 2013. DOI: 10.1007/s12530-013-9074-9

[3] Sander M. Bohte, Joost N. Kok, Han La Poutre, “Error-backpropagation in temporally encoded networks of spiking neurons”, Neurocomputing 48 (2002) 17–37

[4] S. M. Bohte, H. La Poutre and J. N. Kok, “Unsupervised clustering with spiking neurons by sparse temporal coding and multilayer RBF networks”, IEEE Transactions on Neural Networks, vol. 13, no. 2, pp. 426–435, March 2002, doi: 10.1109/72.991428

[5] M. Kiselev, “Spiking neural networks — Information Representation, Learning, Memory” (manuscript)

[6] Eugene M. Izhikevich, “Simple Model of Spiking Neurons”, IEEE Transactions on Neural Networks, vol. 14, NO. 6, NOVEMBER 2003

[7] Dmitry Ivanov, Aleksandr Chezhegov, Mikhail Kiselev, Andrey Grunin, Denis Larionov, “Neuromorphic artificial intelligence systems”, Front. Neurosci., 14 September 2022, Sec. Neuromorphic Engineering, Volume 16–2022

[8] Laurent U Perrinet, Arnaud Delorme, Manuel Samuelides, Simon Jonathan Thorpe, “Networks of Integrate-and-Fire Neuron using Rank Order Coding A: How to Implement Spike Time Dependent Hebbian Plasticity”, Neurocomputing 38–40:817–822, June 2001. DOI: 10.1016/S0925-2312(01)00460-X

[9] Senglan Li, Qiang Yu, “New Efficient Multi-Spike Learning for Fast Processing and Robust Learning”, Proceedings of the AAAI Conference on Artificial Intelligence 34(04):4650–4657, April 2020. DOI: 10.1609/aaai.v34i04.5896

[10] Gütig R, Sompolinsky H., “The tempotron: a neuron that learns spike timing-based decisions”, Nat Neurosci. 2006 Mar;9(3):420–8. doi: 10.1038/nn1643. Epub 2006 Feb 12. PMID: 16474393.

[11] Baktash Babadi, L. F. Abbott, “Stability and Competition in Multi-spike Models of Spike-Timing Dependent Plasticity”, Published: March 3, 2016, https://doi.org/10.1371/journal.pcbi.1004750

[12] Neuromatch Academy: Computational Neuroscience

[13] Zexiang Yi, Jing Lian, Qidong Liu, Hegui Zhu, Dong Liang, Jizhao Liu, “Learning rules in spiking neural networks: A survey", Neurocomputing, February 2023 DOI: 10.1016/j.neucom.2023.02.026

I hope you enjoyed reading my article; please subscribe or clap 👏 if you did. If you are interested in similar content, you can follow me here or on any other social network, such as LinkedIn, or write to me on Telegram (@AndreyUrusov).
