How to Build a Neuron: Exploring AI in JavaScript Pt 2

Eric Elliott · Published in JavaScript Scene · 7 min read · Jul 2, 2016

Dual Neuron — Scott Ingram (CC-BY-NC-2.0)

In this series, we’re discussing a topic that will transform the world we live in over the course of the next 25 years. We’re going to see lots of drones, self-driving cars, and VR and AR devices changing how we get around, how we transport things, and how we see and interact with the world, and it will all be powered by AI and neural nets.

Need to catch up?

In part 1, we talked a little about what neurons are and how they work, showed a trivial example of how to sum synapse inputs and determine whether or not the neuron should fire, and finished by posing a question: What about time?

Get the Source

From here on out I’ll be recording these adventures in a library called neurolib. All the tools discussed in this article are available by running:

git clone git@github.com:ericelliott/neurolib.git && cd neurolib
git checkout pt2

To run the unit tests:

npm test

Not Your Traditional Artificial Neural Net

If you’re at all familiar with traditional neural nets, you’re probably wondering when I’m going to start talking about gradient descent or Hidden Markov Models (HMM). The answer is, maybe someday.

For now, I want to explore how real neurons and synapses work in real brains, rather than jump straight into math abstractions. In other words, we’re going to take the simulation approach rather than the abstraction approach to neural networks. We won’t try to emulate the brain at the molecular level, but we will try to simulate the way that continuous values flow through the neural network over time, and maybe that will allow us to emulate the learning behavior of real neural networks to some degree.

A Brief Recap of Biological Neuroplasticity

To recap neurotransmission and plasticity: The presynaptic neuron fires, which releases neurotransmitters, which bond to receptors, which open ion channels, which may sum with other dendrite potentials to trigger an action potential, which may further excite chemicals in the soma, which may cause proteins to be generated, which may bond to the synaptic wall, implanting more ion channels, and so on. All of this depends on how closely together the presynaptic and postsynaptic neurons fire. (If you’re confused right now, you may want to re-read part one.)

In other words:

Cells that fire together wire together.
Cells that fire apart wire apart.

According to spike-timing-dependent plasticity (which is observed in real neuron experiments), one critical question is:

“Did cell x fire within 20ms of cell y?” Voltage matters too, so there’s also the related question: “What’s the voltage?”

To try to simulate spike-timing-dependent plasticity, we’re going to build a neuron that sums continuous values from synapses and emits pulses over time, synchronized to the clock.

The Virtual Clock

We’ll want to be able to adjust our timing resolution, so we’re going to use a strategy that’s employed by a lot of games and audio/video editors. Instead of measuring time in milliseconds or some other real time unit, we’re going to build a virtual clock and measure time in ticks.

Another great advantage of the virtual clock is that we can schedule events precisely in relation to each other, rather than counting on JavaScript’s wildly unreliable `setTimeout()` mechanism. FYI, `setTimeout()` schedules are regularly late by up to 50ms. That’s enough to completely drop random action potentials.

Our virtual clock will use `setTimeout()` by default under the hood, but since all our events will be synchronized to virtual ticks, we won’t miss important events, even if `setTimeout()` is late. Our clock will also feature automatic jitter correction, so even if `setTimeout()` does go haywire (and it will), our time scale will still be fairly accurately synchronized relative to real time.

That is, some of our events may be a few ms late here or there, but if we have a tick every 20ms and there are 5 ticks, the total duration should still be close to 100ms.

Because `setTimeout()` is so unreliable, I want to be able to swap it out for a different scheduler down the road. Maybe we can synchronize to the web audio API or `requestAnimationFrame` to optimize our timing in the future. FYI, the web audio API is accurate to 44.1 kHz and can fire an `ended` event that we could use for event scheduling. The web audio API also works in a separate thread, so the timing should be accurate even if we’re blocking the main thread. Theoretically, that should make our timing a few hundred times more accurate than is possible with `setTimeout()`.

Here’s a first draft:
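A minimal sketch of what such a clock might look like, assuming a `createClock()` factory with an injectable scheduler (the names `createClock`, `msPerTick`, and `scheduler` are my assumptions for illustration, not necessarily neurolib’s actual API):

```javascript
// A virtual clock that counts time in ticks. The scheduler is
// injectable so we can swap setTimeout() for something better later.
const createClock = ({
  msPerTick = 20,          // real-time length of one virtual tick
  scheduler = setTimeout   // swappable timing mechanism
} = {}) => {
  const listeners = [];
  let tick = 0;
  let running = false;
  let startTime = 0;

  const emit = () => {
    tick += 1;
    listeners.forEach(listener => listener(tick));
    if (!running) return;
    // Jitter correction: schedule relative to the original start time,
    // so individual late timeouts don't accumulate into long-term drift.
    const idealNext = startTime + (tick + 1) * msPerTick;
    scheduler(emit, Math.max(0, idealNext - Date.now()));
  };

  return {
    msPerTick,
    getTick: () => tick,
    subscribe (listener) {
      listeners.push(listener);
      return {
        unsubscribe () {
          const i = listeners.indexOf(listener);
          if (i !== -1) listeners.splice(i, 1);
        }
      };
    },
    start () {
      running = true;
      startTime = Date.now();
      scheduler(emit, msPerTick);
    },
    stop () {
      running = false;
    }
  };
};

// Usage (runs in real time):
// const clock = createClock({ msPerTick: 20 });
// clock.subscribe(tick => console.log('tick', tick));
// clock.start();
```

Injecting the scheduler also makes the clock easy to test: a fake scheduler can drive ticks synchronously, with no real timers involved.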

We’re exposing clock parameters for convenience. They may come in handy later. The subscribe mechanism here is loosely inspired by the ES-Observables API, but we may need more control over the timing mechanics than we’d get if we simply used RxJS 5.x+ for this job. We may even learn something rolling our own from scratch.

Generating Action Potentials

Because we’re interested in continuous values over time, rather than treating neurons like digital circuits or statistical abstractions, we’re going to simulate the behavior of real action potentials. A real action potential has a sharp rising slope called the depolarization phase, followed by a sharp falling slope called the repolarization phase, which continues past the resting potential into hyperpolarization, followed by a slower rising phase called the refractory period.

We’re going to represent the action potential in 3 parts: depolarization, repolarization + hyperpolarization (together, the falling period), and the refractory period (the return to resting potential).

To generate the potential, we’ll need a few adjustable parameters: `depolarizationMs`, `repolarizationMs`, `refractoryMs`, `min`, `max`, and `restingPotential`. In our simulation, we’ll use zero as the resting potential, which represents the biological resting potential, ~-60mV.

Since we’ve generated a clock that is independent of real time, we can use any time resolution we want, expressed as `ticksPerMs`.

The first step in generating a curve is to create smooth interpolations so we get rounded corners like you’d see in a real waveform. Interpolation is the process of filling in missing data within the range of known data points.

To do that, we’ll take a starting value, an ending value, and the current sample frame of the interpolation, traditionally called `mu`. I think a simple cosine interpolation should do the job:
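Here’s one way to write it (a sketch; the exact signature in neurolib may differ). `mu` runs from 0 to 1 across the interpolation:

```javascript
// Cosine interpolation: eases from y1 (mu = 0) to y2 (mu = 1).
// The cosine remaps mu so the curve flattens out at both ends,
// giving the rounded corners we want in the waveform.
const cosineInterpolate = (y1, y2, mu) => {
  const mu2 = (1 - Math.cos(mu * Math.PI)) / 2;
  return y1 * (1 - mu2) + y2 * mu2;
};
```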

Next, we’ll make a function called `createPhase()` that takes the length in ticks (frames), a start value, and an end value, and returns an array of values for each tick in the phase. `cosineInterpolate()` creates a single value. `createPhase()` creates a whole set of values from a starting point to an ending point:
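A sketch of `createPhase()`, with `cosineInterpolate()` repeated so the snippet runs on its own (the endpoint convention, including both the start and end values in the output, is my assumption):

```javascript
const cosineInterpolate = (y1, y2, mu) => {
  const mu2 = (1 - Math.cos(mu * Math.PI)) / 2;
  return y1 * (1 - mu2) + y2 * mu2;
};

// One interpolated value per tick, easing from startValue to endValue.
const createPhase = (lengthInTicks, startValue, endValue) =>
  Array.from({ length: lengthInTicks }, (_, tick) =>
    cosineInterpolate(
      startValue,
      endValue,
      lengthInTicks > 1 ? tick / (lengthInTicks - 1) : 1
    ));

// Eases 0 → 100 with rounded corners at both ends:
console.log(createPhase(5, 0, 100));
```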

With those utility functions in place, we can put together the complete action potential:
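Putting it together might look like this. The default timings and voltages below are placeholder assumptions, not measured values, and the helpers are repeated so the snippet is self-contained:

```javascript
const cosineInterpolate = (y1, y2, mu) => {
  const mu2 = (1 - Math.cos(mu * Math.PI)) / 2;
  return y1 * (1 - mu2) + y2 * mu2;
};

const createPhase = (lengthInTicks, startValue, endValue) =>
  Array.from({ length: lengthInTicks }, (_, tick) =>
    cosineInterpolate(
      startValue,
      endValue,
      lengthInTicks > 1 ? tick / (lengthInTicks - 1) : 1
    ));

// Concatenate the three phases into one array of sample values.
const createActionPotential = ({
  depolarizationMs = 1,
  repolarizationMs = 2,
  refractoryMs = 3,
  min = -20,               // hyperpolarized trough
  max = 100,               // spike peak
  restingPotential = 0,    // zero stands in for the biological ~-60mV
  ticksPerMs = 10
} = {}) => [
  // Depolarization: sharp rise from rest to the peak.
  ...createPhase(depolarizationMs * ticksPerMs, restingPotential, max),
  // Falling period: repolarization past rest into hyperpolarization.
  ...createPhase(repolarizationMs * ticksPerMs, max, min),
  // Refractory period: slow recovery back to the resting potential.
  ...createPhase(refractoryMs * ticksPerMs, min, restingPotential)
];
```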

To save doing the math every time, we’ll memoize the function. We turn the parameters into a key and add the resulting array to the memo object. The next time the function runs, we look up the key in the memo object. If it exists, we return the stored array rather than generate a new action potential.
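A generic memoizer along those lines might look like this (a sketch; using `JSON.stringify` as the key function is my shortcut and assumes plain, serializable parameter objects):

```javascript
// Memoize a single-argument function of a plain parameters object.
// The params become a string key; repeat calls return the cached result
// instead of recomputing it.
const memoize = fn => {
  const memo = {};
  return params => {
    const key = JSON.stringify(params);
    if (!(key in memo)) {
      memo[key] = fn(params);
    }
    return memo[key];
  };
};

// e.g., wrap a hypothetical expensive generator so each unique
// parameter set is only computed once:
// const createActionPotential = memoize(generateActionPotential);
```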

When I feed some typically cited time values into those parameters at 44.1 kHz (we’ll probably adjust the resolution for performance later), I get a curve that looks like this:

That looks roughly like the idealized action potential graphs I’ve been looking at. It should work well as a starting point.

Syncing Action Potentials to the Clock

Now that we have a function that generates an action potential as an array of sample values, we can sync that to the clock so each value will be emitted at the correct time.

Let’s create a data node that will map one observable to another. Using that, we’ll be able to map clock ticks to whatever values we want:

Now we’ll be able to connect a string of nodes, all synced to the same clock. We can use `createNode()` to synchronize array values to a clock using `nodeFromArray()`:
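Sketches of both functions (hypothetical shapes; the real neurolib versions may differ). `createNode()` maps a source observable’s values through a transform; `nodeFromArray()` plays an array back one element per tick, then falls silent:

```javascript
// Map one observable to another through a transform function.
const createNode = ({ source, transform = x => x }) => {
  const listeners = [];
  source.subscribe(value => {
    const mapped = transform(value);
    listeners.forEach(listener => listener(mapped));
  });
  return {
    subscribe (listener) {
      listeners.push(listener);
      return {
        unsubscribe () {
          const i = listeners.indexOf(listener);
          if (i !== -1) listeners.splice(i, 1);
        }
      };
    }
  };
};

// Emit one array element per clock tick until the array is exhausted.
const nodeFromArray = ({ clock, array }) => {
  const listeners = [];
  let i = 0;
  clock.subscribe(() => {
    if (i >= array.length) return; // exhausted: fall silent
    const value = array[i];
    i += 1;
    listeners.forEach(listener => listener(value));
  });
  return {
    subscribe (listener) {
      listeners.push(listener);
    }
  };
};
```

Anything with a compatible `subscribe()` method can act as the clock here, which is what lets us chain nodes together and test them without real timers.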

To propagate our action potentials down the axon, all we have to do is call `createActionPotential()` and pass it into `nodeFromArray()` with our clock signal.

Now let’s create an intentionally slow, low-resolution action potential and see what happens:

In this example, we’ve scaled each millisecond to 500ms using one tick per millisecond. This gives us pretty boxy resolution, but it lets us watch the timing and see that it stays relatively accurate.
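Here’s a condensed, self-contained sketch of that experiment (all names and default values are assumptions). Each virtual millisecond is one tick, and each tick is stretched to 500 real milliseconds, with `setInterval()` standing in for the virtual clock:

```javascript
const cosineInterpolate = (y1, y2, mu) => {
  const mu2 = (1 - Math.cos(mu * Math.PI)) / 2;
  return y1 * (1 - mu2) + y2 * mu2;
};

const createPhase = (lengthInTicks, startValue, endValue) =>
  Array.from({ length: lengthInTicks }, (_, tick) =>
    cosineInterpolate(
      startValue,
      endValue,
      lengthInTicks > 1 ? tick / (lengthInTicks - 1) : 1
    ));

const createActionPotential = ({
  depolarizationMs = 1, repolarizationMs = 2, refractoryMs = 3,
  min = -20, max = 100, restingPotential = 0, ticksPerMs = 10
} = {}) => [
  ...createPhase(depolarizationMs * ticksPerMs, restingPotential, max),
  ...createPhase(repolarizationMs * ticksPerMs, max, min),
  ...createPhase(refractoryMs * ticksPerMs, min, restingPotential)
];

// One tick per virtual millisecond = intentionally boxy resolution.
const samples = createActionPotential({ ticksPerMs: 1 });

// Play it back at 500 real ms per tick so we can watch each sample land.
let tick = 0;
const timer = setInterval(() => {
  if (tick >= samples.length) return clearInterval(timer);
  console.log(`tick ${tick}: ${samples[tick].toFixed(2)}`);
  tick += 1;
}, 500);
```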

When we graph those values, it looks like this:

Low-resolution action potential

Now that we have a clock-synchronized action potential, the next step is to run this signal through a synapse.


Eric Elliott is the author of “Programming JavaScript Applications” (O’Reilly), and “Learn JavaScript with Eric Elliott”. He has contributed to software experiences for Adobe Systems, Zumba Fitness, The Wall Street Journal, ESPN, BBC, and top recording artists including Usher, Frank Ocean, Metallica, and many more.

He spends most of his time in the San Francisco Bay Area with the most beautiful woman in the world.
