Making a Perceptron in JavaScript — because it’s inefficient

Wayward Verities
4 min read · Mar 20, 2023


When learning AI and machine learning, Python is the default. Once you get used to it, you'll love it. For learning, however, it may not be the best language, because sometimes many things happen in one line of code. In JavaScript, we'll have to painstakingly write our own loops within loops, so we know exactly what is happening.

Created with stable diffusion 1.5. Prompt: robot, network (2nd result)

I started by asking ChatGPT the following (yes including the typo):

What is the smalles and most simple machine learning model and how to make it in python

A Perceptron, also called a linear binary classifier, is the simplest neural network one can make.


Code walkthrough

Inputs and outputs

No NumPy arrays for us, just a plain ol' JS array of arrays. We will be predicting the y value (e.g. 1) from the x values (e.g. [1, 0]).

const x_inputs = [
[0, 0],
[0, 1],
[1, 0],
[1, 1]
]

const y_outputs = [
0,
1,
1,
1
]
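Note that these four rows are exactly the truth table of logical OR: y is 1 whenever at least one input is 1. A quick sanity check (the data is repeated here so the snippet runs on its own):

```javascript
// Same data as above, repeated so this snippet is self-contained
const x_inputs = [
  [0, 0],
  [0, 1],
  [1, 0],
  [1, 1]
]
const y_outputs = [0, 1, 1, 1]

// y matches a bitwise OR of the two inputs on every row
x_inputs.forEach((x, i) => {
  console.log(x, '→', y_outputs[i], y_outputs[i] === (x[0] | x[1]))
})
```

So what we are really asking the Perceptron to do is learn the OR function from examples.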

Settings, parameters

epochs is the number of rounds we are going to fine-tune our machine learning model. The learning_rate is by how much we are going to change the values in the model on each update. log_epochs toggles whether all data for each epoch should be logged. This helped me understand each step of the process.

// Settings
const epochs = 5
const learning_rate = 0.01
const log_epochs = false

Building the Perceptron model

The goal here is to end up with values in the weights and the bias, so we get a formula like prediction = input1 * weight1 + input2 * weight2 + bias, similar to the y = ax + b you learned in school. So we are going to loop through all inputs and outputs as many times as there are epochs. We start off by setting the global variables let weights = [0, 0] and let bias = 0 .

let weights = [0, 0]
let bias = 0

In each epoch, for each in/output combination we are going to do a couple of things:

  1. Take the x inputs and the y output and place them in new variables x and y_target respectively, as we are going to try to predict the y value. The prediction will be very wrong in the beginning, but will be corrected as the number of epochs increases.
  2. Predict the y value with the weights and bias of that moment (prediction will be elaborated further later).
  3. Calculate the error which will be used to update the weights and bias.
  4. Update the weights and the bias.

for (let i = 0; i < epochs; i++) {
  if (log_epochs) console.log(`⏩ Epoch ${i}`)

  // Loop through each in/output pair every epoch
  for (let j = 0; j < y_outputs.length; j++) {
    const x = x_inputs[j]
    const y_target = y_outputs[j]

    // Prediction can only be 0 or 1, so error can be -1, 0 or 1
    const y_predicted = predict(x)
    const error = y_target - y_predicted

    if (log_epochs) console.log(`${x} ⏩ Target y: ${y_target} ⏩ Predicted y: ${y_predicted} / error: ${error}`)

    // Update the weights and the bias
    weights = weights.map((weight, k) => {
      return weight + x[k] * error * learning_rate
    })
    bias += learning_rate * error

    if (log_epochs) console.log(`Weights ${weights} / bias ${bias}`)
  }
  if (log_epochs) console.log(`After epoch: ${weights} - ${bias}`)
}
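A small optional tweak, not part of the original walkthrough: count the mistakes per epoch, so you can watch the model converge and stop early once a full pass makes no errors. The epoch cap of 100 here is an arbitrary safety limit I chose, not a value from the settings above.

```javascript
// Same perceptron loop, plus a per-epoch error counter (my addition)
const x_inputs = [[0, 0], [0, 1], [1, 0], [1, 1]]
const y_outputs = [0, 1, 1, 1]
const learning_rate = 0.01

let weights = [0, 0]
let bias = 0

// Same step-function prediction as in the main code
const predict = x => (x[0] * weights[0] + x[1] * weights[1] + bias >= 0 ? 1 : 0)

for (let epoch = 0; epoch < 100; epoch++) {
  let errors = 0
  for (let j = 0; j < y_outputs.length; j++) {
    const error = y_outputs[j] - predict(x_inputs[j])
    if (error !== 0) errors++
    weights = weights.map((w, k) => w + x_inputs[j][k] * error * learning_rate)
    bias += learning_rate * error
  }
  if (errors === 0) {
    // A full clean pass: every training example is classified correctly
    console.log(`Converged after epoch ${epoch}`)
    break
  }
}
console.log({ weights, bias })
```

With this data and learning rate the loop settles well before the 5 epochs used above, which is why 5 is enough.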

As we are building a binary classifier, we need to predict either a 0 or a 1 and nothing in between. As stated before, we'll take the weights/bias formula as it is in the current epoch to make the prediction. If the result is 0 or greater, the prediction is 1. If it's smaller than 0, return 0.

// Uses the global weights and bias
function predict(x) {
  const y_pred = x[0] * weights[0] + x[1] * weights[1] + bias

  if (y_pred >= 0) {
    return 1
  } else {
    return 0
  }
}
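It's worth seeing why the untrained model is "very wrong in the beginning": with the initial weights [0, 0] and bias 0, the raw score is 0 for every input, and since a score of exactly 0 counts as class 1, everything gets predicted as 1. A minimal sketch:

```javascript
// Untrained state: all-zero weights and bias
let weights = [0, 0]
let bias = 0

function predict(x) {
  const y_pred = x[0] * weights[0] + x[1] * weights[1] + bias
  return y_pred >= 0 ? 1 : 0
}

// Every raw score is 0, and 0 >= 0, so every prediction is 1
console.log([[0, 0], [0, 1], [1, 0], [1, 1]].map(predict)) // → [ 1, 1, 1, 1 ]
```

That's already correct for three of the four rows, which is why training only needs to nudge the values a little.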

Predicting with new values

After running all the code above, we have our model to work with. It looks something like this: predicted y = x_input1 * 0.01 + x_input2 * 0.01 - 0.01 . Of course we'll have to binary-classify this again, so make it a 0 or a 1. Say we have an input [0, 0]: that makes 0*0.01 + 0*0.01 - 0.01 = -0.01, which is below 0, so a 0. An input [0, 1] makes 0*0.01 + 1*0.01 - 0.01 = 0, so a 1 (still following?).
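That arithmetic can be checked directly by plugging the trained values (weights [0.01, 0.01], bias -0.01, as reported above) into the formula for all four inputs:

```javascript
// The trained values from the run above, hard-coded for illustration
const weights = [0.01, 0.01]
const bias = -0.01

for (const x of [[0, 0], [0, 1], [1, 0], [1, 1]]) {
  const raw = x[0] * weights[0] + x[1] * weights[1] + bias
  const cls = raw >= 0 ? 1 : 0
  console.log(x, 'raw score:', raw.toFixed(2), '→ class', cls)
}
```

Only [0, 0] lands below zero, so it is the only input classified as 0.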

There are only 4 possible input combinations so we can test them all:

console.log({weights, bias})
console.log([0,0], predict([0,0]))
console.log([0,1], predict([0,1]))
console.log([1,0], predict([1,0]))
console.log([1,1], predict([1,1]))

The responses are 0 , 1 , 1 and 1 . Exactly like our y values, so we have made a perfect prediction.

Conclusion

This exercise helped me get a feel for all the details of a simple neural network instead of trying to get a high-level grasp of it. I'll continue attempting this exercise until I can execute it from memory.

By no means am I a deep learning expert, and my JS code is not as clean as it could be. This is just an exercise. Feel free to use it in any way you like.
