
Perceptron in Python

Narendra L
Sep 5, 2018 · 5 min read
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline

Let's write a basic implementation of a perceptron: https://towardsdatascience.com/what-the-hell-is-perceptron-626217814f53

The perceptron consists of 4 parts:

1. Input values, or one input layer
2. Weights and bias
3. Net sum
4. Activation function

FYI: Neural networks work the same way as the perceptron. So, if you want to know how a neural network works, learn how the perceptron works.

But how does it work? The perceptron works in these simple steps (a minimal sketch combining them follows below):

a. All the inputs x are multiplied with their weights w.

b. Add up all the multiplied values and call the result the weighted sum.

c. Apply an activation function to that weighted sum, for example the unit step activation function.

d. Why do we need weights and bias? Weights express the strength of a particular node, and a bias value allows you to shift the activation function curve up or down.

e. Why do we need an activation function? In short, activation functions map the net input to a required output range such as (0, 1) or (-1, 1).
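
Putting steps a–c together, here is a minimal sketch of a single perceptron forward pass. The function name step_perceptron and the sample numbers are illustrative only; they are not part of the notebook cells below.

import numpy as np

def step_perceptron(x, w, bias):
    # a + b: multiply each input by its weight and add them up (the weighted sum)
    weighted_sum = np.dot(x, w) + bias
    # c: unit step activation -> output 1 if the net input is non-negative, else 0
    return 1 if weighted_sum >= 0 else 0

# illustrative numbers: 0.5*0.4 + 0.3*0.6 - 0.2 = 0.18 >= 0, so the output is 1
print(step_perceptron(np.array([0.5, 0.3]), np.array([0.4, 0.6]), bias=-0.2))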

In [2]:

# Let's write some sample code replicating what's shown in the above diagram, with a different input size

# this is with 10 different columns but only one input (training example)

data_len = 10
df = pd.DataFrame()

# x ---> x1, x2, x3 .......... x10
df['x'] = [0,1,1,1,1,1,1,1,1,1]

# w ---> w1, w2, w3, w4 .......... w10
df['w'] = [1.0/data_len for i in range(data_len)]
s = df['x'] * df['w']
s

Out[2]:

0    0.0
1    0.1
2    0.1
3    0.1
4    0.1
5    0.1
6    0.1
7    0.1
8    0.1
9    0.1
dtype: float64
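
As a side note, the element-wise multiply-and-sum above is just the weighted sum from step b, i.e. a dot product of the inputs and the weights. A quick sanity check (this snippet is my addition, not from the original notebook):

# the weighted sum is simply the dot product of the inputs and the weights;
# both print roughly 0.9 (up to floating-point rounding)
print(s.sum())
print(np.dot(df['x'], df['w']))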

In [3]:

# A bias value allows you to shift the activation function curve up or down
bias = -1

s_sum = s.sum() + bias
print(s_sum)
if s_sum > -0.001:
    print("light")
else:
    print("dark")

-0.09999999999999998
dark
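
To see the bias shifting the decision, try the same cell with a smaller (less negative) bias. With these numbers the weighted sum is roughly 0.9, so any bias above about -0.9 pushes the output back to "light". This variation is my addition, not part of the original notebook:

bias = -0.5
s_sum = s.sum() + bias   # roughly 0.9 - 0.5 = 0.4
if s_sum > -0.001:
    print("light")       # with the smaller bias the perceptron now outputs "light"
else:
    print("dark")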

Let's try something a little more advanced, with multiple columns.

In [23]:

# now we have a sample data set with two columns, i.e. x1 and x2

dataset = [
    #    X1           X2         label
    [2.7810836, 2.550537003, 0],
    [1.465489372, 2.362125076, 0],
    [3.396561688, 4.400293529, 0],
    [1.38807019, 1.850220317, 0],
    [3.06407232, 3.005305973, 0],
    [7.627531214, 2.759262235, 1],
    [5.332441248, 2.088626775, 1],
    [6.922596716, 1.77106367, 1],
    [8.675418651, -0.242068655, 1],
    [7.673756466, 3.508563011, 1]]

# BIAS, W1, W2
weights_dataset = [-0.1, 0.20653640140000007, -0.23418117710000003]

dataset = sorted(dataset, key=lambda x: x[0])

# plot all points in red, then overwrite the positive-class points in green
plt.plot([i[0] for i in dataset], [i[1] for i in dataset], 'ro')

# filter the positive-output dataset
dataset_true_op = [[i[0], i[1]] for i in dataset if i[2] == 1]
plt.plot([i[0] for i in dataset_true_op], [i[1] for i in dataset_true_op], 'go')

# plot the weights with some offset to see how they look
plt.plot([weights_dataset[1] + 1], [weights_dataset[2] + 1], 'bx')

Out[23]:

[<matplotlib.lines.Line2D at 0x1155cc690>]

In [9]:

def predict(row, weights):
    # takes a row [x1, x2, x3, .....]
    # weights are [bias, w1, w2, w3, .....]

    # initialise the activation with the bias; this is equivalent to
    #   activation = sum(weight_i * x_i) + bias
    # Step activation function -> prediction = 1.0 if activation >= 0.0 else -1.0
    # The bias is needed to pull the values up, as in y = ax + b (b is the bias)

    activation = weights[0]
    just_weights = weights[1:]

    for x, w in zip(row, just_weights):
        activation += x * w

    return 1.0 if activation >= 0 else -1.0


for data in dataset:
    output = predict(data[:-1], weights_dataset)
    print("actual = {} predicted = {}".format(data[-1], output))

# So with the given weights the predictions match the actual labels
# (the negative class 0 just prints as -1.0 here, because of the step function above)
actual = 0 predicted = -1.0
actual = 0 predicted = -1.0
actual = 0 predicted = -1.0
actual = 0 predicted = -1.0
actual = 0 predicted = -1.0
actual = 1 predicted = 1.0
actual = 1 predicted = 1.0
actual = 1 predicted = 1.0
actual = 1 predicted = 1.0
actual = 1 predicted = 1.0

In [10]:

# Let's write a perceptron which learns using feedback

DX = np.array([
    [-2, 4],
    [4, 1],
    [1, 6],
    [2, 4],
    [6, 2]
])

DY = np.array([-1, -1, 1, 1, 1])

X_one = [x for x, y in zip(DX, DY) if y == 1]
X_minus_one = [x for x, y in zip(DX, DY) if y == -1]

plt.plot([i[0] for i in X_one], [i[1] for i in X_one], 'ro')
plt.plot([i[0] for i in X_minus_one], [i[1] for i in X_minus_one], 'bo')

Out[10]:

[<matplotlib.lines.Line2D at 0x11507a690>]

Let's write a perceptron which learns using feedback.

# Let's write a perceptron which learns using feedback

def learn(X, Y, epochs=1):
    # one weight per column of the dataset, plus one for the bias
    weights = np.zeros(len(X[0]) + 1)

    # set the learning rate
    eta = 1

    # let's monitor the errors
    errors_list = []

    for learning_round in range(epochs):
        print("---- Learning {} -----".format(learning_round))
        # to track the error at every round/epoch
        total_error = 0

        for x, y in zip(X, Y):
            # np.dot is matrix multiplication
            # eg: np.dot([1,2], [4,5]) => 14
            # perceptron update rule: w <- w + eta * (y - prediction) * x

            predicted_out = predict(x, weights)
            error = eta * (y - predicted_out)
            weights[1:] = weights[1:] + (x * error)
            weights[0] = weights[0] + error

            total_error += abs(error)

            print("x= {}, weights= {}, y= {} error= {} predicted = {}".format(x, weights, y, error, predicted_out))

        # record the error from the last sample of each epoch
        errors_list.append(error)

    plt.plot(errors_list)
    plt.xlabel('Epoch')
    plt.ylabel('Total Loss')

    return weights
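
To make the update rule inside learn() concrete, here is one hand-worked step for the first training point of DX/DY, using the same weight layout [bias, w1, w2]. The variable names w, x, y and pred are mine, for illustration only:

w = np.zeros(3)                  # [bias, w1, w2], all starting at zero
x, y = np.array([-2, 4]), -1     # first training point from DX / DY

pred = predict(x, w)             # activation = 0, so predict() returns 1.0 (the wrong class)
error = 1 * (y - pred)           # eta = 1, so error = -1 - 1.0 = -2.0
w[1:] = w[1:] + x * error        # [0, 0] + [-2, 4] * -2.0 = [4., -8.]
w[0] = w[0] + error              # bias becomes -2.0
print(w)                         # [-2.  4. -8.]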

We could also add the bias as an extra column to the existing array (commented out below), but learn() already keeps the bias in weights[0], so DX is passed as-is.

# add bias to existing array (not needed here, since learn() keeps the bias in weights[0])
# DX = np.array([
#     [-2, 4, -1],
#     [4, 1, -1],
#     [1, 6, -1],
#     [2, 4, -1],
#     [6, 2, -1]
# ])
print(learn(DX, DY, 10))

Now you can see that the error has gone down to zero after 1 epoch!
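
As a quick follow-up, the weights returned by learn() can be fed straight back into predict() to re-check the training points. This is a usage sketch of my own (learned_weights is not a name from the notebook); if the perceptron has converged, every prediction should match its label:

learned_weights = learn(DX, DY, 10)
for x, y in zip(DX, DY):
    print("actual = {} predicted = {}".format(y, predict(x, learned_weights)))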
