100 Days of ML — Day 10 — AI for Football, or Why I Dislike the Nine Lines of Neural Networks

Jimmy Murray · Published in Predict · Sep 26, 2018

First off, a tenth of the way there! Woo!

Second, I have to be brief: I have to give a presentation on starting a podcast from scratch, in hopes of getting clients, so I can afford a better computer to do things in NLTK and take classes to get certified.

Third, man, here's some code, as promised. It's a neural network in nine lines, from this article by the wonderfully talented Milo Spencer-Harper: https://medium.com/technology-invention-and-more/how-to-build-a-simple-neural-network-in-9-lines-of-python-code-cc8f23647ca1

from numpy import exp, array, random, dot
training_set_inputs = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
training_set_outputs = array([[0, 1, 1, 0]]).T
random.seed(1)
synaptic_weights = 2 * random.random((3, 1)) - 1
print(synaptic_weights)
for iteration in range(10000):
    output = 1 / (1 + exp(-(dot(training_set_inputs, synaptic_weights))))
    synaptic_weights += dot(training_set_inputs.T, (training_set_outputs - output) * output * (1 - output))
print(1 / (1 + exp(-(dot(array([1, 0, 0]), synaptic_weights)))))

It yields the following output:

[[-0.16595599]
[ 0.44064899]
[-0.99977125]]
[ 0.99999991]

It's quick and elegant, and it teaches the main concepts: inputs, outputs, the forward pass, backprop, epochs, weights, weight updates. I can even extrapolate to gradient descent if I want.
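And since I brought it up, here's that extrapolation, a minimal sketch (my own unpacking, not from Milo's article) of how the one-line weight update is really just gradient descent on squared error with a learning rate of 1, using the variables from the snippet above:

# The sigmoid derivative is output * (1 - output), so the gradient of
# 0.5 * (target - output)**2 with respect to the weights works out to
# -(target - output) * output * (1 - output) * inputs.
error = training_set_outputs - output                  # (target - output)
slope = output * (1 - output)                          # sigmoid derivative
gradient = -dot(training_set_inputs.T, error * slope)  # dE/dW, summed over samples
synaptic_weights -= gradient                           # identical to the += line above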

If, however, I update the code to this:

from numpy import exp, array, random, dot
training_set_inputs = array([[0, 0, 2], [2, 2, 2], [2, 0, 2], [0, 2, 1]])
training_set_outputs = array([[0, 2, 2, 0]]).T
random.seed(1)
synaptic_weights = 2 * random.random((3, 1)) - 1
print(synaptic_weights)
for iteration in range(10000):
    output = 1 / (1 + exp(-(dot(training_set_inputs, synaptic_weights))))
    synaptic_weights += dot(training_set_inputs.T, (training_set_outputs - output) * output * (1 - output))
print(1 / (1 + exp(-(dot(array([2, 0, 0]), synaptic_weights)))))

I get this output:

[[-0.16595599]
[ 0.44064899]
[-0.99977125]]
[ 0.99993704]

I fully expected a 1.9999 there, or a 1.98, something to reflect the fact that I updated everything to a 2. In hindsight, that was never going to happen: the sigmoid squashes every input into the open interval (0, 1), so this network cannot output a 2 no matter what the weights are.
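A quick sanity check (mine, not Milo's) makes the ceiling obvious:

from numpy import exp

def sigmoid(x):
    return 1 / (1 + exp(-x))

print(sigmoid(10))   # 0.9999546021312976 -- approaches 1, never passes it
print(sigmoid(-10))  # 4.5397868702434395e-05 -- approaches 0, never passes it

So when I use real-life data from football games to try to predict points based on yards and first downs: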

from numpy import exp, array, random, dot
training_set_inputs = array([[27, 138, 94], [16, 160, 138], [30, 247, 182], [20, 269, 65]])
training_set_outputs = array([[8, 20, 24, 9]]).T
random.seed(1)
synaptic_weights = 2 * random.random((3, 1)) - 1
print(synaptic_weights)
for iteration in range(10000):
    output = 1 / (1 + exp(-(dot(training_set_inputs, synaptic_weights))))
    synaptic_weights += dot(training_set_inputs.T, (training_set_outputs - output) * output * (1 - output))
print(1 / (1 + exp(-(dot(array([17, 177, 104]), synaptic_weights)))))

And I get this:

[[-0.16595599]
[ 0.44064899]
[-0.99977125]]
[ 3.09880060e-13]

My example is useless, because the output is nonsense: 3.09e-13 is not a football score.

The issue is that there's no hidden layer, so there's nothing building rules across the various numbers. Further, I didn't normalize the data, so the huge raw inputs saturate the sigmoid, and what comes out is just a value squashed into (0, 1), something like a probability (and please correct me if I'm wrong).
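For what it's worth, here's a rough sketch of just the normalization fix (my own experiment, with a guessed-at scale, not anything from Milo's article): scale each input column by its max, squash the point totals into (0, 1), train the same single neuron, then un-scale the prediction back into points.

from numpy import exp, array, random, dot

raw_inputs = array([[27, 138, 94], [16, 160, 138], [30, 247, 182], [20, 269, 65]])
raw_outputs = array([[8, 20, 24, 9]]).T

input_max = raw_inputs.max(axis=0)  # per-column maximums
output_max = 50.0                   # assumed ceiling on points scored

training_set_inputs = raw_inputs / input_max     # every feature now in [0, 1]
training_set_outputs = raw_outputs / output_max  # every target now in (0, 1)

random.seed(1)
synaptic_weights = 2 * random.random((3, 1)) - 1
for iteration in range(10000):
    output = 1 / (1 + exp(-(dot(training_set_inputs, synaptic_weights))))
    synaptic_weights += dot(training_set_inputs.T, (training_set_outputs - output) * output * (1 - output))

test_game = array([17, 177, 104]) / input_max
prediction = 1 / (1 + exp(-(dot(test_game, synaptic_weights))))
print(prediction * output_max)  # un-scaled back into points

It's still one neuron with no hidden layer and no bias, so I wouldn't trust the number, but at least it comes out in points instead of a stray probability.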

This sucks for me, because I really want to make a complicated subject like AI as simple as possible, to teach as many people as possible, and I thought this was the route. On the plus side, I've learned a lot about myself and my teaching methods. Later this week, I'll have an example with a hidden layer that predicts football scores based on first downs and yards.

Milo Spencer-Harper's example is still valuable to the first-year comp sci student, and I really appreciate that. I'll build off of it to be able to sell AI to team owners in pro sports.

Jimmy Murray is a Florida-based comedian who studied Marketing and Film before finding himself homeless. Resourceful, he taught himself coding, which led to a ton of opportunities in many fields, the most recent of which is coding away his podcast editing. His entrepreneurial skills and love of automation have led to a sheer love of all things related to AI.

#100DaysOfML

#ArtificialIntelligence

#MachineLearning

#DeepLearning
