Beginner’s Guide To Neural Networks

Ryan Sheffer
Published in The Startup
Jun 28, 2016

Computers are getting smarter. Some worry about Skynet; others get excited by self-driving cars. This piece is for the latter group, and for everyone who isn't sure which outcome is more likely.

Intelligence

As children, if we touch a mug and it burns us, we don't touch the mug again. Before touching the mug we had no concept that it could hurt us. Afterwards, we'll likely go the opposite way and NEVER touch a mug, even if it's not filled with hot liquid. Then we see someone pick up a mug and not get burned. Our framework for the world has changed: now we know there are moments when the mug is cool enough to touch. This adjustment of our understanding of the world, based on recognizing patterns in our environment, is intelligence. And, like us, computers learn through the same type of pattern recognition.

Neural Networks

Traditional computer programs have logic trees. If this happens, then that happens. All of the potential outcomes for the system are preprogrammed.

Logic trees can be very complex, but are unchanging.
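To make this concrete, here is a minimal sketch of a logic tree in Python. The function, its name, and its rules are all illustrative, not from the original article; the point is that every outcome is written out in advance by a programmer.

```python
# A logic tree: every outcome is spelled out in advance.
# If the world changes (say, mugs stop holding hot drinks),
# a programmer must come back and edit these rules by hand.
def is_mug_hot(location):
    if location == "table":
        return True   # rule written by the programmer
    elif location == "cupboard":
        return False  # rule written by the programmer
    else:
        return False  # every unknown case needs yet another rule
```

No matter how much data this program sees, its answers never change.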

A neural network, however, is built without specific logic. It is a system that is trained to look for, and adapt to, patterns within data. It is modeled after how our own brain works: neurons (ideas) are connected to other ideas via synapses, and each synapse carries a value representing the likelihood that the two neurons it connects occur together.
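A rough sketch of that structure, with neurons as labels and synapses as weighted pairs (the concepts and the numbers here are illustrative, not taken from any real trained network):

```python
# Hypothetical sketch: neurons are concepts, synapses are weighted
# pairs of concepts. The weight is the likelihood the two concepts
# occur together (all names and values here are made up).
synapses = {
    ("mug", "hot"):   0.7,  # mugs are often hot
    ("mug", "tea"):   0.3,  # sometimes a mug holds tea
    ("mug", "white"): 0.5,  # about half of mugs are white
}

def connection_strength(a, b):
    """Look up the synapse weight between two neurons, in either order."""
    return synapses.get((a, b), synapses.get((b, a), 0.0))
```

Unlike the rules of a logic tree, these weights are not meant to be typed in by hand; they are meant to be learned from data.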

Anatomy of a neuron

A neuron is a singular concept. It’s a mug. It’s the color white. Or maybe, it’s tea. It can also be the concept of “hot.” All of these are possible neurons. All of them can be connected, like they are in this mug of tea. It is not only a white mug, but it’s also tea, and it’s also quite hot.

Each neuron is a circular node, and the synapses are the lines connecting the neurons. The value assigned to each synapse represents the likelihood that one neuron will be found alongside the other.

But not all mugs have the same properties as this one. There are many other neurons that can be connected to the mug. Coffee, for example, is likely more common than tea. The strength of the synapse between two neurons reflects how likely they are to be found together. The more mugs that are hot, the stronger the synapse. The system gradually learns the likelihood that a mug will be hot to the touch.

However, in a world where mugs are less commonly used to transport hot liquid, the frequency of mugs that are hot to the touch would decrease. Over time, this decrease would lower the strength of the synapses connecting mugs to heat.
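Here is a minimal sketch of that strengthening and weakening, assuming the simplest possible rule: the synapse strength is just the observed frequency of hot mugs among all mugs seen (real networks use more sophisticated update rules, but the direction of change is the same):

```python
# Minimal sketch: a synapse whose strength is the observed frequency
# of two concepts ("mug" and "hot") occurring together.
class Synapse:
    def __init__(self):
        self.together = 0  # times the mug was hot
        self.total = 0     # mugs observed in total

    def observe(self, mug_was_hot):
        self.total += 1
        if mug_was_hot:
            self.together += 1

    @property
    def strength(self):
        return self.together / self.total if self.total else 0.0

mug_hot = Synapse()
for hot in [True, True, True, False]:  # three hot mugs, one cool one
    mug_hot.observe(hot)
print(mug_hot.strength)  # 0.75 — hot mugs have strengthened the synapse
```

Feed it a run of cool mugs and the strength drops, exactly as in the world where mugs are no longer used for hot liquid.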

This small and seemingly unimportant description of a mug represents the core construction of neural networks. A logic tree is predetermined, and would therefore require knowing, and then manually inputting, how likely it was that a mug would be hot to the touch. A neural network simply responds to data confirming or denying the frequency of the neuron "heat" being connected to the neuron "mug."

This concept is the closest model we have for how our own brain works. We touch a mug on a table: it's hot. We conclude all mugs are hot. We touch another mug, this time in the cupboard: it's not hot. So mugs in the cupboard must be the ones that aren't hot. Over time, our brain takes in more data, and we arrive at an accurate probability for whether the mug we're about to touch will be hot. Computers learn the same way: they check each new experience against what they expected, and adapt when the experience is unexpected.
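The table-versus-cupboard story above can be sketched in a few lines, assuming (as an illustration, not as the article's method) that the system keeps a separate hot/total count per location and falls back to 50/50 where it has no data:

```python
# Sketch: refining a probability estimate per context as data arrives.
from collections import defaultdict

counts = defaultdict(lambda: [0, 0])  # location -> [hot seen, total seen]

def observe(location, was_hot):
    counts[location][1] += 1
    if was_hot:
        counts[location][0] += 1

def p_hot(location):
    hot, total = counts[location]
    return hot / total if total else 0.5  # no data yet: assume 50/50

observe("table", True)      # first mug, on the table: hot
observe("cupboard", False)  # second mug, in the cupboard: not hot
```

After these two experiences the estimate for the table is high, the estimate for the cupboard is low, and any unseen location stays at the uncommitted 50/50, just like the child who hasn't been burned yet.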

Ryan Sheffer is the CEO of Zero Slant, an AI company dedicated to automating the discovery of news in real-time. You can reach him at ryan@zeroslant.com
