
Neural networks and back-propagation explained in a simple way

15 min read · Feb 1, 2018



Any complex system can be abstracted in a simple way, or at least dissected into its basic abstract components. Complexity arises from the accumulation of several simple layers. The goal of this post is to explain how neural networks work with the simplest possible abstraction. We will try to reduce the machine learning mechanism in neural networks to its basic abstract components. Unlike other posts that explain neural networks, we will use the fewest possible mathematical equations and lines of programming code, and focus only on the abstract high-level concepts.

A supervised neural network, at its highest and simplest level of representation, can be presented as a black box with two methods, learn and predict, as follows:

Neural network as a black box

The learning process takes the inputs and the desired outputs and updates its internal state accordingly, so that the calculated output gets as close as possible to the desired output. The prediction process takes an input and, using the internal state, generates the most likely output according to its past “training experience”. That’s why machine learning is sometimes called model fitting.
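To make this black-box view concrete, here is a minimal sketch in Python. It assumes a toy internal state (a single weight w in a linear model y = w·x) and an illustrative learning rate; the class and method names simply mirror the learn/predict interface described above and are not from the original post.

```python
import random


class BlackBoxModel:
    """A supervised model exposing only learn() and predict()."""

    def __init__(self):
        # Internal state: a single weight for the toy model y = w * x.
        self.w = random.random()

    def predict(self, x):
        # Use the internal state to produce the most likely output.
        return self.w * x

    def learn(self, x, desired_output):
        # Compare the calculated output with the desired output and
        # nudge the internal state so the two get closer ("model fitting").
        error = self.predict(x) - desired_output
        self.w -= 0.1 * error * x  # 0.1 is an assumed learning rate


# Usage: fit the toy model to the hidden rule y = 3x.
model = BlackBoxModel()
for _ in range(1000):
    x = random.uniform(-1, 1)
    model.learn(x, 3 * x)
print(model.predict(2))  # close to 6 after training
```

The caller never sees the weight: it only feeds in input/output pairs through learn and asks for outputs through predict, which is exactly the black-box abstraction in the figure.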

Written by Assaad MOAWAD

Interested in artificial intelligence, machine learning, neural networks, data science, blockchain, technology, astronomy. Co-founder of Datathings, Luxembourg
