Neural networks and back-propagation explained in a simple way

Any complex system can be abstracted in a simple way, or at least dissected into its basic abstract components. Complexity arises from the accumulation of several simple layers. The goal of this post is to explain how neural networks work with the simplest abstraction. We will try to reduce the machine learning mechanism in neural networks to its basic abstract components. Unlike other posts…
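To make the "accumulation of simple layers" idea concrete, here is a minimal sketch (not code from the post itself) of a network built from components that each expose only two operations: forward, to compute an output, and backward, to propagate gradients. The layer names, the XOR task, and the training loop below are illustrative assumptions; back-propagation itself is nothing more than calling backward() on each layer in reverse order.

```python
import numpy as np

np.random.seed(0)  # reproducible illustration

class Dense:
    """A fully connected layer: y = x @ W + b."""
    def __init__(self, n_in, n_out, lr=0.5):
        self.W = np.random.randn(n_in, n_out) * 0.5
        self.b = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.x = x                      # cache the input for the backward pass
        return x @ self.W + self.b

    def backward(self, grad_out):
        # Chain rule: gradients w.r.t. the parameters and the layer input.
        grad_W = self.x.T @ grad_out
        grad_b = grad_out.sum(axis=0)
        grad_in = grad_out @ self.W.T
        self.W -= self.lr * grad_W      # plain gradient-descent update
        self.b -= self.lr * grad_b
        return grad_in

class Sigmoid:
    """Element-wise sigmoid activation."""
    def forward(self, x):
        self.y = 1.0 / (1.0 + np.exp(-x))
        return self.y

    def backward(self, grad_out):
        return grad_out * self.y * (1.0 - self.y)

# Stacking simple layers is what produces a "complex" network.
layers = [Dense(2, 4), Sigmoid(), Dense(4, 1), Sigmoid()]

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # the XOR problem

for _ in range(10000):
    out = X
    for layer in layers:                # forward pass
        out = layer.forward(out)
    grad = 2.0 * (out - y) / len(X)     # gradient of the mean squared error
    for layer in reversed(layers):      # backward pass: back-propagation
        grad = layer.backward(grad)

print(out.round(2))                     # should approach [[0], [1], [1], [0]]
```

Every layer here is simple on its own; the learning behaviour emerges only from composing them and threading the gradient back through the stack.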

Feel free to visit our website: www.datathings.com

Assaad MOAWAD

Interested in artificial intelligence, machine learning, neural networks, data science, blockchain, technology, astronomy. Co-founder of DataThings, Luxembourg.
