TDS Archive

An archive of data science, data analytics, data engineering, machine learning, and artificial intelligence writing from the former Towards Data Science Medium publication.

From Legoland to Neural Networks: Universal Approximation Theorem in Layman’s Terms

Why Deep Learning is so powerful yet so simple in its core

5 min read · Dec 21, 2020


Photo by Kristine Tumanyan on Unsplash

So you’ve heard about AI, and about the amazing things a well-trained Machine Learning model, especially a Deep Learning model, can do. In some tasks it even surpasses human performance: a computer can now recognize objects such as cats, dogs, and cars more accurately, and faster, than an average human, all thanks to the recent development of deep learning and neural networks. But what you may not have heard is that at its core lies a simple theorem, a simple principle that makes all of this possible: the Universal Approximation Theorem. Once you understand it, deep learning and multi-layer neural networks will never be a myth to you. You’ll know why they are so powerful, and more importantly, where their limits are.
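To make the idea concrete before we dig in, here is a minimal sketch (not from the original article; all names and hyperparameters are illustrative choices) of the kind of network the theorem is about: a single hidden layer of tanh units, trained with plain gradient descent to approximate sin(x). The theorem only guarantees that good weights *exist* for a wide enough hidden layer; here we happen to find a decent set by gradient descent.

```python
# One-hidden-layer network (the object the Universal Approximation
# Theorem describes) fitted to sin(x) with full-batch gradient descent.
# Hyperparameters (32 units, lr=0.05, 5000 steps) are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

# Target function sampled on [-pi, pi]
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer with 32 tanh units, linear output
H = 32
W1 = rng.normal(0.0, 1.0, (1, H))
b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1))
b2 = np.zeros(1)

lr, n = 0.05, len(x)
for _ in range(5000):
    # Forward pass
    h = np.tanh(x @ W1 + b1)   # hidden activations, shape (200, H)
    pred = h @ W2 + b2         # network output, shape (200, 1)
    err = pred - y
    # Backward pass for mean squared error
    gW2 = h.T @ err / n
    gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1.0 - h**2)   # tanh' = 1 - tanh^2
    gW1 = x.T @ gh / n
    gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final MSE: {mse:.4f}")
```

With enough hidden units, the same recipe can approximate any continuous function on a bounded interval, which is exactly the claim the rest of this article unpacks.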

So What in the World is Universal Approximation Theorem?

If you don’t know what the Universal Approximation Theorem is, just look at the figure above. It pretty much explains itself. No, I’m just kidding. We won’t go the heavy-math route. Instead, I’ll try to explain it as simply as possible, so that even if you don’t know much about math or function approximation, you can still understand. Put on layman’s



Written by Michael Li

Data Scientist | Blogger | Product Manager | Developer | Pentester | https://www.linkedin.com/in/michael-li-dfw
