Deciphering Neural Networks

Sana Tariq · Published in OPUS · Dec 26, 2018

“Any sufficiently advanced technology is indistinguishable from magic.” — Arthur C. Clarke

Artificial Intelligence (AI) is the eye of the storm that is disrupting and redefining our future. It is a black box (read: magic), both for those who understand it and for those who don’t. Let me explain why…

I am a ’90s child who grew up fascinated with “connect-the-dots” puzzles. Being led from an abstract array of dots to a structured drawing was my version of magic.

Favorite ’90s pastime (and cartoons)!

Fifteen years later, as I was studying neuroscience, I saw the same patterns again. The architecture of the human brain is laid out no differently than a “connect-the-dots” puzzle, albeit rather more complex, with connections that may or may not exist, that cease to exist, or that lead to functions we don’t yet understand.

Deep Learning, courtesy of Google Semantics

Today, I am exposed to direct applications of these “connect-the-dots” neural networks as I delve into deep learning (DL), a machine learning (ML) technique concerned with algorithms inspired by the structure and function of the brain.

The relationship between AI, ML, and DL.

These Artificial Neural Networks (ANNs) connect inputs to outputs through deep, hidden layers. They learn from large amounts of data to perform tasks that normally require human intelligence, such as understanding human language.
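
As a concrete (and heavily simplified) illustration of connecting inputs to outputs through a hidden layer, here is a single forward pass through a tiny network in plain NumPy. The layer sizes, random weights, and sigmoid activation are all illustrative assumptions on my part; a real network would learn its weights from data rather than drawing them at random.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Squashes each value into (0, 1); one common choice of activation.
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes: 4 inputs -> 3 hidden units -> 1 output.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)  # input -> hidden weights
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)  # hidden -> output weights

x = rng.normal(size=4)               # one example with 4 input features
hidden = sigmoid(W1 @ x + b1)        # "connect" the inputs to the hidden layer
output = sigmoid(W2 @ hidden + b2)   # connect the hidden layer to the output
print(output)
```

Training is just the process of nudging those weights so the output dots land where the data says they should.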


The notion of the black box arises from those hidden layers in the middle, which researchers are actively trying to decode in order to understand the exact decision-making process underlying ANNs.

There are some common types of ANN, each of which is suited to specific tasks (a minimal code sketch of each follows the list):

  1. Feedforward Neural Networks (FNN) allow unidirectional flow of information from input to output.
  2. Recurrent Neural Networks (RNN) allow information to flow in loops, feeding earlier outputs back in as inputs; this gives them a greater ability to learn and makes them suited to more complex tasks such as language processing. RNNs process sequences, so they need to remember previous inputs to figure out the next output (these relationships are known as dependencies).
  3. Convolutional Neural Networks (CNN) work with inputs that are signals or images, for example, differentiating between images of cats and dogs. CNNs don’t need to remember previous inputs the way RNNs do, so they are highly parallelizable and can be fitted tightly to the problem.
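
To make the three families concrete, here is a minimal sketch in Keras. Keras is my own choice of framework here, and every layer size and input shape below is an illustrative assumption, not something the descriptions above prescribe.

```python
from tensorflow import keras
from tensorflow.keras import layers

# 1. Feedforward: information moves one way, input -> hidden -> output.
fnn = keras.Sequential([
    keras.Input(shape=(10,)),            # 10 input features
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# 2. Recurrent: a loop carries hidden state from one time step to the
#    next, so earlier inputs (dependencies) influence later outputs.
rnn = keras.Sequential([
    keras.Input(shape=(None, 10)),       # sequences of 10-feature steps
    layers.SimpleRNN(32),
    layers.Dense(1, activation="sigmoid"),
])

# 3. Convolutional: filters slide over an image; no memory of previous
#    inputs is needed, so the work parallelizes well.
cnn = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),      # e.g. 64x64 RGB photos
    layers.Conv2D(16, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),  # e.g. cat vs. dog
])
```

Calling `fnn.summary()` (or `rnn.summary()`, `cnn.summary()`) prints each model’s layers and parameter counts, which makes the structural differences easy to see.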

I am particularly interested in RNNs as they apply to language processing — after all, I have a deep love of languages, not just as a scientist but also as a writer.

What inspires me to learn more about AI and DL is the process: working through the how and why of these system workflows. I want to decode problems down to their most basic level so I can find simple, fast, and better-optimized solutions.

And what could be more exciting than knowing that the answers are right in front of me in the form of these connections? All I have to do is work on making sense of them, something I have enjoyed doing since day one.

My journey began a month ago and as I try to connect the dots, I hope you will join me.

Sana Tariq
Research scientist. Hobbyist writer. Sometimes philosopher. Dreamer. Achiever.