Do neural networks really work like neurons?

Artificial Neural Networks and Machine Learning have become hot topics in the popular media. The idea of intelligent machines captivates the imagination of many, especially in how they compare to humans. Specifically, one fundamental question that seems to come up frequently concerns the underlying mechanisms of intelligence — do these artificial neural networks really work like the neurons in our brain?

TL;DR:

No. While the high-level concepts behind ANNs (artificial neural networks) are inspired by neurons and neural networks in the brain, the ML implementation of these concepts has diverged significantly from how the brain works. Moreover, as the field of ML has progressed over the years and new complex ideas and techniques have been developed (RNNs, GANs, etc.), that link has weakened further.

Key Similarities

The high-level architecture and general principles are shared with feed-forward, fully connected networks. At a high level, a brain neuron has three components:

  1. The dendrites (the input mechanism) — a tree-like structure that receives input through synaptic connections. The input can be sensory input from sensory nerve cells, or “computational” input from other neural cells. A single cell can have as many as 100K inputs, each from a different cell.
  2. The soma (the calculation mechanism) — the cell body where inputs from all the dendrites come together, and based on all these signals a decision is made whether to fire an output (a “spike”). This is a bit of a generalisation, as some of the calculation already happens before the soma and is encoded in the dendritic structure of the cell.
  3. The axon (the output mechanism) — once a decision is made to fire an output signal (making the cell active), the axon carries the signal and, through a tree-like structure at its terminal, delivers it to the dendrites of the next layer of neurons via synaptic connections.

Similarly, there is an equivalent structure in ANNs:

  1. Incoming connections — every neuron receives a set of inputs, either from the input layer (the equivalent of the sensory input) or from other neurons in previous layers in the network.
  2. The linear calculation and the activation function — these “sum up” the inputs and make a non-linear decision whether to activate the neuron and fire.
  3. The output connections — these deliver the activation signal of the neuron to the next layer in the network.
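
The three-part structure above can be sketched in a few lines of Python. This is a minimal illustration, not any particular library's API; the function name and the sample numbers are made up for the example:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One artificial neuron, mirroring the three components above."""
    # 1. "Dendrites": gather the weighted incoming signals
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # 2. "Soma": a non-linear activation (sigmoid) decides how strongly to fire
    return 1.0 / (1.0 + math.exp(-z))

# 3. "Axon": the returned value is what gets passed to the next layer
out = artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.2, 0.1], bias=-0.3)
```

The weighted sum plays the role of the dendritic input, and the sigmoid squashes it into an activation between 0 and 1 that downstream neurons receive.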

Similarly, convolutional NNs were inspired by the visual pathway. The cool thing is that the original inspiration was architectural (small kernels/filters that respond to specific shapes or patterns, applied to a small region at a time). Years later, however, when ML researchers developed new techniques to visualise the hidden layers of a CNN, it was discovered that the representation of images in a CNN is very similar to what happens in the visual cortex: a hierarchical representation, starting from simple patterns in the first layers, which are then composed into complex shapes and objects in the deeper layers.
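
To make the kernel idea concrete, here is a minimal sketch of a 2D convolution in plain Python. The function, the toy image, and the kernel are all illustrative; real CNNs learn their kernels rather than hand-coding them:

```python
def conv2d(image, kernel):
    """Slide a small kernel over the image; each output value is the
    dot product of the kernel with one local patch, so the kernel acts
    as a detector for one specific local pattern."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A tiny edge detector: responds where pixel values change left-to-right
edge_kernel = [[1, -1],
               [1, -1]]
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
feature_map = conv2d(image, edge_kernel)  # non-zero only at the edge
```

The non-zero responses line up exactly with the vertical edge in the toy image, which is the sense in which a kernel "responds to" a specific local pattern.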

Plasticity — one of the unique characteristics of the brain, and the key feature that enables learning and memory, is its plasticity — the ability to morph and change. New synaptic connections are made, old ones go away, and existing connections become stronger or weaker, based on experience. Plasticity even plays a role within the single neuron — impacting its electrical behavior and its tendency to trigger a spike in reaction to certain inputs.

This idea of plasticity is the key principle in the training of ANNs: iteratively modifying the weights of the network's parameters based on batches of inputs. Recently, the field of meta-learning has expanded the use of plasticity in ANNs beyond parameters, applying it to modifying hyperparameters or even the whole model being optimised for a given problem.
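
In code, this "plasticity" is nothing more than nudging each weight against its error gradient on every training batch. A minimal sketch of one such update step (the numbers are made up; the gradients would normally come from backpropagation):

```python
def sgd_step(weights, gradients, learning_rate=0.1):
    """Plasticity in an ANN: each training step strengthens or weakens
    every connection by moving its weight against the error gradient."""
    return [w - learning_rate * g for w, g in zip(weights, gradients)]

weights = [0.5, -0.3, 0.8]
gradients = [0.2, -0.1, 0.0]            # e.g. computed on one batch
weights = sgd_step(weights, gradients)  # connections shift slightly
```

Repeated over many batches, these small shifts are the ANN analogue of synapses strengthening and weakening with experience.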

Key Differences

The complexity and robustness of brain neurons is far greater than that of artificial neurons. This is not just about the number of neurons and the number of dendritic connections per neuron — which are orders of magnitude beyond what we have in current ANNs. It's also about the internal complexity of the single neuron: as detailed below, the chemical and electrical mechanisms of neurons are much more nuanced and robust than those of artificial neurons. For example, a neuron is not isoelectric — meaning that different regions in the cell may hold different voltage potentials, with different currents running through them. This allows a single neuron to perform non-linear calculations, identify changes over time (e.g. a moving object), or map different tasks to different dendritic regions in parallel — such that the cell as a whole can complete complex composite tasks. These are all far more advanced structures and capabilities than those of the very simple artificial neuron.

Implementation — the neurons in the brain are implemented using very complex and nuanced mechanisms that allow very complex non-linear computations:

  • Chemical transmission of signals between neurons across the synaptic gap, through the use of neurotransmitters and receptors, amplified by various excitatory and inhibitory elements.
  • Excitatory/inhibitory post-synaptic potentials that build up to an action potential, based on complex temporal and spatial interference of electromagnetic waves.
  • Ion channels and minute voltage differences governing the triggering of spikes in the soma and along the axon.
  • A lot that we don’t yet understand…

Compared to these, the parameters, weights, and the linear and activation functions that are used in ANNs are very basic and crude.

On top of that, the overall network architecture of neurons in the brain is much more complex than that of most ANNs — especially your common, next-door feed-forward network, where each layer is connected only to the previous and next layers. But even compared to multi-layered RNNs or residual networks, the network of neurons in the brain is ridiculously complex, with tens of thousands of dendrites crossing “layers” and regions in numerous directions.

In addition, it’s very unlikely that the brain uses methods like backpropagation — leveraging the chain rule over partial derivatives of an error function.
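
For readers unfamiliar with it, this is roughly what "the chain rule over partial derivatives of an error function" looks like for a single sigmoid neuron. The numbers are arbitrary and this is a hand-worked sketch, not a real training loop:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Forward pass for one neuron: y_hat = sigmoid(w * x + b)
x, w, b, y = 1.5, 0.4, 0.1, 1.0      # input, weight, bias, target
z = w * x + b
y_hat = sigmoid(z)
loss = 0.5 * (y_hat - y) ** 2        # squared-error loss

# Backward pass: chain the local derivatives together
dloss_dyhat = y_hat - y              # dL/dy_hat
dyhat_dz = y_hat * (1 - y_hat)       # derivative of the sigmoid
dz_dw = x                            # dz/dw
dloss_dw = dloss_dyhat * dyhat_dz * dz_dw   # dL/dw via the chain rule
```

There is no known biological mechanism that propagates exact error gradients backwards through synapses this way, which is the point of the difference noted above.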

Power consumption — the brain is an extremely efficient computing machine, consuming on the order of 10 watts. This is about a third of the power consumption of a single CPU.

New developments — GANs, RL, RNNs, … — there is a constant stream of new ideas and innovations in both theoretical and applied ML, and these aren’t rooted in the brain anymore. They might be inspired by it, or by human behavior, but in many ways ML research now has a life of its own — pursuing its own challenges and opportunities.

The brain as continuous inspiration

Despite all the differences listed above, ML research still references the brain as a source of inspiration, since it’s so much more robust and effective than anything we have built. Identifying these gaps, and researching how the brain bridges them, has sparked and inspired some of the most exciting and challenging recent ML research. For example:

  • Power efficiency — as noted, the brain has orders of magnitude more neurons and connections than any ANN we have built, yet it consumes orders of magnitude less power. There are active areas of research trying to achieve this: from biological networks using DNA and other molecules, to “neuromorphic” electronic switches that try to mimic how neurons and synapses work.
  • Learning from a very small set of training examples — most probably through some built-in models that allow an “intuitive” understanding of physical laws, psychology, causality, and other rules that govern decision-making and action in the world. These accelerate learning and guide prediction/action, in contrast to the current generic tabula-rasa NN architectures.
  • Unleashing the power of unsupervised and reinforcement learning — unsupervised and reinforcement learning are the “dark energy” of AI. Just like in physics, where dark energy makes up the vast majority of our universe yet we know very little about it, it’s clear that our brain mostly learns through unsupervised and reinforcement learning. Yet most current applications of ML use supervised learning. Unlocking this puzzle is critical for building machines that can learn like humans.
  • New approaches and architectures — for example, see the Quanta Magazine article on how the neurological systems behind the sense of smell may inspire a new ML approach to a set of problems we don’t handle well using current approaches: New AI Strategy Mimics How Brains Learn to Smell.

Finally, this is by no means a comprehensive answer, and there are clearly many more similarities and differences that I didn’t cover. For example, there is a great short talk by Blake Richards from Toronto, who gives a fresh and unique perspective on the principal similarities between the brain and Deep Learning. But hey, it’s the intersection of two of the most fascinating, complex, and fast-moving research areas of our age — so expect more!

Tom Kuegler

Oct 1

You Need To Understand This Before You Try Anything New

I remember the first time I took my Canon 70D out to “vlog.”

It was super weird.

People were definitely shooting me looks for talking to a camera, but that’s not why it was weird.

It was weird because I didn’t know what I was doing.

I have a strict set of processes for opening up Medium and writing a piece of content, but for videos I, well, don’t.

It was like getting pushed into a pool without knowing how to swim.

I did my best to roll with the punches and figure this new art form out, but for the most part I was pretty lost.

And I loved every second of it.

Let me explain.

Expect Absolutely Nothing When You’re Starting Out

Blogging for me has been super difficult to “master.”

It required/requires an insane amount of research.

Speaking of which, this is normally how I research stuff…

  1. Set out to learn more about something.
  2. Realize 10 minutes in that there’s 5 other things you have to research before you can properly research what you were just researching.
  3. Spend two hours researching one of those things.
  4. Pull your hair out because you feel like you’ve gotten nowhere.

Then you have to come back the next day to dive into the other four things.

Expertise only comes after we pop open about 1,000 Russian dolls (the ones where there’s one inside of one inside of one inside of one).

Sure, you want to maybe be a blogger or a vlogger or an artist of some sort.

But to be a blogger you have to become proficient in a TON of different things that you had no idea were part of the equation at first.

About 10 minutes into my little vlogging trip a few months back, I realized that my first month’s worth of videos are probably going to suck.

I need to reacquaint myself with Premiere Pro, and lighting, and editing, and shooting, and storyboarding. Not to mention I need to LEARN what makes a whole new platform tick — YouTube.

Did I really expect myself to hit the ground running and have 1,000 subscribers by the end of month 1?

Absolutely not. Which brings me to my next point.

It’s Not About Numbers, It’s About Learning

My first day with Premiere Pro I learned how to make slow-mo videos. The day after that I learned about f-stops and shutter speed and ISO values on my camera.

Yes, I felt like the biggest noob of all time, but I have to trust in my abilities as a filmmaker (the abilities I used to wield in high school) to carry me through.

I’m learning something new every single day.

That’s progress.

That’s progress towards the goal that I eventually want to achieve — average 1,000 views per video on YouTube someday.

Celebrate learning, not the numbers.

The numbers WILL NOT be there at first, but they will arrive if you focus on learning and improving.

That’s a damn certainty.

And here’s one other thing.

Our Proficiency Multiplies Exponentially Over Time

Say I spent the first 3 months sucking on YouTube (I did — but now, 6 months later, I have 1,600 subscribers).

That’s fine.

Even if I sucked on YouTube (I did), I’d still probably grow a small following of 100–200 people, right? If I posted 20 videos per month for 3 months (60 videos), I think getting 100–200 subscribers out of that would be pretty realistic (and I did).

You may miss the mark pretty often, but you’re DEFINITELY not going to miss it 60 times.

Something really funny starts to happen around month 3–4, when you actually start hitting the target.

You start never missing.

Then your content starts to resonate more.

Then you build a small tribe of people that will watch EVERYTHING you put out.

Then you start getting the YouTube algorithm working in your favor.

Then making videos gets about 50 times easier (because you’ve developed a process).

Then you basically just sit back, create, and keep growing your audience exponentially over the next few weeks/months/years.

It’s the weirdest thing that it occurs like this.

It’s super frustrating, especially when you’re first starting out and everything you’re doing sucks.

Make Peace With Sucking

My advice is to make peace with sucking.

You need to understand that it’s perfectly normal to suck.

Yes, it is exhausting and tiring to not be good at something and to put your heart and soul into creating, but before you say you’re not making any progress, think about what you’ve learned.

Think about all you’ve learned.

And realize that you ARE making progress. You are.

I hope this helps.


This story is published in The Startup, Medium’s largest entrepreneurship publication followed by + 373,968 people.
