Breaking Down the Free-Energy Principle

“Theories of Everything” for intelligence with Deven Navani.

Neurotech@Berkeley
Apr 30, 2019 · 5 min read

All-encompassing ideas don’t come around very often. That’s because they answer controversial, mind-numbingly difficult questions.

Darwin’s Theory of Evolution answered the question of whether all multicellular organisms share a common ancestor. The Big Bang theory offered a solution to the puzzle of the universe’s beginnings. Our latest quest, building artificial general intelligence, has introduced another such question:

Is there an organizing principle for all intelligent life?

At a time when the AI community is stepping back from traditional algorithms in deep and reinforcement learning, this line of questioning is garnering increasing attention.

AI experts believe building a true intelligence requires a deeper understanding of the gold standard for biological intelligence, the “final frontier” of scientific discovery: the human brain.

Still, if the brain were simple enough to be captured by a single organizing principle, wouldn’t we be too simple to discover it?

Karl Friston, the most-cited neuroscientist alive, doesn’t think so. To be alive, Friston says, is to act in ways that reduce the gulf between your expectations and your sensory inputs. He calls his idea the Free Energy principle.

I want to dig deeper into Friston’s Free Energy principle and offer an explanation that builds a solid intuition, without all the math.

Before we start, it’s important to keep in mind that Friston views his principle as an “as if” concept. Biological things don’t need to minimize free energy in order to exist, but they behave and self-organize as if they do. We should avoid reading the principle in an overly reductive way and instead treat it as a working hypothesis.


Back in the 19th century, Hermann von Helmholtz proposed that the brain could be thought of as a probabilistic inference machine, what we would today call Bayesian. Our brains, Helmholtz believed, compute and perceive probabilistically, constantly making predictions and adjusting beliefs based on what the senses report. The brain seeks to minimize “prediction error.”

For example, if I believe I should be out of bed and my senses tell me I’m still lying down, my brain can resolve this inconsistency, or surprise, by altering my belief. Instead of getting up and being ready to go, why not sleep some more?
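To make that concrete, here is a minimal sketch of Helmholtz-style belief updating (my own illustration, not anything from Friston’s papers): a prior belief of being out of bed meets sensory evidence of a mattress underneath me, and Bayes’ rule flips the belief to match the senses. The states, probabilities, and names are invented for the example.

```python
# Minimal sketch of belief updating as Bayesian inference (illustrative only).
# Two hypothetical states of the world: "out_of_bed" and "in_bed".

prior = {"out_of_bed": 0.8, "in_bed": 0.2}        # what I expect to be true

# How likely the sensory data ("I feel a mattress under me") would be
# under each state. The numbers are made up for illustration.
likelihood = {"out_of_bed": 0.05, "in_bed": 0.9}

# Bayes' rule: posterior is proportional to likelihood times prior.
unnormalized = {s: likelihood[s] * prior[s] for s in prior}
evidence = sum(unnormalized.values())
posterior = {s: p / evidence for s, p in unnormalized.items()}

print(posterior)
# {'out_of_bed': ~0.18, 'in_bed': ~0.82}
# The belief flips to match the senses: prediction error is resolved
# by changing the belief, not the world.
```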

In a way, the idea of the brain as an “inference engine” serves as the foundation of Friston’s Free Energy principle. Free energy, loosely speaking, is the gap between the states you expect to be in and the states your senses tell you that you are in. Where the Free Energy principle goes further is in proposing two modes of action for the brain when a prediction isn’t consistent with what the senses relay back, that is, when free energy is high:

  1. The brain revises its prediction so that its belief matches sensory input. This is Helmholtz’s idea that we described earlier.
  2. The brain signals the body to act, putting the body into a new state whose sensory input matches the pre-existing belief.

Option two is called active inference, and it’s the key difference between the Bayesian brain hypothesis and Friston’s principle.

Revisiting our earlier example, I now have another way of resolving the inconsistency. My brain can command my muscles to engage so that I get myself out of bed. Now my senses tell me I am out of bed, which matches my prior belief.
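As a rough illustration of the two options (my own toy, with a simple squared prediction error standing in for the real variational free energy), you can treat them as two ways of shrinking the same mismatch:

```python
# Toy illustration of the two ways to reduce the belief/sensation mismatch.
# A squared prediction error stands in for free energy; the real principle
# uses a variational bound, not this toy quantity.

def prediction_error(belief, sensation):
    return (belief - sensation) ** 2

belief = 1.0      # "I am out of bed"
sensation = 0.0   # senses report: still in bed

# Option 1: perceptual inference, nudge the belief toward the sensation.
def update_belief(belief, sensation, learning_rate=0.5):
    return belief + learning_rate * (sensation - belief)

# Option 2: active inference, act on the world so that what the senses
# report moves toward the belief (here, "getting up").
def act(sensation, belief, effort=1.0):
    return sensation + effort * (belief - sensation)

print(prediction_error(belief, sensation))                            # 1.0
print(prediction_error(update_belief(belief, sensation), sensation))  # 0.25
print(prediction_error(belief, act(sensation, belief)))               # 0.0
```

Perceptual inference moves the belief toward the world; active inference moves the world toward the belief. Both shrink the gap.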

With the addition of active inference, we can explain not only our changing beliefs and perceptions but also the motivations behind the actions our bodies take.

At bottom, Helmholtz’s and Friston’s ideas both rest on the notion of negative entropy, which isn’t a new idea by any means.

The Second Law of Thermodynamics highlights the irreversibility of natural processes: the universe tends toward entropy, toward dissolution. All biological systems, from the single cell to the human brain, locally resist this drift toward disorder. What Friston’s principle offers is a plausible explanation of how living things manage to do so.

The principle becomes especially compelling when you think of it as a theory of mental illness. On this account, people with schizophrenia, for example, fail to properly update their model of the world in light of incoming sensory input.


As of today, Friston’s brainchild lives largely in his own research, perhaps because of its mathematical complexity. The analysis here, for all its detail, only scratches the surface of Friston’s papers.

I see promise in Friston’s idea as an explanation of how we behave, but I’m especially excited to see how the Free Energy principle might reshape deep and reinforcement learning.

No published research paper yet draws a direct link between the Free Energy principle and a new deep learning or reinforcement learning algorithm. The explicit links that do exist are informal explorations on GitHub or theoretical articles on Medium.

However, there is a trend in the AI community toward algorithms that align closely with the idea of minimizing prediction error. Deepak Pathak, a Ph.D. candidate at the University of California, Berkeley, designed a reinforcement learning algorithm in which the agent doesn’t depend on extrinsic rewards. Instead, Pathak models curiosity-driven learning by using the agent’s own prediction error as its reward signal: the agent seeks out situations it cannot yet predict well, and in learning to predict them, it drives that error down over time. Much like the brain in Friston’s account, the agent organizes its behavior around its own predictions of the world.
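A stripped-down sketch of that idea might look like the following (my own simplification, not Pathak’s actual Intrinsic Curiosity Module, which makes its predictions in a learned feature space rather than on raw states): a forward model predicts the next state, its prediction error doubles as the intrinsic reward, and training the model makes repeated transitions less surprising.

```python
import numpy as np

# Toy sketch of curiosity as an intrinsic reward, loosely inspired by
# Pathak et al.; the linear model and all numbers are illustrative.

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))   # crude linear forward model: next_state ~ W @ state

def curiosity_step(state, next_state, lr=0.05):
    """Return the forward model's prediction error as an intrinsic reward,
    then update the model so the same transition surprises it less."""
    global W
    predicted = W @ state
    error = next_state - predicted
    reward = float(np.mean(error ** 2))   # surprise serves as the reward
    W += lr * np.outer(error, state)      # a gradient step that shrinks the error
    return reward

# Replaying the same transition: the surprise shrinks as the model learns,
# which is the sense in which prediction error gets minimized over time.
state, next_state = rng.normal(size=4), rng.normal(size=4)
print([round(curiosity_step(state, next_state), 3) for _ in range(5)])
```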

I look forward to seeing the Free Energy principle evolve into a major guiding force in artificial intelligence and neuroscience. If such a link is established, the Free Energy principle and Friston’s work could very well become a stepping stone toward Artificial General Intelligence.


Deven Navani studies electrical engineering, computer science, and business administration at the University of California, Berkeley.

