SELF-GOVERNANCE BY PREDICTION AND ACTIVE INFERENCING?

THE AI AND BAYESIAN MECHANICS OF KARL FRISTON

John Clippinger
Sep 13, 2021

“The Idols of the Tribe have their foundation in human nature itself, and in the tribe or race of men. For it is a false assertion that the sense of man is the measure of things.”

“Nature cannot be commanded except by being obeyed”

Francis Bacon

At a moment when truth is reduced to a soundbite of dueling opinions, regardless of merit or evidence, and when significant public figures and segments of the public “refuse to follow the science,” we may, ironically, be on the cusp of a new kind of physics and AI potentially more significant than the Newtonian physics that shaped the Industrial Revolution.

A new physics is in the making. Rather than being a physics of inanimate things alone, this new physics can also account for all “living things”: biological, social, cultural, economic and digital. These fundamental discoveries raise the prospect that we might be able to scientifically organize and govern ourselves in concert with our fellow creatures and the planet. Rather than our political institutions and choices being captive to endless cycles of conflict among the “Idols of the Tribe,” there might be an open, evolving and reflexive authority of fact and explanation that would enable us to rise above, or at least better govern, our Natures. In the somewhat archaic language of Francis Bacon: are we willing to relinquish the consensus principles of the demos, the “Idols of the Tribe,” as a “measure of things” and submit to being “commanded” by a science of our Natures? That choice is manifesting itself in the growing clash between Libertarian and Communitarian views of technology, especially AI, and in a “humanistic” skepticism toward, or even outright rejection of, technology.

THE PHYSICS OF LIVING THINGS — THE BAYESIAN MECHANICS OF KARL FRISTON

The significance to humanity and the planet of Isaac Newton’s physics of 1687 and James Maxwell’s electromagnetism of 1861 cannot be overstated, both for their beneficial and their deleterious effects. They have forever changed us and our planet and have set us on an uncharted and unprecedented trajectory. We are perhaps on the cusp of a similar, and perhaps even more transformational, scientific and technological revolution, one which MAY be a corrective to the first and a possible answer to the foundational issues of governance and authority. One must begin with numerous qualifiers: this is a new and still highly theoretical science, long in incubation, and initially synthesized by one man, Karl Friston. Yet it is being significantly embraced, critiqued and extended by an international network of colleagues across diverse disciplines: physics, neuroscience, mathematics, cognitive science, computer science, computational psychiatry, complexity science, evolutionary biology, economics, sociology and many more.

What is so compelling, and at the same time credulity-testing, is what Friston calls the “Free Energy Principle,” which is “scale free” and “domain independent.” In layman’s terms, this means that there is a universal set of principles, derived from the mechanics of physics, showing that in order for “living things” to exist they must act to minimize the discrepancy between their predictions of sensory data and what they actually sense. In short, life depends upon good predictions about what is true and what is false! In order to survive, all living things are, or embody, “generative models” of their surroundings; they need to accurately predict how those surroundings affect them and be able to respond accordingly. By itself, this perspective is not a departure from classic evolutionary theory. However, Friston wants to ground his theory in physics and make it mathematically coherent, and hence builds upon concepts taken directly from physics. For instance, the term “free energy” is a bit confusing to the non-physicist: intuitively, “free energy” sounds like a good thing, so why would you want to minimize it? But in the nomenclature of classical physics, “free energy” refers to the dissipation of energy toward an equilibrium that is, in effect, death for living things. Hence, the ability to define a boundary of sensing, action, and prediction, in order to secure that boundary and sustain the “internal states” of a thing, as in a cell or bacterium, is the basis for all living things.
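
To make the core idea concrete, here is a minimal sketch, in Python, of a “living thing” that persists by shrinking the gap between what it predicts and what it senses. It is an illustrative toy under simple Gaussian assumptions, not Friston’s own formulation; every name and parameter below is invented for the example.

```python
import random

# A toy agent: it holds a belief (mu) about a hidden cause of its sensations
# and revises that belief to reduce the discrepancy between prediction and
# sensation. All names and constants are illustrative.

hidden_cause = 4.0     # the true state of the environment, unknown to the agent
mu = 0.0               # the agent's current best guess (its "generative model")
precision = 0.5        # weight the agent places on sensory evidence
learning_rate = 0.1

for step in range(200):
    sensation = hidden_cause + random.gauss(0.0, 1.0)   # noisy sensory sample
    prediction_error = sensation - mu                   # the discrepancy, or "surprise"
    mu += learning_rate * precision * prediction_error  # act to minimize it

print(f"belief after learning: {mu:.2f} (hidden cause: {hidden_cause})")
```

Run repeatedly, the agent’s belief converges on the hidden cause; in Free Energy Principle terms, its prediction errors, and with them its “surprise,” diminish.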

What is so mind-boggling, and at the same time so promising, is that these principles function at all scales — from the tiniest biota to the planet as a whole — and apply across “domains” or disciplines: neuroscience, biology, economics, psychiatry, and so on. Hence, in its espoused explanatory powers, it functions much as Theology did in the early Middle Ages, as the “Queen of the Sciences” encompassing multiple domains. Notably, and controversially, especially for the Neo-Darwinians, Free Energy Principle methods induce and model different forms of agency (individual, collective and “nested”), not just biological, but cultural, economic, social, and even digital. Hence, the Free Energy Principle offers a rigorous form of teleological analysis, which its predecessor, Cybernetics, pioneered but failed to advance, for virtually any complex form of organization. But that is not all. Friston was early into the nascent forms of artificial intelligence, now popularly known through “DeepMind” and “deep learning,” based upon hierarchical neural networks and reinforcement learning. He was an early colleague of Geoffrey Hinton, one of the inventors of the dominant AI methods of today. But his recent work in “active inferencing” and “dynamic causal modeling” goes well beyond prevalent AI techniques to discover, or induce, transparent, that is, human-intelligible, dynamic causal models of highly complex phenomena, from causal models of neuronal behaviors to the behaviors of energy markets.

Such a breadth of scope would again severely test credulity were it not for three facts. First, Karl Friston is, by any measure, and particularly by Google Scholar metrics, one of the most influential and cited neuroscientists in the world. His ability to apply his methods to neuro-imaging data over the last 15 years to model and predict the complex functions of the brain gives him a special and warranted license. Second, when one combines that with the scrupulous mathematical and methodological rigor with which he developed his models, and his willingness to invite critique from all quarters, one has to grant him a further concession of credibility. The third factor is that Friston, as well as being an MD and a trained psychiatrist, programs and tests his own dynamic and Bayesian AI models. These Bayesian models derive from the work of the English cleric and statistician Thomas Bayes, whose highly influential mathematical treatment of “subjective” probability, published in 1763, provides a basis for deciding whether “beliefs” or “hypotheses” should be relied upon. In its more modern form, “Bayesian inferencing” is used in machine learning to determine whether to adopt a particular model to explain the dynamics of data. Bayesian inferencing is an iterative process that uses feedback — expectations over observations — to continuously update or revise its models of the data. When combined with the mechanics of classical physics, Bayesian Mechanics uniquely and profoundly introduces feedback, prediction and regulation as a new kind of mechanics of natural phenomena that persist and evolve by virtue of their ability to make accurate predictions about their surroundings. By this combination, Friston and his colleagues have made “subjectivity,” or “beliefs,” and the inferential agency of “living things” a part of natural physics. Hence, to properly understand “living things,” such as people and institutions, and even faltering republics, one must understand the dynamics by which they succeed or fail in matching their expectations of the world, and their actions in it, with their actual observations and experiences.
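
The arithmetic of such updating fits in a few lines. Below is a minimal Python sketch of iterative Bayesian inference: two rival hypotheses about a coin, revised flip by flip, with prior times likelihood renormalized into a posterior at every step. The coin, its bias, and the sequence of flips are all invented for illustration.

```python
# Two rival hypotheses ("beliefs") about a coin, updated after each flip.
beliefs = {"fair": 0.5, "biased": 0.5}   # priors
p_heads = {"fair": 0.5, "biased": 0.8}   # P(heads | hypothesis)

for flip in ["H", "H", "T", "H", "H", "H"]:
    for h in beliefs:                    # prior x likelihood
        beliefs[h] *= p_heads[h] if flip == "H" else 1.0 - p_heads[h]
    total = sum(beliefs.values())
    beliefs = {h: b / total for h, b in beliefs.items()}  # normalize: the posterior
    print(flip, {h: round(b, 3) for h, b in beliefs.items()})
```

Each posterior becomes the prior for the next observation; that loop of expectation over observation is the feedback the paragraph above describes.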

The failure to reconcile expectations with experience can result in cycles of addiction and denial, not only at the individual level, but at the collective and even institutional level. Within such cycles, appeals to reason, moral outrage, even physical force have little effect. Shoshana Zuboff, author of “The Age of Surveillance Capitalism” (2019), made an impassioned appeal in a New York Times opinion piece (Jan. 30, 2021) to redeem the faltering American Republic and to curb the excesses of “big tech” through antitrust and certain legislative reforms. Yet such reforms will not succeed unless we understand the underlying dynamics of the failures and the range of feasible and testable remedies. To paraphrase, with slight modification, Francis Bacon’s earlier admonition: “a people cannot be governed except when their nature is known.” Legitimacy of governance depends upon the power of predictions and the capacity to execute against those predictions.

For the majority of human history, peoples have interpreted physical and mental illness as failures of morality or character, or as the judgment of demonic spirits. Not until the last century has there been broad public acceptance of the human body as a living mechanism that is itself subject to natural laws which must be “obeyed” in order for health to be achieved. In that vein, in the not so distant future, we might be able to move away from arbitrary consensus and “treat” the “body politic” with scientifically informed, evidence-based policies and technologies.

PARTICIPATORY MODELING AS A LEGISLATIVE PROCESS

We may be closer to this future than one might think. Given the power of online gaming and flight simulation platforms to replicate the physics of the physical world, as well as their ability to provide engaging narratives and gaming scenarios, we may soon be able to digitally model, experience, validate, and consent to the outcomes of our policies before ever having to implement them. The “law makers” in this new scenario would be the “players”: indeed, the citizens and residents of communities, who could explore and experience digitally the impact of policy alternatives on their real-world communities.

Rather than having Left-versus-Right ideological differences play out in a parliamentary legislative sphere, where they can neither be tested nor grounded in evidence, this dynamic is built into Bayesian Mechanics itself — not as ad hoc anthropomorphism, but as a principle of physics in which generative modeling and active inferencing continuously try to reconcile “priors,” that is, current beliefs, with the beliefs revised in light of evidence, the “posteriors.” The “truth condition” or “consensus” is the extent to which this process “minimizes surprise” — reduces “free energy” and preserves life. That seems like a pretty good, independent governing principle, one which, by the way, is never complete, but an ongoing and open learning process.
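
For readers who want the standard formalism behind “minimizing surprise,” the variational free energy F that active inferencing descends is usually decomposed as follows (this is the textbook statement, not a quotation from Friston’s papers):

```latex
F \;=\; \underbrace{D_{\mathrm{KL}}\big[\, q(s) \,\big\|\, p(s \mid o) \,\big]}_{\text{gap between beliefs and the true posterior}}
\;-\; \underbrace{\ln p(o)}_{\text{log evidence}}
```

Here q(s) is the agent’s belief (the “prior” being revised toward the “posterior”) over hidden states s, and o its observations. Because the KL divergence is never negative, F is an upper bound on surprise, the negative log evidence: driving F down simultaneously improves beliefs and keeps observations unsurprising, which is the sense in which good prediction “preserves life.”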

What is critical to the success of these modeling dynamics is that they are based upon deriving their models and predictions from continuous, real “data,” that is, from what is sensed and recorded by the “living thing.” These models are not “simplifications” but “digital twins” that perform in the digital world identically to the way their “twin” does in the physical world. This access to data is critical, and is why such approaches would not have been viable until the present. The ironic and positive side of Zuboff’s “surveillance capitalism” is that there is a growing data layer inevitably representing every action and artifact on the planet. The negative side, the unwanted and unwarranted surveillance and breach of privacy, is mercifully addressable by encryption and private-key technologies combined with appropriate regulations. Such regulations are begrudgingly slow, and often out of phase and misconceived. But an accelerant may be on the way through a change in the business models of platforms, whereby individuals and communities will control and monetize their own data. That will flip not just the business model but the power equation, and will only accelerate the transition to decentralized, evidence- and model-based governance, both in the drafting of model-based legislation and in its enforcement.

BENIGN AND TRANSPARENT AI

Another highly charged companion demon to “surveillance capitalism” is the specter of an alien, omniscient, and inscrutable “AI.” Yet that too may have its benign side. If Karl Friston is correct in his Bayesian mechanics of “living things,” then there is nothing alien nor even omniscient about “AI.” We and all living things, large and small, are subject to Bayesian mechanics, and hence there is no single nor constant point of omniscience. All living things have their competing, mutually defining and recombinant generative models of themselves and their surroundings. There is no hidden Archimedean perch or inevitable digital Sun King throne from which to exert absolute power. Unlike the neural network methods found in DeepMind and other current machine learning techniques, which use statistical reinforcement learning and optimization methods, Bayesian mechanics uses Dynamic Causal Models that explicitly express, test and select among causal models. Unlike current methods, these infer causal relationships — not simple correlations — and they are human-readable and accessible. This is a significant advance not only in the efficacy of AI, but in its potential integration into human-centric processes and institutions.
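
The selection step can be made concrete. The sketch below is not Dynamic Causal Modeling itself, which fits differential-equation models of coupled neural populations; it is a deliberately simplified Python illustration of the same logic: score rival, human-readable causal hypotheses by how probable they make the data, and keep the winner. The data, models, and names are all invented.

```python
import math
import random

random.seed(1)
data = [2.0 * x + random.gauss(0.0, 0.5) for x in range(10)]  # secretly linear

def log_evidence(model, sigma=0.5):
    """Log probability of the data under a fixed-parameter model.
    With no free parameters, the likelihood is the model evidence."""
    return sum(
        -0.5 * ((y - model(x)) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))
        for x, y in enumerate(data)
    )

# Two rival, human-readable causal hypotheses about how the data arise.
models = {
    "linear: y = 2x": lambda x: 2.0 * x,
    "constant: y = 9": lambda x: 9.0,
}

scores = {name: log_evidence(m) for name, m in models.items()}
print(max(scores, key=scores.get), scores)  # the winning causal structure, stated in the open
```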

What Bayesian mechanics offers that is truly mind-bending is the prospect of modeling the “social physics,” the dynamics of individual, group and institutional beliefs and behaviors in response to different policies. Within such models there is no Left or Right, only the precision of outcome predictions. Certainly, there could be differences in preferences for outcomes that reflect Left or Right sentiments. Indeed, one might see in the inability or refusal to update “priors” a “conservative” bias toward the status quo, but that bias needs to be understood in a larger context, whereby abandoning certain priors (for instance, certain traditions and the affective and material investments in the status quo) could itself generate uncertainty and the need for costly new generative models. Similarly, the desire to upgrade the model, or to create new generative models and actions, could in itself have enormous potential benefits, but at the same time entail existential risks. Unlike current Bayesian AI/ML techniques, many of whose practitioners and investors espouse a form of “Libertarian Rationalism” that adheres to zero-sum game dynamics and a “context free” decision tree of choice preferences, the generative models of Bayesian mechanics are highly contextual, often “nested” in other generative models and subject to the boundary conditions that they must “self-evidence” to remain viable living things. In other words, the Rationalist Bayesian models are mechanical and Newtonian, whereas the Bayesian Mechanics models are biological. In that sense the Rationalist AI is something to be legitimately feared, as it is “zero-sum,” adversarial, and converges toward an equilibrium that is “death.”

IF NOT REASON THEN WHAT?

Human beings are not rational creatures. The weight of scientific evidence, especially neurological evidence, shows that for us, reason follows emotion. People with highly vested beliefs, those that, in Bayesian mechanics terms, reduce “surprise,” will not, under conditions of duress, yield to counterfactuals. That is, they are impervious to reason. They will retain their “priors” and refuse any “updates,” regardless of the factual merits. (Current circumstances in the United States make that abundantly clear.) Adherence to such beliefs is secured by positive and negative “affect,” that is, the emotional attraction to a belief and the emotional avoidance of one. Ironically, the insistence by the Rationalists that reason and evidence govern all is itself an example of such factual denial, indeed, irrationality.
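
Bayesian mechanics gives that imperviousness a precise reading: the pull of new evidence is weighted by the precision (inverse variance) assigned to it, relative to the precision of the prior. The standard conjugate Gaussian update below is a minimal Python sketch; the numbers, and the reading of a “vested belief” as a high-precision prior, are illustrative assumptions.

```python
def posterior_mean(prior_mean, prior_precision, evidence, evidence_precision):
    """Conjugate Gaussian update: a precision-weighted average of belief and evidence."""
    total = prior_precision + evidence_precision
    return (prior_precision * prior_mean + evidence_precision * evidence) / total

# An open mind: prior and evidence weighted equally, so belief moves halfway.
print(posterior_mean(0.0, prior_precision=1.0, evidence=10.0, evidence_precision=1.0))    # 5.0

# A vested belief: rigid prior, evidence discounted, so belief barely moves.
print(posterior_mean(0.0, prior_precision=100.0, evidence=10.0, evidence_precision=0.1))  # ~0.01
```

No quantity of such discounted evidence will shift the second belief appreciably, which is the Bayesian reading of being “impervious to reason.”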

But if reason is not supreme, our anchor in the storm of chaos, then what is to be believed? Should not Science rush in to provide the way? Yet for those looking for certitude, even Science is not the cure. One would like to think of Science as achieving systematic progress through an iterative sequence of theory, hypothesis testing, experimentation, and verification. But as many historians and philosophers of science have warned us, it is not that simple nor straightforward. While there can be general agreement as to the scientific method, there are enormous differences in the acceptance of scientific evidence, and in when and how to overturn “prior” beliefs with new “posterior” beliefs. (Reminiscent of the earlier Right-Left divide discussion.) Notably, scientists and institutions have different thresholds and interests regarding whether to accept new evidence and theories. This is not so much a “rational” process as a social one. As Max Planck famously noted, “science advances one funeral at a time.”

Here again the Bayesian mechanics of Friston may provide a fresh perspective. If the scientific method itself is a form of “active inferencing,” then what can Bayesian mechanics tell us about achieving a “scientific consensus”? The biggest difference is that we cannot disengage the “subjective” factor (priors) from the objective factor (posteriors) — that of independent observation. The existence of all “living things,” the scientific method included, is predicated upon preserving, and indeed projecting, evidence for itself. Hence, even the scientific method begins with a “bias” for its own existence. That “bias” is expressed in its most tightly held priors, those axioms, theories, experiments and proofs that define the boundary or “blanket” of what it expects to be true. Hence, there is a bias of expectation, based upon prior experience, as to what can be observed, and consequently a skewed focus of attention and selection on what is important or relevant. By that same process, the unexpected is often not seen, or is filtered out as noise, as exemplified by the dismissive labeling of phenomena: “junk” DNA, “dark” (unseen) matter.

Yet the putative gold standard of the scientific method is to express scientific hypotheses in a form that invites disproof, with the expectation that the most is learned when what one thought to be true turns out to be false or incomplete. In its idealized form, the scientific method should work as a disinterested party that simply traverses, prunes, updates, and selects from a decision tree of preferences or evidence to arrive dispassionately at a rational conclusion. What Bayesian mechanics demonstrates is a far more subtle process, in which there is always a complex and highly interdependent state of beliefs and evidence, and to change one is to change the other. Rather than the “decision space” being like a forest of trees, it is more like a network of nodes floating like corks on the surface of a turbulent sea. They disperse, they converge, they stabilize. A stable convergence is like a scientific consensus: stable, but only contingently so. Yet in the pursuit of scientific exploration and validation, the criteria for acceptance depend not just upon parsimony of explanation, but upon comprehensiveness, that is, accounting not just for the special case, but for multiple and seemingly unrelated cases. The goal of science is not just to preserve the boundary of a particular theory but to expand the boundaries of explanation — that is, to generate models that include as many cases as possible. Hence, it is not just looking for self-evidencing of existing boundaries or models, but discovering new and more inclusive boundaries and models for self-evidencing. By method and design, science is an open, generative system that is constantly re-inventing and re-designing itself to create a better model of “reality,” that which it can sense and predict. And since science is a singularly human undertaking, certain belief states and models will be imbued with affective valences, emotions of attraction and avoidance, and therefore will not be abandoned until those emotive valences are diminished.

THE GOOD REGULATOR THEOREM AND DEMOCRATIC COORDINATION

In offering a 21st-century scientific update to Francis Bacon’s observation that “nature cannot be commanded except by being obeyed,” Karl Friston cites a 1970 paper by Roger Conant and W. Ross Ashby known as the Good Regulator Theorem. In it the authors seek to prove that “every good regulator of a system must be a model of that system.” In Bayesian mechanics terms, the theorem asserts that forming a model of its environment is the necessary condition for any living thing to regulate or govern itself. As a neuroscientist, Friston applies this same principle to understanding the evolution of the brain, arguing that the different components of the brain evolved as coherent predictive models of its environments, and hence that the structure and organization of the brain mirror long-term regularities in those environments. A critical insight is that the brain is made of many different areas of specialization, each with its own semi-independent generative models, and yet the brain acts as a unified organism with coordinated internal and external models. This architecture of distributed and dynamic self-regulation is relevant to the effective governance of any living thing, person, ecology, institution or society, where stability and adaptivity are dependent on having predictive models that mirror both internal and external “realities” — those things that can be sensed and acted upon.
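
A toy example may help fix the idea. Below is a minimal, hypothetical Python sketch of a regulator, a thermostat, that succeeds precisely because it carries an internal model of the room it governs; the physics and constants are invented for illustration.

```python
TARGET = 21.0      # desired room temperature (deg C)
LEAK = 0.1         # fraction of the indoor-outdoor gap lost per step
OUTSIDE = 5.0      # outdoor temperature (deg C)
HEAT_GAIN = 0.5    # degrees gained per unit of heating

def room(temp, heating):
    """The real system being regulated."""
    return temp - LEAK * (temp - OUTSIDE) + HEAT_GAIN * heating

def regulator(temp):
    """Chooses heating by inverting its internal model of the room."""
    predicted_drift = -LEAK * (temp - OUTSIDE)   # the regulator's model of the environment
    return max(0.0, (TARGET - temp - predicted_drift) / HEAT_GAIN)

temp = 12.0
for _ in range(5):
    temp = room(temp, regulator(temp))
    print(round(temp, 2))   # settles on TARGET because the model matches the room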

Having both internal and external “digital twins” of communities, markets, supply chains, transport infrastructures, incentive programs and institutions, twins whose behaviors can be commonly experienced, critiqued, and tested by an affected public, may prove essential to achieving not just a “democratic, participatory” and authentic form of governance, but one designed to be self-reflective, self-corrective and inclusive of those dependencies and complexities that affect its well-being. In other words, we need to stop thinking of our societies, institutions, markets, and technologies in inanimate or mechanical terms, and start seeing them as living things at all scales and in all domains, as Bayesian living things that are entangled, nested, interdependent and mutually defining.

THE POLITICS OF TECH AND AI: THE LIBERTARIAN AND THE COMMUNITARIAN

Given the array of existential threats before us as a species and a society, one might think that the titans and technologists of Silicon Valley would rise to the occasion and help us invent a more viable, equitable, and livable alternative future. But their sights and efforts appear to be pointed elsewhere. As Rationalist Libertarians, their values, world view, and agenda are still framed by the “free energy maximization” of Newtonian physics. What they have come to value most are their individual freedoms and their unbounded right to exercise their technological and unshackled powers in the classic Ayn Rand fashion. For this very powerful and influential constituency, which at times teeters on the edge of Alt-Right elite authoritarianism and techno-utopianism, the government is enemy number one. They do not want to be a part of anything they do not create or control. They seek to profiteer from the fragilities and inefficiencies of the current moment and then, at the proper time, “exit” to their gated communities, islands, sea-nations and, ultimately, other planets. In their personal narratives, blogs, and tweets, they are undertaking a heroic journey for the betterment of mankind by virtue of their effort, merit and singular talents. For them, the government, the uneducated, and the tech-challenged are impediments to success and to their earned privilege.

Yet if the existential challenges of our time are to be addressed before crossing a point of no return, matters of global governance, the invention of “good regulators,” will need to be addressed through science and technology. The discredited notions of hyper-individualism and “free market exceptionalism” must give way to subtle and scientifically informed models of mutuality and distributed forms of governance that “obey nature.” This Communitarian model of tech and AI does not entail nor imply some global government of bureaucratic homogeneity and collectivism, so feared by the Libertarian Right. Nor does it entail a revival of the Prussian “neo-cameralism” of Frederick the Great, as advocated by some of the Libertarian Alt-Right, nor the “economically progressive” dictatorial powers of a Xi Jinping and a Communist party elite. Rather, it MAY lie with a new science of living things and mutualism, Bayesian mechanics, which MAY both act as a corrective to the excesses of our current extractive practices and provide a principled, scientifically informed path to designing intentional and sentient digital, biological and mechanical forms of organization that do indeed obey nature, so that we can properly and legitimately command ourselves.

