WINNING THE ARMS RACE TO THE BOTTOM OF THE BRAINSTEM.


In one of the most important podcasts I’ve listened to in a number of years, ‘How social media and AI hijack your brain’, Daniel Schmachtenberger and Tristan Harris outline one of the clearest and most alarming assessments of the cognitive warfare being played out in our brains, in which a host of digital platforms battle it out to hijack our attention and win the ensuing ‘arms race to the bottom of the brainstem.’

Listening to Tristan and Daniel talk about everything from Russian meddling and Facebook to hypernormal stimuli, it becomes abundantly clear just how grave a threat AI-powered manipulation poses to the future of civilisation. Not just because the agents and tools of manipulation are becoming infinitely more sophisticated and powerful than anything that came before them, but because their rise has also exposed the fragile state of our cognitive operating system.

It’s worth saying that both Tristan and Daniel have skin in the game, each with their own approach to solving a slightly different version of the same problem. Tristan’s Center for Humane Technology is behind a growing movement to install a higher code of ethics within the technology community. By shining a light on the imbalanced incentive structure embedded in the dominant digital platforms and the harm caused by their relentless pursuit of our attention, the hope is that the community will pare back its contribution to the willful manipulation of humanity and put its algorithms to better use.

Facebook’s rollout of ‘time well spent’ features last year (a term Tristan coined with his collaborator Joe Edelman) is a clear sign that the Center’s top-down strategy and mission to ‘realign technology with humanity’s best interest’ is making progress, and proof that, with the right nudges, even Facebook can be meddled with.

Daniel and the Neurohacker Collective’s mission to ‘radically uplift human experience’, via a range of cognitive supplements, is a direct response to the systemic forces working to degrade something Daniel calls ‘human sovereignty’. The concept relates to our capacity to effectively sense (sentience), process (intelligence) and act (agency) on signals from our environment, enabling choice-making that advances the whole human system. In theory, by improving human sovereignty, people should be better able to resist the downward pressure of interference and manipulation.

Both approaches have merit, but from my own perspective, Neurohacker’s bottom-up approach feels like a more sustainable solution to the unrelenting and systematic manipulation of humanity. But it’s a strategy that must be pushed further if we’re to stand any chance of overcoming the growing threat.

It’s actually a challenge the pair lay down in the podcast. Daniel asks, how do we develop “technologies that increase the sovereignty of everyone that interact with it… that make you less susceptible to manipulation by everyone, including me. And better at making sense of yourself, of the world and making good choices yourself, so that you help create a better world that I also get to interact in?”

Daniel’s challenge augments Tristan’s mission to create a more humane technology. He’s suggesting all technology should play a more expansive role, one that puts the enhancement of the user’s wellbeing at the core of its function. I’m inclined to agree. Moving forward, we need every last ‘bit’ and coming ‘qubit’ aligned to the job of repairing the cognitive fabric, cultivating human sovereignty and helping people make better choices that enhance the wellbeing of humanity and the planet.

It’s a challenge I believe to be so important that every human should devote any spare cognitive capacity to solving it, which is what I will attempt in the remainder of this essay.

It feels appropriate to start with the underlying cause of the problem. Not the obvious causes, such as Russia, Facebook or Cambridge Analytica (the agents of manipulation), but the hidden cause: the inherent faults hardwired into every single complex human system (the manipulated).

While not our only innate weakness, humanity’s litany of ancient cognitive biases, which today operate as dangerous fault-lines in our choice-making architecture, also exists as a framework upon which methods of manipulation have been carved. Despite the issues created by these fault-lines, the emergence of these biases might actually hold clues to a solution to the greatest threat to humanity: ourselves.

EXPLORING THE ORIGIN OF OUR COGNITIVE FAULT-LINES.

Cognitive biases are naturally occurring glitches in the brain’s neural processing that predispose humans to decisions poorly aligned with their best interests.

There is considerable debate about the origin of these cognitive glitches. From an evolutionary psychology standpoint, they likely once provided humans with a resounding competitive advantage: the result of a kind of natural selection in our cognitive evolution that caused conscious decision-making consistently linked to improved outcomes to be upgraded to the genetic substrate of our wetware.

Such upgrades were likely only made possible by a prolonged, stable background environment, in which the same predictable spread of opportunities and threats recurred. These upgrades not only aided survival through improved speed and consistency of decision-making, but also freed up our cognitive capacity to deal more effectively with less predictable events taking place in the environmental foreground.

FROM COMPETITIVE ADVANTAGE TO DESIGN FLAW.

This was likely the case until a few thousand years ago when a quick succession of cultural innovations led to a phase-shift in the background environment, which saw many of these genetic behavioural traits fall out of step with a new crop of threats and opportunities.

Given the speed of these changes, the biases had little hope of adapting or being phased out of operation. This misalignment likely had a limited detrimental effect on the wellbeing of humanity for a period of time, right up until the point, a few thousand years ago, when humans realised these biases could be utilised to exercise influence and dominion over each other.

Following this point, there have been many instances of mass human manipulation. However, the emergence of a consumerist value system, one built on the shared materialist belief that the accumulation of resources and goods is the key to a happy and secure life, paved the way for the permanent and systematic manipulation of humanity.

The hardwiring of consumerism means humans are locked in a perpetual hunt for goods perceived to maximise their wellbeing. The nature of this choice-making means humans have upregulated their sensing capacity to the value signals emitted by the producers of goods. However, the overriding primacy of the consumerist value system creates a blind spot around the authenticity of these value signals, leaving people open to all sorts of cognitive trickery and the manipulation of their cognitive biases.

A POSITIVE FEEDBACK LOOP OF MANIPULATION.

The art of manipulating a bias involves ‘false value signalling’: convincing someone that a product or future act will satisfy a genuine need, when in reality it merely overstimulates a set of primitive cognitive urges. The rise of mass advertising is our most concentrated effort to use false value signals to exploit the range of cognitive biases and persuade people to do things that potentially conflict with their own self-interest.

The systematic exploitation of these biases to bring about short-term wins for one actor or group, at a cost to another, presents deep ethical questions. But it also hides a more serious and growing system-level fault, which may well be causing irrevocable damage to the capacity of humanity to rescue itself from future existential threats.

The exhaustive exercise of these cognitive muscles has the unintended consequence of tricking the system into thinking these traits are not only still useful, but that the environment is changing in ways that require their broader and more frequent application. The neural pathways become so heavily trodden that the cognitive system sees fit to apply them to adjacent functions, which leads to increasingly suboptimal decision outcomes, while simultaneously acting as an inducement to outside actors to conduct further acts of manipulation. This positive feedback loop is today one of the gravest threats to humanity.
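The compounding structure of this loop is easier to see written down. Below is a toy simulation in Python; the coefficients and update rules are my own illustrative assumptions rather than anything specified in the podcast, but they capture the dynamic described above: exercising a bias strengthens it, and a stronger bias raises the payoff to manipulators, attracting more manipulation.

```python
# Toy model of the manipulation feedback loop. Illustrative only: the
# parameters and update rules are assumptions, not measurements.

def simulate(steps: int = 10,
             bias_strength: float = 1.0,
             manipulation_effort: float = 1.0,
             reinforcement: float = 0.15,
             incentive: float = 0.10) -> None:
    """Each round, manipulation exercises the bias (deepening the neural
    pathway), and the strengthened bias attracts more manipulation effort."""
    for step in range(steps):
        bias_strength *= 1 + reinforcement * manipulation_effort
        manipulation_effort *= 1 + incentive * bias_strength
        print(f"step {step:2d}: bias={bias_strength:7.2f}  effort={manipulation_effort:7.2f}")

simulate()
```

Even with mild coefficients, the two quantities feed on each other and run away together, which is precisely the system-level fault at issue.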

Today, there are few threads of our cultural fabric, and few patterns in our behaviour, that have not been infiltrated by outside interests, especially now that the almost total digitisation of human behaviour has exposed a greater surface area of human cognition and culture to manipulative forces.

Facebook, along with other digital platforms, is a major force behind the acceleration of this dangerous positive feedback loop. These platforms are not only dragging more of our behaviours online; they actively destabilise users’ cognitive operating systems and their capacity to recognise or fend off manipulation.

It’s no secret that Facebook leverages our need to belong, the most primitive and potent of cognitive urges, to secure vast stocks of human attention, which it trades for equally large stocks of cash. However, in the process of exercising this fundamental urge, and exploiting countless other chemical processes, Facebook is conditioning its users to exist in an ever more susceptible state.

As Facebook ratchets up its efforts and more contenders join the race to the bottom of the brainstem, each armed with ever more powerful supercomputers and AI, the positive feedback loop is accelerating exponentially and humanity is fast approaching the point of no return. It’s why it’s so important that we quickly identify and scale a radical technological solution that can dismantle the sprawling system of manipulation and pull humanity back from the brink, before it’s too late.

REAWAKENING THE PROMISE OF TECHNOLOGY.

The dawn of the information age and spread of networked technology held such great promise for the empowerment of the individual. In the hands of everyday citizens, networked technologies and advanced computation are capable of advancing all three pillars of human sovereignty.

However, much of that early promise has been hijacked by prevailing capitalist forces, which have managed to install profit motives at the centre of most large-scale digital enterprises. Save for a rare few exceptions, the incentives of these platforms are aligned not with user interests but with shareholder interests.

But, hope remains. The disruption and chaos being kicked up as society phase-shifts into the next system of civilization means none of today’s dominant technological powers have a monopoly on the future. Out of the chaos, order can still be restored.

To stand any chance of repairing the cognitive fabric we need to scale a technological system that operates with a single and undiluted mission to enhance human sovereignty; a mission that will not bend or break in the face of those forces seeking to install profit motives or exploit the human user.

In the podcast, Daniel and Tristan briefly explore blockchain as that possible technology. While blockchain will undoubtedly play a significant role in the future of the information ecology, I believe that the technology destined to restore our cognitive fabric is the same one that’s currently being used to tear it apart: Artificial Intelligence (AI).

AI, or more specifically Intelligent Personal Assistants (IPAs), is where and how the final leg of the race to the bottom of the brainstem will be played out. We can’t stop the race; it’s too late for that. Instead we need to get a horse in the race, one that can not only beat out the field, but defend rather than exploit the brainstem once it arrives.

AI AS A FORCE FOR GOOD.

To many engaged in the debate about the dangers of AI, it may seem beyond the pale to suggest that AI is anything other than a virulent technological strain. Considering AI’s disproportionate role in the heightened manipulation of humanity thus far, it’s not hard to see why many would doubt its future potential.

There’s no denying that our use of AI is currently propping up the persistent forces of capitalism. The early horses in the race are, like their parents, more aligned with the shareholder value system than the human or ecological value system. Alexa and Siri are being sculpted as servants for the gleaming consumerist kingdom we hold sacred.

There’s also no denying that IPAs and general advancements in AI could take the field of human manipulation to new and dangerous heights. Alexa, Google and Siri don’t just want to acquire people’s attention; they want to become people’s attention. An omnipresent tool that, while partially acting on behalf of users to optimise their daily lives, would establish a permanent jack into the brainstem, allowing for the always-on manipulation of people’s desires, needs and behaviours. A true realisation of the state Jaron Lanier calls ‘continuous behaviour modification’.

Getting an IPA horse in the race may sound like a high-risk, improbable task, especially given the might of the other horses in the field, but we have no choice. If we don’t join the race and get there first to install a more virtuous project, then something else will, and its intentions will kill all hope for the future of our species.

The situation leaves us with few alternative solutions to the gravest problem facing humanity. We must create a Virtuous Intelligent Personal Assistant that reorients the AI movement away from further weaponising agents of manipulation that degrade our cognitive system, towards a new goal of artificially scaffolding and augmenting our cognitive neural architecture and cultivating enhanced human sovereignty.

WINNING A SEAT IN THE RACE.

The rise of IPAs is still in the early stages, which means there’s an opportunity for a more virtuous project, one better aligned to the interests of humanity, to win out.

It remains an open race because most IPAs are attempting to pull off the ultimate high-wire act. The emerging competitors are locked in a struggle to create a platform that, while appearing to act as a loyal servant to the host, in reality serves the interests of revenue-generating customers intent on gaining access to, and influence over, the host.

But this is no easy feat, especially for a technology seeking to insinuate itself into every sentient moment of our lives. The contenders are understandably taking it slowly. Given the depleted state of trust in institutions and businesses, these platforms are aware of just how reluctant people would be to devolve parts of their choice-making into the hands of foreign agents. It’s why Alexa Skills currently takes the form of a voice-activated infocast, rather than a marketplace for patches of intelligence designed to supplement and/or replace parts of the host’s cognitive operations.

However, a Virtuous IPA (VIPA) that inexorably serves the interests of the human host, augmenting rather than devolving human sovereignty, would be ideally positioned to achieve a level of trust and acceptability that few other platforms would be afforded. In fact, it’s likely to be the only way for a technology that seeks such a deep proximal integration into the cognitive system to scale effectively.

In many ways, the journey to integrate advanced AI deeper into our cognitive neural architecture will chart a similar path to the early advancement of organ transplantation. Scientists discovered quickly that the human body is more likely to accept a foreign organ if it shares more of the host’s genetic characteristics. So it follows that humans will accept deeper integration of advanced AI if its machine genetics mirror the host’s genetic properties and goal of the absolute preservation of the human system.

The adoption curve of advanced IPAs will be driven by the speed with which technology companies understand and move their technology closer to the human acceptance threshold. But it’s this insight that provides this project with a clear competitive advantage. Free from the incentive structures that would litter an IPA with genetic material from a host of foreign agents, we can set about creating a purer system that operates well below the acceptability threshold.

To successfully scale a VIPA and win the race, we must focus on two objectives: firstly, creating a platform that provides superior choice-making assistance versus the competition; and secondly, creating a platform that can quickly secure the trust of a critical mass of all future potential users.

The latter can only be achieved if the VIPA can guarantee absolute fidelity to the primacy of the host’s interests and wellbeing, at the expense of foreign agents. But it’s this guarantee that also provides the key to superior choice-making assistance.

CRACKING THE COMPETITIVE ADVANTAGE AND WINNING TRUST.

The soaring popularity of IPAs is built on their future capacity for ‘deep learning’, which will enable the anticipation of the host’s interests, preferences and needs, allowing it to better serve and optimise choice-making.

The training data for today’s crop of IPAs is behavioural information. The things people do, buy, say and request at different times and in different contexts guide the learning process. However, we have already established that many of these surfaced behaviours are corruptible and poor indicators of the future behaviours best aligned to enhancing the long-term wellbeing of the human system.

The deep learning feedback loop of a VIPA would be different. It would acquire feedback and input directly from the ‘instrument of our wellbeing’: the electrical and chemical signals flowing through the cognitive system and, for that matter, the rest of the complex adaptive system we call the human body.

Running parallel to the advances in AI is the development of emotion sensing technology, with a growing capacity to enrich our understanding of the complex patterns of brain waves and chemical surges that correspond to different behavioural, physiological and psychological states and fluctuations.

To establish baseline trust the VIPA would harness advanced emotion and cognitive sensing technology to track the real-time impacts different behaviours have on the deep, authentic psychological state of the cognitive system.

Based on a triangulation between patterns of behaviour, internal wetware signals and external informational inputs, the VIPA would start to learn and chart the constellation of cognitive biases and susceptibilities unique to each host. Off the back of this mapping, the VIPA would be able to better assess the personalised threat levels of external informational signals, such as messages or direct requests for action.
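To make that triangulation concrete, here is a minimal sketch in Python of how a VIPA might maintain a per-host susceptibility map and score an incoming signal against it. Every name, weighting and update rule here is a hypothetical illustration of the idea, not a description of any real system.

```python
from dataclasses import dataclass, field

@dataclass
class HostProfile:
    """A host's learned constellation of bias susceptibilities,
    each scored from 0 (immune) to 1 (highly susceptible)."""
    susceptibility: dict = field(default_factory=dict)

    def update(self, bias: str, observed_impact: float, rate: float = 0.1) -> None:
        # Nudge the estimate toward the latest triangulated observation; a
        # stand-in for whatever learning the real system would employ.
        prior = self.susceptibility.get(bias, 0.5)
        self.susceptibility[bias] = prior + rate * (observed_impact - prior)

def threat_level(profile: HostProfile, signal_pressures: dict) -> float:
    """Score an external signal by how hard it presses on the host's
    most vulnerable biases (0 = benign, 1 = maximally threatening)."""
    if not signal_pressures:
        return 0.0
    return max(profile.susceptibility.get(bias, 0.5) * pressure
               for bias, pressure in signal_pressures.items())

# Example: a host known to respond strongly to scarcity framing receives
# a 'limited time offer' style message.
host = HostProfile()
host.update("scarcity", observed_impact=0.9)
print(threat_level(host, {"scarcity": 0.8, "social_proof": 0.2}))  # ~0.43
```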

Each host would be presented with a different set of protocols to help defend and optimise the integrity of their choice-making. Depending on the severity of the threat, the platform would either flag a warning, recommend an alternative course of action or deny the signal or request access altogether.
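Continuing the sketch above, the tiered response could be as simple as thresholding the threat score. The cut-off values below are arbitrary placeholders, chosen only to illustrate the flag / recommend / deny progression.

```python
def respond(threat: float) -> str:
    """Map a threat score (0..1) to one of the protocols described above."""
    if threat < 0.3:
        return "allow"       # benign: let the signal through untouched
    if threat < 0.6:
        return "flag"        # mild: surface a warning to the host
    if threat < 0.85:
        return "recommend"   # moderate: suggest an alternative course of action
    return "deny"            # severe: block the signal or request entirely

print(respond(0.43))  # -> "flag"
```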

As well as steering the host away from behaviours that degrade the human system, it could also create behavioural programs to promote actions and practices that enhance human wellbeing or recondition innate susceptibilities, ultimately cultivating a higher form of human sovereignty.

In many ways, the VIPA concept captures something Tristan outlined in the podcast: “a technology developmentally more open, or with high agency, higher capacity to see that there are richer options maybe from where they are.”

It’s important to note that the VIPA would not be designed to unplug the host from the world or disconnect them from external interactions. Nor would it completely neglect the interests of those actors seeking to interact and trade with the host. Its goal is to improve the quality of those interactions, optimising choice-making rather than denying or limiting it.

Mass adoption of VIPAs and improved levels of human sovereignty would also start to dismantle the system of manipulation. Greater sovereignty strengthens our resistance to external interference, which would drive up the cost of influence for the agents of manipulation and erode the incentive to manipulate.

Strengthening the individual system in the face of signals of manipulation and allowing humanity to naturally restore its sovereignty is, of course, the primary goal. However, the process of restoring human sovereignty would also give individuals the confidence to expand their cognitive horizons beyond the boundaries of the self, making the next great phase-shift for humanity a real possibility.

HARNESSING AI’S REAL VANTAGE POINT.

Far from creating a more atomistic human civilisation, improved sovereignty would create the conditions for a radically more open, fluid and interconnected civilisation that would allow hosts to come together to harness the collective power of all ‘intelligence matter’ in the universe.

Up until now we have been working with the popular concept of IPAs and any AI platform as rivalrous systems, with their own distinct edges and competing stocks of intelligence. But this is a distorted picture.

Platforms like Alexa’s ‘Skills’ purposefully promote the concept of an emerging disaggregated AI ecosystem, one that could only be operated through a permission-based exchange and trade of information, because a marketplace of superintelligence has the potential to generate extraordinary levels of wealth.

The notion that one AI system has to tap into or converse with another separate AI system to access the value of its intelligence is a gross misrepresentation of the system dynamics and properties of a super network of interdependent AI systems.

Truly interdependent AI systems would not hold huge stocks of intelligence; instead, they would access an expanding singular mass of information and intelligence that exists as an emergent property of all networked living organisms and adjacent AI support systems.

This expanding mass of Artificial Intelligence is really just an upgrade of humanity’s existing cultural system, i.e. the cloud of ideas, beliefs, language, codes, knowledge and values that we utilise and contribute to as one civilisation and one species. The emerging mass of superintelligence is, however, different from our existing system of culture: it has unique properties that can provide humanity with the power to tackle the incoming wave of existential threats.

Firstly, it’s networked, which means that all superintelligence can be accessed by everyone, at any time and in any place. And everyone’s intelligence and experience can in turn be harnessed by the network.

Unlike culture, it can also exist tangibly in software, not just in our collective wetware, which means it need never disappear.

Lastly, because it is powered by super computational processing, it will be able to learn and think for itself. It will start to engineer behaviours and ideas that not only solve existing problems, but help to avoid future problems caused by the unintended consequences of short-sighted human decision-making.

Eventually all of humanity will plug themselves into this expanding mass of superintelligence and we will reach a point where everyone is connected to everyone and everything else. There will be nothing we don’t collectively know or can’t take advantage of.

In this future, VIPAs will become far smaller and far more deeply integrated into the human system, a true union of wetware and software, but they won’t need to import a foreign voice or trade with another AI system. As quickly as your brain tells your arm to pick up a glass, the VIPA will determine the needs of the user in a given circumstance and arrange the universe in such a way for it to happen, as long as it doesn’t contravene the wellbeing of the individual, society and the planet.

Much like the evolution of cognitive biases, eventually the behaviours of hosts that consistently contribute to improved outcomes will be upgraded to the genetic substrate of the AI system, freeing up its processing capacity to take on bigger challenges.

Unlike the emergence of cognitive biases, which automated decision-making related to our background environment and freed us up to deal with challenges in the foreground, the expansion of the AI universe will automate behaviours in our cognitive foreground, freeing up intelligence capacity to deal with the challenges in the environmental background that we call existential threats.
