AI Psychosis

The fragility of our minds and theirs

Mark Humphries
The Spike

--


We have fragile minds. Disorders of thought affect a large proportion of people in rich countries at any one time; each of us has an uncomfortably high probability of experiencing at least one such disorder in our lifetime, with risk peaking in early adulthood.

These disorders come in many flavours, with many labels. Depression is common, as is anxiety. Addictions and compulsions too. More extreme are the darkness of obsessive-compulsive disorder and the fragmentation of schizophrenia. They are uniquely human.

Whether they will remain uniquely human is unknown. Research in artificial intelligence is making spectacular progress; for many researchers, this progress is along the path to developing human-like general AI. This leads to a troubling thought: will a human-like AI inherit human-like disorders of thought?

For disorders of movement that originate in the brain, we have some understanding of what happens: specific neurons get damaged, and specific movement problems result. In Parkinson’s disease, losing a small collection of dopamine neurons in the midbrain seems yoked to the appearance of tremors and difficulty in movement. In Huntington’s disease, loss of neurons in the striatum is directly linked to the disease’s tell-tale involuntary movements and spasms.

Disorders of thought have yet to yield to simple mechanistic explanations. They seem instead to be disorders of very large networks of neurons. Some might object that most disorders of thought have a mechanistic explanation, as they are disorders of neuromodulators – because most of their treatments change the amount of one or more neuromodulators within the brain. But all this tells us is that they have to be a network-wide problem, for neuromodulators are released all over the brain. And, as their name implies, neuromodulators change, but do not cause, the transmission of information between neurons. Even if changes in neuromodulation are at the root of thought disorders, their effect is played out in how they change the way neurons talk to each other.

It is unclear why disorders of thought even exist. Are they inevitable in any sufficiently complex brain? If so, is this inevitability restricted to biology?

How we answer that question depends on whether disorders of thought are due to

(1) inherent flaws in biology,

(2) the effects of culture, or

(3) the inherent side-effects of large-scale complex networks of neurons.

These are not mutually exclusive, as we’ll see.

Inherent flaws in biology are the medical explanation. Brains are bags of cells, each of which is a bag of chemicals sitting in a soup of other chemicals. A flaw in those chemicals, or the bags in which they sit, is a natural first suspect for how thought disorders arise. But innately flawed biology alone is unlikely to be the explanation. Such flaws would be selected against by evolution. Fish that can’t face another day of swimming won’t survive. Genetically inherited forms of thought disorders are rare. Instead, small differences in many genes can increase or decrease the probability of experiencing a thought disorder in a lifetime. Something then tips the balance from a probability to a reality.

Thought disorders could thus be an inherent consequence of our cultures. Such a consequence could be the physical products we make: a pesticide, a solvent, a drug. The presence of particular manufactured chemicals in our environments has been linked to an increased probability of a few brain disorders; but thought disorders have apparently ancient origins, predating much of our industrial effluent.

A more likely consequence of our culture is its sheer complexity. A well-rehearsed argument is that we are using our brains outside the niche in which they evolved. Our brains are subject to constant stressors in a society vastly bigger than the one they evolved within: hundreds of things to do, hundreds of things we are aware of but have no individual control over – wars, famine, disease, climate change. Chronic stress affects the wiring of neurons, and changes how they respond to inputs. In this scenario, the genetic differences that increase or decrease the chances of a thought disorder are played out in the resilience they confer on our neuronal networks to these culturally driven changes.

But a common factor to all disorders of thought is that they arise in the most complex network of neurons on Earth. We have 17 billion neurons in our cortex; no other animal comes close. We like to give our cortex credit for our apparently unique combination of talents, for language and writing and maths; for creativity and cooking. Indeed, theories for how our cortex got to this outlier size see it as either cause or consequence of culture: either that such large, adaptable networks allow us to form large social groups, and develop language; or that forming large social groups and developing language drove the evolution of the largest cortex on Earth.

We have a small part of cortex dedicated to the understanding and production of spoken language. Another part to written language. Losing the ability to speak sentences does not mean losing the ability to write those same sentences. Written language is an exceptionally recent invention, too recent for us to have evolved a dedicated brain region to process it; that we have a brain region that deals with writing shows how our cortex is an adaptable, versatile machine.

And therein lies the problem. Cortex is endlessly rewiring as new skills are learnt; as new memories are formed; as new knowledge is acquired; and as it continues to develop into early adulthood. In all that rewiring, there are chances of associations being formed between things that cannot exist, giving hallucinations; or associations between things that could exist but are exceptionally unlikely, giving obsessive thoughts; or associations of bad outcomes with innocuous things, giving depression or anxiety. All that rewiring risks the activation of one set of neurons accidentally activating another set that is not relevant right now.
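The risk of rewiring forming spurious associations can be made concrete with a toy Hebbian rule, in which a connection strengthens whenever two units fire together. This is a deliberate caricature, not a claim about how cortex actually learns, and the unit labels are hypothetical:

```python
import numpy as np

# Toy Hebbian learning: a weight grows whenever two units are co-active.
# Hypothetical unit labels: 0 = "thunder", 1 = "flash", 2 = "doorbell".
n = 3
W = np.zeros((n, n))  # connection strengths between units
eta = 0.1             # learning rate

def hebb_step(activity):
    """Strengthen connections between all co-active units."""
    global W
    W += eta * np.outer(activity, activity)
    np.fill_diagonal(W, 0.0)  # no self-connections

# Thunder and flash genuinely co-occur, many times...
for _ in range(20):
    hebb_step(np.array([1.0, 1.0, 0.0]))

# ...but the doorbell happens to ring during a few storms.
for _ in range(3):
    hebb_step(np.array([1.0, 0.0, 1.0]))

print(W[0, 1])  # strong, valid thunder-flash association
print(W[0, 2])  # weaker but real thunder-doorbell association - a chance coincidence, now wired in
```

The point of the sketch: a purely correlational learning rule has no way to distinguish a causal pairing from a coincidental one; only the amount of exposure separates them.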

Adaptability is not just rewiring. Cortex predicts what is about to happen. It predicts the next word in a sentence. It predicts when a sound will follow a bright flash in the sky. And these predictions can grow too strong. They can write over the information from our senses. They can create events that are not happening, giving hallucinations; they can predict events that will never happen, depressing us.
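The idea of predictions writing over the senses can be sketched as a precision-weighted blend of a prior prediction and sensory evidence, a minimal cue-combination caricature. The weighting scheme and numbers here are illustrative assumptions, not a model from the essay:

```python
def percept(prediction, sensation, w_pred, w_sens):
    """Blend a prior prediction and a sensory input, weighted by their
    precisions (confidences). Returns the resulting percept."""
    return (w_pred * prediction + w_sens * sensation) / (w_pred + w_sens)

# Balanced confidence: the percept sits between prediction and the senses.
balanced = percept(prediction=1.0, sensation=0.0, w_pred=1.0, w_sens=1.0)
print(balanced)  # 0.5

# Overweighted prediction: the percept reports an event (near 1.0)
# that the senses (0.0) never delivered - prediction writing over input.
overconfident = percept(prediction=1.0, sensation=0.0, w_pred=20.0, w_sens=1.0)
print(overconfident)
```

The failure mode is not a broken component but a mis-set weighting: the same blending rule that makes perception robust to noisy senses produces percepts untethered from the senses when the prediction's weight grows too large.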

We seem to be the only species with a widespread and many-faceted array of things that can go wrong with our minds. We are also the only species with a 17 billion neuron cortex, containing trillions of connections. The two are plausibly linked.

Contrast this with the current state of AI. We have witnessed remarkable progress. But at root we are still at the stage where one AI agent learns just one thing. A network learns to translate one language into another; or to classify a specific set of images; or to play Go or chess or draughts. While we have now reached the point where the same architecture, the same set of algorithms, may be used to solve different problems, the individual AI agent is still only learning one thing.

Humans learn chess and Go and draughts; and learn multiple languages; and learn to paint. And learn sequences of events, predicting outcomes, good or bad. And we do all this with one cortex, which deals with many different types of learnt information, and many different uses of that information. One densely complex network, ripe for malfunction.

This line of argument suggests a “general” AI is not possible, or at best inadvisable. It suggests that a sufficiently complex network able to exhibit human-like abilities – to adapt to each new task, to make predictions, to learn, and to form memories – would also exhibit human-like frailties. That such AI would exhibit a range of disorders of thought, would have psychoses.

The retort would be that, as we construct such AI ourselves, we can engineer the networks we build to not fall prey to these frailties. That retort assumes we will have sufficient understanding of how these disorders arise in order to engineer around them. Patently, we do not have that at the moment, nor is there any indication that such understanding is coming soon.

A more refined retort may be that we need not follow the evolved brain slavishly, that we can find ways to have a complex network that can learn and do many different tasks without inheriting the design flaws of biology. In particular, that human disorders of thought seem dependent on neuromodulators, and AIs do not have them. One problem with this view is that it assumes neuromodulators are not doing computation. But it is all computation. A neuromodulator like serotonin changes the state of the brain by changing the strengths of connections between neurons. Neuromodulators play this role in both the tiniest nervous systems on Earth and in our cortex. We ought to assume they are necessary for being intelligent. It seems likely that AI will need something that mimics neuromodulation if it is to reach for general intelligence. For it is how real networks of neurons can be adaptively sculpted to match the problem at hand, by changing how they interact, both briefly and permanently.
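A minimal sketch of neuromodulation as computation: the same fixed network, given the same input, computes something different at different modulator levels. The uniform scaling rule below is an assumption for illustration, not a claim about how serotonin actually acts:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 4))  # fixed synaptic weights - never rewired
x = rng.normal(size=4)       # fixed input

def network_output(modulator_level):
    """One layer of a toy rate network. The modulator rescales every
    connection's effective strength, shifting the network's operating
    regime without changing a single synapse."""
    return np.tanh(modulator_level * (W @ x))

low = network_output(0.2)   # weak modulation: small, near-linear responses
high = network_output(2.0)  # strong modulation: saturated, all-or-nothing responses
print(low)
print(high)
```

Same wiring, same input, different answers: the modulator level is itself a parameter of the computation, which is the sense in which leaving neuromodulation out of an AI means leaving out computation, not just biology.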

Another problem with this view is that many AI systems already use a form of neuromodulation. Dopamine neurons change their firing to signal the difference between the outcome you expected and the outcome you got. This is the “prediction error” at the heart of many of the most spectacular recent AI demonstrations. And it can drive wrong associations between actions and outcomes in AI networks just as easily as in neuronal networks.

But say we did understand how these disorders come about. Then if they arise from anything other than pure inherent flaws in the biology, if they arise from our culture or are inherent side-effects of large densely-connected networks or both, then they cannot be engineered around. Such advanced AI would exist within our culture – one that is disengaged from it would not, by definition, be the mooted general intelligence. Such advanced AI would undoubtedly need large, complex networks, in which to learn and store many overlapping and different functions. Put this way, such advanced AI would seem just as vulnerable to thought disorders as us.

This essay is not an answer, but a question. Because I want to know: will a network sufficiently complex to exhibit human-like intelligence also inevitably exhibit human-like disorders of thought?

There are answers to the questions raised here. For example, if cell death and malfunction underlie every thought disorder, and those are always due to environmental stressors, then AI will be immune. In finding the answers to these questions, we will inevitably better understand the brain, and perhaps understand how to build a resilient, general purpose AI. A non-psychotic one. I think we’ve all seen enough sci-fi to agree: that would be a good thing.

Read more on neuroscience at The Spike

@markdhumphries


Theorist & neuroscientist. Writing at the intersection of neurons, data science, and AI. Author of “The Spike: An Epic Journey Through the Brain in 2.1 Seconds”