
Defining Artificial Fluent Systems


We need a new term to describe systems like GPT-3, DALL-E 2, and PaLM. I propose to call them “artificial fluent” systems. An artificial fluent system renders media (e.g., text, images) with enough consistency and precision to pass as human-made.

An artificial fluent system is more capable than a talking parrot in that it exhibits creative mimicry. Such systems are capable of conceptual blending beyond the ability of the average human. An artificial fluent system can mimic the generation of meaning in the sense of Wittgenstein’s Picture Theory of Meaning.

In his book Metazoa, Peter Godfrey-Smith describes humans as having an ‘elsewhere experience’ that goes beyond ‘here-now experience’. Elsewhere experience is the ability to mentally explore experiences in the past or the future.

Dreaming is a reflection of this kind of experience. But when we are awake, we can deliberately control our experiences. This is perhaps what Anil Seth calls “controlled hallucination”.

Artificial fluent systems are like dreaming humans: they have no deliberate control over their experiences. This is because they lack human judgment. Brian Cantwell Smith (see The Promise of Artificial Intelligence: Reckoning and Judgment) defines judgment as “a form of dispassionate, deliberative thought, grounded in ethical commitment and responsible action that is appropriate to the situation in which it is deployed.”

But human judgment can never be properly constructed without human empathy. Hence “artificial empathy” is identical to human-like artificial general intelligence. The presence of empathy describes what it is to be human. Inhumane acts are performed by agents that lack empathy, and there are plenty of humans who act inhumanely. We see that today in the atrocities committed in Ukraine.

Many capabilities that artificial intuition systems get for free are not innate in humans. DALL-E 2 is very strong at rendering artistic styles, but that is not something humans do well. Achieving human-like cognition may require a different constructive path than the one we find in deep learning. How intelligent systems perform tasks depends on how they are constructed. We don’t do arithmetic the way computers do arithmetic. No human creates pictures the way DALL-E 2 creates pictures. The method is as alien as AlphaZero playing chess. Competence doesn’t imply thinking the way humans think.
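To make the arithmetic point concrete, here is a small illustration of my own (not from the article): Python code that adds two numbers the way an adder circuit does, with XOR producing sum bits and AND-plus-shift producing carries. The answer matches ours; the method shares nothing with how a person “carries the one.”

```python
# An illustration (mine, not the article's): computers add the way an
# adder circuit does -- XOR for the sum bits, AND plus a shift for the
# carries -- with no notion of "columns" as humans learn them.
def circuit_add(a: int, b: int) -> int:
    """Add two non-negative integers using only bitwise operations."""
    while b:
        carry = (a & b) << 1  # positions where both bits are 1 produce a carry
        a = a ^ b             # XOR sums the bits while ignoring carries
        b = carry             # feed the carries back in until none remain
    return a

print(circuit_add(1234, 5678))  # 6912 -- same result, utterly different method
```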

But if we are to build safe AI, we need AI that is competent in knowing how humans think. Artificial empathy is still far away; artificial fluency, however, may already be here. The issue of AI alignment ultimately requires artificial agents that have empathy for the human condition. But the human condition involves not just individual humans but an entire civilization. Hence AI alignment is the same problem as human governance.

A rough guide to the evolution of artificial intelligence: artificial logic → artificial intuition → artificial fluency → artificial empathy → artificial judgment. This is a slight deviation from my proposed AGI Capability Roadmap.

The surprising revelation about artificial fluent systems is that they seem to shortcut Moravec’s paradox. There is clearly information implicit in human language that captures some experience of this world.

Perhaps language captures rote adaptiveness.

‘Rote adaptiveness’, the capability of being adaptive automatically, sounds like an oxymoron and has not been well investigated. Yet this capability is critical to general intelligence. How can a skill that is performed without comprehension also be adaptive? This sounds counterintuitive, yet we see it all the time in the field of software development.

Decades ago, it was well understood that software development should not be run like a factory floor. Instead, software development is more like a discovery process. Furthermore, we can invent processes that accelerate this discovery. In other words, good software development involves the automation of discovery. So rote adaptiveness is not a vague idea; there are plenty of examples of processes that improve the navigability of the unknown.

In fact, navigation is an apt metaphor for rote adaptiveness. Effective navigation demands good tactics to avoid walking in circles. The reason I find rote adaptiveness interesting is the realization that any collectively intelligent whole must have adaptive parts to operate robustly and effectively. But the parts themselves cannot be intelligent; otherwise we have an infinite regress.
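As a toy illustration of such a tactic (mine, not the article’s), consider a grid walker whose entire rule is: among cells it has not visited, step toward the goal; if boxed in, backtrack at random. The rule is rote, requiring no model of the maze, yet the behavior adapts to whatever walls it encounters and never circles.

```python
import random

# A toy tactic performed without comprehension: the walker never models
# the maze; it only remembers where it has been and prefers novelty.
def explore(walls, start, goal, max_steps=10_000):
    dist = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    pos, visited = start, {start}
    for step in range(max_steps):
        if pos == goal:
            return step  # reached the goal; report how many moves it took
        x, y = pos
        moves = [m for m in [(x+1, y), (x-1, y), (x, y+1), (x, y-1)]
                 if m not in walls]
        fresh = [m for m in moves if m not in visited]
        # The whole tactic: among unvisited cells, step toward the goal;
        # if surrounded by visited cells, backtrack at random.
        pos = min(fresh, key=dist) if fresh else random.choice(moves)
        visited.add(pos)
    return None

# A vertical wall at x=2 forces a detour the walker was never told about.
walls = {(2, y) for y in range(-5, 5)}
print(explore(walls, start=(0, 0), goal=(5, 0)))
```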

So we must build a library of tactics that lead toward greater adaptiveness. Tactics that can be performed without the need for comprehension. Tactics that can be performed by non-intelligent parts. How does one build a system that is competent at addressing uncertainty? At a minimum, you need something that can learn (without comprehension). This learning system must then train against complex adversaries. Does this not remind one of a GAN?
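Here is a minimal sketch of that adversarial pattern, assuming PyTorch; the toy target distribution and the network sizes are my own choices for illustration. The generator never comprehends what the target is; it merely adapts under pressure from an adversary.

```python
# Minimal GAN sketch: a learner without comprehension (the generator)
# adapts under pressure from an adversary (the discriminator).
# Toy task (an assumption for illustration): emit samples from a 1-D
# Gaussian without ever being told what a Gaussian is.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps noise to candidate samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: the adversary that judges real vs. generated samples.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # samples from the target distribution
    noise = torch.randn(64, 8)
    fake = G(noise)

    # Train the adversary: label real as 1, generated as 0.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Train the learner: fool the adversary into labeling fakes as real.
    opt_g.zero_grad()
    g_loss = loss_fn(D(G(noise)), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# The generator's output statistics drift toward the target (mean 3.0, std 0.5).
with torch.no_grad():
    samples = G(torch.randn(1000, 8))
    print(samples.mean().item(), samples.std().item())
```

Neither network is intelligent on its own, yet the pair adapts: the adversary supplies exactly the complex training context the paragraph calls for.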

But there’s more: doesn’t language itself codify the recurrent interaction patterns between agents? Isn’t language an adaptive blueprint of instructions for the imagination?
