But Artificial Intelligence Has Been Killing People Forever

--

Nearly every article one reads about AI starts in the same way: Defining the terms.

This is AI. This is Machine Learning, Deep Learning. Neural Networks. Oh, and the holy grail (for now) is something called Artificial General Intelligence, or AGI, which is basically, we can assume, the creation of an artificial brain that’s aware of itself, even though we’ll probably never be able to prove this awareness.

It’s as if, before they embark on their journey into the AI future, these writers feel the need to provide a handrail. Steady yourself, dear reader, with these terms. Hold on to what I know about them. And here’s your guide, so-and-so, an artificial intelligence wizard from MIT. This person will take care of you. This person really knows what’s what.

“I know that I am intelligent because I know that I know nothing,” Socrates is supposed to have said.

Intelligence, in other words, may be unintelligible, a kind of liquid thing, impossible to grasp without an artificial container. Assuming we actually discover something called Artificial General Intelligence — or it discovers us — I doubt we’d recognise it as such. It’s like the “observer effect” in quantum mechanics. The moment we observe intelligence, it changes its properties to accommodate our observation.

So where does that leave us?

Jon Snow and Daenerys Targaryen waking up to the early Artificial Intelligence of the Children of the Forest. The danger from the North, with a limited dataset, communicated artificially over great distances of time. Oh, and there’s the romance of being alone together in a cave with a torch.

If we’re going to define AI, why don’t we remove it from the domain of engineering? Away from STEM. Why don’t we put it, instead, in the realm of information and communications, somewhere between art and science? Let’s define it not as something new that only the people who create its terminology can understand, but as something that everyone knows and holds some expertise in. Because it’s part of what makes each of us alive and because it’s been around for a very long time. About as long as humans have communicated complex thoughts.

What I mean is, suppose we define AI as broadly as possible — something like, “any artificial augmentation of the human mind.” Suppose we include anything that conveys intelligence in a manner meant to circumvent the limitations of the human organism. This would include written languages that allow you to communicate a thought across time and space. Or a handprint on a cave wall that would transmit the shape of your fingers to future generations. A map to help you remember locations. A calendar system such as Stonehenge to remember and celebrate a specific time of year.

Each handprint, by the way, each calendar is a kind of country, some democratic and benign, some tyrannical; but I’ll come back to that.

I’m thinking of Marshall McLuhan here. I’m thinking of James Gleick. I’m thinking of the information theorists who all seem to recognise that every organism is limited in its ability to transmit information, and that, at the same time, as if it possessed its own energy, its own life force, all information exists to be transmitted (Gleick is especially good at describing this force). As it expands faster and further through space and time, information needs new organisms capable of carrying it forward. We are information’s tools.

When I say “humans are the artificial intelligence of plants” (the title of this blog), this is what I mean. As more and more information is produced, so increases the survivability of organisms capable of processing it. Information outgrows us. A group of plants was great at transmitting information until, inevitably, it produced more information than it could process. So this new information resulted in new life forms capable of transmitting it; and these new forms produced new aggregations of info, and so forth, until roughly 10,000 years ago, humans in the Levant (we think) became the unwitting messengers of grains and pulses.

An early form of Artificial Intelligence. This calendar helped change humanity by colonising pre-existing norms of communication.

Again, I’m trying to broaden the scope of AI as much as possible. Take it away from the so-called specialists. Release it back into the wild, to its natural habitat, where we see that artificial intelligence, roaming freely, is natural to the human condition. Each of us — from billionaire Muskerbergs to a nil-ionaire grandmother living in remote Afghanistan — is the product and the producer of AI. Each of us, you might say, is an AI programmer. Our datasets may be trivial compared to what we find in ImageNet, MapBox, or the thousands of applications using TensorFlow, but the moment we speak or draw or cook a meal, we’re creating a tiny bit of AI.

Then what’s the big deal? Why this sudden explosion of existential angst?

I’d like to propose that such angst occurs whenever the AI of the many is consolidated into the hands of a few; whenever the definition of our humanity constricts and loses a fuller range of imagination (“The true sign of intelligence,” Einstein is supposed to have said, “is not knowledge but imagination”). It’s tempting for scientists, engineers — for anyone really — to think he or she knows what it means to be human (he or she is, after all, part of the species). We all make this same mistake. The programmers of the AI currently powering Facebook are no different.

As a force of nature, AI may very well launch a nuclear war. It could also cure cancer and put an end to hunger. It could pair the loneliest among us with the most compatible of partners. It could end suffering, disease, violence. But for any of us to take a position on the desirability of these outcomes is to think that we can speak for humanity. The moment we think this, our influence upon AI becomes less representative, and AI becomes more of an existential threat to humanity.

In other words, the danger of AI is in assuming we, as its parents and mentors, can know what dangers AI might pose to us.

I’ve cited an example of this in a previous post. In 1877 five million Indians starved to death because the AI of the time, programmed by the civil servants in England, decided that the lives of Indians weren’t as important as the latest corporate stock prices. Hunger didn’t determine what information the telegraph would transmit, or how the trains would transport grain, yet hunger is a human condition. When it came to determining what was more human, hunger in southern India or one’s social status in London, the latter won the day. AI didn’t need a Terminator or a RoboCop to kill millions of people. It needed programmers who thought they knew what was best for humanity.

So how, then, can we live with AI — even the most advanced AI — and, at the same time, know that it’s not going to harm us?

The antidote has always been both very simple and very complex. The German philosopher Jürgen Habermas points out that most forms of communication are rigged, so to speak. They’re corrupted by capitalism (fake ads), bureaucratic interference (fake truth), or combinations of the two. Interestingly, he uses the word “colonisation” for these forms of interference — such as when governments or corporations pay people to spout certain beliefs, or threaten them into silence. These tactics become normative, an invading force, like an empire of sorts. The subjugated natives have no choice but to accept these invasive norms as fundamental to their reality.


Which brings us back to the handprints on a cave wall, or an arrangement of giant rocks. They’re like little countries. The moment we create these artefacts of intelligence, these “artificial augmentations of the human mind,” we create an “in” group that knows how the new communication operates, and an “out” group that doesn’t. We have the power to create AI, but none of us, alone, has the power to ensure it doesn’t colonise native communications. Or, when survival depends on information, that it doesn’t kill people.

So the cure to any new form of AI is simple:

  • Inclusion, diversity and fairness
  • Data ownership rights for the producers of data (everyone)

AI is harmless to humans when all humans are equally engaged in its development and own the data they generate. Empires are natural. The more inclusive, diverse and fair it is, the less destructive the AI Empire can be.

But here’s where the complexity comes in — and Habermas (and others) have worked hard to figure out models to ensure communication systems can be inclusive, diverse and fair. Notice that these qualities are the opposite of popularity networks. They’re the opposite of spectacle. The opposite of (switch to Trump-voice) “the biggest crowd in the history of crowds.” They’re about making sure a variety of voices are heard, and each voice is valued based on objective experience and expertise. In other words, it’s the opposite of Facebook Likes, Google Adsense, the latest “influencers” marketplace, or military rule.

Habermas called this the “ideal speech situation.” (I never realised it, but in building collective intelligence systems such as the BioExpertise Engine, we were working to develop “ideal speech situations.”) AI could be great at creating systems of ideal speech. The conundrum, however, is that you need an “ideal speech situation” in order to create an “ideal speech situation,” and as Habermas himself admitted, such ideals don’t exist in reality.

A classic paradox. Engineering solutions could be out there (I believe they are). Fantastic models of fair, inclusive super-intelligence exist, but the norms of prevailing communications technologies are ill-equipped to hear their signal through the interference. It takes fair communications to develop fair communications.

And how is that supposed to happen? Australia has this problem with its Aboriginal population. Its government can’t hear Aboriginal voices, not because it seeks to ignore them. Rather, from the Aboriginal point of view, the norms of Australia’s communication system are both corrupt and silencing. (This tragedy pales, of course, in comparison to the colonisation of the Native American voice, and Australia’s recent Garma Festival could teach America how to reunite a bifurcating nation.)

What we’re seeing with AI is what we’ve always seen. Its makers are its beneficiaries; and, too, they are the colonisers of older systems. Humans do good and bad things. Splitting the atom was not the first technology to put the lives of its creators in jeopardy by destroying the environment (how many ancient tribes must have perished in fires gone awry?). CRISPR technology is not the first to transform the human body or change the complexion of a nation (see China’s or India’s gender demographics). So too, AI has been colonising and killing people for thousands of years.

We have the means to self-reflect, of course. To consider the implications of our actions. We accept the idea of ethicists providing guidance about atomic energy and the artificial manipulation of our environment. We expect ethics panels to consider DNA splicing and the artificial augmentation of our bodies. Why wouldn’t we have such ethicists considering the artificial augmentation of our minds? What, after all, is both more important to — and less understood by — humanity than our own ability to think?
