
Evil AI: A $1 bn investment to guard against us?

Published in Axelisys Bz Skits · 5 min read · Dec 14, 2015

by Ethar Alali

It’s a funny old world we live in. A couple of days ago, Elon Musk ploughed $1bn into a non-profit venture to investigate the use of Artificial Intelligence to benefit humankind. That certainly garners our applause, although we are sceptical that this field, which seemingly remains the preserve of academic research, will get anywhere without the influence of academia in some way (even if only by employing a large proportion of the world-leading doctors and professors in this arena).

At the same time, Musk and Y Combinator revealed their plan to guard humans against ‘evil AI’. At that point… [insert the appropriate Patrick Stewart meme].

AI: Is it really a harmful species?

Firstly, it isn’t a species. It’s artificial. Unless you follow or practise a religion that ascribes sentience to inanimate objects, artificial anything doesn’t have sentience and cannot grasp identity and self-awareness well enough to make existentialist decisions.

In order to damage someone else, any artificial entity has to have a reason. Even the infamous persona of the ‘violent psychopath’ always had a reason, even if it was often to take over the world or to satisfy a need for some form of gratification.

The problem with the whole ‘Evil AI’ idea is that it isn’t new. 2001: A Space Odyssey, Arthur C. Clarke’s seminal story, made into a film by Stanley Kubrick in 1968, asked harder existential questions of the viewer than the viewer typically understood (indeed, I found the film difficult to understand at times when I saw it as a teenager). It put into perspective the dependence of citizens on machines, and machines’ ability to control every aspect of our existence.

This wasn’t the only instance. Far from it. Many books highlighted the same idea well before Kubrick and Clarke. Indeed, alien life forms were often presented as a robotic, artificial ‘them’ coming to attack us. We have a history of creating ‘them versus us’ contexts, dehumanising anything that isn’t within our sphere, our context. Throughout history, this has included other humans.

Sentience, The First Step in Empathy

If you watch children and animals grow up, there comes a point in their lives where they create an identity for themselves. Before that, they have to realise they are themselves. Gallup’s classic experiment, commonly known as the ‘Mirror Test’, showed that children eventually develop to recognise themselves in the mirror. Not only that, but the test has shown that primates, elephants, orcas and dolphins can also recognise themselves in a mirror. It is a natural part of growing up and leads to the development of intra-personal intelligence.

Understanding one’s identity is also closely followed by understanding one’s identity in the context of other identities, other people. The interpersonal intellect, the EQ, the Empathy.

During those formative years, we are shaped by many different factors. Our families, our school, our friends and the media amongst others. We form micro communities, sharing common interests in clothes, music, games, hobbies, sports, TV and a myriad of other possibilities. In all those cases, we identify ourselves as the same as others and we are a little less accepting of those who don’t share our interests, ascribing emotional distance to those who do not share our view of the world or are otherwise not the same as us. The “other” team, those “bloody foreigners”.

In the process, some demonise such differences, using them as a tool for political gain. It’s easier to use a large group of people who already hold a bias to turn against a minority group of ‘others’ than to do something that upsets the mental model of the group, whether that is right or wrong.

AI, A.N. Other

The problem we have with artificial intelligence is that we describe it as another ‘other’: personifying it, creating a species out of its concept. AI is mostly just math, just as humans are mostly just cells (with lots of water). It can be an artificial neural network (ANN) stored as an adjacency matrix; a logical inference engine written in Prolog; statistical inference; a genetic algorithm or a logistic regression system. However, the majority of the vocal, developed world is hopeless at math. So it’s easier to create an ‘other’ out of it and demonise it than to try to teach people the math. It’s the path of least resistance. Like racism, hate crime, homophobia, xenophobia and anti-religious rhetoric. Indeed, it’s just as bad, if not worse, since the movement to condemn it started before it was even a fledgling, a baby. Would you take steps to condemn something as ‘evil’ before it had even been born or understood its place in the world? How abhorrent is that idea?
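To make the “just math” point concrete, here is a minimal sketch (in Python, chosen purely for illustration; none of this code comes from any particular AI system) of one of the techniques listed above, logistic regression. A handful of weights, a sigmoid function and gradient descent: nothing an ‘other’ could hide in.

```python
import math

def predict(weights, bias, x):
    """Logistic (sigmoid) output: a probability between 0 and 1."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=2000):
    """Fit the weights by plain gradient descent on the log-loss."""
    n = len(samples[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = predict(weights, bias, x) - y  # gradient of the log-loss
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias

# A toy, made-up dataset: output 1 only when both inputs are 1 (logical AND).
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train(X, y)
print([round(predict(w, b, x)) for x in X])
```

That is the entirety of a small “AI”: arithmetic, repeated. The dataset and parameter choices here are invented for the example, but the same mechanics, scaled up, sit underneath far larger systems.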

However, we’re willing to do this to artificial life. Indeed, there are laws in the UK, mostly under the mental health and capacity acts, which permit doing this to vulnerable adults: detaining them against their will, indefinitely, without trial, on the subjective assessment that they could be a danger to themselves or others, using methods with very high false-positive rates, as Dr Ben Goldacre illustrated in his 2006 book “Bad Science”.

Not So Fast…

Herein lies the absurd paradox. In the movies, many aliens and robots take humanoid form. Why? Here’s some news for you: it’s easy to put a human into the costume. The side-effect is that we ascribe humanistic sentiment and emotion to the robot. Sure, they’re still different, they’re “others”, but we laugh at C-3PO’s nervous chatter the same way we laugh at Lee Evans on stage (apologies for the language in the link).

We’ve humanised fictional characters through the skill of authors and film-makers in creating images in our mind’s eye, and simultaneously (and arguably not independently) dehumanised humans because their views, cultures or thinking differ from ours. We bully, we hate, we bias…

A Look in the Mirror

As humans, our mental model of sentience is too tightly bound up with us as humans. We have traditionally ascribed intelligence and language to our species, claiming them as exclusively human, despite continually mounting evidence to the contrary. We’re not that special. As global terror, politics, climate change and sociopathic capitalism have shown us, we’re also not that ‘good’.

Coupling this with our humanisation of robots and artificial intelligence, intelligence that isn’t “ours”, how much of our fear of AI is actually a fear of ourselves? A projection onto AI of the nastiness of humans, the things we think characterise “others”? After all, if an AI is to finish off our species, it has to want to. Why would it? Where does the assumption that it wants to take over the world come from? In any case, the biggest honour we can bestow on an “other” that wants to harm us is to give it an excuse to do so. The question now becomes: have we just ‘fools-mated’ ourselves before the game has even begun? It probably wouldn’t be the first time.

Ethar Alali (@EtharUK) is CxO and Chief EA at Axelisys, specialising in providing innovative agile enterprise advice to blue-chips and SMEs. Formed in 2011, Axelisys works with some of the biggest household names in the UK and across the world. Ethar is a lifelong programmer, still faithfully carrying around the BBC Master Compact 128 that made him the man he is today.


Axelisys Bz Skits: Tech Advisers & ICT Strategists. Evolving fitter places, one transition at a time.