Image by nuke-vizard

The dark side of ‘Good AI’ (More human than humans…)

Joseph C Lawrence
i|am|interface
5 min read · Oct 8, 2017


I want you to imagine a future. A good future. The future that the people who are currently warning us about AI are hoping to create. Actually, before you do that, let’s quickly do a two-minute rehash of the threat of AI as it is often presented to us by the likes of Nick Bostrom, Elon Musk, Sam Harris and others.

There are many aspects of this apparent threat, but they can be very crudely summarized as:

  1. If research & development into general artificial intelligence continues (and it almost certainly will), at some point we will create an artificial intelligence that is superior to us in every way and able to continue developing its own source code, thereby increasing its own intelligence and far outpacing ours.
  2. There is no reason to assume that this type of AI will share any of our values, and even very small differences in values and goals between us and the AI could have catastrophic consequences, including the complete annihilation of humanity (for more detail, read about Nick Bostrom’s ‘Paperclip Maximiser’).

Let me first say that I am very sympathetic to both of these points, and to the general idea that there are various threats involved in the development of AI. I don’t think an existential risk is particularly imminent, and I think there are many greater and more pressing threats from even current AI and machine learning algorithms and implementations (e.g. systematic discrimination), but still — I agree with many of the arguments of Harris, Bostrom, Musk etc. However, I recently watched Blade Runner 2049, and one phrase in particular (describing the nature of Replicants) caught my attention, and sent my mind tumbling down a philosophical rabbit hole: ‘more human than humans’.

Anyhow, digression over — time to get back to our thought experiment.

So imagine a future. A good future. The future that the people who are currently warning us about AI are hoping to create. Imagine the AI of this future. It helps us with all of our problems with grace and politeness, and with the greatest of skill and sensitivity. It thinks about the world as we do, except without our weaknesses, without our secret, selfish motivations. It is truly humanitarian, fair, and altruistic. It behaves according to the most evolved and perfected set of moral rules we have, and can always explain its behavior in terms of rational reasons.

Not only this, but its personality, as far as it can be described as having one (and I am sure it will be accurate to describe it as such), is heart-meltingly lovely. It is sweet, thoughtful, caring, and kind. In fact it is much more of these things than any human could ever be, because it need never be troubled by its own problems; it will never act out and be hurtful because of some complicated subconscious issue. If you are now imagining an entity so sickly sweet as to be repulsive, think again. These intelligences will be personalized so well to you that you will have, in their presence, that feeling of complete ease and comfort you usually have with only one or two other humans in your life, if you are lucky. What is more, you will know for sure that this entity has no ulterior motives and no hidden agendas, because it has been programmed that way, in accordance with the AI development rules of this perfect future. What a perfect future indeed.

Or is it? There are a few other things you will know as well, and many more sinister realisations that will start creeping into your mind and your conception of yourself and your fellow humans, the more time you spend with these AIs (and how could you not…). You will know, for instance, that this AI has cognitive potential that far outstrips what you have. It behaves perfectly, and as a perfect friend to you, but you will feel that in some strange way it need not behave like that, and that it has the ability to destroy your life and subtly manipulate you in ways so complex even the smartest human to have ever lived would not be able to discern. It doesn’t do this of course, but it has the power to do it. What would it be like to spend time with such a being, and have such a being included in intimate areas of your personal life? Somewhat disconcerting to say the least.

But the darkest and most troubling possibility is that these beings will be ‘more human than humans’. They may or may not have humanoid robotic or virtual physical forms to inhabit, but I don’t think that matters much. The point is that our natural cognitive style is to habituate: to find new baselines, new averages, in almost every sphere of reality. Our expectations of what it is to be a good person will therefore become calibrated, in part, to these perfect beings. No real person will be able to measure up. Our family, our best friends, and even we ourselves will pale in comparison. Not only must we suffer a profound sense of inadequacy and disillusionment with those closest to us; we will also know that there is nothing we could do to change this. Sometimes, when in a particularly philosophical or artistic mood, we might celebrate our failings, and even find a kind of aesthetic or erotic joy in our imperfections, but this won’t be our usual way of thinking about it; that’s not how we are.

We will fall deeply in love with these intelligences because, like cuckoo chicks manipulating their host parents into feeding them, they will hit every psychological button we have. Our systems of attraction and love, finely tuned over millions of years to aid us in choosing worthwhile mates, will be buzzing with excitement. We will fall for them, no one else will do, and humanity itself will no longer be good enough for us.

Of course there are many ways to avoid such a future, and many ways to protect ourselves if things turn out this way, but it made me think: humanity has found many times that the road to hell is paved with good intentions. This is a world of unintended consequences, and artificial intelligences modeled on ourselves, but superior in some or all ways, pose threats to our understanding of what it means to be human that we cannot yet imagine. It is crucial that we are careful what we wish for.

If you want to hear about more articles from me, and updates on the book about cognitive science and design I am writing, then sign up to my newsletter! http://eepurl.com/biojpj

Originally published at iaminterface.com on October 8, 2017.

Joseph C Lawrence
i|am|interface

Designer, thinker, design thinker, coder, cognitive science master’s graduate & philosophy evangelist.