The future of machines

John Vary
Published in Room Y
4 min read · Jan 26, 2018

In one of my previous posts, I talked about the world in 2030, with extracts from a piece of work I had been doing to provoke discussion around potential future scenarios. One of the themes I talked about was a future where we have machines that are capable of conscious emotional responses.

I wanted to take this opportunity to elaborate on why I chose this theme and hopefully provoke some discussion amongst this community.

Firstly, it is clear that Artificial Intelligence (AI) will, and has already started to, fundamentally change the world we live in. Secondly, it is clear that, today, AI is still in its infancy, but the rate of development is phenomenally fast.

Now for a little context around why I chose this theme.

In July of last year I came across a picture from 1996 of chess grandmaster and world champion Garry Kasparov sitting at a table opposite a unique opponent: Deep Blue, a chess computer developed by IBM. What struck me was wondering what was going through Kasparov’s mind, sitting there against a machine with no visible history, no strategy to read and no identified weaknesses. This is amplified when you take into account that Deep Blue had access to every game Kasparov had ever taken part in. Even back then, in 1996, the possible opportunities for humans must have seemed endless.

Fast forward twenty-two years, to 2018, and the capabilities of these types of agents have grown radically. For example, DeepMind (a British AI company) demonstrated the power of its AI (deep reinforcement learning, to be precise) by playing Atari Breakout, an old arcade game in which the player bounces a ball from the bottom of the screen to knock out bricks towards the top. To start, the machine was given only the sensory input (everything you see on the screen) and was monitored over a period of time. After 240 minutes of learning it had gone beyond expert skill and had worked out that the quickest way to win the game was to build a tunnel down the side of the bricks.

A slight detour, but there is a connection I am trying to make. I have a five-year-old son called Noah. Noah is constantly being educated, both at school and at home, on the need to be kind, caring, creative, curious and independent. As parents we have never told him to go and win or to compete against others; we want the first behaviours and skills he develops to be about what it means to be human.

The reason I mention this relates back to the Atari Breakout example above: after ten minutes of learning, the agent was very childlike in how it played, missing the ball regularly. That is pretty much how Noah would play and learn the same game, and I believe we should educate these agents in the same way we have been educating Noah: to be kind to others, to support others, to be caring of others. At the end of the day, we educate Noah in this way so that he has a positive impact on the world as he grows. It should be no different with machines that are continuously learning.

Looking forward, I strongly believe that a human-AI combination will perform much better than either humans or AI working alone. I also accept that this topic will divide opinion, but as I said at the start, my objective is to provoke discussion.

With this in mind, I will finish this post with the Three Laws of Robotics, written by Isaac Asimov in 1942.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

If you enjoyed this post, please clap or even comment — I’d love to hear your views.



Futurologist at the John Lewis Partnership.