The AI that reflected.

Part Two: Don’t let AI play video games!

Naturally, there is scepticism about this little-understood technology that we have created. Perhaps this is partly because even people operating at the forefront of its development are surprised by its capacity to learn. Which brings us to DeepMind, Google’s AI research centre. If you have not heard of DeepMind, the headlines are that it was a start-up bought by Google for $500m and is widely seen as the world leader in artificial intelligence. One of its founders, Demis Hassabis, has a life mission for AI, informed through the lens of video games: he wrote his first game aged eight, and founded the games company Elixir after graduating with a double first in computer science from Cambridge.

Herein lies a problem. To teach AI to play games is to teach it a win/lose scenario, which in essence is what Space Invaders, chess, Go and so on are: contests that pit one player or team against another to see who wins. This may be life viewed through a male, capitalist prism, but it is not life in the round, or in any philosophical sense.

If we teach AI on a win/lose, problem-solving basis, we ignore the vital emotional side of humanity: connection, communication, and growth in order to improve rather than growth in order to win. In a society that values confidence over quality in job interviews (and US elections), we need to recognise that whatever we build will reflect the values of its creators when it learns to think for itself.

This is something that the other founder of DeepMind, Mustafa Suleyman, seems to appreciate more. At 19, he dropped out of Oxford to set up the Muslim Youth Helpline, a telephone counselling service helping young Muslims overcome barriers around employment, sexuality, mental health and more. He helped start Reos Partners, a conflict-resolution consultancy, and has also worked for the UN, the Dutch government and WWF as a negotiator and facilitator.

It is Suleyman who talks in interviews about the broader and more nuanced applications of AI and its relationship to social impact. As he said in an interview with the FT: “We learn so much about the strength and weaknesses of our algorithms by testing them on large-scale, real-world, noisy and messy data sets… It’s a pretty unique way to make progress with our toughest social problems.”

AI needs to connect with the world in all its messiness, because only by understanding the world can it engage with our real, infinite and constantly changing problems. We can learn many things from games and from game theory, but the focus should be setting AI the challenge of solving London’s traffic problems, our reliance on fossil fuels, hunger, or perhaps even the Human Condition. Games are not a good starting point when the goal is philosophical.

Final Part…Keep going!
