Testing Artificial Intelligence

Marcus Preston
Psyc 406–2016
Mar 22, 2016

I’ve always been intrigued by Artificial Intelligence (AI): the idea that something non-human, something not even remotely biological, could have some form of intelligence, or cognition.

Intelligence is a construct, and it is so difficult to define properly precisely because it is conceived so broadly. It is perceived differently, and acquired differently, in every culture.

In general, the study of artificial intelligence asks how well an AI adapts to a particular environment. Artificial Intelligence is designed to replicate the ‘intelligence’ of human beings. The concept goes back to Alan Turing, after whom the ‘Turing Test’ is named: a test of intelligence in a computer with a single goal, to convince the tester that it is not a computer but a person.

Apple iPhone Software ‘Siri’ responds with a joke.

These days, many programs can handle simple Turing-style exchanges. Just ask Siri, Cortana, your Galaxy, or any smartphone you have access to. Ask them what they are, what they like to eat, or what the meaning of life is: they’ll give you a very believable answer.

Artificial Intelligence is not uncommon; it is all around us, if we take note. One piece of AI that has been especially notable recently is the software AlphaGo.

Google Deepmind software AlphaGo

AlphaGo is a program designed by Google DeepMind to play the ancient Chinese board game Go.

But if it’s already so difficult to test for some general ‘intelligence’ in people, how would it be possible to test a computer for it? Go, researchers at Google DeepMind have concluded, offers a way.

Go is a traditional Chinese game created over 2,500 years ago, with a very simple objective: surround more territory than your opponent. Yet it is a complex and intricate game. A typical turn offers around 200 possible moves, roughly ten times as many as chess. To put that in perspective, there are about 10^170 possible board positions, a 1 followed by 170 zeros.

That’s more than the number of atoms in the universe.
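The comparison above can be checked with simple arithmetic. The sketch below multiplies out rough game-tree sizes; the branching factors and game lengths used are commonly cited ballpark figures, not numbers from this article, so treat the output as an order-of-magnitude illustration only.

```python
import math

def tree_size_exponent(branching: float, plies: int) -> float:
    """Return log10 of (branching ** plies), the rough game-tree size."""
    return plies * math.log10(branching)

# Commonly cited rough estimates (assumptions, not exact values):
chess = tree_size_exponent(35, 80)    # chess: ~35 moves per turn, ~80 plies per game
go = tree_size_exponent(250, 150)     # Go: ~250 moves per turn, ~150 plies per game
ATOMS_EXPONENT = 80                   # ~10^80 atoms in the observable universe

print(f"chess game tree: roughly 10^{chess:.0f}")
print(f"Go game tree:    roughly 10^{go:.0f}")
print(f"both dwarf the ~10^{ATOMS_EXPONENT} atoms in the universe")
```

Even chess, with its far smaller board, produces a game tree vastly larger than the number of atoms in the universe; Go's is larger still by hundreds of orders of magnitude.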

Traditional Chinese board game Go

The key with Go is that no simple algorithm can be used to win the game at any given point. Play can be entirely unpredictable, and professionals commonly say they made a move simply because ‘it felt right’.

What Go tests is intuition and adaptability, two things that seem impossible for an AI to achieve, and that are sometimes difficult even for an average person. AlphaGo was built to approximate them with two neural networks: a ‘policy network’ that proposes promising moves and a ‘value network’ that judges board positions, yielding a multitude of possible judgments from this split functionality. This is what gave it the capacity to beat European Champion Fan Hui 5–0, and top professional Lee Sedol 4–1.
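The two-network idea can be made concrete with a small sketch. This is not DeepMind's code; it is an illustrative toy showing how a policy prior and a value estimate can be combined to rank candidate moves during search (a PUCT-style score, in the spirit of AlphaGo's design). All move names and numbers below are made up for illustration.

```python
def score(value_estimate: float, policy_prior: float,
          visits: int, parent_visits: int, c: float = 1.5) -> float:
    """PUCT-style score: exploit moves the value net likes, but let the
    policy prior steer exploration toward promising, little-tried moves."""
    exploration = c * policy_prior * (parent_visits ** 0.5) / (1 + visits)
    return value_estimate + exploration

# Hypothetical candidates: (value-net estimate, policy-net prior, visit count)
candidates = {
    "D4":  (0.55, 0.30, 10),
    "Q16": (0.50, 0.45, 2),
    "K10": (0.40, 0.05, 1),
}
parent_visits = sum(v for _, _, v in candidates.values())
best = max(candidates, key=lambda m: score(*candidates[m], parent_visits))
print(best)  # Q16: a slightly lower value, but a strong prior and few visits
```

Notice that the ‘intuition’ here lives in the policy prior: a move the policy network likes gets explored even before the search has evidence it is good, much as a professional plays a move because it felt right.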

This raises the question: is intuition what separates humans from artificial intelligence? If so, why aren’t constructs such as intuition and adaptability given far more weight in how we test intelligence?

Or, perhaps another question: is this the way we should be testing people, through something more complex like an intuitive game?

Perhaps we should be looking more into ourselves — before testing computers to see if they can act like us — to really be able to identify what true intelligence is, and where it truly lies.

260572734
