Sunday Science: Artificial Intelligence (AI)

Red Dwarf’s famous Series 4000 Mechanoid: Kryten. Copyright Joel Anderson

This week, Google DeepMind’s AlphaGo AI beat the world’s number one Go player, Ke Jie. But what is Artificial Intelligence?

Artificial Intelligence (or AI) is the simulation of human intelligence by machines. It’s a buzzword that’s really entered the mainstream in recent times, but its origins date back to World War 2.

British mathematician Alan Turing and neurophysiologist Grey Walter first tackled the idea of intelligent machines some 70 years ago at the Ratio Club, an influential dining society for biologists and engineers.

While Grey built some of the first ever robots, Alan invented the famous “Turing Test”. Basically, if a machine could fool someone into thinking they were talking to another person, it would pass the test.
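To make that idea concrete, here’s a toy sketch in Python of the imitation game. Everything in it, from the canned answers to the coin-flip judge, is invented for illustration; the real test uses free-form conversation with a human judge.

```python
import random

# Toy sketch of the imitation game: a "judge" compares answers from a
# human and a machine without knowing which is which. Both responders
# below are illustrative stand-ins with identical canned answers.

CANNED = {
    "What is 2 + 2?": "4",
    "How are you feeling today?": "Pretty good, thanks for asking!",
}

def machine_respondent(question):
    return CANNED.get(question, "That's an interesting question.")

def human_respondent(question):
    return CANNED.get(question, "That's an interesting question.")

def judge(question):
    # The judge sees two unlabelled answers and must guess which one
    # came from the machine. Identical answers force a chance guess.
    answers = [("machine", machine_respondent(question)),
               ("human", human_respondent(question))]
    random.shuffle(answers)
    guess = random.choice([0, 1])
    return answers[guess][0] == "machine"

# If the judge's accuracy stays near 50% over many rounds, the machine
# passes this (very simplified) version of the test.
correct = sum(judge("How are you feeling today?") for _ in range(1000))
print(f"Judge identified the machine in {correct}/1000 rounds")
```

Because the two respondents give indistinguishable answers, the judge can do no better than chance, which is exactly the condition for passing.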

But it wasn’t until 1956 that the term “Artificial Intelligence” was coined, by computer scientist John McCarthy.

What is AI?

There are several different benchmarks for building AI systems. They fall into three broad categories: Strong AI, Weak AI and something in between the two.

Strong AI aims to genuinely simulate human reasoning. A strong AI system would pass the Turing Test in a mechanical heartbeat.

A strong AI system can think like a human and perform tasks on its own. There’s no need for a human to manually set the tasks it carries out; it can make decisions on the spot.

No strong AI machines exist in the real world, but Kryten from Red Dwarf is a sci-fi example of such a system.

He’s self-aware, can learn new skills and perform tasks independently, and shows emotions, as the below video shows. Yes, you would be able to distinguish Kryten from a real human by looking at him, but not if you were chatting to him without seeing him, in a computer chatroom for example.

Weak AI

Weak AI is a computer system that acts like a human but does not understand how humans think. It is non-sentient AI, which is focused on one narrow task.

Google DeepMind’s aforementioned AlphaGo AI is a good example. Although it beat the world’s number one human Go player, it did not play in the same way that humans do.* It’s the same for other weak AI systems like Siri or your car’s sat nav.
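As an illustration of just how narrow a weak AI system can be, here’s a hypothetical Python sketch of an assistant that handles exactly one task and nothing else. The function and its rules are invented for this post, not taken from Siri or any real assistant.

```python
# Illustrative sketch of "narrow" weak AI: a system competent at exactly
# one task (temperature conversion) with no understanding beyond it.

def narrow_assistant(request):
    words = request.lower().split()
    if "celsius" in words and "fahrenheit" in words:
        for w in words:
            try:
                c = float(w)
                return f"{c}C is {c * 9 / 5 + 32}F"
            except ValueError:
                continue
    # Outside its single task, the system has nothing to offer.
    return "Sorry, I can only convert Celsius to Fahrenheit."

print(narrow_assistant("convert 100 celsius to fahrenheit"))  # 100.0C is 212.0F
print(narrow_assistant("what is the meaning of life?"))       # Sorry, ...
```

However fluent a weak AI system sounds inside its niche, step outside it and the illusion of understanding collapses immediately.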

In-between AI

Then, we have something in between the two. Or In-between AI, as I’ll call it from now on. These systems may not perfectly model the human mind, but they use human reasoning as a guide.

The IBM Watson AI machine is in the in-between category. It’s essentially a chatbot that can answer your questions using natural language.

But I’ve had complaints about the lack of Lego in the last couple of posts, so I’ve got to use Iron Man’s cyber-butler J.A.R.V.I.S. as an example of in-between AI.

J.A.R.V.I.S. was originally a bit like Siri, a highly advanced language-based interface. Over the years, Tony Stark updated J.A.R.V.I.S. so that the system now appears to have a mind of its own.

We don’t know exactly how J.A.R.V.I.S. works, but he seems to build up evidence by sifting through thousands of pieces of information to give Tony a valid conclusion when asked a question. And he talks in a similar manner to a human, which makes him a true example of in-between AI.

In other words, J.A.R.V.I.S. (Just A Rather Very Intelligent System) makes decisions as a human would. He looks for patterns in the evidence to reach a conclusion by weighing up different elements.

Things aren’t black or white to J.A.R.V.I.S., just as they aren’t for human intelligence. But J.A.R.V.I.S. still needs a human to function, so it’s not yet** a strong AI system.

How close are we to a strong AI machine?

Some researchers have argued that the Turing test was passed in June 2014 by a chatbot called Eugene Goostman, which fooled people into believing it was a 13-year-old Ukrainian boy.

But some critics claim the test was too short, and that posing as a speaker of non-native English gave the machine an unfair advantage.

Either way, we’re a long way from building a self-aware machine that can call us a bunch of smegheads.

Extra reading and watching

Here’s a more in-depth discussion on the strong and weak types of AI (also known as general and narrow). Here’s a great video detailing the differences between strong and weak AI too:

Here’s a longer video from John Searle, in which he speaks about the philosophy of mind and the potential for consciousness in artificial intelligence at Google’s Singularity Network.

The future of AI is a fascinating topic, with many tech experts disagreeing on its future directions. It’s also interesting to consider how a computer system could evolve into a strong AI system.

To quote my favourite series 4000 Mechanoid: “Please sir, give me some credit. I am not the one-dimensional cleaning droid I once was; I’ve evolved into something far more complex and multi-layered and, if I may say so, superior.”

And here’s a final, fun example of the difference between a Strong AI system (Kryten) and a Weak AI system (the talking toaster):

Notes

* AlphaGo is a step up compared to other weak AI systems like IBM’s chess-master-beating Deep Blue. Other weak AI systems rely solely on constructing a search tree over all possible positions, but this wouldn’t work for a game of Go, where players take turns to place black or white stones on a board, capture each other’s stones and try to control the most territory. Go offers so many possible moves at each turn that an exhaustive search is infeasible.

Go is a simple game to describe, but far more complex than a game like chess. So, AlphaGo used an advanced tree search combined with functionality informed by the behaviour of the brain, called deep neural networks. I’d be tempted to say AlphaGo could be classified as in-between AI, but the experts disagree.
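The scale problem is easy to see with some rough arithmetic. Using commonly cited approximate branching factors (around 35 legal moves per turn in chess, around 250 in Go; both are ballpark assumptions, not exact figures), a quick Python sketch shows how fast the game tree explodes:

```python
# Back-of-the-envelope illustration of why exhaustive game-tree search
# fails for Go: the number of leaf positions grows as b**d, where b is
# the branching factor and d the search depth in plies (half-moves).

def tree_size(branching_factor, depth):
    """Number of leaf positions in a full game tree of the given depth."""
    return branching_factor ** depth

chess_leaves = tree_size(35, 10)   # looking 10 plies ahead in chess
go_leaves = tree_size(250, 10)     # the same lookahead in Go

print(f"Chess, 10 plies: about {chess_leaves:.1e} positions")
print(f"Go, 10 plies:    about {go_leaves:.1e} positions")
print(f"Go's tree is roughly {go_leaves // chess_leaves:,} times larger")
```

This is why AlphaGo pairs its tree search with deep neural networks: one network narrows down which moves are worth considering, and another estimates how promising a position is without searching all the way to the end of the game.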

** I’m describing an early version of J.A.R.V.I.S. here, not the subsequent versions seen in the Marvel world, which are closer to Strong AI, particularly F.R.I.D.A.Y., the AI system seen in the Age of Ultron film.

What is Sunday Science?

Hello. I’m the freelance writer who gets tech. I have two degrees in Physics and, during my studies, I became increasingly frustrated with the complicated language used to describe some outstanding scientific principles. Language should aid our understanding — in science, it often feels like a barrier.

So, I want to simplify these science sayings and this blog series “Sunday Science” gives a quick, no-nonsense definition of the complex-sounding scientific terms you often hear, but may not completely understand.

If there’s a scientific term or topic you’d like me to tackle in my next post, fire an email to gemma@geditorial.com or leave a comment below. If you want to sign up to our weekly newsletter, click here.
