The most interesting question asked about AlphaGo…
Like many people, I’ve followed the match between Google DeepMind’s Go-playing artificial intelligence AlphaGo and 9th dan Go master Lee Sedol with great interest. It has been a fascinating spectacle to me, even though I don’t know or play Go very well at all. As well as bringing global attention to the ancient game of Go, it has of course also sparked a huge amount of conversation about the current state of artificial intelligence and its imminent disruptive power.
The vast majority of this speculation has occurred outside of the official reporting and press conferences, which were marked, as they should be, by great respect and decorum.
Despite this, one particular question really stood out:
During the press conference following Lee Sedol’s sole win against AlphaGo — game 4 of the series — a reporter from NHK Japan addressed the following question to DeepMind co-founder and AlphaGo project lead Demis Hassabis:
“Today there was that sequence of AlphaGo moves which looked like an unfathomable mistake to even the experts, but they couldn’t dismiss it because mistakes have previously turned out to be advantageous. If this happens in real world usage, something medical where someone’s life depends on it, and even to experts it looks like a grave error, but people accept it thinking that there’s a bigger picture in mind, it will cause a lot of confusion. What do you think about that?”
Here’s how Demis responded:
“Well of course the first thing you have to remember is that AlphaGo is a prototype program. I wouldn’t even say it’s in beta. It’s not even in alpha, probably. So, of course, part of why we’re doing this match is to look at what those weaknesses are, and you can only do that in games, in Go, by testing against a very diverse range of opponents, who are extremely skilled. And there are not very many of them in the world. Lee Sedol is one of those. So that’s one thing I would say. Of course, also we’re playing a game. A beautiful game. But healthcare would be a different matter, and that would require, obviously, extensive, stringent testing in the normal way for software. But this is a one-off programme, you know, in a prototype phase and work in progress phase that we are testing here, so I think that’s a very different situation.”
His response is perfectly reasonable and correct. He’s right to point out AlphaGo’s current state of development and the category difference between playing Go and other applications. On the other hand, the techniques DeepMind employed in AlphaGo, and this tournament itself, are part of an attempt to explore the dynamics of general purpose artificial intelligence. So the NHK reporter’s question is absolutely relevant and, to my mind, gets to the crux of our developing relationship with AI.
Will we still trust the machine when our instincts tell us it’s making a mistake?
And what will that do to our sense of agency? Our psychology?
The prospect of general purpose AI raises many questions, but while the economic ones tend to grab the most headlines, I suspect it’s this one, about the psychology of trust, that may ultimately prove the most important and transformative.
Originally published at suspendedjudgement.net