The most interesting question asked about AlphaGo…
Chris Dymond

One experiment you might find interesting is one in which people followed a robot during what appeared to be a fire, trusting its judgment even after the experimenters had demonstrated that the machine was prone to errors.

For details, please see:

http://www.news.gatech.edu/2016/02/29/emergencies-should-you-trust-robot

At the same time, it is worth keeping in mind that even medical experts make mistakes, and in fact they do so all the time. So should we trust them? We hope that if someone is especially prone to mistakes, this will be caught, possibly through statistical analysis. Doctors may bury their mistakes, but hopefully the hospital is at least keeping track. I would hope that we take the same sort of approach to DeepMind. In fact, I believe they already are doing so in the UK, with respect to identifying early signs of the diseases that result in blindness.

Please see:

As with any new technology to be employed in areas involving risk, you should try to reduce that risk by first selecting problems where the stakes are lower; then, if the technology performs well in that context, gradually increase the risk of the problems you attack.
