One experiment you might find interesting is one in which people followed a robot during what appeared to be a fire, trusting its judgment even after the experimenters had demonstrated that the machine was prone to errors.
At the same time, it is worth keeping in mind that even medical experts make mistakes, and in fact they do so all the time. So should we trust them? We hope that anyone who is especially prone to mistakes will be caught, possibly through statistical analysis. Doctors may bury their mistakes, but hopefully the hospital is at least keeping track. I would hope that we take the same sort of approach to DeepMind. In fact, I believe this is already happening in the UK, where DeepMind's systems are being evaluated on identifying early signs of diseases that cause blindness.
Google's DeepMind and the UK's National Health Service (NHS) have recently teamed up to help doctors detect early signs… (techtimes.com)
As with any new technology deployed in areas involving risk, the sensible strategy is to start with lower-stakes problems; then, if the system performs well in that context, gradually move on to higher-stakes ones.