Air France Flight 447 is only one of many plane crashes where the autopilot disengaged, leaving the humans to deal with the mess. There’s an interesting NOVA special on the crash: https://youtu.be/OfURd8ItHRw
Two things to note here: (1) The autopilot disengaged precisely because there was no obvious, good solution to the situation the plane was in. (2) The head of the pilots’ union says automation is making pilots *less* capable of dealing with emergencies while simultaneously expecting them to do so and blaming them when they can’t.
I’m not saying that humans are less fallible; I’m saying just the opposite: humans are *more* fallible. That’s why they shouldn’t be expected to suddenly take over when the shit hits the fan. Humans are also *more* distractible. That’s why they shouldn’t be expected to monitor automated systems. (Here’s an article on *this* sort of problem: https://www.washingtonpost.com/archive/politics/1996/04/21/airlines-take-hard-look-at-automated-flight/31eb498c-a0ba-41e8-8c1c-a20b358b4421/)
Google had the right idea when they wanted to remove the steering wheel and pedals completely from the vehicle. That way when there is a crash, there’s no doubt who’s to blame.
What we are going to end up with instead are ridiculous systems like Tesla’s, where humans are expected to stay focused on the task of driving even though there is nothing for them to do most of the time, and then when an emergency happens, the company blames the driver for not wresting control of the vehicle back from the computer.
I’d love to have a discussion about this. I think it should be talked about much more in the self-driving-car literature.