My overarching point was that I expected the two points I listed to come up. It’s not that you missed them, you just went in a different direction.
My point about autonomous wipers was in the larger context of autonomous cars. After 100 years, we know windshield wipers “very well.” Yet current sensors and logic are unable to handle ambiguous situations like a very light drizzle well. How is the “brain” of a car going to handle an ambiguous situation of choosing to swerve off the road or into an oncoming lane of traffic rather than run over the pedestrian who just jumped out into my lane of the freeway?
Which leads to my other point. Right now there is one person behind the wheel. That person is liable, barring a technological failure where the manufacturer is liable. If my autonomously driven car in the previous paragraph is “behind the wheel” and chooses (that is the key word here) to run over Bob rather than swerve into an empty lane of oncoming traffic, has my car committed manslaughter or vehicular homicide? Someone had to program the car to make that choice. Is Susan, the AI programmer who wrote that code at Tesla, now liable for Bob’s death? Or Susan’s boss Deborah, who approved Susan’s code? Or George, who programmed the test scenarios for Susan’s code and said it didn’t fail? The point is, if no human can ever go to jail for choosing Bob’s death over an alternative, we can expect that Susan, Deborah, George and co. will give less thought to the life-and-death decisions their software is making — right now. And that is going to suck for those of us in the path of those vehicles.