Journalists are slamming into scientific studies, exposing a key flaw in media
I was writing a few days ago about science reporting in the media. Well, here’s a bit more on this. Just as Robert Scoble was writing yesterday about the issue of “techno skeptics” and the role of the media (bottom line: fear sells more papers/pageviews; full read here), I bumped into a highly sensationalised headline on Bloomberg Business: “Humans Are Slamming Into Driverless Cars and Exposing a Key Flaw”.
Reading the article reveals a bit more:
“Turns out, though, their accident rates are twice as high as for regular cars, according to a study by the University of Michigan’s Transportation Research Institute in Ann Arbor, Michigan. Driverless vehicles have never been at fault, the study found: They’re usually hit from behind in slow-speed crashes by inattentive or aggressive humans unaccustomed to machine motorists that always follow the rules and proceed with caution.”
So the self-driving cars were never at fault and the crashes were not high-speed ones. This would make it mostly a non-issue for any logical person. Personally, I would gladly trade these crashes to eliminate the high-speed ones. My first thought was that scientists studying this would do better to look into fatalities rather than just the number of incidents.
Well, reading the key findings of the study (Bloomberg linked only to the abstract; the full paper is here):
“[T]he current best estimate is that self-driving vehicles have a higher crash rate per million miles traveled than conventional vehicles, and similar patterns were evident for injuries per million miles traveled and for injuries per crash. Second, the corresponding 95% confidence intervals overlap. Therefore, we currently cannot rule out, with a reasonable level of confidence, the possibility that the actual rates for self-driving vehicles are lower than for conventional vehicles.”
And in the actual body of the study: “To date, no self-driving vehicles have been involved in any fatal crashes, compared with 0.5% of all conventional vehicle crashes”
To top this off, the study covers a small number of miles driven by automated cars and a total of 11 (eleven) crashes. Eight of them occurred while the car was stopped or moving at very low speed. And when looking at injuries, self-driving cars show a lower rate than human-driven ones.
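To see why a sample of 11 crashes can’t settle the question, here is a back-of-the-envelope sketch (my own illustration, not from the study) using a simple normal approximation to a Poisson confidence interval on the crash count:

```python
import math

# Illustration only (my own rough calculation, not the study's method):
# with just 11 observed crashes, a 95% confidence interval on the
# expected crash count is very wide -- which is why the study's
# intervals for self-driving and conventional vehicles overlap.
# Normal approximation to the Poisson interval: n +/- 1.96 * sqrt(n).

crashes = 11
half_width = 1.96 * math.sqrt(crashes)
lo, hi = crashes - half_width, crashes + half_width
print(f"95% CI on the crash count: ({lo:.1f}, {hi:.1f})")
# Prints roughly (4.5, 17.5): the "true" number of crashes over the
# same mileage could plausibly be anywhere in that range -- far too
# wide to declare self-driving cars more (or less) dangerous.
```

With an interval this wide relative to the estimate itself, the study’s own caution — that it cannot rule out self-driving cars actually being safer — is exactly what you’d expect.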
To summarise, we’ve got a case of a very early study, with a small sample, producing only partial results. If I were to make a generalisation from this small sample of a single article, I’d say it is evident that for (technology) journalists, accurate reporting is not the top priority. We’ve come to a point where, for better or worse, even when not in doubt, one needs to fact-check everything.
Originally published at The Markos Giannopoulos Blog.