In January 2016, a Tesla Model S in “Autopilot” mode crashed into a truck in China, killing the driver.
Another fatal Model S crash occurred while in autopilot in Florida a few months later.
In mid-March 2018, a self-driving Uber test vehicle struck a woman crossing the street in Tempe, AZ. She died later at a hospital.
On March 23, 2018, a Tesla operating in Autopilot mode crashed into a highway barrier, killing the 38-year-old driver.
Those are the fatalities so far. There have been dozens of non-fatal accidents and other incidents.
Regarding the March 23 Tesla crash, NTSB (National Transportation Safety Board) spokesperson Chris O’Neil said: “The NTSB is looking into all aspects of this crash including the driver’s previous concerns about the autopilot. We will work to determine the probable cause of the crash and our next update of information about our investigation will likely be when we publish a preliminary report, which generally occurs within a few weeks of completion of fieldwork.”
Though an acquaintance of the victim said that the victim had complained the car would sometimes veer toward the barrier, Tesla representatives say there were no logged complaints regarding Autopilot in their system.
Tesla’s announcement included this:
There was a concern raised once about navigation not working correctly, but Autopilot’s performance is unrelated to navigation.
However, according to Tesla’s own website, when their vehicle is set in Autopilot mode:
Once on the freeway, your Tesla will determine which lane you need to be in and when. In addition to ensuring you reach your intended exit, Autopilot will watch for opportunities to move to a faster lane when you’re caught behind slower traffic. When you reach your exit, your Tesla will depart the freeway, slow down and transition control back to you.
All you will need to do is get in and tell your car where to go. If you don’t say anything, the car will look at your calendar and take you there as the assumed destination or just home if nothing is on the calendar. Your Tesla will figure out the optimal route, navigate urban streets (even without lane markings), manage complex intersections with traffic lights, stop signs and roundabouts, and handle densely packed freeways with cars moving at high speed.
The word “navigate” is used four times on Tesla’s Autopilot page alone.
How, dear Tesla representative, is Autopilot NOT related to navigation?
What other navigation issues have been reported that would, according to you, have no bearing on Autopilot’s function?
Let us count the ways:
One owner’s post, titled “Tesla Navigation is Terrible,” begins: “Does anyone else have problems with the navigation system like this: Clearly it’s not trying to avoid traffic, 202 is…”
And this does not address the question of why Tesla’s camera and radar did not detect the barrier in the first place.
All of the positive momentum that has been building up will come crashing to a halt (pun intended).
Tesla, and other autonomous vehicle makers, need to be honest and open about the issues and the solutions, otherwise, they risk public and political ire. Why would Tesla try and say Autopilot performance is unrelated to navigation when it clearly relies on the proper functioning of the nav system to do its job? It is such an obvious attempt at misdirection (pun intended) that we have to question how Tesla’s PR department let it slide.
Tesla does, however, message its owners clearly to always keep their eyes on the road and hands on the wheel.
I must come clean. I love the goals and ambition behind Elon Musk’s 21st-century companies. I am a big fan of SpaceX, Solar City and Tesla. But I need to know: Where is the truth, Mr. Musk & co.? Tesla Motors?
Autonomous driving, Full self-driving, and Autopilot
These three terms should not be confused.
Autonomous driving is actually a five-level progression toward fully autonomous vehicle operation. Level 1 is adaptive cruise control; Level 2 adds lane keeping and some lane manoeuvring; Level 3 adds awareness of the surrounding environment, such as stop signs, stop lights, and pedestrians. Up to this point, there is no actual self-driving: the DRIVER is in charge. Level 4 is geofenced self-driving, and Level 5 is full self-driving.
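The progression described above can be sketched as a simple lookup table. This is a hypothetical illustration only; the level names and descriptions follow this article’s wording, not any official standard’s text:

```python
# Hypothetical sketch of the level progression described above.
# Names and descriptions follow this article, not the official SAE J3016 text.
AUTONOMY_LEVELS = {
    1: "Adaptive cruise control",
    2: "Lane keeping and some lane manoeuvring",
    3: "Awareness of surroundings: stop signs, stop lights, pedestrians",
    4: "Geofenced self-driving",
    5: "Full self-driving",
}

def driver_in_charge(level: int) -> bool:
    """Per the article: through Level 3 there is no actual self-driving,
    so the human driver remains in charge."""
    return level <= 3

for level, description in AUTONOMY_LEVELS.items():
    role = "driver in charge" if driver_in_charge(level) else "vehicle in charge"
    print(f"Level {level}: {description} ({role})")
```

The key boundary, for the argument that follows, sits between Levels 3 and 4: every mode below it is driver assistance, not self-driving.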
Tesla offers Enhanced Autopilot and Full Self-Driving packages in its vehicles. Enhanced Autopilot currently operates at Level 2+ and will add Level 3 capabilities soon.
A driver can currently use the driver-assist tools (Levels 1–3) with navigation in a Tesla if they purchased the Full Self-Driving package, although Tesla has not activated full self-driving in any production vehicle.
So it is important to note that every accident reported so far has occurred in those “driver assist” (Level 1–3) modes, in which the driver is supposed to be in control, hands on the wheel.
We knew this was going to happen. It was inevitable.
The rationale behind self-driving cars has always been sound: people tend to react too slowly or act illogically on the road. They can be reckless, and they don’t follow all of the rules. Take people out of the driving equation, let really smart computers control the vehicles, and everything should improve: fewer crashes, less traffic congestion, less fuel consumption.
In May 2017, a Morgan Stanley analyst team projected that if Alphabet Inc.’s self-driving venture Waymo has an 84,000-vehicle fleet by 2022 that has driven almost 4 billion miles, that could mean nearly 50 deaths at the current rate of auto fatalities (one death for every 80 million miles). There is an inherent flaw in this conclusion, however: Waymo’s driving tests are conducted in near-ideal conditions, and are therefore not directly comparable to the majority of the time U.S. drivers spend on the road.
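The projection itself is simple arithmetic, and it checks out against the figures quoted above:

```python
# Back-of-the-envelope check of the Morgan Stanley projection quoted above.
projected_miles = 4_000_000_000   # ~4 billion fleet miles by 2022
miles_per_fatality = 80_000_000   # current U.S. rate: 1 death per 80 million miles

projected_deaths = projected_miles / miles_per_fatality
print(projected_deaths)  # 50.0 -- "nearly 50 deaths"
```

The flaw, again, is not in the division but in the denominator: a fatality rate measured across all U.S. driving conditions does not transfer to a fleet tested mostly in ideal ones.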
But the systems being used by the self-driving experts at Tesla, Ford, Waymo and Uber are being built with the express purpose of being safer.
Ford is working on a video- and radar-based pedestrian-detection system built on 500,000 miles of driving data gathered from a dozen cars operating in various conditions on three continents. Unlike Tesla’s Autopilot, it will be self-contained and unable to download software updates.
Toyota, Subaru and Honda are all in the game as well, each with well-tested systems.
This stuff works. It has been tested, re-tested, and refined thousands of times.
The trouble in the case of pedestrian accidents will, most likely, come from the pedestrians themselves, or other “human” drivers sharing the road with the autonomous vehicles.
Why? Because our self-driving cars will, barring severe failures in hardware or software bugs not covered by back-ups, follow the rules they are programmed to follow. And they will do so without being affected by emotion (read: road rage), or fatigue, or distraction. Or medical conditions, or random last-second decisions, or curiosity.
And people are affected by those things all the time. Roughly 90% of traffic accidents today are attributed to human factors, and those causes would be almost entirely eliminated if self-driving cars ruled the roads.
One of the biggest problems, then, is that they won’t be the only kind of cars out there for a long time. For a long time, there will be some mixture of significant portions of autonomous vehicles and human-controlled vehicles. In essence, the highways and avenues of the world will be shared by huge numbers of 100% logical multi-ton machines and huge numbers of human-fallible multi-ton machines.
There will be accidents. A lot of them. Perhaps fewer than now, but they will happen.
The makers of self-driving tech are working hard right now to help their cars better learn and understand the myriad forms that weird, random, unpredictable human behaviour can take. What they’ve made will continue to improve and become safer. But it’s not really the autonomous vehicles we need to worry about. Just as it always has been, it’s the human factor.
Man vs. Machine
There will always be the matter of a natural human tendency toward mistrust of new technology.
Even the (comparatively) simple technology of many electrical systems in modern cars has resulted in a veritable pile-up of accidents and recalls.
Toyota’s “sticky pedal” incidents of unintended acceleration were linked to almost 90 deaths. Expert analysis in subsequent litigation traced part of the problem to a confusing muddle of what programmers call “spaghetti code” in the cars’ software: so many people had written so much code over such a long period, without proper documentation, that finding the fault became almost impossible.
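To see why that matters, here is a deliberately contrived illustration — nothing to do with Toyota’s actual source code, which is not public — of how hidden, shared state makes a safety-critical function hard to audit, compared with a version whose inputs are all explicit:

```python
# Contrived illustration only -- NOT Toyota's code, which is not public.

# "Spaghetti" style: behaviour depends on globals mutated elsewhere in the
# program, so reading this one function tells you little about what it does.
throttle_override = 0.0
failsafe_enabled = True

def pedal_to_throttle_spaghetti(pedal: float) -> float:
    global throttle_override
    if not failsafe_enabled:
        # Hidden state can latch the throttle high across calls.
        throttle_override = max(throttle_override, pedal)
        return throttle_override
    return pedal

# Documented, stateless version: every input is explicit, so each call
# can be reasoned about and tested in isolation.
def pedal_to_throttle(pedal: float, failsafe_enabled: bool) -> float:
    """Map pedal position [0, 1] to throttle; fail safe to zero output
    whenever the failsafe is not active."""
    if not failsafe_enabled:
        return 0.0
    return min(max(pedal, 0.0), 1.0)
```

The point of the contrast is the lesson the article draws next: the failure mode lives in how the code was written and documented, not in the idea of computer-controlled cars.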
There is a lesson in that.
The cars didn’t screw up. The people who made the cars screwed up.
We can’t let the few accidents that do occur, and that will occur in the future, scare us away from advancing this technology. When it is fully adopted, people in general WILL be safer. You will hear of fewer friends and relatives dying or being badly injured in wrecks.
But we need time and support to get there. Once we iterate enough and improve the tech enough to best the human problem, “accident” will no longer be one of the first words everyone thinks of when they hear “car”.
Many engineers and VCs want to speed through R&D as quickly as possible and get a product to market, but when it comes to autonomous cars and AI projects in general, it is definitely safer to proceed with caution. Rather than see the road ahead as endless intersections of green lights, we should treat every stretch of road as if it culminates in a yellow light. When it comes to human lives, that’s the least we can do.
Thank you for reading and sharing.