Trial and Error

Christian Barraza
3 min read · Mar 11, 2016

--

My first opinions of self-driving cars all revolved around skepticism. How can a computer-controlled vehicle possibly be a safe idea? Computer failures happen all the time, so how can anyone guarantee a self-driving car won't succumb to a similar fate? In my eyes, all I could imagine were self-driving cars going berserk and sending their passengers to their demise at the bottom of a cliff. However, I recently read something that mellowed out some of my skepticism. What if the computers aren't the real problem? What if human error is the one imperfection keeping self-driving vehicles from taking off?

Google has been one of the leading developers of self-driving technology, and it has been testing it on public roads since 2009, which is where it first started running into issues that may surprise you. During one of those tests, one of Google's vehicles experienced paralyzing difficulty at a four-way stop. Your first thought is probably some sort of computer failure; instead, the real issue was the human drivers. The vehicle's operating system was behaving well within the safe-driving procedures our driving instructors drilled into our heads before the road test we all had to take. The problem is that humans don't preserve that discipline, and instead develop unsafe driving habits. Our imperfect driving is what stumped Google's system and froze the vehicle in its tracks. Because the vehicle was programmed to be a safe, law-abiding driver, it could not cross the intersection until every other vehicle had come to a complete stop. That's an extremely rare occurrence among human drivers. Our impatience and fast-paced living have pushed the boundaries of stop signs with every half-assed stop we make.
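To make that deadlock concrete, here is a minimal sketch of how a strictly rule-based policy can stall at a four-way stop. This is not Google's actual control code; the Vehicle class, the speed threshold, and the may_proceed rule are all illustrative assumptions.

```python
# A minimal sketch (not Google's actual code) of a strictly rule-based
# four-way-stop policy. All names and thresholds here are illustrative.

from dataclasses import dataclass

FULL_STOP = 0.0  # m/s: the letter-of-the-law definition of "stopped"

@dataclass
class Vehicle:
    name: str
    speed: float  # current speed in m/s

def may_proceed(others: list[Vehicle]) -> bool:
    """Proceed only once every other vehicle has come to a complete stop."""
    return all(v.speed == FULL_STOP for v in others)

# Human drivers rarely hit exactly 0 m/s; they creep through the stop.
humans = [Vehicle("sedan", 0.5), Vehicle("pickup", 0.3), Vehicle("suv", 0.6)]

if may_proceed(humans):
    print("Crossing the intersection.")
else:
    # Rolling stops never satisfy the rule, so the car waits indefinitely.
    print("Waiting: not every vehicle has made a complete stop.")
```

Run it and the car simply waits, because the humans' rolling stops never register as a complete stop, which is essentially the trap the real vehicle fell into.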

The vehicles are being programmed to strictly abide by the rules of the road. In most cases computers are capable of being far more precise than humans, which, oddly enough, has become a hazard for the law-abiding robot. The fact that a robot can be a much safer driver than the average human is rather unsettling if you ask me. Google has reported 16 incidents involving its self-driving vehicles so far, and every one of them shares a common factor: us, the humans. In each incident a human was at fault. In one, the vehicle began slowing down to let a pedestrian safely cross the street and was hit from behind by a human-driven sedan. The vehicle had obviously performed within its guidelines. There has been only one accident in which Google was left at fault, and even then it was not the robot's doing: Google's vehicle collided with another moving car while the driver inside had taken manual control. The mixture of human and machine appears to be disastrous when we don't let the robot do what it's designed to do.

My first impressions of self-driving technology left me skeptical of a robot's ability to safely maneuver on the streets. In the end, though, I was abruptly reminded of the dangers of driving among other humans, many of whom could use a few more lessons in traffic school. In the meantime, the computers will need to be programmed to safely adapt to today's driving environment. At least until 2050, when robots can safely control everything.
