About self-driving cars
Joshua D. Brown was a real person, but Elon Musk wants him to be a statistic. That is why I am writing this.
He was an ex-Navy SEAL who was building his own business. You can see from his Twitter and Facebook that he loved his family and people in general. He wished his followers a merry Christmas and a happy Father's Day, and consistently urged them to have a safe journey.
It is tragic that he was killed in a preventable accident, one that would not have happened without the self-driving car, and his death deserves analysis.
Of course the salesmen are doing the typical corporate-psychopath routine of damage control, providing fanboys with the stats and tools to fend off criticism.
The first thing Tesla said in their statement was
This is the first known fatality in just over 130 million miles where Autopilot was activated
then several paragraphs about how you can't watch DVDs while driving and how Autopilot isn't on by default. The last thing they did (it's a wonder they even remembered to) was pay tribute to the deceased.
For a car to drive itself it needs a driving program. A program isn't like a gear or an internal combustion engine. Any program sophisticated enough to drive a car is an extremely complicated entity with its own personality, and there is no single right answer that different engineers would converge on.
Computers are a new thing and computer programming is a very new discipline. Nobody knows how to do it well yet. Software today only works by coincidence, held together with chewing gum and duct tape.
Making a program is not like building a bridge or erecting a skyscraper. There are no established guidelines or rules for keeping it from falling over. (Plenty of fake metrics have been invented; that is cargo-culting.) Programming is immature and does not have a science yet.
What programming is really like is casting magic spells. It might seem that you can gain some short-term convenience by casting spells that create autonomous beings to do your bidding. The problem is that unforeseen things happen when these systems diverge unchecked.
Anyone who uses a computer is familiar with crashes and freezes: a blue screen comes up and stops everything. What would happen if the program driving your car froze like that?
We don’t let people at risk of epileptic seizures drive vehicles. It isn’t reasonable to let computer programs drive cars until we have operating systems that do not crash. It isn’t safe and it is not fair to other drivers.
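The kind of freeze I mean is easy to sketch. Here is a minimal toy example (every name and number in it is hypothetical, invented for illustration) of how one sensor reading the programmer never anticipated can kill an entire control loop:

```python
# A hypothetical sketch: one unhandled sensor value halts the
# whole loop -- the software equivalent of a freeze at the wheel.

def steering_correction(lane_offset_m):
    # Assumes the sensor always returns a number. A None reading
    # (a momentary sensor dropout) was never considered.
    return -0.5 * lane_offset_m

def control_loop(readings):
    return [steering_correction(r) for r in readings]

# Works for every case the programmer anticipated...
print(control_loop([0.2, -0.1, 0.0]))

# ...but a single dropout nobody imagined crashes everything:
try:
    control_loop([0.2, None, 0.0])
except TypeError as err:
    print("control loop died:", err)
```

In a desktop program that exception means a dialog box. At 70 mph it means something else entirely.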
In games you can see players being pushed through the floor and zipping into weird grey zones where you can see the entire rest of the world inside out.
Gamers seek out glitches like this: they try placing items while standing next to a wall, or using a move or character the developer never expected in that level, looking for an option the programmer hadn't considered that lets them break through a wall.
Computer programs have to contain code to handle every single possible case; otherwise there are going to be situations where they do the wrong thing, like driving straight into another vehicle and totaling both cars.
Humans learn and think in a completely different way than the computer programs people make. We understand social dynamics and cues from other drivers that let us infer their intentions.
But a program just has a list of cases to check off. It might have hundreds of these cases coded into it, as many as its designers could think of, but it doesn't have the ability to make sensible decisions in circumstances the programmers overlooked.
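That "list of cases" style of decision-making fits in a few lines. Every rule and situation name below is hypothetical, but the shape of the problem is real:

```python
# A toy illustration of decision-by-checklist. All rules and
# situation names here are invented for the sketch.

CASES = {
    "red_light": "stop",
    "stop_sign": "stop",
    "pedestrian_crossing": "stop",
    "green_light": "go",
}

def decide(situation):
    # Anything the designers never listed silently falls through
    # to a default, and the default can be exactly the wrong thing.
    return CASES.get(situation, "continue_at_speed")

print(decide("red_light"))           # stop
print(decide("overturned_trailer"))  # continue_at_speed
```

A human driver who has never seen an overturned trailer still knows to slow down. The checklist does not.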
A computer program can't make moral decisions; it will only try to optimize for the 'best' outcome, where 'best' is decided by a committee at Hyperloop or Google Inc.
In an exceptional circumstance it might go affirmative-accident and judge your own life as less valuable than that of some other, privileged vehicle that chose to overtake you.
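What "optimizing for the best outcome" amounts to can be sketched as a cost minimization. All the harm scores and weights below are hypothetical, but the point stands: somebody has to pick the weights, and you never see them.

```python
# A hedged sketch of outcome "optimization". "Best" is simply
# whatever scores lowest under weights somebody else chose.

def best_outcome(options, weights):
    def cost(option):
        return sum(weights[who] * harm
                   for who, harm in option["harms"].items())
    return min(options, key=cost)

options = [
    {"name": "swerve", "harms": {"occupant": 0.9, "others": 0.1}},
    {"name": "brake",  "harms": {"occupant": 0.3, "others": 0.6}},
]

# Equal weights: the car brakes (cost 0.9 beats cost 1.0).
print(best_outcome(options, {"occupant": 1.0, "others": 1.0})["name"])  # brake
# Discount the occupant and the "moral" choice flips to swerving.
print(best_outcome(options, {"occupant": 0.5, "others": 1.0})["name"])  # swerve
```

Change one number in the weights table and the car's ethics change with it. No law, no debate, no disclosure.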
Who gets to audit the programs to see if they are high quality, and who understands how they operate? Is there an independent body that goes through the code to ensure the autopilot doesn't break any traffic laws? All of it is a trade secret hidden from the people whose lives it endangers.
That isn't good enough: the driving program must be open source. Volkswagen used the same sort of secret code to cheat on diesel emissions tests. The implications of hidden code in self-driving cars are even more serious.
Other smart cars have been thoroughly hacked. The hackers (one an ex-NSA agent) explained how they were able to track vehicles and even take complete control of the driving mechanism, turning off auto-braking systems and smashing cars into barrels, all from their own laptop and without even guessing a password!
When he got flak from the media, Elon Musk continued his whitewashing attempts:
Indeed, if anyone bothered to do the math (obviously, you did not) they would realize that of the over 1M auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available. Please, take 5 mins and do the bloody math before you write an article that misleads the public
These are the words of sickness from someone trapped into defending an immoral product by capitalistic and share-price bondage.
He wants to minimize this death by making the number one seem tiny in comparison to a completely hypothetical million. What a shame.
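For what it's worth, the "bloody math" on the only number Tesla actually gives (one fatality in 130 million Autopilot miles) takes a few lines. The human-driver rate below is my own rough assumption for comparison, not a figure from either statement:

```python
# Rate comparison from a single data point. The assumed human
# fatality rate (~1 per 100 million miles) is my rough estimate,
# not a number from Tesla's or Musk's statements.

autopilot_deaths = 1
autopilot_miles = 130e6
assumed_human_rate = 1 / 100e6  # deaths per mile, assumed

autopilot_rate = autopilot_deaths / autopilot_miles

print(f"Autopilot:     {autopilot_rate * 1e8:.2f} deaths per 100M miles")
print(f"Assumed human: {assumed_human_rate * 1e8:.2f} deaths per 100M miles")

# One death in one sample says almost nothing: with so little data
# the true Autopilot rate could plausibly be several times higher
# or lower. That is exactly why the comparison misleads.
```

A single data point cannot support a claim about half a million lives saved, which is the whole trick.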
We all want safer roads and fewer accidents and deaths. It isn't the time for automatic driving yet, though; programming needs a few more decades to advance before it can be done. Every entrepreneur wants to be the first to make the most money doing something fresh with technology, but until scientists work out how to make sure these programs don't crash, it is irresponsible.