Tesla, Pokemon, and Consumer Protection

In May 2016, Joshua Brown was involved in a car accident and died. Car accidents, even fatal ones, are rarely newsworthy on their own. In 2014, almost 30,000 fatal car accidents happened in the United States, about 82 every day. So what makes this one special? Joshua Brown was using the Autopilot feature of his Tesla Model S.

The Autopilot feature can handle a few different tasks. Joshua Brown appeared to be using its automatic lane-keeping function when he struck a white truck crossing the highway in front of him. This was an edge case: a white truck, a blinding white sun, the car never registered it, and the result was horrible.

There is an easy way to frame this case: what if it had been a cyber-attack? China has the blueprints for the F-35 and the ability to wirelessly send code to it, according to Shane Harris in his book @War. If a malicious agent had access to Tesla’s code and could take control of the car, the proliferation of self-driving cars could be a terrible thing. However, that is a simplistic lens through which to view the case. Talking heads and pundits have been fearful of a “Cyber Pearl Harbor” for a long time and have been ringing the alarm bells over these issues for years. This does not mean the fears aren’t legitimate. The Target credit card breach a few years ago, the OPM hack, and even the Wikileaks posting of DNC emails show that no one is safe from intrusion. However, this post is going to go in a slightly different direction. It will go to Pokemon Go, to be exact.

Pokemon Go is an augmented reality video game which has been sweeping the United States and much of the world for the last few weeks. There have been plenty of think pieces and social commentaries published about the impact of this game, not only on the tech industry but on the world writ large. However, I want to focus on one aspect of the rollout. Adam Reeve was the first to break the story that the app had access to the entirety of users’ Google accounts, not just the basics. That meant Pokemon Go could send email as you and delete your documents, along with many other invasive and strange things. Why would a game need to be able to do all of this?

The short answer is that it didn’t. Niantic, the maker of the game, released a statement suggesting that it had “discovered” the game had requested more access than it needed. The fact that the company was an internal startup of Google before it spun off makes the oversight seem even more questionable. Although harvesting information does not appear to be Niantic’s business model, that could change now that it finds itself sitting on an active user base larger than Twitter’s or Tinder’s.
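
To make the permissions question concrete, here is a minimal sketch, assuming the google-auth-oauthlib Python library and a placeholder client_secrets.json file, of what a least-privilege Google sign-in looks like (this is not Niantic’s actual code). An app lists every scope it wants at sign-in time, so asking only for basic identity information rather than full account access is a matter of which scopes appear in that list.

```python
# A minimal sketch of a least-privilege Google sign-in, assuming the
# google-auth-oauthlib library and a placeholder client_secrets.json
# downloaded from the Google API console.
from google_auth_oauthlib.flow import InstalledAppFlow

# Only basic identity scopes: enough to know who the player is,
# nothing that could read mail or touch documents.
MINIMAL_SCOPES = [
    "openid",
    "https://www.googleapis.com/auth/userinfo.email",
    "https://www.googleapis.com/auth/userinfo.profile",
]

def sign_in():
    """Run the OAuth consent flow, requesting only the scopes above."""
    flow = InstalledAppFlow.from_client_secrets_file(
        "client_secrets.json", scopes=MINIMAL_SCOPES
    )
    # Opens a browser window for consent and returns credentials
    # limited to the listed scopes.
    return flow.run_local_server(port=0)
```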

Both of these cases made me think about who defines safety and what being safe in cyberspace really means. For instance, Tesla CEO Elon Musk has defended the company by claiming that Teslas in Autopilot mode are much safer than human-operated cars, although his claims were undercut somewhat by another accident involving a Tesla driver using Autopilot. Niantic quickly fixed the problem it faced, but users continued to play despite the warnings.

Currently, the onus is on the user to make decisions in their own best interest. Google has updated its app store to tell users what permissions each app requests, and on Android users can turn off those privileges at any time. Tesla has also placed warnings on the Autopilot feature about staying alert and keeping control of the vehicle. However, it does not appear that these cautions are always heeded. There are even reports that Brown was watching a Harry Potter movie at the time of the crash.

So, as in all cases about protection, when something goes wrong the issue of liability appears. Tesla is currently being investigated by state and federal authorities to determine whether it is at fault for the crash and whether it misled investors by not disclosing the crash earlier. Niantic, on the other hand, appears to have gotten away with its mistake with nothing more than an apology on its website.

There is no current authority that protects the consumer in cyberspace, and it would not be possible to write laws and rules that keep up with the lightning speed of progress. As in so many other areas of cyberspace, it is best to look for parallels in other industries. The FDA is one of the agencies that works to protect the American consumer; its job is to enforce regulations on food, drugs, tobacco products, cosmetics, and a variety of other products. Although drug approvals often take years, the process keeps dangerous medicines out of the hands of most Americans. The financial industry provides a different model for consumer protection: S&P, Moody’s, and Fitch Group provide ratings for a variety of financial instruments to help investors gauge the risk of investing.

Safety in cyberspace will probably involve a mixture of governmental oversight and private ratings, but the line between public and private is important to get right. If cyberspace had an FDA, technologies would be throttled and the public would suffer. If the current system continues, expect more news stories of accidents involving technology and of privacy invasion. The government needs to step in to protect its citizens, but only where a product could cause harm to its users or to others. That means creating a new type of independent oversight organization that tests wearables, automated helper devices, and self-driving cars. These pieces of tech are usually harmless, but cyber attacks can sometimes cause them to perform in unusual ways. Such a system would help protect consumers, and it would also protect technology companies from liability. Currently, there are no real liability laws for driverless cars, which is why the Tesla incident is so interesting. If a regulatory body were able to set a standard, however, companies could be protected against legal claims.

Privacy protection is a potential new space for private oversight. Although Google and Apple tell users what information they are granting third parties access to, that information is often ignored, as in the case of Pokemon Go. An even more simplified version of it, similar to the financial model of AAA, AA, A, and so on, could help consumers better discern which apps to use and which to avoid. It would also allow companies to keep running business models that sell their users’ data, because there would be no regulatory restriction; consumers would simply be more knowledgeable about the risk they take in using these products.
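
As a thought experiment, such a rating could be as simple as comparing the permissions an app requests against a minimal baseline and assigning a letter grade. The sketch below is purely hypothetical; the permission names, the baseline, and the grading thresholds are illustrative and not drawn from any real rating scheme.

```python
# A hypothetical privacy-grade sketch: grade an app by how far its
# requested permissions exceed a minimal baseline. Permission names
# and thresholds are illustrative only.
BASELINE = {"basic_profile", "network_access"}

GRADES = ["AAA", "AA", "A", "B", "C"]  # best to worst

def privacy_grade(requested_permissions):
    """Return a letter grade based on permissions beyond the baseline."""
    extras = set(requested_permissions) - BASELINE
    # One grade step per extra permission, capped at the worst grade.
    return GRADES[min(len(extras), len(GRADES) - 1)]

# Example: an app that also wants full email and document access.
print(privacy_grade({"basic_profile", "network_access",
                     "read_email", "modify_documents"}))  # -> "A"
```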

No system is a silver bullet. There are hundreds of cases of government oversight gone wrong, and the private sector does not have a great track record either. But as we continue to move toward a world where more and more time is spent interacting with technology, the safety of these devices and apps must come front and center.