We have a killer robot problem, but not the one you think

Jack H.C.
6 min read · May 12, 2018


Picture a killer robot. Is it Arnold as the T-800? Is it a Westworld host? Is it the sexy Ava from the film Ex Machina? What about a webcam, a DVR, or the driver-assistance features in your car? It turns out any of these might be a killer robot, but only some of them are real. In this post I want to convince you that we have a killer robot problem, but probably not the one you think. The good news is that there is something we can do about the real killer robots. The bad news is that right now we are collectively focused on the imaginary ones. Let’s fix that.

We live in a time of great anxiety around AI. No less a public figure than Elon Musk has described recent advances in the field as the “greatest existential threat to humanity.” Really, Elon? What about the threats of war, pandemic disease, and climate change? What about the massive, systemic risk posed by the poorly secured industrial and home automation we’ve indiscriminately connected to the internet? At least in the short run we have much bigger problems than AI research. This kind of fearmongering is irresponsible because it distracts us from real and present threats.

To motivate the discussion of present threats, consider an advanced robot with network connectivity that is already in the hands of thousands of people: the Tesla Model S with Autopilot driver-assistance features. Even Elon would agree that there is no chance this robot will achieve sentience and rise up against its human masters; the Autopilot system is too simple and too specialized for that. What about the likelihood that Autopilot will fail to see a damaged highway divider and fatally crash? Low, but real. And the likelihood that a Tesla, or any other network-connected appliance, will eventually be compromised by Russian or French hackers as part of a massive Viagra spam botnet? Significant, and increasing by the day.

It is debatable whether some future self-driving vehicle could develop sufficient intelligence to become an evil sentient AI. But a robot doesn’t need advanced AI or emergent sentience to do dangerous things we don’t expect. A simple sensor failure or software glitch in your car can kill you. Perhaps more worrying, most modern robots (including your car, if it has Wi-Fi or OnStar) are periodically connected to globe-spanning communications networks that expose them to a wide range of threats. That means bad actors can exploit these connected devices to conduct coordinated cyber warfare and cyber terrorism attacks on a large scale, without the owners being any the wiser.

If the existential threat of cyber warfare or cyber terror sounds like fantasy on par with Hollywood killer robots, I’m afraid you haven’t been paying attention. It has already happened: in Ukraine, where attackers took down part of the power grid (attack origin: likely Russia), and in Iran, where attackers sabotaged a uranium enrichment facility and other military assets (attack origin: likely the US). And GM may have taken as long as five years to fix a critical flaw in millions of OnStar-equipped vehicles that left them open to remote takeover. In fact, modern vehicles are more vulnerable than ever, and the techniques for compromising them are widely known; hands-on car hacking classes are taught at conferences like Black Hat and DEF CON. Imagine an attack that causes every car of a particular make and model to disable its brakes and accelerate continuously at the same moment. If nobody has been killed by a compromised car yet, it’s only a matter of time.

We need to face up to the fact that the current “Internet of Things” (IoT) ecosystem of poorly secured industrial controllers, home automation, consumer networking gear, and vehicular systems is creating huge systemic risks, analogous to poorly constructed buildings in an earthquake zone, except that the zone is the entire connected world. And if you don’t think a compromised piece of home networking gear is dangerous, perhaps you forgot the day that consumer IP cameras and DVRs broke the internet:

Outage map for the October 21, 2016 IoT attack. Source: downdetector.com, via krebsonsecurity.com.

In 2016, malware known as Mirai was used to take over more than half a million consumer networked cameras and DVRs. On October 21, hackers used the resulting botnet (a network of compromised devices) to conduct the largest distributed denial of service (DDoS) attack yet seen, targeting Dyn, a provider of DNS services critical to the internet’s infrastructure. The result was a day-long outage that made the internet virtually unusable across much of the U.S., affecting services such as Twitter, Amazon, Tumblr, Reddit, Spotify, and Netflix. It turns out that the compromised devices, sold under many brand names, shared a single manufacturer’s flawed embedded component whose factory-default passwords made the vulnerability widespread.
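Mirai’s entry vector was embarrassingly low-tech: it scanned the internet for devices answering on Telnet and tried a short list of factory-default logins. As a rough illustration, here is a minimal Python sketch (the host addresses are hypothetical; run it only against a network you own) of how little code it takes to check whether your own devices still expose Telnet at all:

```python
# Minimal self-audit sketch: Mirai spread by trying factory-default Telnet
# logins. This only checks whether hosts on your own network still expose
# Telnet (TCP port 23) -- if they do, assume the worst and lock them down.
import socket

# A few of the ~60 factory-default credentials found in the leaked Mirai source.
MIRAI_STYLE_DEFAULTS = [
    ("root", "xc3511"), ("root", "vizxv"), ("admin", "admin"),
    ("root", "admin"), ("root", "123456"), ("support", "support"),
]

def telnet_open(host, timeout=1.0):
    """Return True if the host accepts TCP connections on the Telnet port."""
    try:
        with socket.create_connection((host, 23), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical addresses -- replace with devices on a network you own.
    for host in ["192.168.1.10", "192.168.1.20"]:
        if telnet_open(host):
            print(f"{host}: Telnet is reachable; if it accepts any of the "
                  f"{len(MIRAI_STYLE_DEFAULTS)} well-known defaults above, "
                  "it is one scan away from joining a botnet.")
```

A device that simply refuses Telnet connections, or that forces a unique password on first boot, is invisible to this entire class of attack.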

How can we expect inexpensive consumer IoT gear to be properly secured when we have difficulty protecting critical national infrastructure, including power plants and voting systems? Right now we can’t, because there is very little downside to selling crappy, easily compromised products. What’s worse, existing developer tools make it easy to ship a poorly secured product and hard to build a secure one. And there is a large and growing installed base of legacy devices with easily exploited security flaws.
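To make that concrete, consider how a device’s first password gets chosen. What follows is a hypothetical sketch of the two approaches, not any vendor’s actual code; the difference is a few lines, yet one pattern produced most of Mirai’s victims:

```python
# Hypothetical contrast between two provisioning strategies (illustrative only).
import secrets

# Anti-pattern: one credential baked into the firmware image, identical on
# every unit shipped. A single leaked password compromises the entire line.
FACTORY_DEFAULT = ("admin", "admin")

def provision_device(serial):
    """Safer pattern: generate a unique random password per device at
    manufacture and print it on the unit's label, so no hardcoded guess
    works at scale."""
    password = secrets.token_urlsafe(12)
    # The serial number ties the credential record to the physical unit.
    print(f"label for {serial}: admin / {password}")
    return ("admin", password)

if __name__ == "__main__":
    print(FACTORY_DEFAULT)    # ('admin', 'admin') on every unit shipped
    provision_device("SN-0001")  # a credential no other unit shares
```

Per-device credentials cost slightly more at manufacture, since every label must be printed individually, which is exactly why the cheaper, insecure pattern persists without outside pressure.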

We have national safety standards for cars because an unsafe vehicle isn’t just a threat to its operator, but to everyone else on the road. Similarly, the risks created by poor infrastructure and services affect everyone, not just those who pay for them. Unfortunately, while NIST and others have proposed security engineering guidelines for IoT devices, no national security standards exist.

So, what can we do about all of this?

First, we need to let go of the current pop culture obsession with the threat posed by the emergence of sentient computers, otherwise known as strong AI. This threat doesn’t currently exist and may never exist. At the very least, such an emergence is a long way away, and will almost certainly require a radical increase in computational power and new synthetic cognitive technology before we get the kind of flexible, general-purpose intelligence that the AI doomsday people worry about. (And if your biggest worry is that in the far future a simulated version of your past self might be tortured in simulation, I just don’t have time to talk to you.)

Perhaps more importantly, just about anything you can imagine an evil autonomous AI doing in the future is likely doable right now by smart, evil people with hacking tools and access to advanced automation. Arnold Schwarzenegger as the T-800 Terminator is really sexy on the movie screen, but a General Atomics MQ-1 Predator drone armed with AGM-114 Hellfire missiles can do his job faster, more efficiently, and doesn’t need your clothes, your boots, or your motorcycle. And if a hijacked Predator isn’t handy, maybe a compromised Jeep will do.

Second, we need to become educated about the risks posed by poorly secured digital appliances. This means keeping routers updated and firewalls enabled on home and business networks, but it also means shopping with security in mind when we buy digital appliances and even cars. Manufacturers won’t feel pressure to improve their products unless consumers vote with their wallets. This requires extra effort when researching purchases, but that effort is justified by the reality that a single poorly secured networked appliance can help take down critical infrastructure as part of a DDoS attack.

Third, we as knowledgeable consumers and members of the digital device industry need to push for the adoption of strong security engineering standards for IoT devices. And we should consider whether those standards should be backed by new laws creating financial liability for vendors and service providers in cases of negligence.

Finally, we need to hold those who provide services, be it power, water, or social recommendation systems, accountable for the failures of those systems, especially when those failures lead to large-scale harm. And those of us who create complex automated systems have a moral obligation to consider the consequences of our design choices before we ship. Just as structural engineers consider the possible failure modes of a bridge before it is constructed, software and hardware engineers and product designers need to think through the foreseeable unintended consequences of their choices.

Pop-culture speculation about the perils of AI and Hollywood fantasy killer robots have distracted us from the real killer robots already in our midst. For years we’ve let these threats proliferate, but it’s not too late to act. Consumers, engineers, standards bodies, and lawmakers all have a role to play. We can make a future that is significantly safer for everyone, or we can hope that nothing bad happens. Hope is not a strategy.
