Our Cybersecurity Problem Can Only Be Solved in Hardware
And It Needs to Be Solved Now
For the purposes of this post, let’s focus only on IoT — the Internet of Things. Why? Because it is just as big, broad, and pervasive as the term implies. If you generalize a bit to include embedded devices like industrial controllers and things in our planes, trains, and automobiles, IoT represents all the critical infrastructure we depend on. There is general consensus that by 2020 there will be over 21B connected IoT devices and 1T smaller sensors feeding data to the cloud of Things. For context, the world population is projected to be 7.8B people by 2020, so we’re looking at an average of almost three devices per human on the planet. It’s an astounding statistic, but also an alarming one when we consider how vulnerable all computing devices currently are to cyber attack. Before I describe how this growing cybersecurity problem can only truly be solved in hardware, let’s first talk more about IoT devices and what makes them — and all devices with computer processors — so vulnerable.
What is an IoT device?
By definition, IoT is the interconnection via the Internet of computing devices embedded in everyday objects. IoT includes your Nest thermostats, Web cams, smart lights, pacemakers, Amazon Alexa, baby monitors, and the electric meter on your house. IoT also extends to industrial controllers used in the power grid, our transportation systems, factories, hospitals, and ultimately every aspect of life. Take the modern car. It has roughly 100 processors: everything from the wireless tire-pressure sensors, anti-lock brakes, and navigation system, to processors for the steering, engine, accelerator, and transmission. Once driverless cars come online, vehicles will also have processors sensing everything around them, and nearby vehicles will network with each other to share information about traffic and road conditions. Extrapolate from the car — one Thing — and it’s easy to imagine reaching and exceeding that 21B number in less than three years.
What makes an IoT device work?
Like your laptop or smartphone, all IoT devices have a CPU (Central Processing Unit) — often just called a “processor.” It’s the brains of the device. The device has to have power (battery or solar) and some memory, and it runs applications described by code that is written in a software programming language. An IoT device needs inputs, which might be from sensors, to gather information about the world around it, and its applications process those inputs to produce outputs. It also needs to communicate, which is usually over the Internet (hence the Internet of Things), and it has peripherals — at least one of which is used for communication.
How would one cyberattack an IoT device?
Imagine you are the bad guy. The first thing you will probably do is buy one of the devices you want to attack and take it apart, first logically, to see how the code drives the CPU. Then you will try to communicate with it and find an interface that perhaps allows you to log in or reconfigure it. Then you might physically take it apart to understand all its components and how they interact. Inevitably (and very easily with today’s IoT devices), you will find an opening — some sort of flaw in the software that provides access over the Internet. Your inspection might reveal something silly, like an administrator login with a built-in password of “password” (yes, this really happens). Or maybe you will find a less obvious opening, like a place in the application where the programmer forgot to check that the input provided fits into the region of memory reserved for it. You will be able to exploit that bug by injecting longer input that overwrites the reserved memory with your own carefully crafted instructions. Because of how processors work, the device won’t know it is executing your instructions instead of the instructions intended by the programmer. It will be “game over,” as they say. You will have taken over the device, and because that one device is connected to the Internet, you will have a way in to attack scores of other devices.
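The forgotten-length-check bug described above is the classic buffer overflow. Here is a minimal C sketch of both the flaw and the one-line check that would close it; the function names and the 16-byte buffer size are illustrative, not taken from any real device:

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Vulnerable pattern: the programmer reserved 16 bytes but never
 * checks how long the attacker-controlled input is. Input of 16
 * bytes or more overwrites adjacent memory, which on many systems
 * includes the saved return address the CPU will blindly jump to. */
void handle_message_unsafe(const char *input) {
    char buf[16];
    strcpy(buf, input);          /* no bounds check: the overflow */
    printf("got: %s\n", buf);
}

/* The missing check: refuse input that does not fit the region. */
int handle_message_safe(const char *input, char *out, size_t outlen) {
    if (strlen(input) >= outlen)
        return -1;               /* oversized input rejected */
    strcpy(out, input);          /* now provably within bounds */
    return 0;
}
```

A bug like this survives ordinary testing because the program behaves correctly on every well-formed input; it fails only when an attacker supplies deliberately oversized input crafted to land executable instructions in memory the CPU trusts.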
Once attackers have the CPU running their code, they can do anything they want: change the temperature on the Nest thermostat, make the electric meter charge one million dollars instead of 30, fake an image coming from a web cam. What they can do on devices controlling an automobile or the electric grid is both easy to imagine and unimaginable. And if they marshal hundreds or thousands of devices to take coordinated action against a target, well, that is when things become truly frightening.
There are always bugs in software, and attackers will find them.
Experts who have studied thousands of different pieces of software — even well-tested, quality software — consistently find 15 bugs per thousand lines of code. And it takes a lot of code to build today’s complex systems. In fact, it’s hard to find a deployed application that took under one million lines of code to program. That’s 15,000 bugs right there, and while not all of them can be exploited by an attacker, some can. And some is too many.
Determined hackers — some from nation-states — will find the exploitable vulnerabilities in software. They might do it by taking the device apart, as described earlier. Or they might deploy hacking tools, like Internet bots, that comb the web for vulnerabilities. When they find one of these openings, they actually find many because one piece of software is often found in numerous applications all over the net.
The more layers of defensive software (like firewalls and virus protection) we use to try to protect our Internet of Things, the more we exacerbate the problem. Those 15 bugs per thousand lines of code will keep haunting us, and the bad guys will keep getting in and taking control of our CPUs.
Our CPUs are vulnerable because they use a simple 1945 design.
Okay, so bugs are inevitable. But why aren’t our computer processors smarter about knowing what they should and should not do? The simple answer: they weren’t designed for that. Virtually all our computing devices have processors with an architecture — called “von Neumann” after its inventor — that dates back to 1945. These simple yet powerful processors are great at following instructions, but cannot differentiate between right and wrong. They enabled an entire industry to live by the mantra “smaller, cheaper, faster,” and for years that focus was just fine. Machines were not interconnected, and there was no need to defend against cyberattacks. But as the Web expanded, and the number of devices and people on the network exploded, cybersecurity became one of the main issues keeping IT department heads awake at night. Meanwhile, the smaller/cheaper/faster treadmill is darn hard to get off. People don’t want to give up performance or convenience for security, so we have continued to build defensive software rather than make major architectural changes to the CPU.
CPUs cannot enforce security policies they do not know about.
The crux of the issue with the simple von Neumann architecture is its simplicity. It has just a memory, an arithmetic-logic unit, a program counter, and a set of instructions. What is missing is additional information that tells the CPU whether it is doing a good thing or a bad thing. There is also no concept of rules or policy enforcement. For example, an important policy would say that under no circumstances should the CPU ever read or write outside the boundaries of a region of memory as defined by the programmer. Such a policy would require that the CPU have access to information about those regions of memory, but with today’s processor architecture, that information does not exist. Consequently, the CPU will execute instructions exactly as it finds them, even if those instructions were hijacked by a cyber attacker with malicious intent.
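The bounds policy described above takes surprisingly little information to state. This is a toy software model of the metadata a policy-enforcing processor would need; the `region_t` type and `access_allowed` name are invented for illustration and do not reflect any real architecture’s format:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>
#include <assert.h>

/* The information today's CPUs lack: the base and bound of each
 * region of memory the programmer defined. */
typedef struct {
    uintptr_t base;    /* first valid address of the region */
    uintptr_t bound;   /* one past the last valid address */
} region_t;

/* The policy: an access of `len` bytes at `addr` is allowed only if
 * it falls entirely inside the region. Written to avoid wrap-around
 * in the address arithmetic. */
bool access_allowed(region_t r, uintptr_t addr, size_t len) {
    if (addr < r.base || addr > r.bound)
        return false;
    return len <= (size_t)(r.bound - addr);
}
```

With this check applied to every load and store, the overflow attack from earlier fails: the injected input would have to write past the region’s bound, and the write would be refused before it executes.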
So, since we will not achieve zero-defect software in this century and since the holy grail for the attacker is to take control of the CPU, we have to stop adding more layers of highly vulnerable (and attackable) software and address cybersecurity at the core. We must guard the CPU.
Hardware is unassailable — the attacker cannot modify your silicon.
Larry Ellison, the CTO and chairman of Oracle, promotes a principle of “Always-On Security.” Forget the concept of turning security features on and off — computer security should be elemental and “on” all the time. To achieve this, Ellison believes that security should be pushed as low in the stack as possible: “Database security is better than application security, operating system security is better than database security, and silicon security is better than operating system security.” As Ellison noted, “Even the best hackers have not figured out a way to download changes to your microprocessor…You can’t alter the silicon.” I like to say, “You would need a very small soldering iron to attack the CPU itself.”
We can’t change the world’s CPUs all at once.
Solving security in silicon makes sense, but how do we do it when our current CPUs simply aren’t smart enough? It is neither practical nor possible to replace them all any time soon with some new architecture. But it is possible to integrate today’s processors with a co-processor that maintains extra information that can be checked against security policies on every instruction. For more on the advancement of this approach, check out Dover’s CoreGuard product in action.
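As a rough sketch of the co-processor idea, the check can be modeled as a hook that runs before each instruction commits: look up the metadata for the instruction’s operands, evaluate the policy, and trap instead of executing on a violation. All names and structures here are invented for illustration; a real design checks the actual instruction stream in hardware, alongside the main CPU:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>
#include <assert.h>

/* Illustrative metadata: the valid region for a memory operand. */
typedef struct {
    uintptr_t base, bound;       /* bound is one past the last byte */
} meta_t;

/* Illustrative instruction: the address and size it touches. */
typedef struct {
    uintptr_t addr;
    size_t    len;
} insn_t;

/* Policy hook consulted before each instruction commits. */
static bool policy_ok(insn_t i, meta_t m) {
    return i.addr >= m.base && i.addr <= m.bound
        && i.len <= (size_t)(m.bound - i.addr);
}

/* Run a program; return the index of the first violating
 * instruction, or -1 if every instruction satisfied the policy. */
int run_checked(const insn_t *prog, size_t n, meta_t m) {
    for (size_t k = 0; k < n; k++) {
        if (!policy_ok(prog[k], m))
            return (int)k;       /* trap: refuse to execute */
        /* ...otherwise the main CPU commits the instruction... */
    }
    return -1;
}
```

The key property is that the check happens on every instruction, unconditionally, so an attacker who hijacks the software still cannot make the hardware carry out an access the policy forbids.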