What might government intervention look like for Internet of Things security?

Update: This story was written before the extent of Friday’s attack on Dyn infrastructure became clear. While Friday’s attack doesn’t materially change the core tenets of my argument, the widespread effect of the outage does increase the likelihood of regulatory scrutiny. If anything, the argument laid out here becomes more pressing as interested parties begin to solidify their positions.

When it comes to the Internet of Things, security is a hotly debated topic in a variety of ways. The discussion ranges from semi-hypothetical scenarios that sound as if lifted straight from Die Hard 4 (state-sponsored attacks like Stuxnet, or predictions of societal doom at the hands of script kiddies that would put Y2K fanfic to shame) to attacks on a variety of “high value” individual devices. Attacks on cars, traffic lights, insulin pumps and pacemakers are regularly showcased, but they are expensive to research and cumbersome to carry out in the real world.

About two years ago, the first stories of smart fridges being hijacked into malicious botnets surfaced, but they quickly faded again, as novelty stories do. As we are learning at the moment, we should have paid more attention then and weighted that weak signal better: it showcased the first glimpses of a security problem in connected devices and IoT that is significantly cheaper to exploit and offers a far wider attack surface.

What happened? Security researcher Brian Krebs was the victim of a Distributed Denial of Service (DDoS) attack. He regularly experiences such attacks, and was hosted by Akamai, a content delivery network with serious experience in handling high-traffic loads and protecting against DDoS attacks. The attack against Brian Krebs was so significant, so unheard of before, however, that even Akamai was forced to drop him, as they couldn’t sustain their defences against an attack of that scale. As later transpired, a large number of connected devices, mainly cheap security cameras and the like, had been recruited into a large-scale botnet by what some observers called an “amateurish” exploit kit, and this botnet performed the DDoS. By relying on insecure connected devices, the authors of the attack could recruit a far larger number of bots far more cheaply than if they had relied on traditional PCs. And the manufacturers of those devices didn’t exactly make it hard: a lot of the products shipped with hard-coded default passwords.
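
To make the mechanics concrete: recruitment of this kind requires no sophisticated exploitation at all. A scanner walks through address ranges and simply tries a short list of factory-default logins against each device it finds. The Python sketch below shows the essence of the technique; it is deliberately simplified (real scanners also handle telnet protocol negotiation), and the credential list is purely illustrative, not taken from any actual exploit kit.

```python
# Minimal sketch of default-credential recruitment (illustrative only).
import socket

# A handful of factory defaults of the kind shipped on cheap devices.
DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("root", "12345"),
    ("admin", "password"),
]

def accepts_default_login(host: str, port: int = 23) -> bool:
    """Return True if the device at `host` accepts a factory-default
    telnet login. Real scanners run this across random IPv4 ranges."""
    for user, password in DEFAULT_CREDENTIALS:
        try:
            with socket.create_connection((host, port), timeout=3) as conn:
                conn.recv(1024)                       # banner / login prompt
                conn.sendall(user.encode() + b"\r\n")
                conn.recv(1024)                       # password prompt
                conn.sendall(password.encode() + b"\r\n")
                reply = conn.recv(1024)
                if b"incorrect" not in reply.lower():
                    return True                       # device is recruitable
        except OSError:
            continue                                  # unreachable; move on
    return False
```

A loop this crude, pointed at millions of devices that all share the same handful of passwords, is how a botnet of that size gets assembled for next to nothing.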

What now?

So, a slap on the wrist for the device manufacturers, who now pledge to do better, and we can all walk towards a more secure future, right? Well, if only.

As I’ve covered in this tweetstorm, and as Bruce Schneier details in his column at Motherboard, security for connected devices is a complex problem, with the underlying economics stacked against a systemically favourable outcome. The argument I made in the tweetstorm is this:

In terms of connected hardware, especially for consumers, we’re still in a product discovery phase. We have rough notions of what consumers might want from connected devices, but the tremendous number of startups in that field is testament to the experimentation that is necessary to figure out which value propositions and business models actually work. This is not in and of itself a bad thing; rather, this is how markets come into existence and evolve over time. The problem arises with the application of startup economics and methodologies in a field that is ill-suited for them. As a startup, your chance of survival is directly correlated with finding a customer base quickly, making your product desirable, engineering and manufacturing your product at costs that allow further growth, and scaling quickly enough to attract large enough amounts of equity capital to finance all that.

In such a situation, desperately searching for stickiness, trying hard to come up with the cash to manufacture batch 1, reworking pricing so that the next batch can be financed, and trying to find additional channels of revenue so that the business may become sustainable after all, you take shortcuts. You have to. And you take them where they’re the least costly. Security is one of those places. With limited resources, do you spend the time and money to make your product better and more desirable, or do you spend it to make the product more secure? You can’t do both. And if you focus on the latter, you might never get to a point where security matters at all.

It’s not only that intrinsic risk calculation, weighing a certain penalty against a hypothetical one. It’s also that security concerns are, for the largest part, externalities to the startups or firms involved: even if (one might argue it’s more of a when than an if) the security of your product is compromised, the fallout for you is very likely to be contained. People don’t change email providers even if the one they’re with suffered dramatic data breaches, and they’re even less likely to take notice if the breach doesn’t directly affect them but some abstract third party.

And it’s not just startups that hardly have the time to think about the security of their products. In a world where competition amongst manufacturers is so cut-throat that the production costs for a replica Apple Watch with GSM are on the order of $6, as laid out in this amazing story of the Shenzhen electronics market, you know that there is going to be an increasing class of products that don’t implement even the most basic security procedures, never mind have them tested.

The call for government intervention

This is, in essence, the situation that Bruce Schneier analyses as well. We have all the trappings of a market failure: the penalty for poor security is close to zero, while the cost of implementing it is substantial, so nothing gets done. Or to quote Bruce:

The market can’t fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don’t care. Their devices were cheap to buy, they still work, and they don’t even know Brian. The sellers of those devices don’t care: they’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: it’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.
What this all means is that the IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution.

And that makes sense. If you have a market failure, that is, a situation where the incentives for the individual actors within a market lead to suboptimal outcomes for the market as a whole, you’re going to want to look at regulation to realign incentives. This won’t work in all cases, of course. The IoT standards wars, for instance, might be construed as a market failure, but in an area where no legislator or regulator would think of intervening. Here, though, the magnitude of the externalities is already so big, and the market still so much in its infancy, that regulatory intervention seems appropriate. But arguing for regulation is not condoning specific kinds of regulation, and it’s in these conclusions where Schneier and I diverge.

Traditionally, when we speak of regulation, we think of increasing the cost of undesirable behaviour. We’re thinking of a carbon tax to curb CO2 emissions, we’re thinking of safety certification and mandatory recalls. In essence, we are usually saying: prices in the market present an incomplete picture of the costs, because some of those costs are borne by parties who are not counterparties to the transaction. So to minimise the adverse effects, we increase the price so that transaction volumes move closer to where they would be if the price appropriately reflected the costs, and divert the revenue (should there be any) towards mitigating those effects.
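
In textbook terms (my gloss, not Schneier’s), this is the logic of a Pigouvian tax: the social cost of a transaction is the private cost plus the external cost, and the corrective tax is set to the externality so that the buyer’s price reflects the full cost.

```latex
% Pigouvian-tax gloss (my framing): the social marginal cost of
% shipping an insecure device is the private marginal cost plus
% the marginal external cost imposed on third parties.
\[
  SMC(q) = PMC(q) + MEC(q)
\]
% A corrective tax equal to the marginal external cost at the
% efficient quantity q* makes the buyer's price internalise it:
\[
  t^{*} = MEC(q^{*}), \qquad p = PMC(q) + t^{*}
\]
```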

And indeed, that’s what Schneier proposes as well:

The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing people like Brian Krebs to sue them. Any of these would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

And now we arrive at a classic point of conflict when it comes to regulation: its impact on innovation. Because with everything we do, there are tradeoffs. It’s not that market participants had multiple options and simply chose to ignore security. And foisting additional security regulation onto an emerging industry that’s still trying to figure out what value there might be in it for consumers, with pretty much the explicit goal of increasing the cost of security breaches, is one surefire way to stifle it. The problem is that in a world of material products, there didn’t seem to be any other way of doing it.

Instead of increasing the cost of breaches, decrease the cost of secure systems

As we’ve discussed above, the market failure consists of two parts: security is costly to implement, and the penalty for failing to implement it is close to zero, so the incentives are stacked against it. The conventional regulatory response is to increase the cost of a breach, so that businesses are incentivised to invest more in security. But what if we could lower the cost of good security instead? Instead of punishing firms for failing to implement robust security, why not help them do it?

A lot of the fundamental building blocks of the software systems we use every day are covered under free and open source licenses. They’re relatively easy for businesses to adopt and adapt to their use, and without them, the whole internet ecosystem wouldn’t work the way it does. The F/LOSS ecosystem is a non-market solution to a problem we didn’t even know we had; it drastically lowered the cost of experimentation across a wide array of industries, and it is at the core of what enabled the current wave of transformation across markets.

Realising the impact of F/LOSS, and keeping in mind an openness towards innovation and experimentation, I believe progressive regulation around IoT security would be best achieved by government committing to lower the cost of implementing good security practices. This could be done by creating, or funding, the necessary software packages and making them available under a liberal license for experimenters to work with. The systemic benefit of better security and reduced external costs should be argument enough. The impossibility of policing the security of off-shore products that might nonetheless impact parties under your jurisdiction makes other approaches complicated at best, infeasible at worst. And anchoring the usage of secure foundations in FCC or CE certification processes would speed up adoption to the point where the unit costs become negligible.
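
To give a flavour of what such a building block might look like, consider the hard-coded default passwords that enabled the Krebs attack. A small, liberally licensed provisioning helper that generates a unique credential per device at first boot would eliminate that entire failure class at near-zero cost to the manufacturer. The sketch below is hypothetical (no such package exists under these names, and the file path is illustrative):

```python
# Hypothetical sketch of a "secure by default" provisioning helper,
# the kind of F/LOSS building block a government could fund.
import secrets
import string
from pathlib import Path

CREDENTIAL_FILE = Path("/etc/device/credentials")  # illustrative path
ALPHABET = string.ascii_letters + string.digits

def get_device_password(length: int = 20) -> str:
    """Return this device's admin password, generating a unique random
    one on first boot instead of shipping a hard-coded default."""
    if CREDENTIAL_FILE.exists():
        return CREDENTIAL_FILE.read_text().strip()
    password = "".join(secrets.choice(ALPHABET) for _ in range(length))
    CREDENTIAL_FILE.parent.mkdir(parents=True, exist_ok=True)
    CREDENTIAL_FILE.write_text(password)
    CREDENTIAL_FILE.chmod(0o600)  # readable by the device firmware only
    return password
```

Printed on the device label or surfaced during setup, a credential like this costs the manufacturer next to nothing once the library exists, which is exactly the point of socialising that cost.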

The attack on Brian Krebs has shown the significant side-effects of the current roll-out of connected devices, coupled with the market dynamics as they exist. The calls for regulatory intervention are, in my view, justified. But weighing the trade-offs, I believe regulators would be well advised to think about more progressive means of achieving the end of reducing adverse externalities. In the internet age, reducing the cost of good practices can serve those ends while preserving room for experimentation and innovation.