Can We Make The Internet Of Things “Secure Enough?”

The answer is yes — but only when business models align with users’ interests

Nicholas Weaver
4 min read · Feb 11, 2016


Are we over-focused on yesterday’s problems to the exclusion of tomorrow’s? I don’t actually think so. I’m an academic, so my job is to focus on tomorrow and the day after, and if it doesn’t come to pass, well, I’ve been wrong lots of times. That is the joy of being an academic.

Pretty much everybody else on this roundtable has a much more practical bent, so they have to be focused on the shorter-term horizon. And they are refreshingly optimistic, whether it is the possibility of better usability (we’ve seen some hints of this with Signal and other easy-to-use secure tools), the possibility of information sharing producing useful results, or the focus on truly usable authentication.

On this last bit, memo to Apple: My watch’s authentication is secure and easy to use. Can you apply this technology so that I don’t have to type in my F*@#*#@** password (5 times due to typos) every time I return to my computer after going to the bathroom?

And, although I’m a bit of an absolutist and fatalist with “Adama’s Law,” as Max Parnell correctly flags, I think it is necessary to at least try to be an absolutist as an academic. Still, it may be possible to build “secure enough” systems, where the combination of security measures and the limited payoff to attackers drives the expected risk low enough to be acceptable. For example, the OnStar (gen 9) in my new car may qualify, since GM learned its lesson with OnStar gen 8. This may be good enough for the less tin-foil-hat community, but with my higher level of paranoia I may still decide to disconnect the antennas.
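To make that calculus concrete, here is a toy back-of-envelope sketch; every number in it is an illustrative assumption, not a measurement of any real device. The point is that defenses don’t have to be perfect, just expensive enough that a rational attacker walks away.

```python
# Toy "secure enough" calculus. All numbers are illustrative assumptions,
# not measurements of any real device.
attack_cost = 50_000        # engineering effort to build a working exploit ($)
payoff_per_victim = 5       # value extracted per compromised device ($)
reachable_victims = 2_000   # devices the exploit can actually reach

expected_payoff = payoff_per_victim * reachable_victims  # $10,000 here

# A rational attacker walks away when the payoff does not cover the cost.
if expected_payoff < attack_cost:
    print("secure enough: attacking costs more than it pays")
else:
    print("worth attacking: raise the attack cost or shrink the payoff")
```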

Finally, and segueing into the larger question of regulation and/or economic incentives for security, user IPvFletch correctly noted that the cost of “security” is seen as too high, especially for IoT. I’m actually slightly hopeful in some contexts, but that hope only pans out when business models align with security goals.

So commercial incentives can work, but only if people understand the tradeoffs involved when choosing a commercial product.

A good example is the pair of big competing IoT APIs from Google’s Nest division and Apple. Both offer easy-to-use APIs that reduce cost to developers while potentially promising greater security by restricting communication. But only one has the potential to deliver real security, and the difference comes down to business attitudes, not technical decisions.

Dr. Nicholas Weaver, Senior Researcher, Networking and Security / UC Berkeley

Google is a company that views user data as an asset, so the Nest model has all the data uploaded to the cloud, where Google can access it. By restricting all communication to the cloud or the local network, it helps ensure that IoT devices are hard to exploit from the Internet, because they only speak over limited, authenticated channels.
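A minimal sketch of that cloud-mediated pattern is below; the endpoint, token, and payload fields are hypothetical stand-ins, not Nest’s actual API. The device only ever opens outbound, authenticated HTTPS connections, so there is no listening port for an Internet attacker to probe, but every reading lands on the vendor’s servers in the clear.

```python
# Sketch of a cloud-mediated IoT reporting loop. The endpoint, token, and
# payload fields are hypothetical stand-ins, not Google's actual Nest API.
import time

import requests

CLOUD_ENDPOINT = "https://cloud.example.com/v1/devices/thermostat-42/state"
DEVICE_TOKEN = "device-specific-oauth-token"  # provisioned during setup

def report_state(temperature_c: float) -> None:
    # Outbound-only HTTPS: the device never listens for inbound connections,
    # which shrinks the attack surface. The trade-off is that the vendor
    # sees every reading in the clear.
    requests.post(
        CLOUD_ENDPOINT,
        headers={"Authorization": f"Bearer {DEVICE_TOKEN}"},
        json={"temperature_c": temperature_c},
        timeout=10,
    )

while True:
    report_state(21.5)
    time.sleep(60)
```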

Unfortunately, this places all the data in Google’s hands. There is a “privacy statement,” but it is a policy with a dreaded asterisk: “Please note that this Privacy Statement may change from time to time. We will provide notice of any changes on the website or by contacting you.” So, sometime in the future, Google could decide to use your Nest Spycam (err, Dropcam) data to better profile you for marketing purposes, removing the separation between Nest and other business units within Google. But at least they will tell you on the website when they do that.

Except that Google can’t tell you when the guys with guns compel it to hand over your data. Local law enforcement can say to Google, “Give me all this person’s information, here’s my warrant,” and only sometime in the future might you get told about it. If you are outside the US, the NSA can say, “We don’t need no warrant, just allow us to peer through this person’s spycam. Thanks.” Hardly a “secure” system if someone can peer into a target’s bedroom with just a request.

Apple appears to be taking the opposite approach in its API. In Apple’s model, the IoT devices pair with the phone over the local network but can then communicate through Apple’s cloud. But where Google treats data as an asset, Apple’s business model treats it as a potential liability, so all the messages are (supposedly) end-to-end encrypted. This keeps the nice attack-surface limitation of a cloud-mediated flow, where you never need to connect directly to your thermostat from the Internet, but (hopefully) also protects your data from misuse by Apple or the government.
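Here is a minimal sketch of that end-to-end pattern, using a shared symmetric key as a stand-in; Apple’s actual HomeKit protocol is considerably more involved. The key is established during local pairing and never leaves the paired devices, so the cloud only ever relays opaque ciphertext.

```python
# Sketch of an end-to-end encrypted relay. A stand-in illustration, not
# Apple's actual HomeKit protocol. Requires: pip install cryptography
import json

from cryptography.fernet import Fernet

# Key established during local pairing; the cloud never sees it.
pairing_key = Fernet.generate_key()
device = Fernet(pairing_key)
phone = Fernet(pairing_key)

# The device encrypts a reading; the cloud stores and forwards an opaque blob.
reading = json.dumps({"temperature_c": 21.5}).encode()
blob_in_cloud = device.encrypt(reading)

# The relay holds only ciphertext, so a subpoena served on the relay yields
# nothing readable.
assert b"temperature_c" not in blob_in_cloud

# Only the paired phone holds the key and can decrypt.
print(json.loads(phone.decrypt(blob_in_cloud)))
```

The design choice that matters is where the key lives: because it stays on the paired devices, the cloud operator cannot read the data even if it wants to, or is compelled to.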

So I think it may actually be possible to have a “secure enough” IoT for the home if we’re dealing with sensors and safe actuators, but only when the business models align with the security interests of the user.

The Future of Security Roundtable is a Google-sponsored initiative that brings together thought leaders to discuss how we can best protect ourselves from the data breaches and security risks of tomorrow. Panelists are not affiliated with Google, and their opinions are their own. Read the post that kicked off the roundtable here and feel free to join in the conversation.



Nicholas Weaver

Researcher: International Computer Science Institute & Lecturer @ UC Berkeley