Is end-to-end harmful?

Oswald Maskens
Takeo Engineering
Dec 5, 2017

Smartphones have become an extension of ourselves. It is crucial to think about the ethics and security of the systems we rely on for our daily lives.

Traditionally, cyber security has focused on protecting devices with the assumption that whoever owns and controls the device owns the data stored on it.

WhatsApp is a great example of a device-centric view of computer security

A great example of this mindset is end-to-end encryption. When using WhatsApp, all messages are stored on the end users' devices, and it is their responsibility to protect and back up those devices. The security of my WhatsApp messages is tied to the security of my iPhone.
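To make the idea concrete, here is a minimal sketch of the end-to-end principle in Python. I'm using the PyNaCl library as an illustration; WhatsApp actually uses the Signal protocol, which is far more elaborate, but the core property is the same: only the endpoints hold the private keys, so a server relaying the ciphertext cannot read it.

```python
# Minimal end-to-end sketch with PyNaCl (illustrative, not WhatsApp's
# actual Signal protocol). Private keys never leave the devices.
from nacl.public import PrivateKey, Box

# Each device generates its own key pair locally.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"See you at 8?")

# The relay server only ever sees `ciphertext`.
# Bob decrypts on his own device.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"See you at 8?"
```

Note where the security boundary sits: compromise the server and you get ciphertext; compromise the phone and you get everything.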

The same argument gives companies the right to use data on their servers however they want. This has led to the data and analytics boom of the last decade. Should this really be the case?

I disagree with the notion that device ownership = the right to use the data on it, and with its implications for secure computer system design.

First off, it completely contradicts copyright law. Even if you have a movie on your phone, it does not mean you can send it to anyone.

Secondly, my data, even on Medium or Facebook, is still my data.

Thirdly, it assumes I control my device, which is ridiculous. I have no idea how an iPhone is built, nor do I have access to the source code of the OS and apps that run on it. Some think Open Source is the solution, but we can't expect individuals to read and understand all the code they use daily. They still need to trust others.

So, how should we understand computer security?

All our devices are connected to form one large computer system, made of hardware and software. Both can change without users understanding those changes. Users have no option but to trust hardware and software companies.

Facebook’s change log

When Facebook updates its servers, I don't know what the new code will do. When Facebook updates its application, I don't know what the new code will do. Facebook partially controls my phone and completely controls its servers. My phone is part of Facebook, part of the hardware they use to provide me a service. Whether data is stored on devices, on servers, or anywhere else is only a question of performance and reliability (network disruptions, the physical security of data centers).

When is an app or service secure?

This is an extremely complex question to answer. Roughly speaking, a secure app or service should not perform any actions I did not intend it to perform. This is an extremely broad answer to a complex question.

To achieve this goal applications should provide a simple user experience with clear semantics. Users should be able to understand the implications of each press of a button intuitively and clearly.

The trusted computing base (TCB) of a computer system is the set of all hardware, firmware, and/or software components that are critical to its security, in the sense that bugs or vulnerabilities occurring inside the TCB might jeopardize the security properties of the entire system. By contrast, parts of a computer system outside the TCB must not be able to misbehave in a way that would leak any more privileges than are granted to them in accordance with the security policy.

Additionally, developers should strive to reduce the amount of trusted code to a minimum. This includes their own code (to reduce the likelihood of security bugs) but also third-party code and hardware (to reduce the number of third parties that might have malicious intent). Users will always have to trust someone unless they design and build their entire system from scratch, which is not realistic.
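To illustrate the principle, here is a toy Python sketch (all names are hypothetical): keep the security-critical decision in one small, auditable function, and treat the bulk of the application as untrusted.

```python
# Toy illustration of a small trusted computing base: one tiny
# gatekeeper mediates every access, while parsers, UI, plugins, etc.
# live outside the trust boundary. All names here are hypothetical.

SECRETS = {"alice": "alice's diary", "bob": "bob's diary"}

def read_secret(requesting_user: str, owner: str) -> str:
    """The only trusted function: the policy fits in a few lines
    that a human can actually review."""
    if requesting_user != owner:
        raise PermissionError(f"{requesting_user} may not read {owner}'s data")
    return SECRETS[owner]

# Untrusted code can be arbitrarily large and buggy; a compromise in
# it still cannot leak more than the gatekeeper's policy allows.
def untrusted_feature(user: str) -> str:
    data = read_secret(user, user)  # must go through the gatekeeper
    return data.upper()             # complex, unaudited logic lives here

print(untrusted_feature("alice"))   # prints "ALICE'S DIARY"
```

The point is not the toy policy but the shape: the smaller the gatekeeper, the less code users are forced to trust.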

What about privacy?

Privacy, much like security, is hard to define. Broadly, I would define it as not leaking information to third parties, in particular humans. Does Google respect my privacy? Well, I haven't heard of anyone having had access to my data.

But they send you targeted ads!

Yeah, machines do. I am fine with machines processing my data. As long as Google's server code does not allow employees or third parties to see my data, I am fine.

But isn’t it more private on your device?

No. Google could very well write some code that sends my data to someone else. It's called a back door. Whether it runs on your device or on their servers, it's a back door.

Interesting side note: should bank employees be able to see your bank account information? They used to need access in order to process withdrawals, calculate your interest, or see what products they could offer you. But all of this is now automated. There is no reason for them to be allowed to see your banking history anymore.

Not trusting servers = less trusted code?

Yes and no. Not trusting servers can be a great way to reduce the number of trusted components in a secure service. It is, however, not always true. Users have a variety of devices with many different configurations, while servers run less code and are tightly controlled and standardized.

Data will always have to pass through a user's device to be viewed or acted upon. But by not fully trusting the device, service providers can help users protect themselves. This can be achieved through a variety of means, such as cache size limits or throttling.

This is particularly true in a professional context: companies and organizations don't want to have to trust employees to keep their devices secure! Limiting the amount of data an employee can access, or the number of actions they can perform, in a given span of time reduces the impact of a breach.
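As a concrete sketch of such throttling (a hypothetical example, not any particular vendor's implementation), a simple server-side token bucket caps how fast any one device can pull data, which bounds what a compromised device can exfiltrate:

```python
import time

class TokenBucket:
    """Hypothetical server-side throttle: each client may fetch at
    most `rate` records per second, with short bursts up to
    `capacity`. Capping throughput bounds the impact of a breach."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, up to the bucket's capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# One bucket per employee or device: 5 records/second, bursts of 20.
bucket = TokenBucket(rate=5, capacity=20)
for i in range(25):
    if not bucket.allow():
        print(f"request {i} throttled")  # exfiltration is rate-bounded
```

A stolen laptop behind such a limit can still leak data, but only at a trickle, which gives monitoring and revocation time to work.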

The opinions in this story are my own and might be wrong. If you disagree with me feel free to engage in the discussion.
