The WhatsApp “backdoor” issue is really about customer experience and trust

Today the Guardian reported the discovery of a “backdoor” in WhatsApp that could allow messages to be snooped on. I will not go into the details of the issue, or even the semantics of whether this can be called a “backdoor”; instead, I suggest reading two articles:

In the Wired article, in particular, the topic is quite clearly summarized in the following way:

[…] in its default state, WhatsApp doesn’t alert a sender when the key fingerprint of a recipient has changed. A new fingerprint could merely mean that the recipient has started using a new phone or deleted and reinstalled the app. Or it could mean something more troubling: that a “man-in-the-middle” — such as a law enforcement agency with a wiretap order forcing WhatsApp’s cooperation — has inserted himself, and is intercepting and decrypting every message before passing it on to the intended recipient.

This explanation holds the key to the two aspects I find most relevant here:

  • WhatsApp’s security decisions have been driven by customer experience
  • The security experience in WhatsApp is, in the end, based on trust

The Customer Experience part

When WhatsApp launched its end-to-end encryption capability last year, they wanted to make sure that users were made aware of it. To that end, as encryption was enabled for new contacts, users would get this message in their conversations:

WhatsApp indication of encryption enablement

But this message was not particularly understandable to average users (do they really know, or care, what “end-to-end encryption” is?). In fact, in some cases it generated confusion, as in Spain, where the National Police Department was compelled to tell users not to worry from their official Twitter account:

Translation: “Did you get this message? Don’t worry, it’s not a hoax. Now just YOU and your contacts can read #WhatsApp messages”

So why did WhatsApp do this? I think there were two main reasons:

With this, WhatsApp made sure that every user would realize that security in the service had been upgraded; other security aspects of the product, however, were not considered to require that much exposure.

As the Wired article describes, a change in a contact’s key fingerprint, which could point to a potential security attack, is not notified to users by default. A security-conscious user who wants these alerts needs to enable a specific setting, after which this message is presented when a key change occurs:

Notification of contact security code change

But besides this notification, nothing else happens, as WhatsApp (unlike other services like Signal) will not block further messages in that conversation.
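The difference between the two policies can be sketched as a single client-side check. Everything below (the names, the storage, the policy flags) is illustrative and assumed for the sketch; it is not WhatsApp’s or Signal’s actual implementation:

```python
# Illustrative sketch of handling a contact's key change (assumed names,
# not real WhatsApp/Signal code).

known_keys = {}  # contact -> last seen key fingerprint

def on_incoming_key(contact, fingerprint, block_on_change=False, notify=False):
    """Return whether messages in this conversation should keep flowing."""
    previous = known_keys.get(contact)
    known_keys[contact] = fingerprint          # trust-on-first-use update
    if previous is not None and previous != fingerprint:
        if notify:                             # WhatsApp: opt-in notification only
            print(f"Security code for {contact} changed")
        if block_on_change:                    # Signal-style: stop until reverified
            return False
    return True                                # WhatsApp default: deliver anyway
```

With `block_on_change=False` (the WhatsApp-style default), a key change never interrupts the conversation; the Signal-style policy returns `False` and forces the user to re-verify before messages continue.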

And this is because WhatsApp believes that doing anything else would increase security at the expense of making the customer experience much more complex. They want to provide a secure communication mechanism, but without bringing security to the forefront in a way that could end up interfering with the experience. They said as much in their response to the Guardian’s accusations:

The design decision referenced in the Guardian story prevents millions of messages from being lost

In WhatsApp’s own security FAQ, they describe the most likely reason a security code change could occur:

This is likely because someone reinstalled WhatsApp or switched phones.

And on another page they actually say this:

If your contact has recently reinstalled WhatsApp, or switched devices, we recommend you refresh the code by sending them a new message and then scanning the code.

So they are encouraging users to trust that if there is a mismatch in codes, the likely reason is not a real security issue, and that they should simply refresh the code and keep using the app. They may even have measured the probability and impact of these situations (reinstalls and phone switches), and concluded that the likelihood of false positives does not justify presenting them as potential security risks.

This is relevant because experiences like Signal’s show that a stricter security mechanism in this situation could leave customers disconnected. Customers want their communications to be secure, but most of them not at the expense of a lot of complication.

In the end it is just about trust

At the end of the day, for most users the main requirement around security is peace of mind. They want to know that their communications are secure, yes, but they don’t want to deal with a lot of complexity to guarantee that.

WhatsApp offers mechanisms so that more security-conscious users can verify nothing wrong is happening, but for “normal” users peace of mind comes with WhatsApp telling them that everything is safe in a conversation. From then on it is better to trust that this is still the case and not be bothered by potential false alarms around key changes.

But the trust must go beyond that, because the mechanism that WhatsApp provides to validate the integrity of communication security is managed by WhatsApp itself. You can verify the security code for a conversation with another contact using the app:

WhatsApp verification mechanism

But quoting Wired’s analysis:

arguably the platform’s biggest security flaw remains not open sourcing its code to allow for external audits

This means that while the mechanism can help you validate that no third party has performed a man-in-the-middle interception of the communication, it does not ensure that WhatsApp itself is not doing so, with the app simply displaying a fake (identical) security code to both users. Controlling the whole user interaction, via full ownership of the end-to-end experience of what happens in their app, gives WhatsApp complete control over everything, for good and for bad.
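The idea behind such a code can be sketched as a hash over both parties’ public identity keys, in the spirit of Signal-style safety numbers. The derivation below is an assumption for illustration only; WhatsApp’s actual computation differs, and the point is precisely that the real one runs inside the closed-source app:

```python
import hashlib

# Illustrative safety-code derivation (assumed, not WhatsApp's real scheme).
def safety_code(key_a: bytes, key_b: bytes) -> str:
    # Sort so both parties compute the same code regardless of direction.
    combined = b"".join(sorted([key_a, key_b]))
    digest = hashlib.sha256(combined).hexdigest()
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

alice, bob = b"alice-public-key", b"bob-public-key"
mallory = b"mallory-public-key"

# Two honest clients holding the same keys derive the same code:
assert safety_code(alice, bob) == safety_code(bob, alice)

# A third-party man-in-the-middle substitutes keys, so the codes diverge:
assert safety_code(alice, mallory) != safety_code(mallory, bob)
```

This is why comparing codes catches a third-party interception: the attacker cannot make both substituted key pairs hash to the same code. But if the app itself is the interceptor, it can skip the honest derivation and simply render matching strings on both screens.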

You just need to trust WhatsApp when they are saying everything’s safe (and hope for the best).

Or quoting another article:

When a provider says that they use end-to-end encryption and they have “no way of reading messages”, this is definitely wrong! 
A provider always has the ability to intercept messages as long as the user does not verify fingerprints. With WhatsApp, it is even harder to make sure, no MitM takes or took place. WhatsApp is closed source, so who can tell, if WhatsApp just displays wrong identity keys and lets the user think that everything is perfectly OK ..?