Coercion – a problem larger than authentication

Technical solutions for authentication are orthogonal to political abuse of power structures

thaddeus t. grugq
Jan 16, 2018
Three friends contemplate the bad takes on FaceID and authoritarian regimes.

Before the release of the iPhone X there were some assessments of the security value of FaceID against authoritarian governments, as well as of how secure it would actually be (pretty secure, it turns out). It seems appropriate to address the flawed understanding of security threats that the FaceID authentication mechanism prompted when it was announced. Particularly frustrating was the deep confusion around how coercion works at different levels, and why the sinister threat of “authoritarian regimes” is a poor threat model to apply to authentication mechanism security. It is popular to ask “how will this technology enable abuse by authoritarian regimes,” but the people asking that question, the technologies they choose to fret about, and the fantasy logic they use when constructing threat models all need the cold water of reality.

You can’t solve social problems with technology

Your threat model is wrong…no…more wrong.

Very few people face nation state adversaries (sorry Western privacy activists, you’re not on anyone’s radar.) Most of those who do, explicitly signed up for it (foreign service departments, intelligence services, terrorist groups, transnational criminal groups), although some just fell into it (dissidents). The former group, well, they have the resources and expertise to play the game. The latter group typically do not. They lack security training, security experience (although they gain it the hard way), and have essentially no access to security assistance. They typically use available platforms for organising, coordinating, and messaging (read: Facebook, Facebook, other social media platforms like Twitter, and mobile phones...with Facebook.)

There are very important reasons why dissident groups use Facebook (other than Metcalfe’s Law):

  • Exposure – Dissident groups must be visible and public, otherwise they not only fail to reach their target audience but also run the risk of being labeled terrorists (so most covert communications technology is a terrible fit for them — they need billboards and broadsheets, not digital dead drops and chalk signs.) Of course, everyone deserves (and needs) the privacy of secure ephemeral communications.
  • Expansion – Dissident groups that aren’t growing are dying. Organisations have to persist, and there is natural attrition (even if they aren’t being hunted by death squads, like the Raqqa is Being Slaughtered Silently group.) People lose interest, work gets in the way, priorities change, etc. To simply persist and stay the same size, organisations have to constantly recruit new members (at least as many as leave.) Dissident groups that hope to effect change need to recruit more new members than they lose.

This combination of exposure and expansion means that dissidents typically must accept the risk that the regime’s secret police will penetrate their ranks (there are 110-year-old secret police manuals on this). Being a dissident means accepting the risk and trying to grow the movement to a point where it is large enough that it can force change (thus eliminating said risk.)

Dissidents — their strength is numbers and their safety is visibility.

Dissident tech is Facebook and YouTube, their crypto is TLS

Technology that empowers dissidents, and dissident groups, is almost always just going to be Facebook (and Twitter, and WhatsApp, or whatever the dominant messenger is for their region [see: Metcalfe’s Law]). Security for dissidents comes from being in the public eye, which protects them against secret reprisals.

When the secret police move against dissident groups, the individuals are going to face state-level coercion. They will vanish while traveling alone. They will kill themselves while in police custody “in order to embarrass the police.” They will throw themselves off tall buildings “rather than face arrest” — no autopsy possible, their bodies cremated within 24hrs as they always wanted. They will commit suicide by shooting themselves twice in the back of the head, just to be sure. If they survive secret police reprisals long enough, they will go to jail for decades…

The usual goal for a dissident who is captured is to remain silent for 24–48hrs, long enough to enable their comrades to escape. If there is some law governing their detention it may be “endure torture for 7 days, or jail for 30 years.”

At no point in time will dissidents think “if only my mobile phone was protected by an authentication mechanism that could not be tricked by physically forcing me to cooperate against my will.” In many cases, the coercion will be like a parent telling a child to go to their room. The weaker party will simply cooperate.

The strong do what they can, the weak do what they must.

Security technology is not without purpose or use

There is certainly a role for technology in helping protect dissidents, such as better protection of their Facebook accounts, some uses of Tor, and better mobile phone protections that protect the data from seizure and the accounts from takeover. But the capability of security technology to aid dissidents has to cope with the fact that some dissidents will cooperate with security forces, and some will be agents of security forces. The authentication mechanism of mobile phones is, quite literally, the least important area of a dissident’s digital life that needs to be secured.

One of the dumbest takes about iPhone FaceID is that it would enable human rights abuses by authoritarian regimes. This ridiculous opinion demonstrates a profound lack of understanding about authoritarian regimes. It’s a sort of fantasy idea rooted in the belief that political problems have technological solutions — then misapplied to the wrong part of the technology stack.

The risks that dissidents face in authoritarian countries are not going to be solved with a mobile phone authentication mechanism. The ability to coerce someone to unlock a device is a very generic capability; the specifics of the lock aren’t relevant. For example, when you arrive at the US border and the officer says “unlock your phone” — you either comply or you don’t. Whether your phone is locked with a 32-character passphrase, FaceID, or a four-digit PIN is completely irrelevant. ¹

[1] Yes there are legal issues about biometrics vs PIN/passwords that are relevant in the US some of the time, but they are never relevant for authoritarian regimes.

Coercion is about power structures.

Technology can play an important role in safeguarding dissidents. However, the places where technology can provide relief do not include the authentication mechanism on a mobile phone. The confusion here may stem from the inherent vulnerabilities of biometrics, the ability of corrupt officials to exploit those vulnerabilities, and the conflation of “corrupt officials” with “authoritarian regime.”

Biometrics identify, not authenticate, users

Biometrics authenticate the identity of a user, which is not the same thing as authenticating the user. They’re problematic in that way — they’re more suited to verifying identity than access. A fingerprint is a better username than password. ²
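To make that concrete, here is a minimal sketch (hypothetical names and data, not any real platform or OS API) of an unlock flow in which the biometric only identifies the account, while a separately held secret is what actually authorizes access. The biometric step can be physically forced; the secret still has to be volunteered.

```python
# Minimal sketch: a biometric match acts like a username (it answers "who
# is this?"), while a memorised secret acts like a password (it answers
# "may they get in?"). All names and data here are invented for illustration.
import hashlib
import hmac

# Hypothetical enrolment table: biometric template id -> (salt, hash of PIN)
ENROLLED = {
    "fingerprint-template-42": ("salt-a", hashlib.sha256(b"salt-a" + b"1234").hexdigest()),
}

def identify(template_id: str) -> str | None:
    """Identification: the biometric selects which account is in play."""
    return template_id if template_id in ENROLLED else None

def authenticate(user: str, pin: str) -> bool:
    """Authentication: proof of a secret only the user can choose to supply."""
    salt, stored = ENROLLED[user]
    candidate = hashlib.sha256(salt.encode() + pin.encode()).hexdigest()
    return hmac.compare_digest(candidate, stored)

user = identify("fingerprint-template-42")  # this step can be physically forced
if user and authenticate(user, "1234"):     # this step requires cooperation
    print("unlocked")
```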

Some scenarios involving a corrupt official, an iPhone, and discreet physical coercion are easy to imagine. For example, during an arrest a finger could be forced onto a home button and, miracle!, “the suspect was seized with the device in an unlocked state.” The same approach could be used with FaceID, because this is a fundamental limitation of biometric authentication. The power of a corrupt official to unlock a device is partially reliant on secrecy and force. The tradeoff is against ease of use (and thus higher adoption of any authentication security at all) and against security for lost or stolen devices. In almost every case this is the correct tradeoff to make.

[2] The value of a fingerprint as a password is that it mitigates shoulder surfing of the PIN, a critical part of a robbery. Threat model, geez!

Coercive corrupt cops and state-level coercion

This problem of biometrics being abused by corrupt security forces is not the same problem as authoritarian governments using coercion. In the first case — corrupt security forces bypassing authentication — the threat can be completely mitigated by simply disabling the biometric authentication mechanism. In the second case — authoritarian regimes — the method of authentication is not relevant.

Authoritarian regimes compel the user to provide access to the unlocked phone. They’re not physically, forcefully, manipulating the user into a position where they unwillingly biometrically identify (and thus authenticate) themselves to the device. Real coercion is not “hurt them until they comply” but being able to command obedience.

Coercion is applied to people, not technology.

Authentication is not the place where coercion can be mitigated — locking the Facebook account of an arrested dissident is more important than a “duress finger” option for a phone. Allowing organisations to securely compartment access to data, and remotely wipe a seized device, is more important than the limitations of FaceID.
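As a rough illustration of where that effort goes, here is a hypothetical sketch (invented names, no real platform or MDM API) of an organisation's playbook when a member is detained: lock the accounts that matter, cut the member out of every compartment, and request a remote wipe of seized hardware. The phone's lock screen does not appear anywhere in it.

```python
# Hypothetical "member detained" playbook, expressed as code. None of these
# helpers call a real API; the point is that the useful mitigations live
# above the lock screen, not in it.
from dataclasses import dataclass, field

@dataclass
class Member:
    name: str
    devices: list[str] = field(default_factory=list)
    compartments: set[str] = field(default_factory=set)  # groups/chats they can read

@dataclass
class Organisation:
    members: dict[str, Member] = field(default_factory=dict)

    def report_detained(self, name: str) -> None:
        member = self.members[name]
        # 1. Lock the accounts that matter (social media, email), e.g. by a
        #    trusted contact triggering the platform's account-recovery lock.
        print(f"lock accounts for {member.name}")
        # 2. Remove the member from every compartment so a seized, unlocked
        #    phone stops receiving anything new.
        for compartment in sorted(member.compartments):
            print(f"remove {member.name} from {compartment}")
        member.compartments.clear()
        # 3. Ask whatever device-management service is in use to wipe the
        #    seized hardware.
        for device in member.devices:
            print(f"request remote wipe of {device}")

org = Organisation({"amal": Member("amal", ["phone-1"], {"press-team", "legal-fund"})})
org.report_detained("amal")
```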

Support more content like this.
