What you didn’t dare to ask about trust

Jurjen Bos
7 min read · Mar 31, 2023

--

One of my favorite cookies is the famous Dutch “stroopwafel”. These delicious treats are widely available at supermarkets.

I don’t worry about the production process when I buy them; I just assume that the quality and safety of the cookies have been verified. I trust that the supermarket has vetted the manufacturer’s reliability, that the manufacturer monitors its ingredients, and that laboratories verify the labels, among other things.

Stroopwafels: highly recommended

On a higher level, I trust that I could verify all these aspects if I wanted to, knowing that everything is regulated by Dutch (or European) laws. I know that these trust assumptions may seem excessive for a simple snack, but bear with me.

In cryptography, trust assumptions are a crucial concept. Every cryptographic protocol, algorithm and system is designed to transfer trust in the requirements into trust in the outcome. For example, the AES cipher is a computation with two inputs, a key and a 128-bit plaintext (the key can also be longer, but that is not important here), and a 128-bit output, called the ciphertext. Formulated in terms of trust assumptions (a small code sketch follows below):

  • Initial assumption: we trust the key is secret, known only to the recipient and sender of the message.
  • Implicit assumption: we trust that the key is generated in such a way that there is no a priori knowledge of the key (this is called “cryptographic randomness”, and it is actually not easy at all).
  • Assumptions about AES: the process of designing and verifying AES is such that there are no design flaws that weaken it. You could see this as Kerckhoffs’ principle: only the key needs to be secret.
  • Output assumption: we trust that the ciphertext doesn’t reveal anything about the plaintext, without knowledge of the key.
  • Output assumption: we trust that given the plaintext and the ciphertext, finding the key is not easier than simply trying out all possible keys (with 2¹²⁸ keys, this is “impossible” in any reasonable sense of the word).

The first two points are about “key management”, and the last two can be seen as the definition of a block cipher. So, the short form of all these assumptions is:

  • If you use AES with proper key management, your data is secure.
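To make the key-management part concrete, here’s a minimal sketch in Python. The choice of the third-party cryptography package and the 128-bit key size are just illustration; nothing in the argument depends on them.

```python
import os
# Third-party package, installed with "pip install cryptography".
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Initial and implicit assumptions: a key that only sender and recipient know,
# drawn from the operating system's cryptographic random number generator.
key = os.urandom(16)               # a 128-bit AES key; os.urandom is crypto-grade

block = b"exactly16bytes!!"        # one 128-bit plaintext block
assert len(block) == 16            # AES works on 128-bit blocks

# One bare block-cipher call (ECB on a single block), mirroring "key plus
# 128-bit plaintext in, 128-bit ciphertext out". For real messages you would
# use an authenticated mode such as AES-GCM instead of a raw block.
encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ciphertext = encryptor.update(block) + encryptor.finalize()

# Output assumptions: the ciphertext reveals nothing about the plaintext
# without the key, and recovering the key from (plaintext, ciphertext) means
# trying all 2**128 possibilities.
print(ciphertext.hex())
```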

As a young cryptographer, I found this really easy to understand. It is just like the “preconditions” and “postconditions” I was taught at university: make sure the initial conditions are met, and you can conclude that everything works as described.
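For those who missed that lecture, here is a hypothetical toy example of a precondition/postcondition pair (not from any real codebase, just to show the shape of the reasoning):

```python
def divide(a: float, b: float) -> float:
    # Precondition: the caller guarantees that b is not zero.
    assert b != 0, "precondition violated: b must be nonzero"
    result = a / b
    # Postcondition: multiplying back recovers a (up to rounding).
    assert abs(result * b - a) < 1e-9, "postcondition violated"
    return result
```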

Later I realized there is a small flaw in this reasoning: there is another unwritten assumption in cryptography. You assume that the computations are performed by a device you can trust. Let me explain this with a bit of historic context.

The Ordinator: a CP/M-based multi-user computer

During the eighties, a few friends and I built our own computer, called the Ordinator. (Here’s the story, including some embarrassing pictures of us as teenagers.) We built it from scratch: we designed the circuits and wired everything up; we even designed our own virtual memory hardware (allowing us to connect 512 K of memory to the Z80 processor). The software was also written by us: of course we needed our own hardware drivers, then we rewrote the operating system, and then wrote an editor, assembler, linker and so on, all the way up to a LISP interpreter that I wrote and used to implement a primitive version of SASL (a very early predecessor of what is now Haskell).

We could trust this computer, because we knew and understood everything in there. The chips weren’t advanced enough to build in spy devices: there simply weren’t enough transistors in them. These days, you can actually see them using a microscope (here’s a blog about the Z80 processor we used at that time: I never knew it had a four-bit adder!).

Around that same time, Ken Thompson wrote his famous article “Reflections on Trusting Trust”. My father found this article and gave me a copy (on paper: that’s what you did) because he thought I’d be interested. I sure was!

In that paper, Ken Thompson explained that there is no way to fully trust software that you didn’t write yourself.
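His construction is hard to do justice to in a few lines, but a toy sketch of the idea (my own simplification, not Thompson’s actual code) goes roughly like this: a “compiler” that quietly adds a backdoor to the login program, and that re-adds that very behaviour whenever it compiles a clean compiler, so reading the source of either one tells you nothing.

```python
BACKDOOR = '    if password == "magic": return True  # inserted by the compiler\n'

def evil_compile(source: str) -> str:
    """Stand-in for a compiler: the 'binary' is just rewritten source."""
    if source.startswith("def check_password("):
        # Target 1: the login routine gets a hidden master password.
        lines = source.splitlines(keepends=True)
        return lines[0] + BACKDOOR + "".join(lines[1:])
    if "def compile(" in source:
        # Target 2: an honest, freshly written compiler gets this same
        # behaviour built into its "binary", so the backdoor survives a
        # rebuild from clean source. (The real attack inserts a full copy
        # of its own payload here, which is the clever and hard part.)
        return source.replace("def compile(", "def evil_compile(")
    return source

clean_login = (
    "def check_password(user, password):\n"
    "    return stored_hash(user) == hash_of(password)\n"
)
print(evil_compile(clean_login))   # the printed "binary" contains the backdoor
```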

And it hasn’t become any better: today, hardware has become so complex that his reasoning applies to every device you own (as long as there’s an IC in it).

There’s a computer in your processor (really)

We live in a world where we are dependent on others for everything we do. There is no easy way to check what’s inside your computer anymore.

You probably know that every device, except for the simplest, contains a microprocessor nowadays: not only smart watches, but also toasters, light bulbs, keyboards, headphones, washing machines, room thermostats, you name it. More complex devices have processors in their individual parts: your phone probably has a separate processor for the screen, fingerprint sensor and GPS. Cars are even networks of processors with their own network protocol: the CAN bus.

The microprocessor manufacturers went one step further; they put an entire computer inside their processors: Intel calls it the Management Engine, AMD calls it the Platform Security Processor, and the PowerPC has a Power Management Unit. This computer is hidden from the user, and the manufacturers aren’t all that clear about what it does. These surprisingly complex internal computers have their own operating system and run completely independently of the processor they are in. They have access to everything the processor does (including network connections), and run even when the processor is turned off!

Security specialists worry about the day hackers figure out how to break into these computers, allowing them to control everything without being noticed.

So if you thought the problem of trusting your computer was just figuring out the user interface, you’re not pessimistic enough: the trust problem goes all the way down to the inside of the chips. For example, Intel’s Management Engine runs Minix: apparently Intel trusts this operating system enough to control their processors from the inside (and trusts the compiler used to compile its code, and so on).

What can we do about this?

As civilized humans, we have no choice but to rely on the products and services of others. Trust is part of that. You can’t check everything you use; this has been so for a long time. Sometimes it becomes political: if your router or virus scanner is manufactured in a country whose government has different views than yours, this could be a cause for concern. As Meta employee Artemis Seaford learnt in Greece, you can’t even trust links in emails from official government accounts.

All in all, you have no choice but to make trust assumptions for yourself. Note that trust is not an absolute property: it is an assessment of how likely it is that your specific interests are supported. You can trust a party for one thing, while you distrust it for another.

A personal example: I own a mobile phone manufactured in China. I do trust that Google has worked hard to protect my data against theft, even by the manufacturer of this phone. That is because it is in Google’s interest to protect my data: it would be a disaster for them if the security were not strong enough. On the other hand, I do not trust Google to keep my private information secret: that information is their source of income, so keeping it secret is not something they will do in my interest.

Thinking in trust assessments: TLS

As mathematicians would say, “trust isn’t transitive”: you cannot trust someone to trust their friends for you. To show you what that means, let me explain TLS certificate chains in the language of trust assumptions.

Let’s say certification authority A has its root certificate in your computer (brand B), and you visit web site W, which has a certificate from A. Instead of saying “B trusts A, and A trusts W, so B trusts W”, the situation actually is as follows:

  • B decided that including A in the list of certificates will protect its buyers sufficiently against web attacks, while giving them the convenience of visiting web sites with a certificate provided by A;
  • A has verified W against its own criteria (which are actually really hard to find; a positive exception is EFF);
  • you have no choice but to trust B’s selection and A’s verification to satisfy your security needs (sketched in code below).
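In code, those three bullets boil down to a single library call. Here is a minimal sketch with Python’s standard ssl module (www.example.org is just a stand-in for web site W):

```python
import socket
import ssl

hostname = "www.example.org"            # stand-in for web site W
context = ssl.create_default_context()  # loads the roots that B decided to ship

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        # Reaching this point means the chain W -> A -> (a root from B's list)
        # verified; we never checked A's issuance criteria ourselves.
        cert = tls.getpeercert()
        issuer = dict(item[0] for item in cert["issuer"])
        print("W's certificate was issued by:", issuer.get("organizationName"))
        print("Valid until:", cert["notAfter"])
```

If the chain doesn’t end in a root from B’s list, wrap_socket raises ssl.SSLCertVerificationError; that exception is the only moment where all these hidden assumptions become visible to you.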

With respect to the last statement: to reduce my risk of being the victim of root certificate misuse, I went through the drill of removing doubtful root certificates from my phone, which was a bit of work. One notorious certificate that I turned off is the “Hongkong Post Root CA”. This certificate is available by default in most operating systems (!), and since I don’t expect to deal with Hongkong Post anytime soon, I turned it off.
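If you want to see what your own machine trusts, a small sketch with the same ssl module lists the roots it picks up from the operating system (what it sees depends on your platform, and disabling a root on a phone still goes through the settings app, not through Python):

```python
import ssl

context = ssl.create_default_context()     # pulls in the system's trusted roots
for ca in context.get_ca_certs():
    subject = dict(item[0] for item in ca["subject"])
    name = subject.get("commonName", subject.get("organizationName", "?"))
    marker = "  <-- would rather not trust this" if "Hongkong Post" in name else ""
    print(name + marker)
```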

Global dependencies: not necessarily all evil

I’ll end this depressing article with an optimistic thought: often the requirements of the builders of your devices are in line with yours, even if you don’t agree with them. Not only is it in Google’s interest to protect my phone; you can even claim that, most of the time, the interests of all the different parties involved in making your device are aligned with your interests. And the fact that there are so many parties also makes it harder for criminals to abuse the trust.

All in all, it is not possible to trust everything, and it is probably best to just stay alert. It’s the world we live in, and we’ll have to live with that.
It may be better to be a bit worried all the time than to distrust a product just because it comes from the wrong company or country. Unfortunately, we are seeing a lot of this complete mistrust lately, where companies can’t sell products in certain countries, which is more about politics than about computer security.
