What is “Zero Trust”

Alex Floyd Marshall
Published in Secret Handshakes
9 min read · Mar 3, 2023

Imagine that you work in a large office (maybe you don’t need to imagine). It can be corporate, government, legal, nonprofit — it doesn’t matter. Now imagine that you need some document hand delivered to a colleague, signed, and returned. The reason for this procedure isn’t important, so make up a scenario. Whatever the circumstances, though, this document must be delivered in hard-copy format and physically signed. Okay, got all this in your mind? Let’s think about three ways this might play out:

Scenario 1: 100% Trust

Imagine that you personally take this document in your hands, walk down the hall to your colleague’s desk, hand them the document, watch them sign it, and then receive it back. This is what it looks like to have 100% trust in a process: there are no intermediaries, the document was never out of your sight, there is no room for ambiguity or uncertainty.

Scenario 2: Reasonably Good Trust

Now let’s add a little complication: you’re a big shot. You don’t walk it down the hall yourself (who has time for that?), you hand it to your administrative assistant. They take the document down the hall. Maybe they even pass it off to your colleague’s admin. Then that process gets reversed after it’s signed.

In this scenario, we have some intermediaries. But those intermediaries are pretty trustworthy: they are your ever-present and highly competent admins. And the document never leaves your floor or building, so you’re pretty confident nothing could go awry. This is a scenario of “reasonably good” trust. In cyber security we might compare this to what is now known as the “perimeter security model.” In that model, computer networks are designed to function as closed, internal systems like the office we’ve described above. Information gets passed between individual systems on the network using a series of internal intermediaries, but it never leaves the (virtual) building. So long as the internal network is secure, that information is, too.

Could you have a secretly evil admin who is actually spying on you for a rival? Sure, that’s a theoretical possibility. Similarly, in the perimeter model of security you could have compromised devices inside the network. But if you do a good job protecting that perimeter (a good job hiring your admins, in our analogy), the risk is low and you can operate with reasonable trust that the network is secure (just as you would reasonably trust your admins to safely deliver the document). Or at least, that’s the theory. We’ll come back to that in a few.

Scenario 3: Zero Trust

Now let’s get a lot more complicated. The colleague you need to have sign this document doesn’t work in your building. You work for a giant organization with lots of offices all over the region/country/world. Your colleague is in another facility altogether. To get the document to that facility, your admin calls up a messenger service. The messenger service comes to your building, takes the document from your admin, and leaves. Eventually, another representative of the messenger service appears at the building where your colleague works. They take the document through the building’s security checkpoint, up to the floor where your colleague works, through the front desk receptionist, to your colleague’s admin. Now we’re back to the “internal system.” When the document is signed we reverse the process back through the messenger service, your building’s security, and your floor’s front desk receptionist.

A lot more hands have touched this document, and we’ve inserted a giant black box into the middle of the process: what happens while the document is in the messenger service’s possession? It’s not that you don’t trust this messenger service at all (otherwise, why would you hire them?). But despite whatever trust you may have for them, you don’t have visibility into their internal process. How do they sort, prioritize, and handle deliveries? Do they do it all themselves or are there additional intermediaries or subcontractors? How many people at the messenger service have access to your document while it’s in their possession? What are their internal policies around document handling? What is their hiring/screening process? Maybe your admin knows some of this, or maybe your legal or partner relations team does, but I’m guessing you don’t.

This is what Zero Trust looks like. It’s relying on parties that are something of a black box: you know that you give them a message and it pops out the other side, but you have no idea what happens in between. In computer security, this is analogous to systems that run in the cloud or traffic that needs to traverse the open internet. As more and more work happens remotely, more and more systems fit this description, from email to internal databases or CRMs or project management tools or document drives, etc., especially if they need to be accessible by employees from remote locations. The messenger services in this analogy are the cloud and internet service providers that are hosting these services or carrying information to and from them.

Why do we call this zero trust? Obviously, we trust at least some of these cloud and internet service providers at least a little (otherwise we wouldn’t use their services). But there are some caveats to that trust: we may not even know we are using some providers (imagine our messenger service has sub-contracts for the legs between cities that we are unaware of), and still others we may have no choice but to use (for example, your local internet provider might have a monopoly). Even when we have some degree of trust in a provider, we don’t control or fully understand their internal processes, and that black box effect should leave us with some concerns. Thinking back to our messenger service analogy, here are some “threats” that might arise:

  1. Someone in the course of the messaging service’s handling of our document reads it and learns confidential information. They might do that for their own benefit (insider trading, perhaps) or for someone else’s (corporate espionage).
  2. Someone in the course of the messaging service’s handling of our document makes a nefarious change to the document that alters its intent or effect. For example, if the document contains instructions to send money somewhere, they change those instructions to include their own bank account as the destination.
  3. Someone who knows we use this messaging service sends a message through them to your colleague purporting to be from you (or another colleague) and gets them to take action. In other words, a classic (but perhaps more sophisticated) con.

These same threats are analogous to what we face in a “public cloud” computer security context: if we are communicating, working, or transmitting data via intermediaries we don’t control, do we know that no one is reading what we sent? Do we know that no one is altering it? Do we know that no one is sending their own messages claiming to be us? Or accessing our internal services under our name? Because we’re relying on intermediaries we don’t control, we can’t have 100% confidence in the answer to these questions. So we assume they are all possible (or already happening) and take steps to protect against these kinds of attacks.

The first two concerns have fairly straightforward mitigations. In our messenger service analogy, if the document contains something confidential or sensitive, we could either seal our document in a way that would reveal tampering or write the document in some sort of code. We’ve known how to do both of these things with physical documents since ancient times, and either or both of these tactics make it much harder to read or alter the document while it’s in transit without the recipient noticing. These ideas translate readily to computer security, too: encryption is the equivalent of writing in code and cryptographic “signing” is the equivalent of placing a seal on a document to reveal tampering.
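To make the “seal” idea concrete, here is a minimal sketch using Python’s standard-library hmac module. The shared key and document contents are hypothetical, and a symmetric HMAC is just one way to create a tamper-evident seal; real zero trust deployments typically use asymmetric signatures backed by certificates, with encryption layered on top so intermediaries can’t read the contents either:

```python
import hmac
import hashlib

# Hypothetical shared secret known only to sender and recipient.
SECRET_KEY = b"shared-secret-between-offices"

def seal(document: bytes) -> str:
    """Attach a 'seal': an HMAC tag computed over the document."""
    return hmac.new(SECRET_KEY, document, hashlib.sha256).hexdigest()

def verify(document: bytes, tag: str) -> bool:
    """Recompute the seal; any change to the document breaks it."""
    return hmac.compare_digest(seal(document), tag)

doc = b"Wire $10,000 to account 12345"
tag = seal(doc)

assert verify(doc, tag)                                   # intact document: seal checks out
assert not verify(b"Wire $10,000 to account 99999", tag)  # altered in transit: seal breaks
```

The messenger service can still carry the sealed document, but if anyone along the way alters the payment instructions, the recipient’s verification fails and the tampering is exposed.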

The final scenario — someone pretending to be us — is one of the principal concerns of zero trust networking. In the document analogy, our business would likely develop practices that would provide some degree of protection against this: alerting the recipient to expect a delivery on such-and-such day from such-and-such person, calling the sender to verify details of a document and its instructions, signing things in and out to create a custody trail, requiring a photo ID on each end to confirm the sender and receiver, etc. Some of these might be measures employed by the messaging service itself to improve security and raise our confidence level.

In a Zero Trust computer network, we rely on strong methods of “mutual authentication.” When someone logs into a system protected by zero trust, we don’t take their word for it that they are who they say they are: we make them prove it using Multi-Factor Authentication, and we check for suspicious indicators, such as signing in from a location on the other side of the country or world from where they normally work. We also don’t make them take our word that the system they are connecting to is the one they think it is: the system furnishes “proof” (usually using something called a certificate) to show the user it is the right system. This typically happens “transparently,” meaning the user is only notified if something is wrong. We follow a similar process when systems talk to one another, making each prove its identity to the other (again, using certificates). This gives us confidence that no one is sitting in between those two systems reading and/or altering the information flowing between them.
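As an illustration, here is a sketch of how both sides of a mutual-TLS connection might be configured with Python’s ssl module. The certificate file paths are hypothetical (and commented out); the point is that the server is configured to reject clients that can’t present a valid certificate, just as the client, by default, rejects servers that can’t:

```python
import ssl

# Server side: require connecting clients to present a certificate (mutual TLS).
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.verify_mode = ssl.CERT_REQUIRED  # reject unauthenticated clients
# server_ctx.load_cert_chain("server.pem", "server.key")  # hypothetical paths
# server_ctx.load_verify_locations("internal-ca.pem")     # CA that issues client certs

# Client side: verify the server's certificate and present our own.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
# client_ctx.load_cert_chain("client.pem", "client.key")  # hypothetical paths

# Both directions now demand proof of identity before any data flows:
print(server_ctx.verify_mode == ssl.CERT_REQUIRED)  # server authenticates the client
print(client_ctx.verify_mode == ssl.CERT_REQUIRED)  # client authenticates the server
print(client_ctx.check_hostname)                    # and checks the name on the cert
```

In practice the certificates would be issued by an internal certificate authority both sides trust, so a machine that can’t prove its identity simply never completes the handshake.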

Zero Trust Limitations

A good summary of zero trust could be this: encrypt and sign everything, strongly authenticate everyone. This results in a network that is extremely resilient to attempts by external bad actors to use our “messaging services” against us. This Zero Trust architecture isn’t foolproof (given enough time and determination, an adversary will eventually find a way in), but it’s very robust against most external attacks.

This comes with one potential drawback: it might make internal threats more potent. Imagine that your colleague in the other facility is the bad actor. They are trying to steal from your company. Since they know the process very well and have the required permissions to initiate a transaction, they can use your own internal processes against you. Given all the safeguards you’ve put in place, when you receive their message you have high confidence it is genuine (as, indeed, it is; it’s just also nefarious). So as long as the request isn’t obviously malicious in nature, that confidence may make you more likely to sign off on it. In other words, the Zero Trust process may actually lend credibility to a malicious insider’s actions because of all the boxes they check along the way.

From a cyber security perspective, this requires what we call “defense in depth.” We don’t just rely on our Zero Trust protections: we also implement the Principle of Least Privilege (each person gets only the access they need), separation of duties, multi-party sign-off for certain actions, and monitoring for unusual or anomalous activity (even if it’s coming from insiders). Companies do similar things in the physical world, too, because every system has its weaknesses, so layering them on top of one another results in stronger overall protection.
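Here is a toy sketch of those layered controls (all role and action names are hypothetical): even a request that passes every authentication check still needs a role that permits the action, plus a distinct second party to approve it.

```python
# Least privilege: each role grants only the actions it actually needs.
ROLE_PERMISSIONS = {
    "accounts_payable": {"initiate_payment"},
    "controller": {"approve_payment"},
}

def can(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

def execute_payment(initiator_role: str, approver_roles: list) -> bool:
    """Multi-party sign-off: a separate role must approve the transfer."""
    if not can(initiator_role, "initiate_payment"):
        return False
    approvals = [r for r in approver_roles if can(r, "approve_payment")]
    return len(approvals) >= 1

# A lone (even fully authenticated) insider can't complete the transfer alone:
assert not execute_payment("accounts_payable", [])
assert not execute_payment("accounts_payable", ["accounts_payable"])
assert execute_payment("accounts_payable", ["controller"])
```

The point of the layering is visible in the last three lines: authentication proves *who* is asking, but separation of duties and sign-off still constrain *what* any one identity can do on its own.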

Why Have Zero Trust Everywhere?

What we’ve come to realize in cyber security is that the “reasonable trust” perimeter model we discussed in scenario two above doesn’t really exist. Even when we think things are fully contained inside our own private networks, there are frequently openings in the armor that we aren’t aware of (either things that were accidentally opened up or things that have weaknesses we don’t know about). Once those openings are exploited, the perimeter based model makes it very, very easy for bad actors to run amok.

Increasingly, though, most things aren’t contained in our own private networks. For years companies have been moving more and more of their IT services “to the cloud” instead of managing them in internal data centers. That begins to introduce the “black box” effect we discussed. Add to that remote workers, a sector of the workforce that has exploded during the pandemic, and you’ve got more “black boxes” to worry about. Increasingly, everything that your organization’s staff does is passing through multiple networks you don’t control and running on infrastructure that isn’t yours.

The point isn’t to cast aspersions on those providers. Hopefully you’ve chosen a cloud provider you reasonably trust. Hopefully your internet providers aren’t evil. But even in the best-case scenario, you’re relying on them. What happens if they get breached? What happens if they don’t realize they’ve been breached? Zero Trust is a way of not only mitigating any concerns you might have about your providers themselves but also protecting you against malicious activity directed at them that you might be caught in the middle of. Combined with defense in depth, it creates a very robust security posture.

Summary

Zero Trust networking architectures are motivated by the “black box” effect of relying on providers you don’t control (e.g., cloud services or the public internet). Because we lack visibility and control over these services, we can’t know who might be reading or interfering with our activities. To protect against this we implement a combination of encryption (transmitting our activity in code), cryptographic signing (letting us detect tampering), and robust mechanisms for mutually verifying the identities of both sides in a virtual interaction. This gives us very high confidence that activity on our network is “genuine,” providing robust security against all but the most determined external attackers. That confidence could be exploited by insider threats, so we still need other security measures — most importantly the Principle of Least Privilege and monitoring for unusual behavior — to round out our security posture.



Lead Cyber Security Engineer at Raft, a new breed of government tech consultancy. Member of the CNCF Security TAG. Freelance writer and occasional blogger.