Is it even possible to be “completely secure”?

“The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards — and even then I have my doubts.” — Gene Spafford

If you’ve ever wondered “how much security is enough?” you are not alone.

More and more personal information security measures seem to be mandatory — passwords, anti-virus, one-time code generators, SMS message codes, home firewalls, security webcams, smart doorbells, anti-ransomware, disk encryption…

Why, with all of this technology, are we still not secure?

The reason is both simple and complex.

The simple answer is that nothing is perfect. Can anything ever be perfectly round? Can a road ever be built completely straight? Do simple engines and non-computational machines ever fail? Do the strongest metals rust? Is a measurement ever exact?

Computing devices are extremely good at certain tasks. They are generally accurate, fast, and far better than our brains at storing data.

The more complex answer to this question involves fallibility, probability, and opinion.

Computers run on coded instructions. This code is created by humans; even code written by computers was ultimately designed by humans. Code is generally where security vulnerabilities come into existence, and as such it is what security aims to protect. Code can never be perfect, so security can never be perfect.

Perfect code with perfect design and architecture is impossible, in part because two humans may not agree on what “perfect” means, but also because people make mistakes. Code can be free of known defects, but since it is impossible to know whether undetected defects exist, that does not make it perfect.

According to Steve McConnell’s book Code Complete, the industry average is “about 15–50 errors per 1000 lines of delivered code.” This measure is known as defects per KLOC (thousand lines of code).

Software defects fall into several categories, any of which can become a security vulnerability. As a probability measure, defects per KLOC is a helpful statistic for software creators who want to improve their code over time. But for practical purposes, it also helps explain why code can never be totally secure.

Production defects are not inevitable, but achieving a low defects-per-KLOC rate is extremely expensive. Consider the NASA Space Shuttle program: three versions of its software, each 420,000 lines long, had just one error apiece, and 11 versions had a total of 17 errors. Commercial programs of equivalent complexity would statistically have 5,000 errors. One error in 420,000 lines of code is impressive as heck, but still not entirely perfect.
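To put those figures in perspective, here is a quick back-of-the-envelope calculation in C using the numbers quoted above (a minimal sketch; note that the quoted 5,000-error figure for commercial software sits just below the low end of the industry-average range):

```c
#include <stdio.h>

int main(void) {
    const double loc = 420000.0;         /* size of one Shuttle version */
    const double industry_low  = 15.0;   /* defects per KLOC, low end   */
    const double industry_high = 50.0;   /* defects per KLOC, high end  */

    /* Expected defects at the industry-average rate from Code Complete */
    double expected_low  = (loc / 1000.0) * industry_low;
    double expected_high = (loc / 1000.0) * industry_high;

    /* The Shuttle software's observed rate: 1 defect in 420,000 lines */
    double shuttle_rate = 1.0 / (loc / 1000.0);

    printf("Industry average: %.0f to %.0f expected defects\n",
           expected_low, expected_high);      /* 6300 to 21000 */
    printf("Shuttle software: ~%.4f defects per KLOC\n", shuttle_rate);
    return 0;
}
```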

[Image: the very first recorded computer bug]

Beyond code, software and hardware each have design and architecture created by humans. A single line of code that isn’t prepared to accept very long text can result in a buffer overflow condition, creating the potential for a vulnerability.
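To make that concrete, here is a minimal C sketch of the classic unsafe pattern (a hypothetical example; the function names and buffer size are illustrative, not drawn from any real product):

```c
#include <stdio.h>
#include <string.h>

/* DELIBERATELY UNSAFE: `name` holds 16 bytes, but strcpy() copies
 * until it finds a terminating NUL, so any input longer than 15
 * characters writes past the end of the buffer into adjacent memory. */
void greet(const char *input) {
    char name[16];
    strcpy(name, input);               /* no length check: overflow risk */
    printf("Hello, %s\n", name);
}

/* A safer version bounds the copy to the destination's size. */
void greet_safely(const char *input) {
    char name[16];
    strncpy(name, input, sizeof(name) - 1);
    name[sizeof(name) - 1] = '\0';     /* guarantee NUL termination */
    printf("Hello, %s\n", name);
}

int main(void) {
    greet("Ada");                      /* short input: fits safely */
    greet_safely("a string far longer than sixteen bytes"); /* truncated */
    return 0;
}
```

One unchecked copy like this is all it takes for attacker-controlled input to overwrite nearby memory.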

Let’s say, for the sake of argument, that the operating system (OS) code running on a computer is as defect-free as possible. The components that enable the OS, the software that runs on top of it, and the network it is attached to can all be imperfect and undermine the OS’s security. The same is true if you pick any other component and consider the pieces that enable and connect to it.

Computing hardware can have a mis-soldered pin on the circuit board or a poorly chosen resistor, memory can have timing errors, and drivers for hardware can have defects. Even dust can be an issue.

The software that is run on top of the OS creates interactions never before seen or tested. Logic, connections, elevated access, OS hooks, driver interactions…

When you combine all the hardware, software, and network components required for a computing device to function, you end up with a sort of cadavre exquis (an “exquisite corpse”): a computing environment full of code that no one could have foreseen being used together, and which may not work well together. It is impossible to test every combination of hardware, software, and drivers, so each is reliant on the others to be as secure as possible.

Connecting a computer to a network brings an entirely new set of outside threats, but a disconnected computer still has flaws. A USB port can be used to execute code on, or exfiltrate data from, a target computer that isn’t on a network. Local user login screens can be bypassed. Smart assistants may be too helpful.

Last but certainly not least: even if it were possible to write flawless code and run it in a totally trusted, tested, perfectly designed and architected computing environment, a secure computer can still be operated in an insecure manner.

Humans are the biggest threat to security. Users may disable security features or circumvent firewall rules that prevent them from visiting harmful sites. They may decide not to follow policies or processes designed to protect data. People connect vulnerable personal devices and phones to protected networks. Seemingly innocent questions asked over the phone can lead to the disclosure of sensitive information. Some will even plug in a random USB device they find on the ground just to see what’s on it. Curiosity, laziness, and performance pressure can lead a user to do whatever they want, even if it jeopardizes security.

So, why even try to secure our assets?

“You don’t have to run faster than the bear to get away. You just have to run faster than the guy next to you.” — Jim Butcher

The global supply chain consists of corporations, governments, and other organizations, as well as the third parties, partners, and services they use: waste collection, office cleaning, package delivery, phone services, and so on. We, too, are part of the supply chain, whether as consumers, employees, or even employers. As we connect and interact with other links in the chain, we place a great deal of trust in those links.

One of the classic examples of third-party risk is the 2013 Target breach:

“Attackers backed their way into Target’s corporate network by compromising a third-party vendor. The number of vendors targeted is unknown. However, it only took one. That happened to be Fazio Mechanical, a refrigeration contractor.” — Michael Kassner, https://www.zdnet.com/article/anatomy-of-the-target-data-breach-missed-opportunities-and-lessons-learned/

The lesson from this breach is the answer to why we should keep trying to maintain security: we must protect ourselves in order to protect others. If we are breached, our breach may lead to the breach of others. Our attention to security is a social responsibility, a safety concern, and a liability issue.

Security is not an absolute, and it’s not binary; you are not simply “secure” or “insecure”

You may, however, be more secure than someone else (like Target’s refrigeration contractor), and at the same time, one of your devices may be more secure than another of your devices.

Security is the sum of all the measures taken to be “as secure as possible” and/or “as secure as is necessary.”

Being “as secure as possible” means you follow recommendations like having good passwords, patching your systems, and keeping anti-malware software running and up to date.

Being “as secure as is necessary” means not spending more on security than the data is worth. If your offline e-reader only contains books and no personal data, you probably don’t want to spend thousands of dollars on fancy software and hardware to protect it. You accept the risk of someone accessing your book data or having to reset your device because it’s not irreplaceable data and it won’t cause harm to anyone if it is compromised.

However, if your laptop contains sensitive client information, you will want to encrypt your hard drive, set a BIOS password, spend some money on premium anti-malware and anti-ransomware protection, use an encrypted password manager, and take other precautions, because the data is more valuable. The risk is much higher than for other devices, and you have to protect it appropriately. Failing to protect that sensitive client data may even make you negligent.

Ultimately, striking a balance between security and usability is key; it keeps users from circumventing protections and ensures that the appropriate measures are in place for a given computing device. There can be a fine line between over-securing and under-securing data, but in this dangerous, connected world, I embrace wabi-sabi and err on the side of caution. Even if it’s never “completely secure,” I do the best I can.