Escaping Dark Age Cybersecurity Thinking
Much of what’s wrong today in the realm of security, especially “cybersecurity,” comes from thinking that is rooted in the Dark Ages.
“Motte and Bailey” Thinking
There are several problems with the way we think about security and handle it today, but it seems to me that they can all be traced, at least in part, back to one fundamental mistake: we use the wrong model when thinking about security, especially computer and network security, one that I’ll call the “Motte and Bailey” model.
Motte and Bailey is the name of a castle design that goes back to the days of William the Conqueror, a millennium ago. The motte is an earthwork mound, usually a natural hill that has been artificially enhanced, leaving a surrounding ditch, and topped by a keep. The bailey is a courtyard surrounded by a wall or palisade, enclosed at the motte’s base. The idea is that the motte sets the castle well apart from the surrounding countryside and that the palisade creates a safe zone within it. Over the years, the ditch around the base of the motte evolved into the water-filled moat we associate with castles today.
The “Motte and Bailey” model of security suggests to us that what we are defending is set off from the rest of the world and that along its boundary we have built a protective wall, within which there is a zone of safety for us and ours. It’s fairly easy to see how this model applies to security mechanisms. My account on a multi-user system is set apart from everyone else’s and is protected with a password. My Local Area Network (LAN) is separated from the internet at my router and contained within my firewall and so on.
The Motte and Bailey castle and the designs that grew from it, along with the Roman walled city, became the model upon which the cities of Europe were built. As the cities grew, the problem inherent in the model became obvious: once a city was big and complex enough, once it had enough commerce with the outside world, the notion that it was safer within the palisade became less and less true.
To continue my historical metaphor, let us fast-forward six or seven centuries to visit the Paris of Gabriel Nicolas de la Reynie, the man often credited with creating the modern police force, although what he did was a good deal more than that.
By de la Reynie’s day it had become clear that the modern city was often less safe than the countryside around it. It was not enough to be set apart from the rest of the countryside on the Île de la Cité, or to have a wall. Chaos within the city, and the dangers it brought, had to be addressed.
First as Lieutenant, and then as Lieutenant-General of Police, he oversaw what would today be the police and fire departments; the courts; the departments of public works, sanitation and public health; the zoning commission; the bureau of weights and measures; the coroner’s office and more. His duties included responsibility for the hospitals and prisons; regulating publishers, printers and booksellers, as well as the food supply and prices; inspecting markets, fairs, hotels, boarding houses, gambling houses and brothels; and overseeing the elections of masters and wardens of the six merchant guilds. He was responsible for the construction of a bridge over the Seine, requiring the alignment of houses in a regular plan, and for an extensive system of street lights, leading to Paris being called “the City of Light”.
Cities, once walled and built upon hills or islands for safety, became systems far too complex and too well connected to the outside world to be protected by a simple barrier. Systems thinking and an infrastructure designed to allow both the detection and management of problems became necessary. Today, like de la Reynie’s Paris, the computer has outgrown the Motte and Bailey model. There is no clear boundary between the safe inner bailey and the hostile outside world.
Both cities and computer systems are highly complex internally and have large and growing numbers of connections to the outside world. We even use the same language. The word “port”, meaning “gate”, once referred to the gates in a city wall. By analogy, it extended to the harbor cities that were the gateway between a country and the world. On computers, ports are the connections to external devices and networks. In each case, they became crucial to accomplishing the work for which the castle, city, nation or computer exists.
The complexity within the walls and the number of connections through them both break down the isolation of the motte and palisade. The city of Paris grows far beyond the motte, and then beyond the shores of the island, until its neighborhoods and suburbs completely obliterate any sense of a boundary upon which the wall could be built, and what wall there is is breached by more and more gates and bridges.
Similarly, the boundaries of the “local network” are blurred, both as “cloud computing” and “eCommerce” draw our valuables out onto the internet, and as mobile and “Internet of Things” (IoT) devices draw the internet into our homes and businesses. On the computer, the BIOS becomes as complex as an operating system, we run more and more background tasks, and more of those background tasks are connected to the internet and the cloud. The infrastructure within the boundary becomes larger and more complex. All of these (smartphones, flash drives, IoT devices and WiFi access points) open more and more holes through the wall, creating new vectors into our LANs and our computers.
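To make the point concrete: every networked service a machine runs is a listening port, another gate through the wall. A minimal sketch (the host and port list are illustrative, and any real audit would use a proper scanner) of checking which TCP ports accept connections:

```python
import socket

def open_tcp_ports(host, ports):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.25)
            # connect_ex returns 0 when the connection succeeds,
            # i.e. when something is listening on that port.
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

# Example (results depend on what the machine is actually running):
# open_tcp_ports("127.0.0.1", [22, 80, 443, 8080])
```

Run against your own router or workstation, a check like this often turns up services you had forgotten were exposed.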
And so, like the Paris of three centuries ago, the domain of cybersecurity has become far too complex for the Motte and Bailey model. Simple barriers won’t keep the bad things out. Isolation isn’t enough. This has been recognized for some time in large enterprises by security-aware IT departments. In this environment, firewalls are joined by other technologies and strategies, many of which parallel de la Reynie’s.
The modern Chief Information Security Officer (CISO) has many tools and strategies: Intrusion Detection Systems, Intrusion Prevention Systems, Defense in Depth, Active Defense, Red Teams, Penetration Testing, Bug Bounties and the like. All of these make the CISO role similar to that of Lieutenant-General of Police, but therein lies a problem. The Sun King solved the problem of security in Paris by putting de la Reynie in charge of everything, controlling the many constabularies, the courts, the censors, the markets, the guilds, sanitation and city design.
That worked for a city of a few hundred thousand, but not so well for larger and smaller communities. A CISO with a suitable, well-equipped staff is a valuable resource for a major enterprise, but as the risk grows, or as a solution for thousands or millions of businesses and hundreds of millions of households, that strategy has its limits. It is, in many ways, just a sophisticated elaboration of the notion of defending a well-delimited territory. It is a real improvement, a step up, but just as no one runs a city like 17th- and 18th-century Paris any more, it is of limited applicability.
The Motte and Bailey model suggests that there is such a state as “secure”, that if only our firewall were impenetrable enough, we could have complete safety. With individual malware systems being capable of compromising a million or more computers in order to use them for their own purposes as parts of a “botnet”, it is pretty clear that we will never see “safe” again, that the “war” against botnets of “zombie computers” will never be won, any more than we will ever find a multicellular animal uninfected by bacteria and viruses.
What we need in cybersecurity is not a firewall, not a motte and palisade to keep our bailey inviolate, but an immune system. We need to realize that no system will ever be fully secure, but that, as threats arise, it can remain healthy and viable, shaking off any ill effects before they become life-threatening. It is not a binary question of whether the system is secure or not, but a question of degree: how vulnerable, how healthy, how well protected the system is.
The Motte and Bailey model colors our language. We think in terms of security “breaches” as the wall is penetrated. We talk of threats and attacks, drawing up sides between hostile outsiders and trusted insiders. A more functional risk analysis would examine all of our information flow, both external and internal, looking not only at how outsiders break in, but who has access to information, how sensitive information is handled, whether public and private information and access are intertwined, and so forth.
A number of problems arise from thinking of security in terms of boundaries and barriers, especially as the area within the boundaries becomes larger and more complex.
Security becomes separate
It’s hard to call a book like Ross Anderson’s Security Engineering a symptom of what we are doing wrong, given how valuable a resource it has been for understanding how to build reliable systems and the ways that security is breached, and even harder to cite Bruce Schneier’s foreword to it as symptomatic. Yet Bruce wrote in that foreword that:
Programming a computer is straightforward: keep hammering away at the problem until the computer does what it’s supposed to do. Large application programs and operating systems are a lot more complicated, but the methodology is basically the same. Writing a reliable computer program is much harder, because the program needs to work even in the face of random errors and mistakes: Murphy’s computer, if you will. Significant research has gone into reliable software design, and there are many mission-critical software applications that are designed to withstand Murphy’s Law.
Writing a secure computer program is another matter entirely. Security involves making sure things work, not in the presence of random faults, but in the face of an intelligent and malicious adversary trying to ensure that things fail in the worst possible way at the worst possible time . . . again and again. It truly is programming Satan’s computer.
Security engineering is different from any other kind of programming. It’s a point I made over and over again: in my own book, Secrets and Lies, in my monthly newsletter Crypto-Gram, and in my other writings. And it’s a point Ross makes in every chapter of this book. This is why, if you’re doing any security engineering . . . if you’re even thinking of doing any security engineering, you need to read this book. It’s the first and only, end-to-end modern security design and engineering book ever written.
This passage exhibits one danger of Motte and Bailey thinking in that it allows us to think of security as a separate thing, and not an essential part of every working system. “Security engineering is different from any other kind of programming” only if there is some safe place that insecure programs can survive. But today, as networked computers appear everywhere, there no longer is a place for insecure programs. In November, hackers demonstrated the ability to take over IoT lightbulbs, showing that even the code that runs in a lightbulb requires security engineering. A month before that, one of the largest cyber attacks ever, which affected Twitter, the Guardian, Netflix, Reddit, CNN, Wired and many others, was launched using a botnet of compromised security cameras and DVRs.
Compare what Bruce wrote above to the following quote from the introduction to a draft version of one of the whitepapers created for the Department of Energy’s “Transforming Cybersecurity” initiative a few years back:
We must break the escalation cycle that locks cyber intruders and their targets in a state where targets are perennially resigned to attacks and intruders are at liberty to exploit and disrupt networks without much risk of suffering consequences, and we must act offensively by directly addressing the “elephant in the living room:” malicious threats are the norm, not the exception. This places us at an advantage because it immediately provides a new context of looking at the pervasive problem.
If we perceive threats as the norm, as they are in a city or an ecology, then it becomes far less acceptable to ignore security, to try to tack it on later. Security engineering needs to go from being different from every other kind of programming to being an element of every kind of programming.
Security is reactive, not proactive
One of the biggest failings of the way that security is currently handled is that so much of it is reactive, and while it’s not the only cause, the Motte and Bailey mindset contributes to the problem in a number of ways. Taking a reactive stance puts the defender at a disadvantage. The attacker need only succeed once, while the defender must succeed every time.
Threats are not seen in advance
If success or failure is viewed in terms of whether the wall maintained its integrity then all of our attention is focused on building and maintaining the wall. It would be far better, on the other hand, to take a more holistic approach, to familiarize ourselves with all of the known problems both beyond and within the wall. What are we doing here “inside” that may be making an attack more likely to succeed or more harmful if it does?
Returning to our friend M. de la Reynie, one of the things that he did was to mandate that the houses of Paris be laid out according to a regular plan where the streets were all straight, allowing the citizenry and the constabulary to see further ahead and behind. He mandated street lights, to deprive criminals of shadows in which to hide. What can we do to the infrastructure of our local networks and our computers to keep any intruders from being able to do major harm? What have we been doing wrong that gives them an advantage?
Complete safety is assumed to be possible
If only the wall were tall enough, thick enough, impenetrable enough, the thinking goes, then we would be completely safe. Any single breach, any one attack, is seen as a defeat, an unacceptable loss. Every flaw, every weakness becomes existential and must be addressed, and in the end that is an impossible task. If, instead, we realize that there will always be problems, then the task becomes identifying the severity, probability and cost of remedying or mitigating each. We thus turn to risk analysis and trade-offs. Instead of focusing only on preventing all breaches, we can develop plans for minimizing them, as well as plans for dealing with the inevitable failures.
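The shift from "every flaw is existential" to risk analysis can be sketched very simply: rank issues by expected loss (probability of occurrence times cost if it occurs) and spend the defensive budget from the top down. The risk names and numbers below are purely illustrative, not data from any real assessment.

```python
# Rank hypothetical risks by expected annual loss rather than treating
# every weakness as an unacceptable, must-fix-now defeat.

def expected_loss(probability, cost):
    """Expected loss = probability of the event x cost if it occurs."""
    return probability * cost

def triage(risks):
    """Sort (name, probability, cost) tuples by descending expected loss."""
    return sorted(risks, key=lambda r: expected_loss(r[1], r[2]), reverse=True)

risks = [
    ("phishing of staff credentials", 0.60,  50_000),
    ("zero-day in public web server", 0.05, 500_000),
    ("lost unencrypted laptop",       0.20, 100_000),
]

for name, p, cost in triage(risks):
    print(f"{name}: expected loss ${expected_loss(p, cost):,.0f}")
```

Note how the ranking differs from a worst-case view: the dramatic zero-day has the largest single cost, but the mundane phishing risk carries the largest expected loss.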
We get caught up in arms races
We apply the wrong tools, and when they don’t work we build a bigger hammer. Since the problem is still there, the other side just looks for its own bigger hammer or a different angle. This is a futile effort, one based upon false assumptions and wrong thinking.
As I wrote in my recent article, The Ancient Art of Cybersecurity, “The security that can be installed is not the true security.” True cybersecurity is a skill, and while “there are tools and weapons that can be used in aid of the skill, … they are not, themselves, important.” If we take a systems view, rely on threat analysis and awareness, teach ourselves, our users, and customers the skills of threat avoidance, then we can escape the cycles of a cyber arms race.
Cybersecurity for the new millennium
So, what are the lessons to take away from this? First, that the comforting notion of an impenetrable barrier as the basis of security is no more sensible in the realm of cybersecurity than it is for the modern city or state, or for a living body. There are far too many “ports” in any given “wall”, and the “interior” is far too complex, with too many actors, for that to be a viable strategy. Further, simple models of untrusted enemies and innocent denizens don’t cover the reality. Threats from inside, both witting and, especially, unwitting, are the source of a large portion of the risk.
The design of the infrastructure “inside the wall” is important. Businesses should compartmentalize, separating the servers, networks, computers and accounts that are outward-facing and most vulnerable from the systems, services and accounts dedicated to internal use and containing key assets. It is inevitable that any enterprise will be breached; the only question is what assets will be exposed when that happens. Defense in depth, with layers of protection, is important.
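Compartmentalization amounts to a default-deny policy between zones: no traffic crosses a boundary unless a rule explicitly allows it. The zone names and rules below are a toy illustration, not any real product’s configuration.

```python
# A toy default-deny segmentation policy: traffic between zones is
# forbidden unless a flow is explicitly allowed. Zone names are invented.

ALLOWED_FLOWS = {
    ("internet", "dmz"),       # outward-facing servers live in the DMZ
    ("dmz", "internal-apps"),  # the DMZ may call specific internal services
    ("internal-apps", "db"),   # only internal apps may reach key assets
}

def is_allowed(src_zone, dst_zone):
    """Default-deny: a flow is permitted only if explicitly listed."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

# A compromised DMZ host cannot reach the database directly...
assert not is_allowed("dmz", "db")
# ...but the sanctioned path through the internal apps remains open.
assert is_allowed("internal-apps", "db")
```

The payoff is exactly the defense-in-depth point above: a breach of the outward-facing zone exposes only that zone’s assets, not everything behind the wall.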
This applies to our homes as well. As small, cheap, limited capability IoT devices proliferate, we need to segregate, using smart hubs or multiple routers to limit the access that a compromised webcam, DVR, thermostat or lightbulb has, and the damage that it can do.
Similarly, enterprises, small businesses and families at home need to realize that insiders are one of the biggest risks. Social engineering, “phishing” attacks, attractive apps carrying Trojan horses, and careless talk all put the integrity of our systems at risk. Sharing passwords, opening attachments, downloading unsolicited software and the like are threats that generally outweigh the “barbarians at the gates”.
I offered some thoughts on the nature of True Security a few weeks back in my article, The Ancient Art of Cybersecurity, and I expect to expand upon what I wrote there and here in the future. Until then, remember that walls, firewalls and third-party add-on security software are not what protect us.