Post Mortem 2020: Looking Back on CLTC’s Scenarios from 2015

By Steven Weber

In 2015, the Center for Long-Term Cybersecurity developed a set of scenarios depicting various “cybersecurity futures” for the year 2020. Now, as the year 2020 has arrived, Professor Steven Weber, Faculty Director for CLTC, reflects on what we foresaw — and what we didn’t.

For the 25 years that I’ve been practicing scenario thinking, I’ve been telling clients, colleagues, and students that “scenarios are not predictions because you can’t predict the future.”

It’s not a message that people generally enjoy hearing. Decision-makers in firms and governments want nothing more than a clear snapshot of the future world in which their decisions will play out. Researchers want falsifiable hypotheses to test, and there’s nothing better for science than a prediction on a variable that cannot reveal its value until after that prediction is made.

But in complex issue-areas like cybersecurity (however you might define the boundaries of that term, which is part of the challenge), most of the predictions you can confidently make are not very interesting from either a decision-making or theory-building perspective. One of Phil Tetlock’s “super forecasters” might be asked if there will be more or fewer data breaches reported in 2020, or if Russia will unleash a cyberattack against a U.S. power grid component in 2020… and she might even get the answers right when the evidence arrives. But so what? Decision-makers aren’t going to act based on these kinds of point predictions. And while a very large number of such predictions that can be labeled as ‘right’ and ‘wrong’ might over time be compiled as a training data set for a prediction algorithm, in practice, such correlations are messy and unreliable.

The point is to learn about the evolution of the cybersecurity world — and about ourselves — by assessing what kinds of causes and implications we saw clearly and, more importantly, what we failed to foresee and why.

That’s because complex conjunctural causation gets in the way. In plain English, when the outcomes we care about are the result of causal pathways that are extremely complex, the dream of predictive accuracy starts to seem far less attainable, and far less seductive. This complexity might result from equifinality — when different routes lead to a single outcome, as with a successful attack on a power grid — or multi-finality, when a single causal pathway can lead to multiple outcomes because of a random intervention (the attack would have been successful, but the infected server was shut down by a coffee spill).

Scenario thinking is a methodology that acknowledges these (inconvenient) truths and tries to achieve something more practicable — and useful. The purpose of scenario thinking is to articulate multiple possible futures that are different both from the present and from a linear extrapolation of the present (i.e., when the future is like the present, just “more so”). When these alternative future worlds are portrayed with causal narratives (“here’s how it happened”) and early indicators (“these are the kinds of things we will start to see if the world is moving in this direction”), along with implications, then decision-makers and theory-builders can engage in a kind of disciplined differential diagnosis over time and get smarter and more confident in their understanding of which driving forces of change matter and which do not.

We can also start to see overlaps or permutations among seemingly disconnected or unrelated drivers of change. The world is never shaped only by technology, human behavior, regulation, or any other single category of cause. It is shaped by all of them at once, and most importantly, by the often-surprising ways in which they overlap with each other. At its core, the scenario methodology is a means for forcing disciplined imagination and modeling of what happens at those overlaps.

I consider the lessons that come from this work as foresight, rather than prediction, and foresight is more useful to just about everyone who thinks about cybersecurity. So when we launched the Center for Long-Term Cybersecurity in 2015, one of the first things we did was build a set of scenarios that depicted multiple cybersecurity landscapes for the year 2020. The point of this exercise was foresight, not prediction. The goal was to help see more clearly how the future could be meaningfully different from the past, and then — most importantly — to use those insights to channel our research initiatives toward the problems, challenges, and opportunities that we saw emerging just over the horizon. We did that faithfully, and much of our research agenda, as well as many of the collaborations and projects we funded over the last five years, emerged in part from those initial scenarios.

But now it’s 2020. And since the scenarios we wrote in 2015 were aimed at 2020, it makes sense to look back and assess how our modeling exercise turned out. The point of this assessment is not to score the accuracy of our predictions, since that was never the purpose of the work in the first place. The point is to learn about the evolution of the cybersecurity world — and about ourselves — by assessing what kinds of causes and implications we saw clearly and, more importantly, what we failed to foresee and why. What did we overemphasize, and what did we underemphasize? Do we have systematic blind spots, and can they be corrected? What are the most important hypotheses that we can take forward as we construct research with an eye toward 2025?


Our 2015 scenarios depicted five cybersecurity futures in considerable detail, and you can read the full report here (PDF). Here are brief summaries of the high-level narrative for each scenario:

1. The New Normal: This scenario depicted a world in which, after years of mounting data breaches, internet users flipped from a baseline belief that the internet was a basically safe “neighborhood” (unless you did something stupid) to a baseline belief that it was a very dangerous neighborhood in which you constantly had to look over your shoulder and worry about where you were. Cyberspace is the new Wild West, and anyone who ventures online with the expectation of protection and justice ultimately has to provide it for themselves.

2. Omega: This scenario depicted the security implications of a world in which data science developed profoundly powerful models capable of predicting — and manipulating — the specific actions of single individuals with a very high degree of granular accuracy. The ability of algorithms to predict when and where a specific person will undertake particular actions is considered by some to be a signal of the last — or “omega” — algorithm, the final step in humanity’s handover of power to ubiquitous technologies.

3. Bubble 2.0: This scenario depicted the implications of a world in which the valuations of major internet platform businesses collapsed as the advertising business model fell apart, leading to a panicked market scrum for the data assets those firms had collected. It’s a “war for data” under some of the worst possible circumstances, with financial stress, ambiguous property rights, opaque markets, and data trolls everywhere.

4. Intentional IoT: This scenario depicted a world in which “internet of things” (IoT) systems deployed by public authorities had a powerful positive impact on public goods like education, environment, health, and personal well-being. At the same time, critics cry foul as “nanny technologies” take hold, and international tensions rise as countries grow wary of integrating standards and technologies. Hackers find new opportunities to manipulate and repurpose the vast network of devices, often in subtle and undetectable ways.

5. Sensorium: This scenario depicted a world in which bio-sensors and other data sources enabled the measurement and tracking of human emotional states — how we feel, not just what we do — at an almost unimaginable level of precision. These technologies allow people’s underlying mental, emotional, and physical states to be tracked — and manipulated. Whether for blackmail, revenge, or other motives, cybercriminals and hostile governments find new ways to exploit data about emotion.

So how do these scenarios look now, from the perspective of the actual 2020 landscape? We’ve organized these post-hoc observations into driving forces whose importance we overstated and those we understated, as well as phenomena we saw fairly clearly and some we clearly missed (and still don’t have a reasonable grasp on).


What We Didn’t Foresee Clearly

There are a number of factors whose importance we overestimated. At the highest level, we postulated an overall rate of change that was faster than what we actually observed. (This is uncommon in scenario thinking, where it’s more often the case that the world changes faster than expected.) This is an important observation about the inertia of human behavior and installed bases of technology in the world of cybersecurity: change often feels very fast on the surface, but the deeper “big picture” trends haven’t created equally big discontinuities over five years. Many of the basic issues that cybersecurity professionals dealt with in 2015 (such as weak passwords, data breaches, ransomware, insider threats, phishing for credentials, and failures of basic digital hygiene) are still the key challenges in 2020; they’ve become more intense, rather than truly different. And while new issues have certainly arisen at the margins, with the exception of a few elements (which we’ll talk about below), the game has not been remade in fundamental ways.

We also overestimated the market value of data (which is still the most important asset cybersecurity is trying to protect). We did foresee the rising value of data in artificial intelligence (AI) and machine learning (ML) applications, but we didn’t quite understand how issues about bias and other flaws in data sets would become so prominent and begin to restrict how firms and government agencies would be ‘licensed’ by many societies to use these technologies. At the same time, we overestimated the degree to which public infrastructure would seize the opportunity to incorporate IoT and related digital technologies to improve public services. The potential remains, but experimental deployments have moved more slowly, the private sector has been more central in implementation, and public push-back on privacy-related issues has been more intense than expected (witness the Sidewalk Labs saga in Toronto). The upshot? The data economy — and the security issues that underpin it — are still in a stage closer to infancy than adolescence.

There are two significant driving forces whose importance we understated. First, we understated how strongly traditional geopolitics would shape the digital security world. This was partly intentional: at the time we wrote the scenarios, we bet that other cybersecurity researchers were focusing too heavily on geopolitics and that our models would perform better if we did the opposite. That turns out to have been a poor modeling choice.

As a result, we missed a major shift in the cybersecurity environment that followed — directly and indirectly — from digital interference in the 2016 election. We didn’t foresee the deepest vulnerabilities, which turned out to be not voting machines, but public discourse on social media platforms and its relationship to “truth.” This wasn’t about stealing data or money, but about leveraging cyber-insecurity in pursuit of traditional geopolitical goals, influence, and power in a competition among states. Our scenarios would have been more valuable if we had foreseen that cybersecurity would become entangled with debates about existential threats to democracy, because we might have prompted ourselves and others to try to anticipate the consequences of that relationship.

We also underestimated the presence and prevalence of state-based attackers and defenders by paying too much attention to non-state criminal networks. And so we underestimated the digital component of the “gray wars” between the United States and China, Russia, and Iran, and between other, less visible dyads. We did not foresee the speed with which digital technology supply chain issues and ML research and products could become the center of new Cold War geopolitics, between the U.S. and China of course, but involving many other states as well. The race for digital primacy has not produced a “Sputnik moment” for the U.S. quite yet, but it’s not far off — and it probably has produced the equivalent for China and possibly other nations.

We also understated the stubborn robustness of existing institutions in the digital security world. In the private and public sectors, the big, powerful institutional actors of 2015 — Apple and the NSA, Alibaba and the Cyberspace Administration of China — are for the most part still the big, powerful institutional actors at the dawn of 2020. For all the talk about disruptive innovation, the incumbents have proven themselves stickier and more capable of absorbing innovation than expected. (Witness the decline in U.S. start-up formation and successful IPOs.) The incumbent firms at the cutting edge of digital technology may have seen their license to operate become more restricted, but their power relative to competitors and upstarts has increased, not declined. Many large firms that were incumbents in the pre-digital era — and government agencies with a similar birthright — once lagged behind, but they have been able to catch up enough, through a kind of minimum viable adaptation, to stay prominently in the game.

It wasn’t that long ago that Silicon Valley firms scoffed at the role of federal regulators. Now these companies are among the largest lobbyists in Washington, DC….

One of the most important features of the cybersecurity landscape in 2020 is that the leading firms of the digital revolution not only have extended their market dominance, but have started to look and act much more like “normal” companies in their behaviors, strategies, regulatory relationships, and corporate cultures. It wasn’t that long ago that Silicon Valley firms scoffed at the role of federal regulators. Now these companies are among the largest lobbyists in Washington, DC, as well as in Brussels and other capitals. Meanwhile, government regulators in 2020 have been re-invigorated by the passage of the GDPR, the CCPA and AB 5 in California, and new taxes in Europe. Central banks are (at least for now) winning the battle between sovereign currencies and cryptocurrencies. And law enforcement and intelligence agencies have continued to fight essentially the same encryption battles with technology firms that have been going on for decades.

No one knows how these legal and regulatory fights will play out in the early 2020s, but it’s notable that the protagonists are mostly the familiar big players, along with the conventional support systems, such as law firms and public relations and communications agencies. This could just as easily be elite corporate-government politics of 2000 or even 1990. Disruptive innovators and social movements outside the boundaries of conventional organizations do not appear likely to have much influence on what happens here relative to the familiar dynamics of court and public opinion battles between the largest firms and the largest governments. Plus ça change, in this respect, is pretty accurate.


What We Did Foresee

Now let’s look at the important driving forces and consequences that we were able to foresee with some clarity and accuracy. We foresaw the rise and evolution of algorithmic authoritarianism — the use of digital systems for surveillance and control — in China and other nations. We argued that a set of techniques, tactics, and technologies would be packaged into an exportable tool kit that would have widespread appeal among governments, and that has indeed happened.

We foresaw many aspects of the push-back against the large platform firms, even though what is now called a “techlash” was barely nascent in 2015. This insight was tied to our view that the era of permission-less innovation — the idea that, in the digital environment, private firms could do pretty much whatever they wanted until someone could show that what they were doing was dangerous — was coming to a close. The digital world hasn’t reversed all the way to the other side of the continuum — as in the precautionary principle, where you can’t introduce innovation without proving beyond a reasonable doubt that it is safe — but the burden of proof is moving in that direction. This is part of the process of major firms becoming “normal” in the eyes of employees, customers, regulators, and (probably soon) financial markets. This transition isn’t necessarily bad or good, but it implies mounting burdens on the largest technology firms to pay greater attention to their broad social-political as well as economic impacts, all of which touch on security. The looming fight around section 230 of the CDA — which distances online intermediaries from legal liability for most of the content posted by their users — is likely just the tip of that iceberg.

We also foresaw the emergence of “emotion data” as a crucial new battleground for digital security. People now leave a trail of digital exhaust tracing not only what they do, but also what they feel — and the data about those feelings and emotional states can be much more valuable than a simple record of clicks and decisions. We envisioned that emotion data could unlock extraordinary value for improving the human experience, but that it would also create entirely new attack vectors that took advantage of emotional frailties, insecurities, and fears, for individuals and for social groups. In fact, manipulative and insidious uses, from selling to the seeding of hate, have so far vastly outpaced the positives. Our scenario about emotion data was on balance pessimistic, but it now seems insufficiently pessimistic in many respects.

We made the broad argument in 2015 that cybersecurity would gradually lose its “cyber” prefix, as security in the digital environment would become indistinguishable from just plain “security,” while cyber-physical boundaries dissolved, at least conceptually. This was overstated. But cybersecurity has indeed moved dramatically to become one of the top two or three priorities on governments’ and firms’ list of top risks, which was not the case in 2015 (though it probably should have been). Our scenarios also included prescient stories meant to illustrate just how easily people could be manipulated in digital environments, as if the evolution of human thought and behaviors hadn’t had a chance to catch up to the actual risk.

It was easy, then, to foresee the widespread creation and dissemination of fake news, fake video, and just about every other manifestation of digital fakery, but it was also too easy to focus on the higher-end, more technologically interesting and advanced ways of doing fakery. In practice, relatively simple and unsophisticated technologies (including what is sometimes now called “flooding”) were more than sufficient to manipulate people and groups of people into believing half-truths and falsehoods and spreading those beliefs to others. It’s partly because we were looking for the more sophisticated “bright shiny objects” of disinformation technologies that we failed to see how easy it would be to interfere with democratic processes and elections with lower-tech tools.

This points to something important about the intersection of humans and digital technologies that we clearly didn’t have a strong understanding of at the time, and which researchers and practitioners still don’t fully grasp: how to see and parse the “big-picture” story of cognitive and emotional connections that people maintain to the digital world, in all its varied glory. People do remarkably irrational and sometimes very stupid things when they use digital technologies. They compromise their security and that of those around them in ways they would never imagine doing with mundane physical security, like the doors and windows of their homes. They continue to push digital products to do things beyond the horizon of what we know can be safely achieved, the equivalent of accelerating your car beyond the red line on the tachometer because you just want to go faster. And they continue to describe their preferences, hopes, and fears around issues like security and privacy in ways that are remarkably inconsistent with their actual risky behaviors.

Security professionals often remind us that the human is the weakest link — but in practice, the human is also the most complicated link and the hardest to understand.

It’s a strong reminder that creating useful foresight models at the intersection of technology and human systems is one of the hardest challenges in modern social science. Roy Amara famously said that we tend to overestimate the impact of technology in the short term and underestimate its impact in the long term. With some caveats, our scenarios show the impact of “Amara’s Law.” But there are two other systemic “laws” that seem to be operating at least as powerfully, those coined by Gordon Moore and Frederick Brooks.

Moore’s Law captures the rapid decline in the cost of computing and the corresponding increase in processing power, a core feature shaping the security environment. But Moore’s Law doesn’t appear to be as consequential in the security world as Brooks’ Law, from The Mythical Man-Month, which captures the complexity of human communication about technical artifacts like software. As Frederick Brooks put it, “adding manpower to a late software project makes it later.” It’s not a precise formulation (of course, neither is Moore’s Law), but Brooks captured something very important about the limitations of human understanding and communication, and about the uniquely complex layers that digital technologies create around human experience. We are not yet near the point where technology can reliably overcome the complex foibles of human behavior. More often, technology magnifies them.
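Brooks’ point is easy to see with a bit of arithmetic. The following is a minimal illustrative sketch in Python (not from the original report, and only a caricature of real projects): if every pair of people on a project may need to coordinate, the number of pairwise communication channels grows roughly as n(n-1)/2, so adding people adds coordination overhead faster than it adds hands.

def communication_channels(team_size: int) -> int:
    # Number of pairwise communication channels in a team of `team_size`;
    # this grows quadratically while headcount grows only linearly.
    return team_size * (team_size - 1) // 2

if __name__ == "__main__":
    for n in (3, 5, 10, 20, 50):
        print(f"team of {n:>2}: {communication_channels(n):>4} pairwise channels")

A team of 5 has 10 channels; a team of 50 has 1,225. The numbers are a toy model, not a law of nature, but they illustrate why piling more people (or more technology) onto a struggling project tends to magnify, rather than overcome, the communication problem Brooks described.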

So now what? What does this analysis of our past scenarios tell us about what we should do to be better prepared when we wake up on January 1, 2025? I, for one, am going to be paying more attention to the ways in which information and digital technology (IDT) firms reconstruct the corroding foundations of their social license to operate, because whatever that new foundation is built of will be the source of new vulnerabilities that the cybersecurity world will have to manage. I’ll try harder not to be overly distracted by the bright shiny objects of emergent technologies and pay just as much attention to quotidian security issues as they morph and change shape. I’ll be thinking much harder about how traditional geopolitics shape the digital world, just as much as the digital world reshapes geopolitics. I’ll have my eyes out for the broad-based social movement(s) — not yet visible, but possibly nascent — that would remake cybersecurity by adding unconventional and unexpected players to the game. And I resolve to spend more effort on the human dimensions of technology, including the emotional component of that landscape. Security professionals often remind us that the human is the weakest link — but in practice, the human is also the most complicated link and the hardest to understand. No matter how many billions of transistors can be put onto a chip and how many billions of data points can be processed in a machine-learning training set, in 2025, human beings will still be the most complex variables in the cybersecurity equation.

It would be easy to conclude that the analytic problem here is just too hard to solve — that human beings are too complex and varied and that digital technologies change too quickly to do better. But it would also be intellectually lazy to give in to that impulse. Worse, it would be dangerous — for business models, political systems, and societies as a whole.

The digital world is largely offense-dominant. It’s still easier in most cases to attack than to defend, and easier to do bad things than to do good things. Attackers need to succeed only a minuscule fraction of the time; defenders have to succeed nearly every time. With that recognition comes a necessity for foresight as one modest means to level the playing field and give the defenders a fighting chance. The only way to get better at foresight, and at prompting action based on foresight in the face of uncertainty, is to practice. A crucial part of practice is post-hoc analysis of where models of the future do relatively well, where they fail, and why. It’s an imperfect process, but we hope this exercise can help us do better in the next round — and help others do the same.


Professor Steven Weber

Steven Weber is Professor in the School of Information and Department of Political Science at the University of California, Berkeley. He is a global leader in the analysis of issues at the intersection of technology markets, intellectual property, and international politics. His latest book, Bloc by Bloc: How to Organize a Global Enterprise for the New Regional Order, explains how economic geography is evolving around machine learning, and the consequences for multinational organizations in the post-financial-crisis world. Previous books include The Success of Open Source and, with Bruce W. Jentleson, The End of Arrogance: America in the Global Competition of Ideas.
