How to Predict the Future(s) and Create Resilient and Effective Societies and Organizations

An Interview with Futurist Jeremy Pesner

Carbon Radio
Mar 2 · 12 min read

Jeremy Pesner is a multidisciplinary technologist, policy analyst and current PhD student in technology and public policy. He focuses on Internet & ICT policy, innovation policy and technology forecasting. You can read more about him and reach out to him on his website. Carbon Radio caught up with Jeremy, nearly 3 years after his TEDx talk on futurism, to learn more about the field and how his insights have developed.

Like many broad, interdisciplinary fields, futurism has no single clear, concise definition that is universally accepted. To give a succinct explanation, futurism is the practice of contemplating, exploring, discussing and suggesting what will happen in the future. But that alone isn’t a complete answer. What’s likely more important than any particular futurism method or practice is the mindset that a futurist adopts; this is what distinguishes a futurist from an average person who is considering the future. Several futurists have described their take on this mindset, from Andrew Hines & Peter Bishop to Paul Saffo to Cecily Sommers, but generally speaking, it involves thinking in a nonlinear, broad and interdisciplinary fashion that considers not only the future but how a given event or pattern might fit into the larger picture of history. This may not sound difficult, but it takes a good deal of practice to truly adopt this mindset, especially in a field in which you lack expertise. The mindset allows for a conception of future events that is not path-dependent on our present state but can instead move in a number of different directions depending on high-level trends and events.

It is important to distinguish between “futurism” and “forecasting.” The former explores the range of possible futures that can emerge, usually at a fairly high level, while the latter attempts to anticipate specific developments and timelines in given domains based on trends and data (e.g. technology forecasting). Like everything in this field, there are no bright lines between them, and some less exacting practitioners will use the terms interchangeably, but the distinction does serve to clarify the different purposes that this field can serve. In this context, forecasting is usually focused on the change in precise details of a particular object or domain (e.g. how many transistors will fit on a microprocessor in 2025?). This is certainly useful for targeted applications in which the factors and limitations can be readily identified, but when we expand out of narrow focuses and into the more general questions of what our world may look like, the question of prediction becomes a lot less cut and dried. For example, the World Future Society did predict that terrorists might attack the World Trade Center, but the details of the attack itself still took the organization’s president by surprise. In this broader context, futurism is more useful for understanding the broad contours of tomorrow than the precise details of what, when, where and why.
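The microprocessor question above is the classic narrow-forecasting case. As a toy sketch of that kind of extrapolation, the following assumes a Moore's-law-style doubling period and an invented baseline; the function name and figures are illustrative, not authoritative:

```python
# Toy illustration of narrow quantitative forecasting: extrapolating
# transistor counts under an assumed doubling period (Moore's-law style).
# The baseline year and count below are invented for illustration.

def extrapolate_transistors(base_year, base_count, target_year, doubling_period=2):
    """Project a transistor count forward assuming exponential growth."""
    elapsed = target_year - base_year
    return base_count * 2 ** (elapsed / doubling_period)

# e.g. starting from a notional 10 billion transistors in 2017:
projected_2025 = extrapolate_transistors(2017, 10e9, 2025)
```

The point of the exercise is how narrow it is: the model only answers the one question it was built for, and only as long as the doubling assumption holds.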

There is no question that we need to consider the long-term future when making decisions in the present. The evidence is overwhelming that human activity over the past two centuries is bearing consequences today, and that ignoring the long-term future now will result in significant consequences down the road. Climate change is the most often cited example of this, but McKinsey analysts have concluded that a lack of long-term thinking hurts the profitability of businesses as well. Not only does our present directly affect the future state of our society and planet, but many people look to futurism to get some sense of comfort and security about the future, even if the particular prognostications don’t pan out. Clearly, futurism fills a deep need and desire within humankind to look ahead and imagine what is coming. Because the future is inherently unknowable, the field of futurism serves this purpose well by providing a great deal of flexibility in how to explore it. The large array of methodologies under its tent are connected in purpose — exploring and understanding the future — but diverge wildly in structure and execution. Whether through using hard quantitative data, gathering expert opinions or imagining a future through narrative, the field accommodates just about any kind of future-oriented practice. Rafael Popper’s Foresight Diamond demonstrates this nicely:

Rafael Popper’s Foresight Diamond

The term was coined by Nassim Nicholas Taleb in his eponymous 2007 book. Black swans are large-scale events that are highly improbable, very difficult to anticipate and change the world as we know it. These events often cause a major shift in worldviews: consider that until the discovery of Australia, people believed that all swans were white, and all it took was one sighting of a black swan to undo centuries of preconceptions. In that context, black swan events are not simply events that an average person wouldn’t anticipate — these are the occurrences that no one seemed to see coming, that little of the data pointed to and the causes of which are usually only clear in hindsight. Many major historical events can be characterized as black swan events, because people at the time likely didn’t anticipate them, and even when we study them we likely don’t possess all the pieces to perfectly understand how they came about. Taleb uses this phenomenon to assert that humankind has fundamentally overestimated what it can possibly know and understand. Therefore, rather than trying to better predict such events, he advises that organizations become more robust — in other words, more humble and open to errors in any kinds of predictions they make — so that they can recover from black swan events more quickly.

The turkey example has all the qualities of a good parable: it’s short, direct and demonstrates a clear lesson. The story was initially told to demonstrate the logical fallacy of inductive reasoning: a farmer feeds his turkey every day at the same time, and it becomes accustomed to the pattern, believing that because it was fed the previous day, it will be fed today as well. Then one day, instead of feeding the turkey, the farmer kills it and serves it for dinner. Obviously, it wasn’t in the turkey’s interest to expect that day to be like all the ones before it, but it had no way to anticipate such a change. This notion effectively translates to the black swan context: people are often so used to the way things are every day that they don’t — or can’t — anticipate how easily their situations could suddenly and dramatically shift with little to no warning. It’s also important to note that the concept of a black swan is relative: what was a black swan to the turkey was not necessarily one to the farmer. The farmer had his own set of circumstances and events that led up to him making that turkey dinner, and to him killing the turkey may have been a clear and logical consequence. There are different arguments as to how precisely to apply this to futurism, but it’s clear that no one will successfully plan for the future by imagining it as a linear and gradual extension of the present. A graph of the turkey’s well-being shows this very viscerally:

The Turkey Example
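The shape of that graph is easy to reproduce numerically: well-being climbs steadily with each fed day, then collapses without warning. This is a toy sketch with invented numbers, purely to mimic the parable:

```python
# A minimal numeric sketch of the turkey parable: well-being rises
# steadily for 1000 days of feeding, then collapses on day 1001.
# The values are invented solely to reproduce the shape of the graph.

def turkey_wellbeing(days=1000):
    """Return a day-by-day well-being series ending in the slaughter."""
    series = [day for day in range(1, days + 1)]  # steady, linear comfort
    series.append(0)                              # the black swan: day 1001
    return series

wellbeing = turkey_wellbeing()
# Inductive inference from days 1..1000 predicts day 1001 will be the
# best day yet; the actual value is the worst in the series.
```

Every statistical summary of the first 1000 days points confidently upward, which is exactly why the turkey's inference fails.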

This is an interesting question. In some ways, the two fields are very similar: they were both developed in part through research at the RAND Corporation, they were both birthed from nonlinear systems perspectives, and they are both interdisciplinary fields that allow for broad interpretations and different methods to undertake research. But there are also significant differences: futurism as a field has evolved in a more professional context — there are only two academic programs in the US focused on futurism. Complex systems, by contrast, largely developed in academia, and while not a very prevalent field, there are academics, departments and institutions throughout the world (most notably the Santa Fe Institute) that focus on social network analysis, agent-based modelling and other dynamic systems approaches. (It is worth noting that Nassim Nicholas Taleb is co-faculty at the New England Complex Systems Institute.) Research in futurism is also more topic-driven (a futurist can employ a number of different methods to explore a single topic, such as the future of biotechnology) while that of complex systems is more method-driven (complex systems researchers often build similar types of models to study a wide variety of phenomena). Because of all this, the two are not often used in tandem, although there is no reason that they couldn’t be. Futurism is more likely to give a sense of possible futures in the context of lived experience, while complex systems models can provide insight into the underlying structures and relationships that give rise to such futures.
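To make the method-driven style concrete, here is a minimal agent-based model in the generic complex-systems mold: agents spread a rumor through random contacts. It is not taken from any specific study; all names and parameters are invented for illustration:

```python
import random

# A minimal agent-based model: rumor spreading among agents, each of
# whom contacts one random other agent per step. Complex-systems
# researchers build far richer versions of this to study contagion,
# opinion dynamics, epidemics and more; this is just the skeleton.

def spread_rumor(n_agents=100, steps=50, p_transmit=0.5, seed=42):
    """Simulate simple contagion; return the count of informed agents per step."""
    rng = random.Random(seed)        # seeded for reproducibility
    informed = {0}                   # agent 0 starts with the rumor
    history = [len(informed)]
    for _ in range(steps):
        newly = set()
        for agent in informed:
            contact = rng.randrange(n_agents)
            if contact not in informed and rng.random() < p_transmit:
                newly.add(contact)
        informed |= newly
        history.append(len(informed))
    return history

history = spread_rumor()
```

The interesting behavior (an S-shaped adoption curve) emerges from the interaction rules rather than being written in anywhere, which is the hallmark of the approach.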

Futures studies has actually been applied to this issue for quite a while now. The US Coast Guard has undertaken regular scenario and strategic foresight development since 1998, in an initiative called Project Evergreen. It’s considered to be one of the strongest government foresight programs, and its members are often fixtures in the Federal Foresight Community of Interest (see next question). Because it’s an ongoing project and was not conceived as a one-off “strategic update,” its results are taken seriously within the organization and are combined with other factors to influence the Coast Guard’s ongoing strategy. This practice has inspired the Federal Emergency Management Agency to undertake its own strategic initiatives, and while not explicitly disaster-related, the UN has published a report on using foresight to help achieve the Sustainable Development Goals. The Center for Homeland Defense and Security has even put together an entire educational module on the topic. Within academia, there is some literature on the topic, but perhaps the best example is a special issue of the academic journal Technological Forecasting and Social Change published in 2013. You can even give the process a try for yourself if you like.

There are a variety of organizations in the futures studies field, although they’ve developed from different contexts and in a fragmented fashion. The field of futurism initially emerged in the 1940s in the context of anticipating geopolitical events as the Cold War began. The earliest research on the topic was conducted at the RAND Corporation, growing out of Herman Kahn’s work on game theory and systems analysis. The World Future Society was founded around the same time as a way to bring together people who were thinking about the future. This organization has evolved significantly in the last few years and has made a conscious effort to encourage younger and more diverse additions to its membership community. There are also futurist organizations that have developed for more specialized purposes. The World Futures Studies Federation grew out of similar initiatives in Europe and is more tied into governance bodies like UNESCO and the UN. The Federal Foresight Community of Interest is a group for employees of the US government and adjacent organizations who are interested in using foresight to help improve government decisionmaking. The Association of Professional Futurists is an organization specifically for those who make their living as futurists. Employees of futurist consulting organizations such as Toffler Associates (founded by famed futurist Alvin Toffler), Kedge and Forum for the Future are often involved in this community.

As fellow futurist Travis Kupp and I recount, it isn’t always easy for those who are new to the field to simply join one of these groups and immediately know what is going on. I personally became gradually more involved with the World Future Society over a period of years, and that was only after I had already taken a class in the subject. A meetup community called Speculative Futures, along with the resulting nonprofit Design Futures Initiative and conference PRIMER, has emerged from grassroots organizers in various cities over the past few years. It has been largely centred around designers and encourages participants to make “future artefacts” (conceptions of what particular objects in the future might look like and how they might function), rather than only discussing theoretical ideas and concepts. But the community is open to different ideas and perspectives — this was clearly reflected in the theme of PRIMER’s 2019 conference: Futures for All. That motto is apt for the entire field, as anyone who wants to learn more about the field and find their place in it will ultimately be able to do so, whether through one of its many communities or even through their own individual exploration. The upside of a field as broadly defined as this one is that it’s easy for people to chart their own path within it.

This question is asked a lot, although my answer may be less exciting than some would hope for. Ironically, when we examine how the field has evolved to today, it hasn’t really departed very far from its origins. Many of the same methods that were created when the field was first developed, such as scenario planning and Delphi polling, are still used today in the same fashion they were then. I think there are a couple of reasons for this: first, the process by which we can imagine a broad future can only get so specific. While individual practitioners may have their own take on how to apply these methods, there is no clear and objective way for the practice to evolve. But I believe another reason is what I mentioned in the previous question: the field has traditionally been insular and has not actively recruited to grow its community, so it was largely composed of older white men. When I first became aware of the World Future Society in 2012, I found it a bit troubling that its website hadn’t been updated since the 1990s. Recent leaders of the organization have made active efforts to bring a wider base into the group, so I hope that between this increased diversity of the WFS and the greater diversity of groups I mentioned in the previous question, the next 50 years of futurism won’t be like the last 50.

One prediction I’m fairly confident about is that machine learning and related techniques will come to play a much more central role in forecasting. I have worked on some technology forecasting at the Georgia Institute of Technology, which relies on datasets of academic publications on various science and technology research topics. The implications of this kind of analysis are fairly short-term, in the 3–5-year timeframe, but it’s entirely possible that these data-driven models could lead to more generalized models — such as complex agent-based models — that could be used to anticipate the longer term.
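As a hedged sketch of what the simplest form of such data-driven forecasting looks like, the following fits an exponential trend to hypothetical yearly publication counts and projects it a few years ahead. The dataset, growth rate and function names are all invented for illustration; real bibliometric forecasting is far richer than a log-linear fit:

```python
import math

# Toy data-driven technology forecasting: fit log(count) = a + b*year
# by ordinary least squares, then project the trend forward. The counts
# below are hypothetical, doubling roughly every three years.

def fit_exponential_trend(years, counts):
    """Least-squares fit of log(count) = a + b*year; returns (a, b)."""
    n = len(years)
    logs = [math.log(c) for c in counts]
    mean_x = sum(years) / n
    mean_y = sum(logs) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, logs))
         / sum((x - mean_x) ** 2 for x in years))
    a = mean_y - b * mean_x
    return a, b

def project(a, b, year):
    """Evaluate the fitted exponential trend at a given year."""
    return math.exp(a + b * year)

years = [2015, 2016, 2017, 2018, 2019]
counts = [100, 126, 159, 200, 252]     # hypothetical publication counts
a, b = fit_exponential_trend(years, counts)
forecast_2022 = project(a, b, 2022)    # roughly double the 2019 count
```

The short, 3-to-5-year horizon mentioned above is built into the method: an extrapolated trend is only as durable as the research dynamics underlying it.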

I discussed the broad importance of long-term thinking to our society in Question #3, so I will give a more focused response here. Dwight Eisenhower once referred to a college president who said “I have two kinds of problems, the urgent and the important. The urgent are not important, and the important are never urgent.” Stephen Covey, A. Roger Merrill, and Rebecca R. Merrill operationalized this dichotomy in their 1994 book First Things First with the Eisenhower Matrix, which identifies the proper actions to take for different types of tasks:

The Eisenhower Matrix

Although this book was written to guide people in managing their own personal and professional lives, the framework is very applicable to how and why we practice future thinking on a larger scale. The long-term future is decidedly important, but because it’s far away from our immediate concerns, it is not urgent, and thus belongs in Quadrant #2, which the authors call the “quadrant of quality.” Unfortunately, it’s precisely this class of tasks that we are most likely to neglect. We spend a lot of time on tasks we believe are urgent, whether they are important or not. This is not just because the tasks seem so immediate, but because of the adrenaline rush and exhilaration we often feel when working on them — the authors call this “urgency addiction.” However, this usually means that long-term important tasks are not addressed unless and until they become urgent.

There are certain tasks that are both urgent and important, and therefore Quadrant #1 does demand a solid chunk of attention. However, those operating with an “urgency mentality” will drop into Quadrant #3 when the tasks in Quadrant #1 dwindle, while those operating with an “importance mentality” will move to Quadrant #2, which gives them more time to anticipate and structure plans that will ultimately forestall Quadrant #1 tasks. These concepts can be effectively applied to any problem or level of society, and in just about every case spending time in Quadrant #2 will lead to more resilient, balanced and effective societies and organizations.
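The quadrant logic itself is simple enough to express in a few lines. The quadrant numbers follow the matrix described above; the recommended actions are the commonly cited ones, included here purely as an illustration:

```python
# The Eisenhower Matrix quadrant logic as a small function. Quadrant
# numbers follow the standard scheme (1: urgent & important, 2: important
# only, 3: urgent only, 4: neither); the action strings are illustrative.

def eisenhower_quadrant(urgent, important):
    """Map (urgent, important) flags to a quadrant number and a stance."""
    if urgent and important:
        return 1, "do it now"
    if important:
        # Important but not urgent: the "quadrant of quality" where
        # long-term thinking (including futurism) lives.
        return 2, "schedule deliberate time for it"
    if urgent:
        return 3, "delegate or minimize it"
    return 4, "eliminate it"
```

In these terms, long-term futures work is a deliberate decision to return `2` more often before circumstances force a `1`.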

The Startup

Medium's largest active publication, followed by +611K people. Follow to join our community.

Written by Carbon Radio. Sustainability starts with us. www.carbonradio.com
