June 2023 Public Strategy Update

Leverage Research
Jul 11, 2023


The following are Executive Director Geoff Anders’ prepared remarks for Leverage’s June 2023 public strategy update, which was held on Friday, June 23 at the Leverage online office. Audio of the remarks and Q&A is linked just below; the remarks as delivered followed the written version fairly closely.

Leverage’s strategy presentations are meant to inform the public about the institute’s perspective on strategic issues of public relevance pertaining to science and technology. For more information, please see our collection of strategy-related resources on the Leverage website.

Introductory Remarks

Hello everyone, welcome to Leverage Research’s first public strategy update of 2023, and third overall. Our first two strategy updates focused on laying out some part of our overall strategic picture. We covered lots of topics, and for people interested, we have some links available, including text and audio from previous updates, as well as some internal documents we decided to release. (Links: Jun ’22—remarks, audio, internal doc; Dec ’22—remarks, audio, internal doc)

In this update, we’re going to talk about the basic question of whether strategic thinking is needed in the domains of science and technology, and also with respect to public engagement. It’s a strange idea — usually when people think of strategy, they think of war or business, of maps or business plans, or else of heroes and villains battling each other; in popular culture, it’s more often the villains who have strategies.

Our perspective, as we will explain, is that strategic thinking is necessary and useful in the domains of science, technology, and public engagement. That’s why we have strategies and why we’re doing public strategy updates. It’s important for us to convey why we have this focus, since it involves questions that people need to think about.

Today’s presentation will have two parts. In the first part, I’ll talk about Leverage’s perspective on strategy, which will include thinking about the topic of strategy in a very abstract way. Then I’ll give two concrete illustrations of the importance of strategy, focusing on domains in which Leverage works, namely (1) history of science research, and (2) public communication about artificial intelligence.

This part will take thirty minutes. I’ll pause now and then for questions, so if you have questions relevant to what I’m saying, you are very much invited to ask. After that, we’ll move on to the second part, which will be a more open discussion and conversation about the topics discussed. We have a number of guests here today whom I’m happy to welcome, in particular Samo Burja, CEO and founder of Bismarck Analysis, a global consultancy that focuses on institutional adequacy, player analysis, and civilizational prospects, and Nevin Freeman, CEO and founder of Reserve, a cryptocurrency project that aims to create a stable decentralized currency — the original vision of crypto.

We’re looking forward to hearing everyone’s thoughts and opinions on the topics being discussed.

For anyone interested in continuing the conversation after the hour, you’re invited to join us in our Discord for informal discussion.

Strategy in Theory

We’re going to start off abstractly. I mentioned war and business and maps and heroes and the like. What actually is strategy? And why does it matter, especially in the context of an institute that studies and communicates about early stage science?

Let’s take these questions in turn.

What is Strategic Thinking?

If you search Wikipedia or Google, you’ll find people defining “strategy” in terms of plans, objectives, and uncertainty, and also in terms of war or military operations. People talk about business strategy or “evolutionary strategy.” Midjourney produced a pretty hilarious image, basically a big mashup of chess pieces and castles.

Our perspective is that strategy is about navigating a landscape. You can think about landscapes literally or metaphorically, as in, figuring out how to cross over a mountain or navigating the “business landscape.”

Strategy comes in when landscapes are difficult in certain ways. It’s not just that a landscape is hard to pass through. You might imagine going uphill against the wind, where there’s no real strategy needed, just “keep going.”

Rather, strategy becomes necessary when you can’t trust your natural instincts or impulses. If you’re crossing a swamp, and you can’t tell where you can step or where you’ll sink in, you may need a strategy to figure out how to cross.

The need for strategic thinking sometimes comes from the environment, as with the swamp. Sometimes it comes from oneself, though, or rather, from how one expects oneself to interact with the environment. A good example here is Odysseus, who tied himself to the mast so he could hear the Sirens’ song without getting drawn in. That’s a strategy — his natural impulses would be wrong, so he took steps to ensure he could still successfully navigate the landscape.

One type of case where it’s wrong to trust one’s impulses is when the landscape is treacherous. When we say something is “treacherous,” we mean that it is apt to betray you. People can be treacherous, but so can a mountain cliff: a cliff is treacherous if it appears safe in ways in which it is not.

This explains why strategy is often associated with war and business, or with villains rather than heroes. In adversarial circumstances, where people are trying to mislead one another, strategy may be necessary in order to not be misled or to prevail over an opponent.

Our preference is for heroes rather than villains, perhaps unsurprisingly. And that’s how strategy can be deployed for good — when either a landscape is treacherous, or when your own impulses are wrong, or when you’re defending against people who are trying to trick or mislead you.

Why Does Strategy Matter?

Given this explanation of strategy, it may seem obvious why strategy matters. Lots of landscapes are tricky. Our natural impulses are sometimes wrong. There are, unfortunately, also many circumstances where people are trying to trick each other — or at least aren’t doing their absolute best to communicate the truth.

Nevertheless, people don’t focus on strategy and strategic thinking nearly as much as they could. Why is that?

Here’s one reason. Thinking about strategy involves (1) thinking about danger, but (2) thinking constructively about how dangers will be handled. This is hard to do. It’s easy to be an optimist and not focus on the dangers. It’s easy to be a pessimist and not focus on constructive solutions. It’s hard to get the right mix of optimism and pessimism to let you both take the measure of dangers and also figure out how to overcome them.

This is a partial explanation. But it’s not a complete one. People have become quite good at thinking about the dangers from chemistry, for instance, or from operating heavy machinery. But people are less good at thinking about dangers from people, especially the ways that otherwise friendly or well-meaning people can work together to create illusions.

Strategic thinking is easiest when one has a clear opponent, an enemy. This happens in war. Something similar can happen in business. The fact that people’s goals are opposed, and obviously opposed, makes it easy to identify that the landscape is treacherous, that there are dangers one might not be seeing.

But think about a phenomenon as simple as groupthink. Groupthink is a deadly phenomenon. When projects fail, groupthink is a common cause. If your group is succumbing to groupthink, sound the alarm and take evasive action immediately.

Groupthink is a well known, common, and high-stakes problem. Why then do people still fall victim to it? The answer is that it can be hard to identify ways that good people — including oneself — can work together to create highly misleading illusions.

This helps to explain why strategic thinking is important. There are dangers, even in friendly, seemingly non-political environments. That’s not to say that every environment is dangerous or that one needs to have one’s strategy hat on all the time. But it’s important to recognize that it’s not just war, it’s not just business, it’s not just treacherous mountain paths. It’s people, and frequently it’s ourselves. If we don’t take account of the real dangers, we won’t recognize when we need to think strategically, and things might go wrong.

The stakes are also quite high, as we’ll see. It could be possible to sink hundreds of person-years of effort into studying the history of science and still get wrong answers. It might be possible to spend years working on getting good outcomes with respect to AI and have no impact — or worse, produce the opposite of the intended effect.

The stakes are high, and there are dangers, but the dangers are, in our view, ultimately tractable. That’s why strategic thinking is important. If you’re on the relevant sort of landscape and you’re not thinking strategically, it’s really hard to ensure you’ll get a good outcome. But even where strategic thought is necessary, reality is frequently not so complicated as to foreclose a solution.

There are dangers, and strategy is a way to meet some of them.

Brief Q&A

— — — break for Q&A — — —

Strategy in Practice

Why Does Leverage Engage in Strategic Planning?

It may now be clear why Leverage engages in strategic planning. For the past decade, Leverage has worked in the areas of science and technology. More recently, the institute has expanded to include public communication. Each of these areas contains important dangers, some of which can be addressed by thinking strategically.

I’ll note three important dangers, each of which calls for advance planning. These are hype, unintended consequences, and overreliance on institutions.

Hype. There’s a lot of hype in science and technology. This is well-known — just look at the “science news” from a few years ago and see whether everything they were talking about has actually happened. Or you could go back earlier, and see the 1950s advertisements for flying cars. We’re still waiting.

Hype is a serious danger, because some very popular things don’t have substance, and some very unpopular things are actually quite promising. You may have heard that introspection as a research method doesn’t work. Turns out that’s wrong — an important and misleading piece of hype.

The most dangerous circumstance for hype, however, comes when there is some substance and some misconception. Those together can yield what seem like promising projects and avenues forward that will ultimately just waste a lot of people’s time.

Unintended consequences. Science and technology also sometimes yield unintended consequences. Gain-of-function research was never meant to be a public danger. The pursuit of nuclear energy also yielded the atom bomb. Dynamite, the Internet, the mobile phone — these things have enhanced our lives and also caused harm.

The problem with unintended consequences is, quite simply, that they are frequently the opposite of what we want. When people set out to invent new science and technology, they are frequently doing so with the intention of benefiting the world and benefiting themselves. The scientist whose work is then used for military purposes they did not intend, or who is killed by their own invention, is now a cliché.

It may seem completely infeasible to predict the effects of new science and technology, especially hundreds of years in advance. Nevertheless, our natural impulses, which frequently are simply to discover and invent, are also misleading. “[Y]our scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” (That’s from Jurassic Park.) Planning in advance is necessary, even if it seems incredibly difficult.

Overreliance on institutions. A third major danger in the realm of science and technology is the overreliance on institutions. This can come in many forms. People may expect that the top journals in a field represent the actual state of the art. People may think that scientists are devoting their resources to the most important problems. More deeply, people may believe that their understanding of science, itself derived from institutions, is an accurate characterization of this special form of knowledge and social institution.

Unfortunately, we live in a time when institutions are less reliable than people think. In some cases, there are promising research avenues, or even entire new fields or subfields, that are not being explored or given institutional support. There are even misconceptions about aspects of the scientific process — from randomized controlled trials to the relation between academia and the scientific enterprise — that institutions are not taking careful steps to correct.

Happily, it is part of the story of science that it is self-correcting, including by enterprising people who act from the outside. Acting outside of present institutions, and seeking to create new ones, is a challenging path. It calls for strategy. But we believe it is possible, both in idea and in concrete reality.

Brief Q&A

— — — break for Q&A — — —

Studying the History of Science: Dangers and Strategies

Now let’s look at some specific areas. We’ll first look at the history of science, where Leverage has its History of Science program, and then artificial intelligence, where Leverage is launching a new initiative aimed at public communication about AI.

Leverage’s History of Science program is studying science, with the idea that a better understanding of how science was done in the past will help us do better science today. In particular, we’re doing case studies on how the major discoveries in the history of successful sciences were made.

There are a number of challenges here, including raising money for the research and helping people understand why the history of science is important. In terms of problems that require strategic thought, however, the main problem we expect to encounter is bias, or rather, the difficulty of identifying and challenging our own assumptions.

When you imagine researchers studying the history of science, what is the main thing you imagine going wrong? The researchers go in with assumptions that they fail to identify, these assumptions shape which evidence they examine, and as a result they end up reaching the wrong conclusions. That’s the primary danger that we need to avoid.

In broad terms, our solution to this problem is to have our researchers’ biases cancel each other out. In a single word, it’s multidisciplinarity. It may sound trite, but you really can do a lot to overcome bias by having people with different perspectives work together on something.

The main challenge then is deciding which disciplines ought to work together to study the history of science. History of science is a natural choice. Historians, however, are not typically trained to identify assumptions, make careful conceptual distinctions, and so forth. (In fact, people in general are not normally trained to do this.) That’s something philosophers do. So we decided to balance history of science with philosophy of science.

As it turns out, there is already a small discipline, History and Philosophy of Science (HPS), that combines the two, and so that’s perfect. The philosophers can help to identify assumptions and think clearly about what science is, balancing the historians. The philosophers on their own would likely be too abstract and fail to agree about things. So the philosophers can be balanced by the historians, who can keep things focused on determining concretely what happened in history.

This solution worried me, though, because I could imagine history of science and philosophy of science people getting into the wrong sort of pattern. So we decided to balance them with social science. Many social sciences are not fully functional, but there are still social sciences whose practitioners can bring different, valuable perspectives. If you imagine having a linguist or an economist on the team, you imagine them paying attention to very different types of evidence.

Concretely, and this is where you can see a specific attempted solution to a strategic challenge, we are aiming to hire researchers who have expertise in at least two of the three areas: history, philosophy, and social science. We’re not fully sure it will work, but there are a lot of ways to see how well it is going, and we’ve thought of a variety of solutions in case this doesn’t address the danger of bias.

Communicating about AI: Dangers and Strategies

Let’s now turn to artificial intelligence. As many of you will know, there have been a lot of developments in AI over the last few years, and now discourse about AI has spilled onto the public scene. Leverage is now planning to launch an initiative pertaining to AI which will focus on public communication and have the goal of helping to ensure positive outcomes with respect to the new technology.

As we’ve started preparing for this, one thing we’ve been working on is identifying the different dangers of the initiative. So far, we’ve identified three. By comparison, this makes the AI communication area much more strategically difficult than, for instance, history of science research, which has only one.

The first danger is failing to understand how to cause the right outcomes. The AI situation is extraordinarily complex. There are many different parties involved, and society itself is poorly understood. (Social science is an early stage science, which we strongly support!) We expect most people’s projects in this area to have no effect, and of those that do, many to have effects that are the opposite of what is intended.

The second danger, or challenge, I should say, is getting people to work together. There are communities that have been thinking about AI for more than a decade, and there are still big disagreements about what is dangerous, and how dangerous, and on what timeframes. Having people work together when they disagree about important things is very difficult, and many of people’s natural impulses for how to do this will be mistaken.

The third difficulty is getting stuck in narratives around AI, including pessimistic ones. I spoke about hype. There is a lot of hype in this area, and many of the possibilities being considered are quite alarming. Even if one gets people to be able to work together despite disagreement, there’s the danger that people will tilt too negatively overall. It can be hard to think clearly about very dangerous possibilities.

I will briefly sketch the solutions we’re considering to handle these problems. With respect to the first danger, at least part of our strategy will be to study the history. This is what happens when people get serious about causing an effect: they study relevant parts of history. Here, there are a number of examples of how society’s reactions to technology and risk have been shaped through deliberate effort.

Regarding working together, one important idea we’re exploring is that of decentralization. The term “decentralization” may be misleading, because there is a certain amount of centralized structure needed to enable decentralized coordination. There may need to be shared goals or principles; even open source projects have many shared centralized resources they use. Our hope is that there will be some way to reach agreement on enough things that it will be possible to have people coordinate in a less centralized way, even despite disagreement.

With respect to not getting drawn in by misleading narratives, one thing the institute is doing is making sure we devote enough of our effort to other things. We’re never planning to spend more than half of our time on AI; in all likelihood it will be much less. That can help with dealing with potentially misleading narratives or overall pessimism — it’s harder to get sucked in when you’re spending less of your time on it.

General Q&A

Alright, those are my remarks. Let’s open for general discussion!

— — — break for Q&A — — —

To learn more about Leverage, visit our website, read our Introduction to Leverage series, or see our FAQ. To contribute to our thinking on strategic topics, join the discussion in our Discord community.
