June 2022 Public Strategy Update

Leverage Research
Jul 1, 2022


The following are the prepared remarks on Leverage Research’s strategy for positive impact, presented by Executive Director Geoff Anders at Leverage’s June 2022 Public Strategy Update on Wednesday, June 29, at the Leverage online office. An audio recording of the actual remarks, including the initial Q&A sections, is available here; the actual remarks followed the prepared remarks fairly closely.

Introductory Remarks

Welcome to Leverage Research’s first public strategy update. We’re looking forward to sharing more and more of our perspective over time. One part of that will be sharing how we think about the world and the strategies we think will be effective in causing positive change.

We have two hours blocked out for today, though of course we may not end up using the whole time. I’ll start by outlining Leverage’s general strategy for impact and the rationale behind it. I’ll then take questions. Next, I’ll give what will ideally be a short presentation on our overall strategic picture, then brief Q&A again. Finally, I’ll state some concrete expectations about the future, and then talk about some of the challenges we face and how we expect to address those. Then we’ll open up for a general Q&A.

As far as topics, anything related to Leverage is within bounds, though of course I’d like us to stay pretty close to the topics at hand. We’re going to do an audio recording for the main parts of the presentation. So that people feel free asking whatever questions they would like, we’re not planning to record the general Q&A.

Since this is the first public strategy presentation I’m giving, it’s possible that some parts may be too long or too short, or not explained well enough. Please feel free to ask questions, including questions like: “Why do you believe X?” or “Could you say X in a different way?” We believe that discussion and argument are often valuable means of reaching the truth, so please also feel free to voice disagreement. For people new to Gather, there’s a button to raise your hand (the emoji button), or you can type questions into the chat. If you use the chat, make sure it’s set to “Everyone” so everyone can see your question.

We’re providing access to the internal Leverage strategy document this presentation is based on. There’s a link posted in the Gather chat as well as one posted on Twitter. This was not originally written for public presentation, but ended up sufficiently polished that we felt that giving access to it was appropriate. I’ll describe many of the same themes in my presentation, though in a different order and sometimes with different language.

General Strategy

The Plan

As some will know, Leverage was founded in 2011. At that time, we had a big plan which you can still find on the internet. People generally referred to it as “The Plan.” Essentially it was a description of a research program for making progress in the social sciences, speculation on the potential applications of breakthroughs in the social sciences, and an analysis of the components necessary to make the world much better than it is now.

Differential Technological Progress

More generally, however, that plan was based on the idea of differential technological progress. Over the course of centuries or millennia, we expect humanity to make technological progress. Civilizations rise and fall. The technological height of one era is not necessarily preserved as we move to the next. So we do not believe in monotonic or “linear” technological growth.

However, as a general pattern in human history, it does appear that new technologies are developed over time, and with them greater understanding of the world in which we live. Sometimes that understanding is called “natural philosophy.” Sometimes it is called “science.”

There is then a question of whether individual humans, or small groups, can make choices that affect the overall trajectory of technological development. I think the answer is yes. Advances in science and technology are very frequently made by small groups and individuals, and in some cases it is plausible that the advances made are such that without those specific individuals or those specific groups, they would not have occurred otherwise for some number of years.

An example from our recent study of the history of science is the discovery of electromagnetism. By 1802, there were current-carrying wires with sufficient amperage that if you had brought a magnetized needle near one and run current through the wire, the current would have deflected the needle and you would have discovered electromagnetism. Electromagnetism was in fact discovered in 1820 by Hans Christian Ørsted, who was investigating hypotheses related to the unity of forces, and saw that when he ran current through a wire, a nearby magnetized needle was disturbed.

It seems inevitable that once we have batteries, once we have wires with sufficient current, someone somewhere will put a magnetized needle close to a current-carrying wire. It is then surprising and notable that in this case it took 18 years for that to happen.
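
To make the claim concrete, here is a back-of-the-envelope check in Python. It uses the modern formula for the magnetic field of a long straight wire, B = μ₀I/(2πr), which was of course only worked out after Ørsted’s discovery; the current and distance figures are illustrative assumptions, not historical measurements.

```python
import math

# Field of a long straight wire: B = mu_0 * I / (2 * pi * r).
MU_0 = 4 * math.pi * 1e-7    # vacuum permeability, in T·m/A

current_amps = 1.0           # assumed: a voltaic pile driving ~1 A
distance_m = 0.01            # assumed: needle held ~1 cm from the wire

b_wire = MU_0 * current_amps / (2 * math.pi * distance_m)
b_earth = 2e-5               # ~20 µT, the order of Earth's horizontal field

print(f"Field from wire:    {b_wire:.1e} T")   # ~2e-5 T
print(f"Earth (horizontal): {b_earth:.1e} T")
# The two fields are comparable in magnitude, so a compass needle
# near the wire would deflect visibly; the effect was there to be seen.
```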

Why it took 18 years, what led Ørsted in particular to make the discovery, and some information about why it was not discovered earlier are covered in our case study on the discovery of electromagnetism, which is published on the Leverage website.

This is one simple example where we can imagine that a concerted effort might have led to faster technological progress. Had someone discovered electromagnetism earlier, the entire timeline might have been shifted forward.

Of course, that’s not necessarily true. It’s quite possible that there were other factors bottlenecking future developments. This is something we hope to ascertain through our History of Science program. But the idea should be clear enough: there may be advances, in science or technology, that can be made by small groups or individuals, which can change overall timelines.

If that is true, then there is a question about which technologies should be developed at which times. It may be beyond the reach of an individual or small group to prevent a technology from being developed at all — that may be the province of humanity as a whole. But it is within reach for individuals and small groups to take steps which change relative timelines.

Wisdom and Power

From 2011 through 2019, Leverage’s focus was on the social sciences. It is commonly remarked that humanity’s power has grown more quickly than its wisdom, and that this presents us with an increasingly dangerous circumstance over time. Motivated by this thought — and others — we spent substantial time investigating topics in the social sciences, including many aspects of human nature.

Our hope was that by advancing knowledge of the human mind, of people, individuals and groups, and how they function in societies and social arrangements, we would be able to benefit the world. This idea — differential technological progress in the social sciences — encapsulates the original Leverage strategy.

Of course, people often do not think about the social sciences like psychology and sociology as pertaining to technology. “Technology” is often thought to be synonymous with “material technology,” like cars and planes, or “information technology,” like computers. We even use the light bulb to symbolize having an idea. But if we understand technology to refer to the concrete instantiation of techne, or of technical knowledge, then we can imagine mental and social technologies as well.

The most obvious examples of mental technologies include techniques, like mnemonics for remembering or study skills for learning. The most obvious examples of social technologies include simple social forms, like the handshake; money is another potential example. Effective therapeutic techniques might be mental technologies or mental/social hybrids. The point is that we certainly can think in terms of mental and social technologies, at which point it makes sense to think about differential technological progress in the social sciences.

On the research dimension, we were exceedingly happy with the results of our research into the social sciences. It is now the work of our Exploratory Psychology program to distribute to external researchers the information they need to validate, test, replicate, and ideally extend the results we reached.

The idea here is that through a better understanding of people, it may be possible for humanity to become wiser, and through increased wisdom better chart a course through the future, making considered choices about which technologies to develop, and how.

Science and the Future

Leverage reorganized in 2019. At this point we adopted a new focus: early stage science. More generally, we now expect to be focused on questions pertaining to science and the future. Just as there is a fundamental question about humanity’s wisdom to wield its power, there is another important question about how humanity should choose to navigate the technological future.

Technology is sometimes seen as an inevitable force. Similarly, advances in science can seem like they happen automatically. However, this is not necessarily the case. The history of science shows its contingency, at least with respect to timing and order, and it may be substantially better if humanity chooses to make some advances sooner and others later. In fact, if we can equip humanity to think properly about science, technology, and the future, and make good decisions, it’s quite possible that the future we expect, born of technology and knowledge, can be a brighter one for all.

Science and the Public

We expect that a bright future, born of science, technology, and wise human agency, will involve a better understanding of science, a better understanding of ourselves, and further progress in many fields. We also suspect that it will involve tackling the question of who exactly should be making decisions about progress. This is something we look forward to contributing to in the months and years ahead.

This means that we expect to continue our focus on differential technological progress, now with an added focus on public engagement. We live — and here I say what may be for some the most controversial part of this presentation — in a democracy, and as a result we believe these questions should be hashed out at least partially in public.

There is of course a question about how to engage in public discourse on difficult and contentious topics, especially in an era of great political division and in the context of social media, the impacts of which are still being absorbed. Our current hypothesis is that many communication challenges, and much of the misunderstanding in the public, are downstream of a failure to communicate clearly and empathetically. And so clear communication, especially to the public, but also to many other audiences, is a new part of our general strategy.

Brief Q&A

— — — break for Q&A — — —

The Strategic Landscape

I’m now going to describe some of the strategic landscape, as we see it. By “strategic landscape,” I mean essentially the environment that we’re operating within. It includes global trends, risks, opportunities, and challenges.

I am now going to cover material that is included in the internal strategy update document. So if you’d like to read about this while I’m speaking, or would like more information, please check out that document.

Global Trends

One part of the strategic landscape is: global trends. Here, we look at what is happening in the world as a whole, trying to determine how things will change. We then focus especially on trends that are relevant to our work.

With respect to global trends, the main relevant trend we have identified is the tension between democracy and technocracy. Essentially, there is a big question about who should be in charge of what. Advocates of democracy think “the people” should be in charge. Advocates of technocracy think “the experts” should be in charge. Questions of who should be in charge are especially important with respect to science and technology development. You can imagine a system where scientists decide, or where the public decides. Of course, there are other options, and questions about the role of individuals and the market.

With respect to this trend, it appears the gap between “the public” and “the experts” is widening in some places (e.g., US politics), but closing in others (e.g., on Twitter).

Global Risks

A second part of the strategic landscape is: global risks. Here, we try to identify what the largest and most likely sources of risk are, in terms of what might cause substantial harm to large numbers of people.

There are different ways to divide up risks. We divide them into acute, chronic, and prospective. Acute risks may happen soon. Chronic risks may issue in harm over the long term. Prospective risks are ones that involve more substantial unknowns.

Under acute risks, we identified nuclear war, pandemics, and large-scale conventional war as the most concerning. Under chronic risks, we found environmental degradation as the primary risk. Under prospective risks, we identified civilizational decline, hostile human-level artificial intelligence, and global totalitarianism as the top issues. Some authors and thinkers have proposed speculative technologies as important risks, like atomically precise manufacturing (aka “nanotechnology”), artificial superintelligence, or the possibility that we’re in a simulation which then gets shut off. We do not at present think that these are central risks.
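
Purely as a compact restatement, the taxonomy just described can be written as a minimal Python structure; it adds nothing beyond the classification above.

```python
# The risk taxonomy above, restated for reference.
risks = {
    "acute":       ["nuclear war", "pandemics", "large-scale conventional war"],
    "chronic":     ["environmental degradation"],
    "prospective": ["civilizational decline",
                    "hostile human-level artificial intelligence",
                    "global totalitarianism"],
}

for category, items in risks.items():
    print(f"{category}: {', '.join(items)}")
```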

There’s obviously a lot to say about each of those, so I’ll leave discussion about that to Q&A.

Institutions

Another part of the strategic landscape pertains to institutions. In thinking about institutions, we try to understand how they work, both ideally and actually, and what role we expect them to play in the future.

I’ll remark on three institutions: academia, science, and the public. We don’t have a full assessment of any of these, and of course it may seem strange to call “the public” an institution. It’s possible that our classification of these will change over time.

With respect to academia, our present perspective is that academia is semi-functional. This is a middle road between those who oppose academia and those who uncritically accept it. Essentially, academia is enormous. Many of the researchers are good, as is some of the research. However, various causes — in particular, in our view, the mass expansion of academia after World War II — have led many academic fields to become decidedly less functional. The most pertinent example is the field of psychology which, as many of you will know, has been mired in a replication crisis for the past decade.

With respect to science, we have all heard narratives of amazing progress (“singularity”) and the total lack thereof (“stagnation”). We ourselves are unsure. This is where our third program, Bottlenecks in Science and Technology, comes in. Through that program we hope to get researchers and thinkers to help determine how much progress is actually being made in a wide variety of scientific and technological fields and then, ideally, help us figure out how to make more.

With respect to the public, our view is, perhaps controversially, that the public is much better constituted and has a much better understanding than is often recognized, especially in technocratic circles. There are surveys, of course, which reveal a shocking lack of understanding of science on the part of the public. But if you look at movies, which are commercially successful and designed to be understood by the public, it seems that there is a broad and surprisingly accurate understanding of the role of science and technology in society.

Research Areas

With respect to differential technological development, we are always on the lookout for areas of research that are particularly promising. This both allows us to try to predict what will happen in the future (as advances are made or fail to be made) and to make good choices when we ourselves select a field to study.

I will briefly remark on a few research areas. In the future, especially as a result of our Bottlenecks work, I expect we will end up with assessments of many different fields and subfields.

Social science — We believe that the social sciences are highly promising, but that they are often mistaken for late stage sciences rather than early stage sciences. This leads many researchers to investigate in those fields using methods that are not well-suited to making discoveries.

History of science — We also think that the history of science is highly promising. In particular, we believe that the data set for successful science is the history of science, and thus that people interested in making science better ought to be very interested in the history. Of course, the history of science, and more specifically how the major discoveries in the history of science were made, has been studied to a degree. But we believe that it is possible to discover more that is of direct relevance, and have been very pleased by the initial results of our History of Science work.

Psychology — Perhaps unsurprisingly, we continue to think psychology is a highly promising field of study, especially introspective psychology. The mind exists, is studiable, exhibits many discernible patterns, and is apt to be described in many ways. We studied the mind especially in terms of understanding mental structures and in the context of psychological self-improvement, but we believe there are other ways to study it as well.

Coordination studies — This is our provisional name for what we believe should be or become a field. It would include startups, teams, political organization, and a wide variety of other topics. Of course many fields touch on these things, though we expect there to be a special advantage from focusing specifically on coordination and forms of coordination as the object of study.

Artificial general intelligence — There is a lot of hype right now about artificial general intelligence. As I prepared this strategy update, the primary focus was understanding the degree to which the hype is justified. Our short answer is that while there have been some advances in AI, including the Transformer architecture from Google Brain in 2017 and large language models, like those used by GPT-3 and DALL-E, we do not currently believe that we are close to human-level artificial intelligence or appreciable artificial general intelligence. Of course, it is hard to assess these things without a theory of intelligence or general intelligence.

In the process of preparing the update, however, I realized that artificial intelligence may be an early stage science, and that there may be room for valuable work here, either supported or conducted by the institute. Such work would not be conducted on a fast timescale, and would prioritize understanding how artificially intelligent systems might do what we want rather than something very different.

Early stage science — Lastly, our focus since 2019 has been early stage science. Early stage science refers to the idea that science is not conducted in the same way at all stages in the development of fields, and that the work done early in a field’s life is importantly different from the work done after the field matures. We believe there are many fields that will benefit from looking at things from an early stage science perspective. In future updates we hope to assemble a list of such fields.

Brief Q&A

— — — break for Q&A — — —

Expected Developments

It is useful to make predictions, partially to ensure that one is looking at the future, partially to incentivize oneself to think concretely and realistically. In our case, I would not say we are so confident as to make predictions; we are willing, however, to record some expectations about the future. Here are two; each rounds the expectation to the nearest 5 or 10 year mark.

Anticipation #1: Backlash against life sciences — 30 year time frame

First, we currently expect that without careful intervention in the meantime, there may be a broad backlash against the life sciences. It is hard to estimate timeframes, but we put our current anticipation at 30 years.

The model is this: Doctors in Nazi Germany engaged in horrifying practices before and during World War II. After the war, during the Nuremberg Doctors’ Trial, the idea of unethical experimentation came into public view, and safeguard policies were adopted in the US. These policies, however, lacked teeth. Twenty-five years later, there was a new precipitating cause: the infamous Tuskegee syphilis study, in which US government agencies allowed Black men with syphilis to go untreated in order to learn about the negative effects of the disease. This led to a large public backlash, then to regulation and the creation of IRBs. IRBs successfully curbed unethical research, but limited ethical research as well.

We think something similar might happen with the life sciences. We have just had a global pandemic, with millions dead, widespread suffering, and substantial economic loss. This occurred in the context of risky research, including gain-of-function research, and a worrying track record of lab leaks. We are concerned that in response to the pandemic, new policies that lack teeth will be adopted; then, after generational turnover, there may be some other precipitating event (a lab spill, or some other accident), followed by broad regulation of the life sciences. This is concerning for a number of reasons: dangerous research should be halted earlier, and in a way that does not unnecessarily limit ethical research.

Anticipation #2: End of artificial general intelligence bubble — 5 year time frame

Second, we believe there is presently an artificial general intelligence bubble. There is a lot of enthusiasm and some real advance, but expectations have become unmoored from reality and the cost of training larger and larger models is high. We then expect the bubble will pop, with the result being a general de-emphasis of artificial general intelligence. We put our current anticipation on this at 5 years, though a more conservative estimate would be 10 years.
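
To give a sense of the training-cost point, here is a rough sketch using the common C ≈ 6ND approximation for transformer training compute (FLOPs ≈ 6 × parameters × training tokens). The GPT-3 parameter and token counts are roughly as publicly reported; the hardware throughput and price are assumptions for illustration only.

```python
# Back-of-the-envelope training-compute and cost estimate.
params = 175e9    # GPT-3-scale parameter count
tokens = 300e9    # approximate training tokens for GPT-3

flops = 6 * params * tokens               # ~3.15e23 FLOPs

# Assumed figures for a rented GPU cluster (illustrative, not quoted prices):
flops_per_gpu_second = 1e13               # ~10 TFLOP/s sustained per GPU
dollars_per_gpu_hour = 2.0

gpu_hours = flops / flops_per_gpu_second / 3600
cost = gpu_hours * dollars_per_gpu_hour   # ~1e7 dollars under these assumptions
print(f"{flops:.2e} FLOPs, {gpu_hours:.2e} GPU-hours, ~${cost:,.0f}")
# The exact numbers matter less than the scaling: compute (and so cost)
# grows multiplicatively with both model size and training data.
```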

Challenges and Approaches

Last, I’ll talk about some of the challenges we face and how we expect to approach those challenges.

Communicating the relevance of history

One of our big challenges will be communicating the relevance of history. People like science and they like major discoveries, but this sentiment has not yet extended itself to history. We think that there is a link between the past and the present, and that clear communication and patience will eventually bridge the gap.

Navigating public unease about psychology

Another large challenge relates to psychology. People are uneasy about the mind, and justifiably so. Nevertheless, it is an important topic. Thus it is essential to figure out how to navigate the fears, concerns, and misgivings people have with respect to an understanding of the mind, and psychology, and themselves. Here too we expect that clear communication, and especially listening, will help us to bridge the gap.

One note about communication is that we expect that we will end up splitting our resources between research and communication, rather than focusing almost exclusively on research like we did in the past. Another is that we will focus on communicating both about current issues as well as fundamental questions and confusions. We hypothesize that in some cases, the most efficient way to bridge distance between perspectives is to try to speak about the most basic questions at issue.

Combating public misinformation

There has been, as many of you know, a large amount of misinformation both posted online and passed around in private about the institute. This is stressful, because we’re trying to do something good, and because misinformation can confuse people and make it harder to succeed. Our plan here is to release more public information and seek to correct existing misinformation by going through relevant official channels. We also will seek private resolution of disagreements but, where that fails, subject bad behavior to public scrutiny.

We really would prefer to not have to focus on this, but it keeps coming up, and so we believe it deserves (some) continued attention.

Addressing lack of civic knowledge

One issue that we have encountered in a number of contexts is people seeming to not understand why we are engaging with the public in the way we are, or taking other relevant actions. We chalk this up to a decline in civic knowledge; our plan to address it is to discuss relevant issues publicly.

Navigating hype around artificial intelligence

With respect to artificial intelligence, there is both hype and opportunity. In general, our plan is to form our views in a way that allows us to navigate around the hype and identify where there are genuine opportunities.

General Q&A

— — — break for Q&A — — —
