December 2022 Public Strategy Update

Leverage Research
Dec 21, 2022 · 13 min read


The following are Executive Director Geoff Anders’ prepared remarks for Leverage’s December 2022 public strategy update, which was held on Friday, December 16 at the Leverage online office. Audio of the actual remarks and Q&A is linked just below; the actual remarks followed the prepared remarks fairly closely.

Leverage’s strategy presentations are about the institute’s perspective on the current events and societal forces that impact our work. For readers interested in a general introduction to Leverage Research, please see our Introduction to Leverage Research essay, our website, and a description of our programs.

Audio of actual remarks and Q&A, December 2022 Public Strategy Update

Introductory Remarks

Welcome to Leverage Research’s second public strategy update. This update will build on the previous update, which was given in June of this year. If you’d like to review the material from that update, we have made available an internal strategy document, the text of my prepared remarks for the public presentation, and audio of the presentation itself. (Editor’s note: These links are for the June 2022 public strategy update.)

If you weren’t there, or haven’t read up on Leverage’s strategy previously, don’t worry — I will summarize some of the ideas from last time, and there will also be a Q&A after this presentation.

We have one hour for today’s presentation. I expect to spend about thirty minutes presenting, with interspersed Q&A. I’ll then open up for general Q&A, which will take another thirty minutes. For those interested in further, less formal discussion, I invite all of you to join me in the Leverage Discord after the general Q&A, for another thirty minutes of informal discussion.

With respect to the Q&A, we very much want people to understand Leverage’s perspective and approach to positive impact. As a result, please feel free to ask anything that relates to Leverage that you think will help you understand the institute’s perspective better. If your questions pertain to what I’ve just presented, then feel free to raise those when I pause for questions. If your questions or comments are more general, those are also welcome, though I would ask that you keep those for the general Q&A.

We’re going to be recording the presentation and the interspersed Q&A sections, as well as the general Q&A. If you have questions that you would prefer to not have recorded, we invite you to raise those during the informal Discord discussion afterwards, which will not be recorded.

In terms of how to use Gather, you can press 6 to raise your hand. Please keep your mic muted unless you are speaking, in which case unmute. You can also ask questions in the chat; make sure the chat is set to “Everyone” so that everyone can see the question you ask.

General Strategy

Understanding Human Nature

In our first update, I talked about Leverage’s original plan and the idea of differential technological progress.

The essence of this idea is that the timing of development across different areas of science and technology can depend on the actions of individuals and small groups. As a result, it is possible for individuals and small groups to affect humanity’s overall trajectory by helping to determine the order in which technologies are developed.

Leverage’s first foundational research program, into psychology and human nature, was based on this idea. It is often observed that humanity’s power has grown more quickly than its wisdom. It was our aim, through that research project, to understand human nature better so as to understand how humanity’s wisdom could be increased and its power safely amplified. Our idea was that, with greater wisdom, humanity would be better equipped to handle the scientific and technological power it has acquired and the new power it will acquire in the future.

Recognizing Human Goodness

This original research project yielded a number of important insights that have shaped our subsequent efforts.

First among these is the claim that humanity is good. This may seem like a surprising claim, for a number of reasons. First, it is a very blunt claim on a very large topic. It may seem difficult to make claims like this. Second, it sounds like an evaluative claim. Research is often thought to aim at making positive claims, i.e., claims about how things are, rather than evaluative claims, i.e., claims about whether things are good or bad.

With respect to the latter, the idea that humanity is good breaks down into a number of more basic ideas. The first pertains to human goals. Humanity is often thought of as selfish or self-motivated, greedy or profit-motivated, hateful or petty, or worse, sinful and corrupt. Through our research, we found that people have instrumental goals which chain down to basic goals. Those basic goals, as far as we could see, were the same or similar, and were what one might commonsensically think of as “good” or “neutral” — and also compatible with one another.

Rather than status or power or a jumble of automated reflexes, we found that people were aiming, at the most basic level, at flourishing, connection, and acceptance. In some cases this manifested locally, in the pursuit of loving and connected human relationships and friendships. In some cases this manifested globally, in the pursuit of universal flourishing and interconnection. Our researchers differed on the details of what the basic goals were, but all of the views supported the idea that humanity’s goals are basically fine.

Of course, humankind is selfish and greedy, hateful and petty, and possibly much, much worse, at least at times. What we found was that the problems in human nature pertained not to the goals that people had, but to how people expected to achieve those goals.

With respect to people’s plans, and how they expected to achieve their goals, we found a shocking degree of human dysfunction. It would not be an exaggeration to say that it was traumatic for our researchers to see stark representations of mental structures: structures in which people did not care about each other, in which they were planning to hurt each other, and in which their minds were tangled in horrible knots of irrational fear and self-justification.

Nevertheless, despite the difficulty of looking at these mental structures, we found, to our great relief, that the problematic mental structures were shiftable, that people could change, and that what appeared to be problems in human nature could be addressed, if approached correctly.

With good or neutral goals, and a great degree of improvability and correctability, it remained to assess humanity’s overall situation: to determine whether people are likely to be instruments in the creation of their own positive future, or whether human nature is turned against itself in a way that means that humanity is best factored out.

It is difficult to make a judgment on this last point, but from everything we have seen we are optimistic. Thus the idea that humanity is good amounts to this: that humans have good or neutral goals, that these goals are compatible, that the problems arise from how those goals are expected to be achieved, that these expectations and plans can be modified, that humanity can thus be improved if approached properly, and that there are great prospects for humanity to play a central and positive role in the creation of a future good for itself.

Increasing Human Wisdom

A second important conclusion we reached pertained to the value of public engagement. After the dissolution of the first project, Leverage reformed, now with a much greater focus on public engagement and communication.

The value of public engagement was first recognized in the spring of 2019, prior to dissolution. This lesson was reinforced after dissolution, both by feedback we received and by our own experiences starting to communicate with the public.

This lesson, however, is not complete. In recent times we have come to believe that, beyond mere engagement, one of the essential means by which humanity will come to have greater wisdom is communication and discussion, in both public and private contexts, about important issues — including how to guide our own technological future.

Thus we have reached at length what may seem like a very natural conclusion, which is that humanity’s wisdom might be increased through public and private discussion and engagement. This is a natural conclusion, though it is certainly not believed by everyone.

Hence, we expect that Leverage’s strategy going forward will include not only clear communication, but the attempt to educate and inform individuals and the public in a way that conduces to their greater wisdom. We expect this aim to be greatly served by our Exploratory Psychology research program, where we have begun open-sourcing the mental tools and techniques we developed, and inviting people to join an external community of researchers interested in exploring and studying the mind.

If anyone here is interested in learning more about our psychology research or being part of our external research community, please speak to Kerry Vaughan, who is in charge of our Exploratory Psychology program. We have recently released a bunch of material pertaining to our psychology research, including the first step for new researchers, which is to learn about an introspective method called belief reporting.

The Role of Supporting Projects

In addition to increasing public understanding in order to increase humanity’s wisdom, Leverage has also, since its inception, recognized the value of supporting projects. Research does not happen on its own, or at least, it does not always. When it does happen, research is also not automatically distributed to the right people or organizations. Thus much research that should happen does not, and much research that should have an impact only does so much more slowly than we would want.

As part of this, Leverage has found it valuable to run supporting projects, aimed at communicating some truth, or developing some resource, or causing some other relevant change. The primary example of this thus far in the institute’s history is its decision to support the early growth of the Effective Altruism movement. We expect that there will be other examples in the institute’s future.

Summary of General Strategy

From our June strategy update and this one, we can say that Leverage’s strategy for impact includes:

  • increasing public understanding, so as to increase humanity’s wisdom overall
  • causing differential progress in scientific and technological areas, so that humanity develops knowledge and power in a sequence that better accords with its wisdom, and
  • running supporting projects, in order to make sure that research that is done ends up having the impact it should.

Now is a good time to break for the first Q&A.

Brief Q&A

— — — break for Q&A — — —

The Strategic Landscape

I described part of the strategic landscape, as we see it, in the last strategy presentation. For today, I’m going to cover some additions to that picture. When we break for questions, please feel free to ask about anything from that picture or from what I talk about today.

We also have a new internal strategy document available. (Editor’s note: This link is for the December 2022 public strategy update.)

Overarching Narratives

One part of the strategic landscape that I did not talk about last time is the narrative landscape. We live in a memetic environment composed of many narratives. These narratives overlap and criss-cross in a variety of ways.

I think it’s important to be even-handed on the topic of narratives: narratives often simplify and mislead, but on the other hand, without narratives it can be very hard to understand anything at all.

For this update, I’ll discuss two narratives which are likely to be familiar to people in Silicon Valley and the surrounding diaspora. The first pertains to progress, the second to the future.

With respect to progress, the standard narrative is that humanity has made substantial scientific and technological progress over the last several centuries, that this progress has been extremely good, that we need more progress of this type, that people oppose progress much more than they should, and that they oppose progress because they don’t understand how important or good the progress has been.

We think this narrative is true on some counts, but incorrect or inaccurate on others. It is true that humanity has made substantial scientific and technological progress over the last several centuries. It is true that this has been extremely good in a number of ways, and also that we need more progress. However, we think that scientific and technological progress has also been bad in a number of ways and that people know this. People are, in our view, much more pro-technology, pro-science, and pro-progress than is commonly recognized. But they also understand the ways in which science and technology have caused problems, and they don’t like being misled. Developers and proponents of new science and technology are rarely honest and straightforward about the harms and potential downsides of new developments, and we think this leads to a lot of resistance.

With respect to the future, the standard narrative, at least in Silicon Valley, is that a robot future is coming. We may merge with machines, or plug into the Metaverse, or upload ourselves to the Internet, or turn the world over to a superhuman AI. The narrative is that this future is coming, that it is inevitable, that it is good, and that the people building the future are happy about it.

We now suspect that this narrative is false on almost every count. The key observation here is that these visions are essentially escapist: we’re getting rid of our messy biological brains, or leaving our fragile bodies behind, or replacing ourselves with AIs, or literally getting in a spaceship and leaving the Earth.

Escapism can be fun and has its place. But it’s a bad foundation for science and technology projects. This helps to explain why many of the more future-oriented science and technology projects end up not working: it’s hard to think realistically about the technologies you’re developing if you really don’t want to be in the circumstance you’re in.

Our view is that the Silicon Valley-style future is actually pessimistic rather than optimistic; that it is not inevitable; that the people who are supposed to be creating it are frequently demotivated, precisely because the future isn’t a good one; and that the best prospects for a science- and technology-enhanced future lie elsewhere.

Nearby Ideologies

Another aspect of the strategic landscape is the set of ideologies that occupy it. Ideologies are a particular thing, a particular mental and social technology, and a lot can be said about them. For now, I’ll just talk about four ideologies and what we think is happening with them.

Mathematical Scientism. There is an ideology that I have been tracking for a number of years now. It pertains to atomization, the legitimization of impersonal authority, the delegitimization of personal authority, and the predominance of a particular form of knowledge — mathematical scientific episteme. For now, I am calling it “Mathematical Scientism,” though you’ll get much of the idea if you think of the name “Science™.”

There’s a long history here and this is quite difficult to describe, but the primary update is that I now think that this ideology has lost a lot of its motive force, that the general trends powering it have burned out, and that what is left is just momentum from before.

Since this is a different and unusual topic, I’d be happy to answer questions about it in the Q&A.

Social Justice. With social justice, we need to distinguish the idea itself from the ideology. The idea is that groups, including those defined by race and gender, have been treated badly historically and continue to be treated badly now, and that this should be addressed, thereby bringing about greater social justice. The ideology surrounding this idea, which has been popular over the last several years, is characterized by a more forceful push for progress on topics of social justice.

Overall, the read here is that social justice as an ideology made some advances, including getting much broader acceptance for the concept of structural racism, but that now its proponents have radicalized in a way that has alienated a lot of the larger population. As far as I can tell, the power and reach of this ideology is now on the decline.

Effective Altruism. Many members of the audience here will be familiar with Effective Altruism, which has been in the spotlight recently, so I won’t rehash the details. The main developments here have been the public debut of EA, which took place with the EA Time Magazine cover and the profile of Will MacAskill in the New Yorker, and then the FTX collapse and disgrace of EA mega-donor Sam Bankman-Fried.

EA is now in a very tough spot. There is very strong competition among ideologies to be accepted into society, and now that EA has a severe and visible weakness, there will be substantial attempts to push it out. We’re basically seeing this with all of the commentary tying Sam Bankman-Fried to EA and using this to showcase EA’s flaws.

The main choice here is whether EA mounts a good defense. EA’s strength has been moral authority, a resource it has developed over many years. That moral authority has been called into question by the FTX debacle and by EA leaders’ lack of response to it. Either EA’s leaders give a convincing response soon, or the EA movement will end and the EA community will contract. The community would then continue to exist but go into slow decline, and the same would be true of the EA ideology.

LessWrong Rationalism. Some but not all of you will be familiar with the LessWrong community, and with LessWrong Rationalism in particular. This is essentially a group of people who are interested in improving their ways of thinking, especially by modeling their cognition after imagined ideal agents, for the purpose of being able to deal with the potential risk of near-term artificial superintelligence.

The big update is short timelines. For reasons that haven’t become totally clear, a number of the key people in the LessWrong Rationalist set have come to believe that the world will soon be destroyed by artificial general intelligence. We don’t think that’s true, and it would be good if the people involved would display their reasoning on the matter. In the meantime, we think the doom mentality has negative effects on people inside and outside the group, including researchers, and has negative effects overall on the field of AI safety.

Brief Q&A

— — — break for Q&A — — —

Expected Developments & New Challenges

We like to make predictions. One prediction: the movement-building energy that is no longer going into EA will go into new movements. There is also a general pessimism that people seem to have about the future, manifested most clearly in the lack of optimistic, realistic science fiction.

We expect this pessimism will be overcome, and that one of the important vehicles for this will be a new movement.

I’ll leave the rest for the general Q&A.

General Q&A

— — — break for Q&A — — —

If you want to learn more about Leverage Research’s perspective, you can find notes, internal documents, and audio recordings of all of our strategy updates here.

If you’ve found the topics discussed in this post interesting and would like to contribute to our thinking, join the discussion in our Discord community.

Screenshot of Leverage December 2022 Public Strategy Update in our Gather office
