Should we increase the burden of proof for X-Risks?

I’ve started wondering about the risks related to the Effective Altruism community’s concentration on Existential Risks (also known as X-Risks, these are things that threaten the future of humanity, like climate change, asteroids, or most infamously artificial-intelligence-gone-bad). Just to try and set the tone of this post — I’m not trying to say that we need to abandon X-risks or that people who support X-risks are irrational. Instead, I’m trying to explore whether the risks associated with being wrong about a given X-risk are magnified when that X-risk becomes associated with the EA community. A lot of the cost-benefit analyses associated with X-risks don’t take this magnified risk into account, which is problematic. With this magnification of risk taken into account, I think there is a very strong case to re-evaluate the role that various X-risks have played in EA to date — we need a higher burden of proof for these X-risks than we’ve used so far.

The conspiracy theory heuristic

When it comes to X-risks (as well as normal risks, but that’s beside the point), many rational people use the following heuristic: if a large proportion of ‘experts’ in fields related to that risk think it’s a legitimate risk, believe that it’s a risk; otherwise, dismiss it as a conspiracy theory. I honestly think that this is a pretty good heuristic for most people to use: climate change is perceived as a real threat by the vast majority of scientists — and so I am inclined to believe them. A very large number of scientists think that we should probably stop feeding livestock so many antibiotics — and so I am inclined to believe them. Some people think that the chemicals in vaccines, electromagnetic waves given off by power lines, chlorination of water, and lizard people pose large risks to humanity, but these are not particularly popular perceptions among rational people, and so I am inclined to dismiss them as conspiracy theories.

The issue is that some of the X-risks that the EA community focuses on fail this heuristic, and therefore a lot of rational people will feel fine with dismissing the risk as a conspiracy theory (and, in turn, dismissing the people cheerleading the cause). Take AI for example — a lot of EAs, many of whom are rational, believe that AI poses a legitimate potential threat to humanity, but that’s not a particularly popular assessment among the full suite of people dedicating their careers to fields related to AI. Because it’s not a popular perception among the group that is expected to be experts on this (computer scientists, etc.), to the average reasonable person, people working on AI X-risks look very similar to conspiracy theorists.

Ok, but what about the risks? Why are they magnified?

Now, it may very well be possible that the AI X-risk people are right; not all people who look like conspiracy theorists are wrong. For example, in the mid-1800s a doctor (Ignaz Semmelweis) went insane after his theory that ‘germs cause disease’ and his suggestion that ‘doctors should wash their hands between performing autopsies and delivering babies’ were dismissed as naïve guesswork by the larger medical community. However, we should not discount the possibility that work on AI risk and other X-risks is way off base — we need to be diligent, especially given the very real risks that such work entails. People working on their own on these conspiratorial-looking X-risks run the risk of wasting their resources and maybe a few other people’s resources if they are wrong, but when one of these conspiratorial-looking X-risks becomes associated with the larger EA community, the risks associated with being wrong about the importance of that X-risk are greatly magnified. Why?

First, it creates a potential echo chamber for incorrect X-risks, which increases bias in support of those X-risks — rational people who would otherwise have dismissed the risk as conspiratorial are now likely to agree with it. We’d like to think that broad support for various X-risks in the EA community is because EAs have more accurate information about these risks, but that’s not necessarily the case. Being in the EA community changes who you see as ‘experts’ on a topic — the vast majority of experts working on AI globally don’t see the threat as legitimate (or at least aren’t particularly vocal about it), but the vast majority of experts working on AI who associate with EA do see it as a threat, and are very vocal about it. This is a very dangerous situation to be in.

Second, if an incorrect X-risk is grasped by the community, there’s a lot of resource diversion at stake — EA has the power to move a lot of resources in a positive way, and if certain X-risks are way off base then their popularity in EA has an outsized opportunity cost.

Lastly, many X-risks turn a lot of reasonable people away from EA, even when those X-risks turn out to be correct — when X-risks that fail the ‘conspiracy theory heuristic’ are popular or given a lot of credence within the EA community, it makes it really easy to portray EA in a not-so-flattering light. If we believe that the growth of EA is a very good thing, then we really need to deeply consider the reputational risk associated with certain X-risks (which is why, for example, a lot of outward-facing EA materials downplay the role of X-risks in the community).

With those three additional risks in mind, I think that there is a very clear case to increase the burden of proof necessary to promote X-risks within the EA community, and to re-evaluate them as necessary. In addition, if we are serious about any one of these conspiratorial-looking X-risks, we need to convince the global collection of relevant experts that it is a legitimate risk — if we can’t do that, then that’s a bad sign. For each of these X-risks, resources spent on convincing non-experts (not everyone in EA who supports AI risk work is an AI researcher!) to donate to groups that work on that X-risk should be deprioritized until the larger community of experts supports the importance of the risk. In addition, following this line of reasoning, there could be a case to be made that EA-supported work on these conspiratorial-looking X-risks should focus on awareness-raising among the global community of relevant experts, rather than direct work on the problem itself. In cases like AI, where there are a decent number of experts who are worried, we should work on making them more vocal, rather than using the EA community as a surrogate for their concerns.