Orienting to anxiety

Duncan A Sabien
Apr 2, 2020 · 8 min read


Note: I am not a medical professional, and furthermore this essay does not apply to clinically-sufficient levels of anxiety, i.e. anxiety disorders. If you are experiencing life-disrupting levels of anxiety, please contact a medical professional.

Recently, I made a Facebook post strongly recommending that my friends and acquaintances purchase a book that I think is Important and Worth Reading.

One of the replies to that post was:

This looks like the kind of book that will give me anxiety…

I had a strong and complex reaction to the comment.

I think that sentiments like the one expressed by my Facebook friend are actually fairly common. A lot of people employ a strategy of avoiding exposure to certain information and experiences, as a kind of coarse-grained emotional regulation.

On one level, I think this just straightforwardly makes sense. If I don’t enjoy horror movies, and horror movies keep me up/give me nightmares, then I can route around that unpleasantness by just … not going to see horror movies.

But in this case, my friend was considering avoiding true and potentially relevant information, because they figured that learning that information would give them anxiety.

I’m going to add in an assumption here that may not be true of the real person in this specific case, but which I think is pretty frequently true of millions of people:

They wanted to avoid true and potentially relevant information, because they figured learning that information would give them useless anxiety.

i.e. they figured that they would learn something unpleasant, become anxious, and also do nothing meaningful about it. They figured it would cause them to fret and worry to no real productive end, and therefore the choice was between:

World A, in which they don’t know, aren’t going to take action, and are happy.

World B, in which they do know, still aren’t going to take action, and are unhappy.

I agree that if those are the actual possible worlds, there’s a strong argument for preferring World A to World B. But I don’t think those are the actual choices (or at least, not the only choices).

The assumption that nothing-will-change-except-my-emotions is a bad one. It’s often true, but it’s a particularly pernicious form of status-quo bias.

A few weeks ago, many, many people would have confidently declared that there was simply no way they could work from home, or that they could stop going to work entirely. Many people would have insisted that their local municipalities absolutely could not and would not issue lockdown orders, and that even if they somehow (unimaginably) did, the vast majority of people would refuse to comply.

Turns out, though, that the world can pivot, and suddenly behave in ways it never has (or hasn’t for generations).

The key thing I want to zoom in on, here, is the absoluteness of the belief. I saw a lot of people saying that these things just wouldn’t happen, that they were in a sense impossible.

Not unlikely. Not costly. Impossible.

People were rounding off. They were confusing their bet-making strategies (where you put your dollar down on the most likely scenario regardless of whether it’s 51% likely or 99.999% likely) with their actual credences. Many of them probably weren’t even aware that they had non-zero doubt or confusion, though for most people you could get them there with a careful five-minute conversation.
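
To make that distinction concrete, here’s a minimal sketch with made-up numbers (mine, not anything from the original post): the bet collapses your distribution down to its most likely outcome, while the credence keeps the leftover doubt around.

```python
# Hypothetical numbers, purely for illustration.
credence = {"lockdowns happen": 0.03, "no lockdowns": 0.97}

# A bet-making strategy collapses the distribution to its most likely outcome...
bet = max(credence, key=credence.get)   # "no lockdowns": a perfectly fine bet

# ...but your actual credence should keep the residual doubt around.
residual_doubt = 1 - credence[bet]      # 0.03, which is not zero

print(f"bet on {bet!r}; remaining doubt: {residual_doubt:.2f}")
```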

There are a lot of reasons why this happens. It often makes pragmatic sense to ignore very unlikely scenarios; it’s often beneficial to signal confidence disproportionate to your actual sense of what’s likely; sometimes people can better enact social change by “pretending” that something is guaranteed and commonly-known even if it isn’t.

But the thing about probabilities of 1 and 0 is that they mess up the math. Once a belief sits at exactly 0 or exactly 1, no amount of evidence can ever move it.
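
Concretely (this is my toy illustration, not the author’s): under Bayes’ rule, evidence can only multiply the odds you already hold, so a prior of exactly 0 or exactly 1 is permanently stuck no matter how strong the evidence is.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H | E) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# A 100:1 likelihood ratio moves a 1% prior to roughly 50%...
print(bayes_update(0.01, 0.99, 0.0099))  # ~0.50
# ...but priors of exactly 0 and 1 don't move at all:
print(bayes_update(0.0, 0.99, 0.0099))   # 0.0
print(bayes_update(1.0, 0.99, 0.0099))   # 1.0
```

A 1% prior and a 0% prior look almost identical from the outside, but they respond to evidence in completely different ways.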

I think there’s something similar going on with the people who think that learning something will give them nothing but useless anxiety.

Underneath the assumption that the anxiety is useless is the belief that one “can’t” do anything about it.

That even if, say, the new information means that you should quit your job tomorrow and spend the next five years learning everything you need to switch into an entirely new field because solving Problem X is desperately urgent…

“Well, of course I’m not actually going to do that. Don’t be ridiculous. That’s just not how things work.”

The problem here is that we’re ignoring small numbers and small chances. We’re rounding “really unlikely” or “really difficult” or “really costly” off to zero.

Which means that if it would take fifty bits of evidence to actually get me up off my butt and cause me to believe that switching careers was, in fact, the better of the two options (the less costly, the more likely to work), then we’re never going to get there.

We’re never going to get there because when I encounter the first bit of evidence, I’ll round it to zero. And when I encounter the second bit of evidence, I’ll round it to zero. And when I encounter the fiftieth bit of evidence, I’ll be rounding it to zero rather than seriously considering it along with all the other evidence. In fact, because of the way human minds work, I’ll probably be even more dismissive of that fiftieth bit than I was of the first few, because by then I’ll have built up the habit of hand-waving that evidence away.
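
To put toy numbers on the “fifty bits” picture (again, my sketch, with assumed figures): evidence adds up in log-odds, one bit at a time, so fifty honest bits can lift even a one-in-a-quadrillion belief to better than a coin flip, while fifty bits each rounded to zero leave you exactly where you started.

```python
import math

def posterior_after_bits(prior, bits):
    """Add `bits` of evidence in log2-odds, then convert back to a probability."""
    log_odds = math.log2(prior / (1 - prior)) + bits
    odds = 2.0 ** log_odds
    return odds / (1 + odds)

prior = 1e-15  # roughly fifty bits of prior skepticism

honest = posterior_after_bits(prior, 50)              # ~0.53: the belief flips
rounded = posterior_after_bits(prior, sum([0] * 50))  # every bit rounded to zero first
print(f"accumulated: {honest:.2f}; rounded away: {rounded:.0e}")
```

Each dismissal below is one of those bits, rounded away before it can accumulate: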

“It’s just in China.”

“We have it under control.”

“Our health care system is better than Italy’s, anyway.”

“Americans are too accustomed to freedom of movement; even if it would help, it’s never gonna actually happen.”

That continuous rounding-to-zero means that you’re too slow to update. That you’re systematically too slow—that by the time the evidence becomes unignorable and overwhelming and you do finally change course, it’s going to be later than it could have been, later than it should have been.

It’s a recipe for deep, deep regret.

Only sometimes, of course! Part of the reason the strategy persists is that most of the time, unlikely stuff doesn’t happen. The dismissiveness is rewarded most of the time.

But every time the unlikely thing does happen, you’re much, much worse off than you otherwise would have been. The strategy that benefits you ninety-nine times out of a hundred has the side effect of actually damaging you that hundredth time. Damaging you in unnecessary, avoidable ways.

The solution (I claim) is to get those “infinite terms” out of the equation. Stop rounding to zero and one. Hang on to the small numbers, the slivers of chance.

This means that learning new and unpleasant information isn’t useless, because you’re not just fretting and worrying and also never going to do anything about it.

Instead, you’re fretting and worrying and considering the realities of changing course.

You’re reading a book about existential risk, and you’re thinking “Gee, yesterday I was darn near certain that I wasn’t going to quit my job and go back to school and try to break into a completely different career. But, uh, turns out that there’s a chance that’s actually the thing I think I need to do? Crazy. That’s so difficult, and so costly, and so expensive—I don’t want to do that, it would hurt to do that. But this other thing might hurt even more…”

This person is now in the world of tradeoffs. They’re in the world of costs and consequences, of comparing the value of different courses of action.

They’re no longer in the world where suddenly their life is just worse, because their comforting false beliefs have been torn away, and they’re left with no new possibilities and less hope.

This is only part of the puzzle, of course. It does actually get more complicated. Sometimes, for instance, there really is no plausible path forward—if I’m living paycheck to paycheck out in Wyoming doing the only work I’ve ever done and I’m not particularly bright and I don’t have any resources to fall back on, and suddenly I learn that artificial intelligence might kill us all, and if so it’s going to kill us in such a way as to completely invalidate my guns and my bunker so I can’t even effectively prep—

I dunno. I can see the argument for “maybe that person should just avoid the information, and live their happy little life without having to deal with the disruptive truth/disruptive possibility.”

But also if you extrapolate out across thousands or millions of people…

If we all buy into the “it’s not going to happen” narratives, the “things just have too much inertia to change” mentality, then we will miss marginal opportunities for people to make changes and make a difference. If Homer Hickam had thought the way I’m claiming a lot of people think, he would never have gotten out of West Virginia and become a rocket scientist.

And if more people related to the possibility-of-having-to-change-gears the way I’m recommending, our reaction to things like COVID-19 would be much more mature and fewer people would die.

(I note that this isn’t a mere hypothetical. I live in a social bubble where people are alert to small chances and willing to consider costly and difficult behavior changes, and within that social bubble, approximately everyone was 1–3 weeks ahead of the rest of the country on COVID-19: well set up with food and supplies and living situations, and even equipped with some impressively mature contingency plans for cases where, e.g., someone gets sick and local hospitals aren’t an option.)

It is the precise nature of “things like COVID-19” that they do happen, eventually. Disasters and pandemics and other rare and unpleasant circumstances show up something like one week out of every hundred or thousand or ten thousand, which means that on a long enough timeline they are guaranteed to occur. If one’s strategy for handling them is “just lose, when they occur,” then … well, I guess that’s fine?

But one shouldn’t accidentally lock themselves into the just-lose strategy by neglecting a pretty fixable human bias.

To recap:

How should one orient to useless anxiety from things like learning about global catastrophic risks?

Duncan’s suggestion: solve the problem by attacking the “useless” part. Improve the information channel by which the anxiety has the capacity to actually influence your behavior; remove barriers to plan-changes that are just there due to inertia, and disconnected from the actual costs.

Obviously most of the time you’ll still do nothing/make no changes. But that’s because most of the time the cost-benefit analysis will tell you that it’s the correct decision to make. This will also reduce anxiety, because instead of just fretting needlessly, you’ll be able to tell yourself something like “look, I can relax here, because I genuinely believe about myself that I would shift gears, if I saw the right evidence.” Instead of feeling like there’s nothing you could possibly do because you’re stuck inside the status quo, you will (I claim) feel more like there’s stuff you could do, but you’re choosing not to do it because it doesn’t seem like the right tradeoff—and therefore you have “done something” about the information.

It’s the same sort of advice as replacing the phrase “I can’t” with “I won’t, given my constraints and the choices available to me.” There are some things you literally cannot do, but far fewer than most people say, and far fewer than they trick themselves into believing.

“I don’t need to sit here worrying, because I’ve already thought this through, and I’m doing the best that I can given my current state of knowledge and resources. There’s a risk that I will turn out to have misweighed things, but at least I weighed them at all—I’m not just ignoring my intel.”

This is much, much, much better than “I’ll just cover my ears and close my eyes and hope that this isn’t one of those times where the rare and unusual thing happens.”

Duncan Sabien is a writer, teacher, and maker of things. He loves parkour, LEGOs, and MTG, and is easily manipulated by people quoting Ender’s Game.