Common Knowledge and Miasma

Duncan A Sabien
17 min read · Aug 25, 2018

Followup to: It’s Not What It Looks Like
Further reading:
The Costly Coordination Mechanism of Common Knowledge (Ben’s post was meant to be a canonical introduction to common knowledge, but I believe that my introduction below is smoother for people who haven’t thought about the concept before, and that Ben’s is better as a followup, more thoroughly answering the question “okay, so what do we do with this?”)

The structure of the example in the first half of this article comes from a talk given by Andrew Critch in 2016; the core of the idea is his, but any clumsiness in the explanation should be blamed on me.

Like many of my posts, this one is primarily about defining a couple of terms. The goal is to point at a cluster of experiences that you’ve probably had, which you may or may not have seen as connected or similar, and say look—see how all of those dots form a constellation that looks like a bear?

It seems to me that increasing the number of things that people can see and reason explicitly about is a huge part of social progress; reification gives us ever more and ever finer tools for understanding and interacting with reality.

Common Knowledge

There are three requirements for a given fact to be “in common knowledge”:

  • Everyone knows it
  • Everyone knows that everyone knows it
  • Everyone knows that everyone knows that everyone knows it

(note: this is a slightly misleading oversimplification that will be corrected below)

All three pieces are important, and whether we're consciously aware of it or not, we track common knowledge all the time. Our brains are hardwired to pay attention to social dynamics and to respond adaptively, and common knowledge concerns are a big part of that.

An example: Let’s say that Alice, Bob, Carol, and Doug are all sitting down to dinner. Alice, Bob, and Doug are a trio of friends that often hang out. Carol is friends with Alice only, but she’s also a classmate of Bob’s.

At some point during the dinner, Bob makes a sexist joke, similar to jokes he's made in front of Alice and Doug many times before, and to jokes he's told in class with Carol.

Doug laughs. Alice, though, has a different reaction: she goes quiet, visibly uncomfortable.

Alice has never objected to such jokes before — indeed, she usually laughs and comes right back with another joke in a similar vein. Bob is surprised, a little hurt, and defensive.

“What?” he asks. “What’d I do?”

Alice hems and haws for a moment, and then eventually says “That’s sexist.”

Bob is confused. “It’s a joke? Everyone here knows I’m not actually sexist.”

Alice: [is still uncomfortable]

What happened?

What Bob is missing, and what Alice is instinctively attending to, is the lack of common knowledge.

Let’s say that Bob’s claim is true. He’s not actually meaningfully sexist, and also everyone in the group knows this—Alice and Doug from their long friendship, and Carol from interacting with Bob in class.

What this means is that, on their own, no person in the group would object to Bob’s joke, or feel uncomfortable hearing it. They all know that it’s just good-natured fun.

The problem is, not everyone knows that everyone knows that Bob is not a sexist. Alice knows that Doug knows that Bob isn’t a sexist, since they’ve all hung out together and lobbed jokes like these back and forth.

But Alice and Carol haven't spent much time together with Bob in the room. And given that Carol didn't immediately laugh, Alice is suddenly aware of, and sensitive to, the possibility that maybe Carol doesn't know that Bob is actually an okay guy. Maybe Carol was made uncomfortable by the joke, or is forming a negative opinion of Bob.

And given that making people uncomfortable is bad, and sexism is bad, Alice feels pressure to step in and say something normatively good. Their little dinners are a private, in-group, high-trust setting, where they can occasionally get away with bending some of society’s standard rules, but the uncertainty caused by Carol’s presence means that those special exceptions might not apply, and Alice needs to signal that she’s aware of, and supportive of, the general set of “shoulds” that her society subscribes to.

(I don’t mean to imply that her motives are entirely selfish. She probably also genuinely cares for Carol’s comfort, and genuinely wants to be a part of the immune system that holds back bigotry. But even if she were a total sociopath, the need to look good in front of Carol would be sufficient for her to feel pressure to object.)

The chain of reasoning that Alice followed often isn’t conscious or explicit. It happens in a flash, because it’s the sort of thing our brains are very, very good at doing.

Bob missed it, because of something akin to the typical mind fallacy—he knows that Carol is already comfortable with his sense of humor, since she’s interacted with him in class and agreed to come to dinner. Bob is feeling comfortable and relaxed and safe, and isn’t instinctively tracking all the way into [Alice’s model of [Carol’s model of [his jokes]]], and how that might be different from his own sense of how Carol is feeling.

You might think that the problem would be solved if everyone knew that everyone knew that Bob is not a sexist. That’s how Bob tried to solve it—by declaring it out loud (and with confidence that no one was going to object, since it was true). That would preempt the question of whether or not Carol was forming an uncharitable opinion of Bob, for instance.

Unfortunately, Alice is not out of the woods yet. Even if the dinner had started off with everyone knowing that everyone knew that Bob isn’t a sexist, there still would have been some social danger for Alice if she hadn’t publicly objected to Bob’s joke.

This is because the third layer of common knowledge is missing.

In this version of the situation, each of Alice, Bob, Carol, and Doug knows that Bob is not a sexist. That’s the first layer—nobody’s misinterpreting, in their own heads, what Bob was trying to accomplish with his joke.

They also all know that each of them knows that Bob is not a sexist. In other words, Alice, in her own head, isn’t worried that Carol or Doug might accidentally take Bob’s joke the wrong way. Similarly, Carol isn’t worried about Alice or Doug, and Doug isn’t worried about Alice or Carol. That’s the second layer.

But since Alice and Carol have never interacted with Bob while in one another's presence, they're still missing that third layer. Neither one of them knows that the other one knows that she herself is comfortable with Bob's humor. Alice knows that Carol knows that Bob isn't a sexist, but she doesn't know, for sure, whether Carol knows that Alice knows it; when she imagines what Carol might be thinking about Alice, it could go either way.

All of this is in a thought-bubble in Alice’s head—this is what Alice worries that Carol might be thinking.

Yes, it’s true that Alice knows that [everyone knows]. But that might be unique or privileged knowledge—she doesn’t know that others know that.

Similarly, it’s true that Carol knows that [everyone knows]. But Carol doesn’t know that Alice knows that Carol knows that.

Each of Alice, Bob, Carol, and Doug can, individually, be aware that everyone knows, such that it's true that "everyone knows that everyone knows," and yet it can still be possible for each of them to feel alone in that knowledge. Maybe they're the only one who has access to all the information. Maybe they're the only one paying attention.

And if Carol imagines that Alice doesn’t know—that Alice is just sitting back and letting sexist jokes slide in front of random female dinner guests…

Well, it would be reasonable to judge her for that. To think of her as the kind of person who won’t publicly defend and uphold commonly agreed-upon standards of decency and equality.

Alice feels vulnerable to that kind of judgment. Not explicitly, not verbally—this kind of summing-up takes place in a flash, without conscious thought. Neither Alice nor Carol was thinking all of this through in words, in the moment.

Instead, what happened was that Alice’s subconscious social modeling software did a lightning fast calculation and sent up a red alert. Bob told the joke, and Carol didn’t laugh, and in that instant, Alice felt a strong pressure to speak up—ostensibly in objection to Bob, on Carol’s behalf, but also at least partially in self-defense.

None of that would have been necessary if there had been common knowledge around Bob’s sense of humor. If Alice and Carol had each known

that the other knew

that they knew

that Bob was not a sexist

… then there would have been no need for pre-emptive defense of any kind. Alice would not have worried about Carol being uncomfortable, and would also not have worried about Carol judging Alice for being a poor ally.
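(If it helps to see the layer-counting spelled out mechanically before moving on, here's a minimal sketch in Python. This is my own illustration, not code from any referenced source, and representing "knowledge" as a set of nested statements is a deliberate toy, not a serious epistemic model.)

```python
# Toy model of the dinner scenario: each person's knowledge is a set of
# statements, where a statement can itself be a nested "everyone knows" claim.

FACT = "Bob is not a sexist"
PEOPLE = ["Alice", "Bob", "Carol", "Doug"]

def everyone_knows(statement, knowledge):
    """True if every person's knowledge set contains `statement`."""
    return all(statement in known for known in knowledge.values())

def common_knowledge_depth(fact, knowledge, max_depth=10):
    """Count how many layers of "everyone knows that everyone knows
    that..." actually hold, up to max_depth."""
    statement, depth = fact, 0
    while depth < max_depth and everyone_knows(statement, knowledge):
        depth += 1
        statement = ("everyone knows:", statement)  # wrap for the next layer
    return depth

# The second version of the story: everyone knows the fact (layer one),
# and everyone knows that everyone knows it (layer two)...
base = {FACT, ("everyone knows:", FACT)}
knowledge = {person: set(base) for person in PEOPLE}

# ...but nobody knows that everyone knows that everyone knows, so the
# third layer fails and Alice's alarm bells still go off.
print(common_knowledge_depth(FACT, knowledge))  # -> 2
```

In this toy representation, creating common knowledge means getting matching statements into everyone's set at every depth simultaneously, which is exactly what a single public declaration struggles to guarantee.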

Miasma

Common knowledge is actually a technical term, and when used in a technical context, it takes more than three layers (properly speaking, it requires an infinite number of layers).
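For the formally inclined, the standard epistemic-logic formulation (the textbook version, not anything specific to this post or to Critch's talk) defines an "everyone knows" operator and takes the infinite conjunction of its iterates:

```latex
% E p means "everyone knows p"; E^n p iterates that operator n times.
% A proposition p is common knowledge (C p) only when every layer holds:
C\,p \;\equiv\; \bigwedge_{n=1}^{\infty} E^{n}p
  \;=\; E\,p \,\wedge\, E^{2}p \,\wedge\, E^{3}p \,\wedge\, \cdots
```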

In practice, three is about the limit that people are capable of consciously tracking, reasoning, and talking about explicitly without needing to draw a lot of complicated, nested diagrams.

But that doesn’t mean that our brains, under the surface, aren’t tracking those higher layers. It’s easy to imagine the above situation unfolding, and Doug walking away with a vague sense that Alice is too blunt—too quick to switch into a sort of surprisingly critical mode when she’s worried about something, and not very good at connecting the dots for other people.

That might unfold into a layered statement like “I think that it’s bad that Alice makes Bob think that Alice thinks that Bob is doing something wrong, when really it’s that Alice thinks that Carol thinks that Alice isn’t doing enough to guard Carol from misunderstandings about Bob.”

(Here I’m deliberately not drawing diagrams, because I want you to be able to feel the sense of dizziness that comes from trying to unpack this stuff, and to compare it to the straightforward-seeming compression of “I dunno, Alice just kind of always over-corrects in these situations?”)

And then, if all of that continues to be true in Doug’s mind for a while, and he takes actions and makes statements in accordance with his belief, it’s possible to imagine that rippling out through the social space and becoming part of the way that people think about and deal with Alice. And six months down the road, you could imagine Alice feeling bummed about this, and someone else (let’s say Finley) watching Alice at a party, and coming away with the impression “I think Alice is starting to think that we all think she’s a downer, or something?”

Which, if unpacked, would be something like: “I (Finley) think that Alice thinks that all of us together think that Alice thinks that people are somehow breaking the rules when those people (and all the rest of us) don’t actually think that they’ve done anything wrong, and I think that Alice thinking this is making her think that we think that it would be better off if she didn’t hang out with us anymore, and that’s making her sad, and I think that she thinks that we either don’t notice or don’t care about her being sad.”

And if you’re Finley, and you want to do something about that, it’s hard to know where to begin.

What I’d like to do now is introduce a new word, miasma, which is a label for some of what happens when our brains track and respond to the third and fourth and fifth (etc.) layers without being able to fully process them, instead rounding things off and summing things up, making hotfixes and putting on bandaids in complex social situations.

Miasma is the ghost in the machine. It’s one of Moloch’s fingernails. It’s the set of problems that spontaneously arise out of the uncertainty between people, as a result of a lack of common knowledge.

Miasma is the “what’s going on?” in the dinner-joke situation above. It’s the root source of Alice’s stress and Bob’s hurt, which otherwise might be described as being “based on nothing” or “for no reason.”

I once read an excellent post by Katja Grace about the costs we impose on people when we share our secrets with them. If I have a secret which is widely punished in our society, but I trust you not to punish me, I may share that secret with you, so that I have someone to talk it over with.

But this puts you in a bind. Now, if the secret comes out, and furthermore it comes out that you knew, and you did not punish me, you yourself may well be punished.

In the situation above, if Alice hadn’t spoken up, and Carol had indeed docked her points for it, that would not have been miasma. That would have been a specific loss of face for a specific reason.

But the fear of that happening—the sort of generalized social anxiety that emerges out of extrapolating many steps ahead, or putting lots of weight on the micro—that’s miasma. Miasma creates pressure to take pre-emptive action, even where there is no “real” problem. And, as in the extension with Finley, it also creates lingering impressions and vague judgment-clouds that are near-impossible to shake and which seem to have come out of nowhere. To use Critch’s term, it’s negative ungrounded social metacognition (in contrast with something like hype, which would be positive ungrounded social metacognition, or the public reaction to, say, the Kevin Spacey scandal, which would be negative grounded social metacognition).

Common knowledge creation can help fight against miasma. But common knowledge creation is a lot harder than it seems—often, people think that simply declaring something common knowledge is enough, when that doesn’t really eliminate doubt. People could’ve been not paying attention, for instance, or they could have misunderstood, or they could have simply nodded along while secretly not agreeing. And everyone’s aware of those (and other) possibilities on some level, so miasma persists.

I’ve made a few references in my writing to my colleague Valentine Smith. Val is often frustrated by miasma, because it’s hard to fight back against. There’s nothing to punch, no particular argument to win, no clear actions to take. It’s like being tormented by a ghost.

Recently, the Center for Applied Rationality (where Val and I work) had its annual alumni reunion. It’s sort of an unconference, where staff members and alumni alike put various talks and activities onto an open schedule, and people go to whatever piques their interest.

As a part of this event, Val gave a talk on the Enneagram, which is a popular personality typing system that sorts people into one of nine archetypes based on their core assumptions about how the world works. It’s a fake framework and an intuition pump—not the sort of thing we’re likely to teach as content at a workshop, but the sort of toy that’s fun to play with after hours and which produces a lot of useful hypotheses and threads-to-pull-on.

After the reunion, some of the people at the event expressed concerns that I'm going to categorize as miasma.

(Note that calling something miasma is not an attempt to dismiss it; a problem can be "sourceless" and still be very real. Cf. Bob's feelings in the example above, phantom traffic jams, and the entire genre of romantic comedy.)

A fictionalized account of one such conversation:

Elliott: I’m not sure that’s the sort of talk we want to give at a CFAR event.

Gale: Because it’s not scientific? I mean, he clearly labeled it as a fake…

Elliott: No, it’s not that. I mean, yeah, maybe, it could result in some kind of bad cultural slippery slope into pseudoscience and bullshit, but that’s not what I was worried about.

Gale: What were you worried about?

Elliott: Honestly? I walked past the talk at one point and there was a pentagram-looking shape on the projector screen.

Gale: That’s it?

Elliott: That’s not it, that’s just, like…emblematic. Very little attention to optics. Reputation. Like, if it looks bad at first glance, and then somebody investigates, and then they find out, oh, okay, it wasn’t some weird pagan thing, it was some weird bullshit hippie thing…

Gale: Yeah, I can see how that’s not really any better. But, like, do you really think anybody’s going to take it the wrong way?

Elliott: I mean, it’s not super likely. But the more often we do stuff like this, the more chances it has to go sideways, and it only takes one scandal like that to shoot your credibility all to shit.

Elliott is basically right. But in this case, what can Val do?

It’s not enough to put disclaimers all over everything. Val was doing that.

No, if you’re really serious about eliminating this risk, the only solution is to give up. This is the blandifying pressure, the cover-your-ass pressure, the representativeness heuristic ruining everything. It’s an incentive slope toward boring, inoffensive mediocrity.

And the worst part is, there often isn’t even an actual real person taking things the wrong way. It’s likely that nobody who actually listened to the whole talk misunderstood it, or took it as strong evidence that Val or CFAR has false beliefs. Elliott didn’t take it as evidence that Val or CFAR has false beliefs. Elliott didn’t even think that most other people would take it as such.

But Elliott is absolutely right that, in expectation, there will eventually be a person who does take it the wrong way. Who treats it as if it is exactly what it looks like, and never mind the fact that it really, really wasn’t.

It’s like a joke from the sitcom Malcolm in the Middle, where an overbearing PTA mother throws away another mother’s brownies because they have nuts in them. “Oh my goodness, I’m so sorry,” gasps the second mother. “Which of the children is allergic?”

“Oh, none of them. But you can’t be too careful.”

To repeat: just because a problem is miasma doesn’t mean it isn’t a problem. You can’t just point and say “that objection is just miasma” and then go back to doing the thing you were doing as if the problem is solved.

But there’s a very big difference in how you respond to miasma. A specific complaint from a specific person can be discussed, addressed, compensated for. Lessons can be learned, and processes can be tuned.

Miasma, though, is often hard to satisfy with such concrete action. Often, someone will raise a miasmatic objection, and the listener will propose solution after solution, trying to patch things, only to have the objector keep sadly shaking their head because none of those specific patches cut to the real heart of the problem. The original criticism wasn't grounded in anything that simple. Instead, it's an instance of the social fabric trying to fumble its way toward a new equilibrium, with each individual thinking and reacting to perceived incentives, and that thinking and reacting in turn shaping the incentives of everyone else.

This can be super frustrating for both parties—in the past, Val used to trust his audience more, and would simply say, up front, "note that this isn't real; it's all a metaphor." Now, he bends over backwards to sprinkle his lecture with reminders, going so far as to explicitly tell people "now that we're nearing the end of this talk, I want to remind you not to believe any of this, and furthermore not to just take my word for it even on the bits I said were actually true."

And yet, Elliott is still correct that a random passerby seeing the strange shape on the wall and listening for thirty seconds could very well draw the wrong conclusion, and use that conclusion to attack (and probably successfully damage) the reputation of Val or CFAR.

So what is there to do?

The first step, I claim, is to bother to notice. When concerns are raised, when people say “I’m just saying,” when you start to hear mutters of “this looks bad” or “somebody should probably do something,” you should check: is this a grounded concern, or is it just miasma?

“That joke in your act…I don’t know, I just think somebody’s going to take it the wrong way.”

“Okay. I’m curious, did you take it the wrong way? Did we get any complaints from the test audience, or from management?”

“Look, I just don’t think it’s appropriate to have two male teachers chaperoning the lock-in.”

“Did one of the parents register a concern? Is there anything in either of their records that makes you suspect there’ll be a problem?”

“The thing is, people are going to read this, and they’re not going to have the right context, and they’re going to think you’re just trolling or triggered or something.”

“Do you think I’m just trolling or triggered?”

“No.”

“Do you think Jordan or Kelly or Morgan or Quinn are going to think I’m just trolling or triggered?”

Figuring out whether it’s miasma or not is the first step in getting everyone on the same page about what the next steps are. Sometimes, the correct response is simply to stay the course, and reassure the person raising the concern that you’ve done the math and are comfortable with your current level of risk. Sometimes, the correct response is to change—to see if there are small tweaks you can make that will drastically reduce either the chance of disaster or the magnitude if it strikes. You can ask questions like “what are the obvious triggers for misunderstanding? What’s the rate of exposure to people who are likely to misunderstand? How bad would a 90th-percentile-bad scandal in this space actually be?”

And yes, sometimes the response is simply to cave. To not hire the person, or not run the program, or not host the discussion because who cares, no matter how cautious and reserved and tactful you are, the headline is still going to be LOCAL ELECTED REPRESENTATIVE ATTENDS DEBATE ON EUGENICS.

But recently (I claim), that last option has become the default, and this is not okay. Given the current state of witch hunts and lawsuits and doxxing and smear campaigns, it makes sense that people are a little gun-shy about predicted, hypothetical objections. But there is real value being lost, as people and projects and orgs and groups go out of their way to stave off miasma. "I dunno, someone might not like it" is becoming an infinite term in the equation, upsetting the balance of sanity and demanding a level of response that's far out of proportion to the actual risk. It's an autoimmune disorder, with the systems and social channels that were meant to protect us becoming problematic in their own right.

And there are plenty of times when that value wouldn't be lost, if there were just one person willing to stand up and say, "look, if one of you has a problem, we want you to bring that forward, but if I'm okay with it and you're okay with it and everyone in this room is okay with it, then I see no reason to give up in advance just because some hypothetical person might jump to a conclusion different from any of us, and then complain about it." To a) make the distinction between grounded and ungrounded social conclusions, and then b) insist that grounded and ungrounded social conclusions be responded to differently, putting a limit on how much influence the ungrounded stuff can have.

We’re not ever all going to be that person. And the world would be worse if we tried. But right now, I claim we could use just a bit more of that good ol’ fashioned Captain America stubbornness.

(Yes, the below image is an extremely overwrought and over-the-top way to end this post, to the point of being actively silly, but I like it and it conveys the right spirit so I'm doing it anyway, regardless of imagined objections.)

[Image: Captain America]

Further STRONGLY RECOMMENDED reading, by Andrew Critch:
Unrolling social metacognition: three levels of meta are not enough

