What is a cause?

A cartoon causal inference book review

Ellie Murray
9 min read · Dec 19, 2018

If you’re interested in causal inference, you’ve probably wondered at one time or another what even is a cause? The answer to that is complicated, because it’s not really science — it’s philosophy. And even philosophers aren’t sure.

But if you want to get some background on how philosophers think about causes, Causation: A Very Short Introduction by Stephen Mumford (@SDMumford) and Rani Lill Anjum (@ranilillanjum) is a great starting point.

This article is a summary of a Very Short Introduction in cartoon form. It started life as a twitter thread which you can read here: https://twitter.com/EpiEllie/status/1041369701996265474

Causation: A Very Short Introduction by Stephen Mumford and Rani Lill Anjum

Chapter 1 sets up the problem: causation is hard to define, both in general & for specific events. It’s more than just temporal ordering, but is it a separate thing?

Consider an example: a town gets sick after being overrun by rats. Did the rats cause sickness? What if I told you that a visitor from abroad had also just arrived, and was sick right before her visit?

Without knowing what we mean by a cause, we can’t say which of these (if any) caused the outbreak!

Chapter 1: What is the cause of an outbreak? Rats, a sick visitor, something else?

Chapter 2 introduces us to Hume’s theories of causation. Hume was a Scottish philosopher who lived from 1711 to 1776.

His first theory for defining a cause was the idea of regularity: i.e. sometimes things are *regularly* followed by other things.

So, is regularity causation? If it is, how regular is regular enough? For example, we regularly see steam rising from water when the water is heated. Does heating water cause steam? How many times do we have to observe the steam to conclude that heating the water was a cause?

Regularity is one way of thinking about a cause: when water is heated, steam regularly appears.

There’s another possibility. Maybe regularity isn’t enough. What about something called constant conjunction? Constant conjunction says that for heating water to cause steam, heating and steam need to be constantly conjoined. Conjoined here means linked together (think conjoined twins), i.e. one event follows the other.

The big problem is that defining causation this way can’t rule out ‘accidental’ correlation. Does heating water cause me to drink coffee? Even if heating water is almost always followed by my drinking coffee, most people would say no. Something else, like being tired, causes me both to heat the water and to drink the coffee.

Constant conjunction is another way of thinking about causation: if heating water causes steam, then we expect heating water to be (almost) always followed by steam.

Chapter 3 introduces us to two more of Hume’s criteria for defining a cause: temporal priority and contiguity. Temporal priority means that causes should precede effects; and contiguity means that cause & effect should be spatially adjacent.

Temporal priority explains the asymmetry that causation seems to exhibit: causation has a direction.

A third criterion for causation: temporal priority. Causes happen in temporal order like a chain of falling dominos.

But, temporal priority also creates a problem. If the cause has to happen before the effect as temporal priority tells us, then how can the cause and effect coincide in space as contiguity tells us they must? That is, how exactly does causation “transfer” from cause to effect?

Perhaps spatial adjacency actually requires temporal adjacency, not temporal priority — that is, maybe cause and effect have to occur at the same moment in time for causation?

When and where does causation transfer from cause to effect? When does the motion transfer from one billiard ball to the next? When they are in the same place at the same time!

Chapter 4 introduces Hume’s final component of causation: necessity. He says that effects must necessarily rely on their causes. But there is a clear flaw in this thinking: causes don’t always “work” in real life, and that doesn’t make them any less causal.

For example, we can have competing or interfering causes, or a cause made up of complex parts that can fail. (In epidemiology, there’s a tool for thinking about causes made up of many complex parts called a causal pie — stay tuned for a cartoon causal inference explainer of pies!)

Causes need not be necessary. For example, sometimes the wind stops your match from lighting, but that doesn’t mean that striking a match is not a cause of the match lighting!

Chapter 5 is all about counterfactual dependencies. This is a definition of a cause which may feel familiar to scientists — a cause is something that makes a difference. But, if causes are things that make a difference, how do we think about what ‘make a difference’ actually means?

First, what is a counterfactual? A counterfactual tells us what would have happened if we had taken a particular action. For example, if I had fallen off my bike I would have broken my arm.

A counterfactual dependency, then, is when the counterfactual outcome depends on the particular action we take. For example, my broken arm probably doesn’t depend on what color my bike was, but it probably does depend on whether I fell off the bike; that is, if I had not fallen off my bike, I would not have broken my arm.

Mumford and Anjum make an interesting point in their explanation of counterfactual dependencies: sometimes what will happen is fixed by definition. This means that we can have a counterfactual dependence that is definitely not a cause! They give the example of a calendar.

Counterfactual dependency alone is insufficient for a theory of causation. If this month is June, then the next month will be July, but that doesn’t mean that June is a cause of July!

Below is a nice quote that summarizes this issue 👇🏽. It is relevant to an important debate between the communities that do causal inference: can we estimate causal effects for causes that we cannot understand in the real world? (The two camps in this debate are the “well-defined intervention” camp in epidemiology and the “do-operator” camp in computer science. I’ll cover this debate in a future article.)

A quote from Chapter 5: “Some counterfactual dependencies will be purely logical, mathematical, or analytic… others will concern events and facts in the real world depending on each other, and that is what interests us.”

Another interesting section of Chapter 5 is the discussion of what exactly counterfactuals are. Are they imaginary? Do they really exist in some other, parallel universe? Are events causally related because of counterfactual dependence, or vice versa? This is an unresolved question of ontology.

David Lewis called counterfactuals “An ontological extravagance” by which he meant that counterfactuals in our world are factuals in another (parallel) world.

There are a couple other useful terms for thinking about causes in this chapter: overdetermination and sine qua non. The cartoons below explain what these mean.

Overdetermination means that we can have causation without counterfactual dependence: watering the plant isn’t required for growth because it also rained.
Sine qua non means we can have counterfactual dependence without causation: the Big Bang is strictly required for me to eat pizza for lunch but is not what we would consider a cause.

We’re halfway through the book. Chapter 6 introduces us to the idea of physicalism. Perhaps we should think of causation as a physical process, like the transfer of energy?

If we do, maybe we can finally explain ‘accidental’ conjunction (in the example below 👇🏽, the explanation is confounding!)

When we consider causation as energy transference, we can start to describe why accidental regularity or conjunction happens. When air pressure drops, it causes the barometer to change and rain to happen, but the barometer changing doesn’t cause the rain, even though it meets all the other definitions of a cause we’ve seen so far!
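The barometer example is the classic picture of confounding, and it’s easy to see it in data. The book doesn’t include any code, but here’s a quick simulation sketch (the numbers and the rain model are entirely made up for illustration): air pressure drives both the barometer reading and the rain, so the two are correlated even though neither causes the other — and the correlation disappears once pressure is held fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Confounder: air pressure drives both the barometer reading and the rain.
pressure = rng.normal(1013, 10, n)                 # hPa (illustrative numbers)
barometer = pressure + rng.normal(0, 1, n)         # the barometer just tracks pressure
p_rain = 1 / (1 + np.exp((pressure - 1008) / 3))   # low pressure -> rain more likely
rain = rng.binomial(1, p_rain)

# Marginally, barometer readings and rain are strongly (negatively) correlated...
marginal_corr = np.corrcoef(barometer, rain)[0, 1]
print(f"marginal correlation: {marginal_corr:.2f}")

# ...but holding pressure (nearly) fixed, the association vanishes,
# because the barometer never caused the rain in the first place.
fixed = np.abs(pressure - 1013) < 0.5
conditional_corr = np.corrcoef(barometer[fixed], rain[fixed])[0, 1]
print(f"correlation at fixed pressure: {conditional_corr:.2f}")
```

Regularity, constant conjunction, even temporal priority: the barometer passes all of them, and only accounting for the common cause reveals the accident.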

Another nice thing about energy transference is that it helps us explain directionality which is an important component of causation. That is, the cause causes the effect, and not vice versa!

But there’s a problem! Sometimes the direction of energy transfer is not the direction we think about when we think about causation…

Sometimes energy transference suggests a different direction than causation: does ice cool a glass of water, or does the glass of water warm up the ice?
Sometimes energy transference suggests a different direction than causation: does a blanket make you warm, or do you warm up the blanket??

Chapter 7 is about pluralism, or the idea that maybe there isn’t just one definition of causation for all cases. Maybe causation is a family of things …

Pluralism tells us causation could be a family of concepts: shared features but no single defining element.

Aristotle believed in pluralism. He described 4 types of causation depending on the level at which you wanted to understand an effect.

The main problem with this idea is that accidental correlations also have ‘family resemblance’ to causation, but we don’t want them to be part of the family!! Pluralism doesn’t help us rule them out.

Aristotle’s four types of causation: final cause, efficient cause, material cause, and formal cause.

So, pluralism doesn’t seem to work. But maybe the reason we can’t easily explain causation is that it is a foundational concept — a most basic thing that can’t be broken into smaller parts. This is the topic of Chapter 8: primitivism.

Chapter 8 also questions whether we can sense or experience causation directly, or whether we can only see relationships or conjunctions.

For example, we don’t see causation when a diver jumps off a springboard, we only see that the board bends.

Do we see the diver cause the board to bend, or just that the board bends? Can we observe causation itself or only the relationships between cause and effect?

On the other hand, people are causal agents: we can decide to do things and make them happen. As causal agents, maybe we can directly sense causation, at least some of the time.

For example, when we lift something heavy we sense & respond to its weight, adjusting our lifting and balance as necessary.

Proprioception is the sensation of effort. Is that a direct sensation of causation?

We’re almost done! The final philosophical approach is the focus of Chapter 9 — dispositionalism. Dispositionalism says maybe causation isn’t a thing, it’s a potential for something.

For example, a glass that is fragile has the potential to break, but doesn’t just smash spontaneously. Maybe causation works the same way.

Dispositionalism says that causation may be a potential or tendency. A fragile glass has a potential to break when it falls, but sometimes it won’t. A harmful cigarette has a tendency to cause cancer, but sometimes it won’t.

But this just gives us a new problem to solve: where does the ‘causal’ disposition live? Is it in the effect? In the cause? What about in nothingness — sometimes a lack of things is a cause of harm…

Where does the causal potential reside? Sugar has a potential to dissolve in water — does the presence of water activate the causal power of the sugar or do the sugar and water act together? Plants have a tendency to wilt without water — does the absence of water cause the plant to wilt, or does the presence of water prevent the usual cause of wilting?

And that’s just about it! The book wraps up with a (very) brief discussion of the history of causal inference in science, touching on Pearson and Fisher and on Judea Pearl’s causal diagrams, and ending (spoiler alert?) with Sir Austin Bradford Hill’s causal viewpoints.

So, after all that, what can we say about what a cause is? My take-home from this book is that even after more than 2000 years, philosophers don’t really know!

A train is delayed and an elk is on the track. Did the elk cause the delay? Or was the elk comfortable crossing the track because the train was already stopped?

Even though most humans think they know what causation is and can recognize it when they see it, it’s not at all easy to put into words. Maybe that’s fine for humans, but one of the most exciting goals that science is reaching towards is to teach a computer how to understand cause and effect.

To create an actually intelligent artificial intelligence, we need a machine that can understand not just what happens in the world but why it happens. If we can’t put that into words, how can we hope to help a machine do so?

So where do we go from here? The answer lies in causal inference. Even if we can’t say for sure whether something is a cause, we can still try to estimate a causal effect. That is, we can estimate how much, on average, we expect counterfactual outcomes to differ when we compare two or more different actions.
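The book stays philosophical, but here’s a minimal sketch of what “estimating a causal effect” can look like in practice (a made-up randomized experiment; the outcome model and the effect size of +3 are invented for illustration). Each unit has two counterfactual outcomes, we only ever observe one of them, and randomizing which action is taken lets a simple difference in means recover the average effect:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# Toy randomized experiment: every unit has two counterfactual outcomes,
# one under treatment (a = 1) and one under no treatment (a = 0).
y0 = rng.normal(10, 2, n)     # outcome if untreated
y1 = y0 + 3                   # outcome if treated: true average effect = +3

a = rng.binomial(1, 0.5, n)   # flip a fair coin to assign treatment
y = np.where(a == 1, y1, y0)  # we only ever observe one outcome per unit

# Because treatment was randomized, the difference in observed arm means
# estimates the average causal effect (it should land close to 3).
ate = y[a == 1].mean() - y[a == 0].mean()
print(f"estimated average causal effect: {ate:.2f}")
```

Notice that we never had to say what a cause *is*: we just compared what happened, on average, under two different actions.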

To do this, we need to become not philosophers but doctors of causation: we need to look for the characteristic signs and symptoms of a causal relationship, like regularity, and then try to rule out all the possible alternative diagnoses. In other words, what are the possible explanations other than causation?

That’s all, thanks for sticking around!

If you want to know more about the philosophy of causation, Causation: A Very Short Introduction is a great read, and the authors have a newer, more in-depth book out, Causation in Science.

If you want to know more about causal inference, follow me here and on Twitter @EpiEllie … and if that’s not enough, a great (and free!) introductory textbook is the Causal Inference Book by Miguel Hernan and James Robins.


Ellie Murray

Assistant Professor of Epidemiology at Boston University School of Public Health. Follow for causal inference, epidemiology, & data science. Twitter: @epiellie