May Contain Nuts

The lights go down; the screen lights up; here they come: white on black or white on green, sometimes with a dash of red, advising us to use discretion. Not in the sense of keeping a secret, but in the sense of pausing to decide whether to see what follows.

They speak in code: “TV-MA, L, S” (“Mature Audience,” “Language,” and “Sex”), and read like something between a threat and a promise. (If only the rest of our evening could be summed up so neatly in advance!)

Some describe in short phrases what we or those in our care may wish to avoid. These are often oddly vague. “May contain adult themes or strong language” better describes a Shakespearean sonnet than The Jerry Springer Show. But only the latter merited this accolade from the NBC TV network. Others are weirdly specific, like the British Board of Film Classification’s warning that Harry Potter and the Chamber of Secrets contains “mild language and horror, and fantasy spiders.”

We are so used to such warnings that they either pass us by completely or we read them as a kind of teaser for the excitement that’s to come. Sure, parents of young children or members of communities that impose stricter than average codes of cultural consumption might read them more carefully and watch with one finger hovering over the fast forward button. But for the rest of us, the warning comes too late. We’re in too deep.

Warnings like these have become a kind of background noise — like supermarket muzak and sales assistants thanking us for shopping at Gap. So why did recent calls to attach them to course materials set US campuses aflame and prompt scores of articles about millennial flakiness and the decline of the university? How could something so commonplace generate so much angry debate?

To answer that question, we need to take a detour through the psychology of trauma, the politicization of academia, and the effects of risk aversion. But first, let’s take a closer look at “trigger warnings” themselves. What did advocates want? And why?

Initial calls to attach trigger warnings to course materials came from victims of personal trauma. For example, in 2014, Bailey Loverin, a student at the University of California at Santa Barbara, and herself a victim of sexual abuse, initiated a campaign to include such warnings in course syllabi after attending a class in which a professor screened a movie depicting rape.

Loverin’s campaign focused on material that might trigger personal memories. She had in mind not only victims of sexual abuse but also victims of other kinds of trauma. This included military veterans, many of whom study at UCSB. Activists at Oberlin, however, went further, demanding that professors include warnings about literary works containing “anything that might cause trauma … racism, classism, sexism, heterosexism, cissexism, ableism and other issues of privilege.”

Clearly, the term “trauma” was being used in different senses in these two campaigns. The first is personal and rooted in memories of specific injury suffered by an individual. The second is collective and based on experiences of discrimination that one might undergo as a member of an oppressed or underprivileged group. Unjust and painful as these latter experiences might be, to class them as “trauma” probably overstates things. Trauma implies a concrete wound or injury to one’s person. What victims of discrimination suffer is more often insult or exclusion. History teaches us that verbal discrimination often precedes physical violence. It should not be taken lightly. But neither should distinctions between physical and verbal abuse.

Activists like those at Oberlin rejected such distinctions — in practice, if not in theory. By using the term “trauma,” they argued in effect that discrimination on grounds of race, gender, sexual preference, and so on, is no less injurious to the victim than are physical abuse or exposure to the horrors of war. For them, “trauma” is no exaggeration; it’s an accurate description of how the disadvantaged experience oppression.

It is no accident, therefore, that such activists called for trigger warnings. The notion of a trigger has its origins in the diagnosis and treatment of post-traumatic stress disorder (PTSD) — a condition in which war veterans and victims of physical abuse experience symptoms such as anxiety, rage, nightmares, and depression. A trigger is a reminder of the original trauma that prompts onset of the symptoms. Predicting what will serve as a trigger is notoriously difficult. A sound, a picture, a face, a word, a smell — pretty much anything — might be a trigger.

Calls for trigger warnings were often vague about what the warned were expected to do after reading them. Were they supposed to avoid that class or material? Or to approach it with caution after appropriate preparation? Was the warning aimed only at those who had suffered a relevant trauma? Or was it aimed also at those who had not?

At Oberlin, the proposed warnings appear to have been aimed at all students and to grant them permission to opt out of classes and materials they might find offensive. There is profound irony in this extension of concepts from psychopathology to everyday educational practice. In the case of PTSD, gradual and repeated exposure to triggers is a key component of effective treatment. Avoidance exacerbates the condition.

Indeed, it is remarkable just how little psychology featured in arguments between advocates of trigger warnings and their opponents. Neither side appears to have been particularly concerned with establishing what happens to students when they face upsetting material, and whether warning them in advance helps. Instead, they imported terms from psychology to argue about something else entirely.

Many advocates saw trigger warnings as a vehicle via which to topple the hegemony and rewrite the canon. And many opponents saw them as a threat to academic freedom and as coddling students rather than teaching them how to deal with challenging material. These were not arguments about student well-being that later became politicized. They were political from the start. The move to equate racial or sexual discrimination with physical injury to one’s person is a kind of moral gerrymandering, shifting the boundaries we draw between different kinds of harm. When people bring talk of trauma and triggers into discussions of potential offense taken to works of literature, they collapse a moral distinction so basic that it’s observed even in the toughest schoolyard — that sticks and stones hurt in ways words can’t.

Slate magazine declared 2013 the year of the trigger warning. By 2016, the University of Chicago welcomed incoming students with a letter informing them, with a hint of self-congratulatory humor, that they could expect no such warnings here. This generated a brief flurry of additional debate. But, for the most part, colleges and universities simply moved on.

Today, even sites of the noisiest battles bear few traces in institutional policy. At UCSB, the student council’s recommendation to professors remained just that, a recommendation. At Oberlin, the draft policy to which professors had objected was removed from the college’s website and replaced with boilerplate about equity and inclusion and links to relevant documents and officers.

In an ironic twist, now that the noise has subsided, studies by three independent research groups may have put to rest the empirical question that neither advocates nor opponents of trigger warnings asked: Do they work?

They don’t.

Two studies found their effects to be trivial. A third found that, rather than reducing anxieties, trigger warnings can increase them. In a randomized comparison, readers who had not themselves been victims of violence were presented with famous literary texts containing descriptions of violent acts. Readers whose texts were preceded by a trigger warning about the upcoming content reported being more anxious about the text than those who read the same texts with no advance warning.

In the era of fake news, it is no surprise that debates can rage and subside with little or no attention to relevant empirical questions. But, of all places, and on all topics, how could this happen in academia, and about its core business, namely, teaching and learning?

The answer lies, I believe, in what I’ll call “disclaimer culture” and “global warning.” These affect many institutions, but institutions of higher education may be particularly susceptible.

We tend to assume warnings are harmless. They help those who need them and don’t affect those who don’t. Few of us care whether a cookie contains nuts. But the consequences for those who do can be fatal. So, the manufacturer uses up a little extra ink on the packaging to alert the eater to this possibility. Everybody wins and nobody loses.

Or so we assume.

In 2016, the Lidl supermarket chain recalled all Alesto brand honey roasted peanuts from its stores in the UK because the packaging did not state clearly in English that the product may contain nuts. The words and pictures on the packaging screamed, “Peanuts!” But because the official description of contents did not include an explicit warning, they pulled the product from the shelves to avoid lawsuits.

People who enjoy being outraged cite cases like these as examples of political correctness gone mad, and file them away in the same box as mythical EU regulations about the requisite curvature of bananas. But the problem isn’t the EU or the food and drug administrations of any one country. It’s bigger than that. It’s disclaimer culture: a culture in which fear of litigation or scandal drives practices that diminish, centimeter by centimeter (or inch by inch), the areas within which the sound judgment of individual, adult citizens is presumed or required.

In practice, it’s easy to distinguish genuine warnings from disclaimers. Genuine warnings seek to protect the warned. They focus on things that an average person might not know or might have overlooked — these pills cause drowsiness or this granola contains nuts. Disclaimers, on the other hand, seek to protect the warner. They state the obvious — these firelighters are highly flammable, or these nuts contain nuts.

Disclaimer culture provides ideal breeding conditions for warnings. If we can minimize our exposure to future complaint by issuing an advance warning — however minor the risk or unlikely the complaint — then why wouldn’t we? However, the more warnings we issue, the less each warning means. By crying wolf, superfluous warnings reduce the power of every warning. If every TV program requires viewer discretion, then, in practice, none does. That’s what I mean by global warning — the proliferation and devaluation of warnings. Disclaimer culture feeds global warning as surely as the burning of fossil fuels raises the planet’s temperature.

Universities are excellent breeding grounds for disclaimer culture. The culture wars have left them scarred, scared and risk averse. Few administrators want to deal with another group of offended students, and certainly not to risk bad press or litigation. And despite their name, trigger warnings function in practice as disclaimers, shifting the burden of responsibility from the institution to the student. Like the viewer discretion advice before a movie, they say, in effect: “Don’t blame us if what follows offends you. We warned you it might.”

At their best, universities serve as engines of knowledge-informed public discourse. But when they bow to disclaimer culture, they become part of the problem rather than the solution. Not only do they miss an opportunity to elevate the debate, but they further undermine public confidence in research and academic expertise.

What if, instead of another round of mudslinging and damage limitation, faculties responded to concerns about student well-being by reviewing them with the same rigor as they would a manuscript submitted for publication in a prestigious journal? Instead of preaching at each other about freedom or oppression, scholars might then provide us with new insight into distinctions between trauma, discomfort and offense, as well as the downside of warnings and the risks of their proliferation.

That may take a while. In the meantime, here’s my suggestion* for the new academic year. Before issuing any warning to students, ask yourself the following three questions: Whom am I trying to protect? Will the warning improve or harm their ability to cope independently with such material in the future? Will the warning’s positive effects exceed its negative environmental consequences (i.e., its contribution to global warning)?

*Readers adopt the above suggestion at their own risk. All information in the article is of a general nature. No information is to be taken as educational or medical advice pertaining to any individual’s specific psychological, physical or intellectual health.




Eli Gottlieb

Cognitive Psychologist; Visiting Professor at George Washington University. I write about identity, culture, and leadership — and the connections between them.
