I was a student in NWDLIAFS. Here’s what you missed.

Flammarion engraving (Source: Library of Congress)

Have you seen the acronym TESCREAL circulating across the web? Been confused by all those alarmist reports about computers becoming sentient and causing a human extinction event? Mystified as to why Elon Musk is investing in colonizing Mars when he and his buddies could instead be funding the fight against climate change? Are you . . . a reader of this blog?

If so, then I was just like you. But in August 2023, I had the opportunity to participate in the first iteration of the Center’s three-day mini-course: No, We Don’t Live In A F%#*ing Simulation. Taught by Center Executive Director Emily Tucker and philosopher David N. McNeill, the course brought together students and thinkers from across the globe to deconstruct the ideologies to which some of Silicon Valley’s most dangerous actors subscribe, build up our defenses against them, and arm us to take them down. More than a dozen countries were represented, with participants hailing from the U.S., Canada, Brazil, Mexico, Norway, Finland, India, Nigeria, the UK, Ireland, the Netherlands, Slovakia, Germany, Italy, and Morocco, among other places. Discussion was lively, and the chat was on fire. Here, I’ll tell you a little about what course participants experienced, and pass along some key pieces of our learning so you can arm yourself, too.

The course played out in three sessions. The first, “Why we don’t live in a simulation (and why it can feel like we do),” took aim at the “simulation argument.” McNeill kicked us off by addressing a baseline question you may have had while reading this post: if these ideologies are so silly, unfounded, and toxic, why give them additional exposure by conducting this course? Great question. First, I’ll note that I won’t be linking to their proponents’ sites and papers here, to avoid driving more traffic their way. But second, I’ll throw it to McNeill to give a fuller answer: “the answer is not that we are secretly worried that maybe we are, after all, in a simulation,” McNeill told students. The reason the course began with the simulation hypothesis “is that it is not just a bad argument, it is exemplary in its badness… it is a very good example of a very bad and increasingly ubiquitous mistake.” As McNeill explained, “In the specific context of computer simulation this mistake involves confusing mathematical models we can use to help us think about various phenomena in the world with the aspects of the world we are modeling.” As a general matter, it “involves confusing the representation of a thought with the thinking we can do” by using the representation. It “is the mistake of confusing the words and symbols we use to express our thoughts with the thoughts we express using those words and symbols.”

Two panels of “Chainsaw Man” fan art cartoon depicting (panel 1) hand holding egg behind paper on mirror, female face gasping and sweating, (panel 2) female clutching male, both looking fearful, with text “How does the mirror know what’s behind the paper?!”
Source: X user @Baris6109

McNeill connected this mistake to “the dangers and the desperation involved in . . . our desire for simplicity and logical order.” The proponents of data-driven tools too often capitalize on this desperation. As scholar Sun-Ha Hong has written, they seek to cultivate a “belief in a hidden mathematical order that is ontologically superior to the one available to our everyday senses.” Because we are afraid that the world is too complicated for us to comprehend, it’s tempting to place our faith in machines and models, convincing ourselves that if we just make them powerful enough, surely they must be able to wrest order from the chaos. The simulation hypothesis trades on this belief to convince us that our world could very well be just a really complicated video game.

But as participants learned, the best available argument offered for the simulation hypothesis is “a farrago of non-sequitur, overstatement, ungrounded assertion, elision, and simple conceptual confusion.” In NWDLIAFS session 1, participants interrogated that confusion, learning, among other things, the proper application of Bayesian probability theory, what Socrates and the Greek philosophers really meant, and how (not) to interpret Zhuangzi’s Butterfly dream.
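
On that Bayesian point, here’s a minimal sketch of my own (not course material, and the numbers are invented) showing why a “Bayesian” argument is only as good as its prior: when the evidence doesn’t actually discriminate between hypotheses, whatever “probability” comes out the other end is just the prior you assumed going in. That’s one way to dress an ungrounded assertion up in mathematics.

```python
# A minimal sketch, my own and not from the course, of why a "Bayesian"
# argument is only as good as its prior. Bayes' rule:
#   P(H | E) = P(E | H) * P(H) / P(E)

def posterior(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior probability of hypothesis H after observing evidence E."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# H: "we live in a simulation". If our evidence E ("our experience looks
# the way it does") is equally likely whether or not H is true, the data
# are uninformative, and the posterior simply echoes the assumed prior.
for prior in (0.5, 0.01, 1e-9):
    print(f"prior={prior:g}  posterior={posterior(prior, 1.0, 1.0):g}")
# prior=0.5    posterior=0.5
# prior=0.01   posterior=0.01
# prior=1e-9   posterior=1e-9
```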

The second session, “The right and wrong ways of caring about the future,” deconstructed the toxic ideology of “longtermism” — the “L” in TESCREAL. Timnit Gebru and Émile Torres coined the acronym, which stands for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism, to describe a set of beliefs proliferating across the web and drawing sectors of the tech ecosystem rightwards. Gebru has described these various ideologies as representing “the eternal return of eugenics.” Through NWDLIAFS, I came to understand that the people espousing these ideologies are willing to imagine a world completely unlike our own: one where we live on Mars, upload our consciousness to the cloud, communicate with sentient machines, and live forever in bodies manipulated and supplemented by mechanized devices. Their imaginations are vast, their confidence in their own abilities infinite. Why, you might wonder, can’t they apply that imagination to envisioning a new world here on Earth? The answer isn’t a lack of imagination. It’s power. They can imagine their chosen future because in it, they are still on top. What they can’t abide is any future in which they’re not.

As Tucker taught participants, when you try to engage with Longtermism in particular in any serious way, it “collapses into nonsense.” What it boils down to is this: anything we could do to make the world a better place for people alive today is insignificant compared to anything that might marginally increase the odds of an exponentially larger future population, or slightly improve average welfare in that well-populated distant future. As Tucker explained, to be persuaded by Longtermism is to be persuaded that there is no value in attempting to improve our lives here on Earth, or the lives of our fellow living humans: it is to believe “that political community itself is not a real possibility.”

When reframed in this way, an all-important question comes into focus: who does Longtermism benefit? NWDLIAFS participants learned that it benefits corporations and the powerful people behind them. It crafts a vision of the world in which the only moral choice is to invest in the AI products those corporations produce, because if we don’t, the tools themselves may one day end us. As Tucker explained, “If that sounds like a threat, it’s because it is. It’s a threat designed to deter people, and especially people with any kind of political or economic power, from doing anything about the real harms that tech companies are perpetrating.”

In the third and final session, “Artifice and Intelligence,” McNeill addressed artificial general intelligence and consciousness. His central claim for the session was that despite the fervor in the popular press, “we have no reason at all to think that current AI research, for all its extremely impressive technical achievements, has brought us even one step closer to creating a system that is intelligent in anything like the way we are.” Genuine intelligence, according to McNeill, “requires embodiment, sociality, and ultimately, desire.” Through the session, participants wrestled with what consciousness truly is and is not, challenging one another’s assumptions and beliefs. We discussed the interplay between “sapience” and “sentience,” considering the ways in which our intelligence and humanity require being in our bodies and in the world. We came back to where we started: one of the fundamental mistakes of our current intellectual climate is mistaking representations of things for the things themselves.

Take ChatGPT, for instance. If we’re trying to judge whether ChatGPT is an example of “intelligence,” measures like the famed “Turing Test” tell us to take a “behavioral” approach. We can’t measure whether the thing is really thinking, but we can look at a representation of thinking (in this case, text outputs) and ask whether those could be confused with the outputs of a thinking human. But as we learned in NWDLIAFS, our words are not our thoughts. Even if these tools can convincingly mimic speech, being truly “conscious” has little to do with how much of our experience we can reduce to words or mathematical representations. Models may be able to statistically predict what word is likely to come next in this sentence, but they can’t feel the warmth of my partner’s hand on my arm, experience the stress elicited by my dog barking in the next room, or do the work to truly understand (not just regurgitate) Tucker and McNeill’s lectures.
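
To see just how mechanical “predicting the next word” can be, here’s a toy sketch of my own (not from the course, and vastly simpler than any real language model): it counts which words follow which in a scrap of text, then samples from those counts. The corpus, names, and all details here are my own illustration.

```python
# A toy sketch, my own and not from the course, of what "predicting the
# next word" amounts to: counting which word followed which in some text,
# then sampling from those counts. Statistics over symbols, not thinking.
import random
from collections import Counter, defaultdict

corpus = ("we confuse the words we use to express our thoughts "
          "with the thoughts we express using those words").split()

# Count bigrams: how often each word follows each other word in the corpus.
following: defaultdict[str, Counter] = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    following[a][b] += 1

def next_word(word: str) -> str:
    """Sample a plausible next word from the observed frequencies."""
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

print("the", next_word("the"))  # e.g. "the words" or "the thoughts"
```

Real models use enormously more data and far more sophisticated statistics, but the shape of the thing is the same: pattern-matching over symbols, a representation of speech rather than a speaker.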

So, what do we do with all of this? As a graduate of NWDLIAFS, I can now impart the following wisdom. Here are five things you can do today to inoculate yourself and your loved ones against these toxic ideologies:

  1. Stay focused on the distinction between imperfect representations of things and the things they represent. For every tool touted as “intelligent” or “predictive,” ask: by what measure? What is this tool actually doing? Chances are, it is applying a mathematical model to estimate the likelihood of an imperfect proxy for the thing it purports to measure, based on the past performance of a given data set. That’s statistics, not thinking.
  2. Speak precisely about technologies, the people behind them, and their impacts. For more on this, read Tucker explaining why the Center long ago decided to stop using vague and confusing terms like “Artificial Intelligence.”
  3. Always ask: who benefits? Map the power behind any given technological advance or way of seeing the world. Interrogate whether the change shifts power downwards or, more likely, further entrenches it at the top.
  4. Put down your phone and close your laptop. Get outside. Notice the birds. Connect with another human being. Be in your body. Wonder at the vast, gloriously unpredictable beauty of the complicated world we live in today.
  5. Check back here to enroll in the next session of No, We Don’t Live In A F%#*ing Simulation!
