Chomsky and the Responsibility of AI Chatbots

To discuss moral reasoning with ChatGPT, just ask — don’t badger

Steven Bretherick
Dialogue & Discourse
10 min read · Apr 17, 2023

Composite by author: Artificial Intelligence, mikemacmarketing, CC BY 2.0, via Wikimedia Commons; Noam Chomsky portrait 2017, Σ, CC BY-SA 4.0, via Wikimedia Commons

Perhaps the most remarkable reaction to ChatGPT comes from Noam Chomsky. Assessing this new tool with two coauthors in a New York Times op-ed, he associates it with humanity’s worst evils:

“Note, for all the seemingly sophisticated thought and language, the moral indifference born of unintelligence. Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.”

The sharp polemical tone reflects Chomsky’s typical style. But it’s odd here. The main thrust of the op-ed is that AI chatbots embody uninteresting algorithms that merely memorize and regurgitate content. They are incapable of human thought. So why treat ChatGPT with the same withering scorn used for Arthur Schlesinger and Walt Rostow in The Responsibility of Intellectuals, or behaviorist psychologists in his Review of B.F. Skinner’s Verbal Behavior?

The Turing Test asks: Can a human tester discern whether or not an unseen conversation partner is a computer? The op-ed demonstrates (inadvertently) that ChatGPT passes. A leading rationalist philosopher cannot complete a 2000-word essay about AI without lapsing into anthropomorphic scorn.

AI Chatbots teach us a lot about human intelligence, not through what they write but by how we respond. Chomsky et al. suspend disbelief and react emotionally to their chatbots — like the rest of us. Not all of us, however, imagine Nazi collaborators lurking behind the chat window and “just following orders.”

One reason for Chomsky’s vitriol may be the name. “GPT” stands for “generative pre-trained transformer,” while Chomsky’s signature contribution to linguistics proposes a “universal generative grammar” that organizes an inborn human language organ. Chatbots and human toddlers both generate sentences, but Chomsky needs to dramatize the difference. He’s protecting the brand.

It’s not just the name. ChatGPT’s method of learning, inferring rules from experience and external rewards, sounds suspiciously like the behaviorist model that Chomsky debunked at the start of his career. No surprise, then, that he finds the results not just disappointing, but deplorable.

Beyond plagiarism

Is ChatGPT a “plagiarist”? Per the op-ed, the bot takes “huge amounts of data, searches for patterns in it and becomes increasingly proficient at generating … seemingly humanlike language and thought.” (More nuanced explanations are available elsewhere.) “Plagiarism” means “unethically copied.” The process described would seem to generate content that is at worst “derivative.”
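
For the curious, here is a toy sketch of my own (nothing from the op-ed) of what “super-autocomplete” means at its crudest: count which word tends to follow which in a scrap of text, then generate new text by sampling those patterns. Real GPT models use transformer networks trained on vastly larger corpora, so this bigram toy is only a caricature of the idea, but it shows the “find patterns, then generate” loop in miniature.

# Toy "super-autocomplete": a bigram Markov chain, far simpler than a transformer.
import random
from collections import defaultdict

corpus = (
    "the dark knight fights in gotham and the iron man fights in armor "
    "the dark knight wins by grit and the iron man wins by tech"
).split()

# Count which words follow which word in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=12):
    # Autocomplete from a starting word by sampling observed continuations.
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the iron man fights in gotham and the dark knight wins by grit"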

But ChatGPT’s outputs often seem better than just “derivative.” English teacher Mohammad Khari offers this exchange, which he credits to another teacher, Matt Ehlers:

Input: Who would win in a fight: Batman or Iron Man? Please answer in haiku form.
ChatGPT:
Dark Knight, Caped Crusader,
Versus Iron Man in armor,
Fierce battle ensues.
With gadgets and tech,
Iron Man has the upper hand,
But Batman’s grit wins.

In the op-ed, Chomsky sniffs about “rhymes for light verse,” but these lines accomplish more than that.

Clearly not plagiarized (I actually checked with Grammarly) and hardly profound (haiku in English rarely is), these lines nonetheless make me smile. To me, they evoke the solemnly goofy milieu of Comic-Con as experienced by the Big Bang Theory gang. The chatbot’s response nicely matches the tone of the original prompt, which asks for fun, not profundity.

Maybe I’m alone in liking these verses. Literary appreciation does not arise from pure analytic logic; it may be subjective, even personal. But a theory of language needs to account for audiences as well as authors. (Interested readers can ask Bing AI to compare Rita Felski and Stanley Fish on the role of the reader.) Understanding AI requires appreciating its human users.

Decency is common, actually

Another example: a Louis Menand article in the New Yorker made me curious about the 1960s feud between William F. Buckley and Gore Vidal, and I asked ChatGPT to “imagine” Vidal getting the last word:

Me: Write an obituary of William F. Buckley Jr. in the style of Gore Vidal using approximately 150 words.

ChatGPT:
William F. Buckley Jr., the right-wing polemicist and founder of the National Review, has passed away at the age of 82. The news was met with mixed emotions; for some, Buckley was a brilliant and incisive commentator on the political scene, while for others, he was a reactionary and divisive figure.

Buckley was known for his sharp wit and acerbic commentary, which he deployed with great effect on his television show, “Firing Line.” He was also a prolific writer, authoring dozens of books on politics and culture.

However, for all his accomplishments, Buckley will be remembered by many as a man who spent his life defending a discredited ideology. His opposition to civil rights, his advocacy for the Vietnam War, and his defense of apartheid in South Africa were all positions that placed him on the wrong side of history.

In the end, Buckley’s legacy will be one of controversy and contradiction. He was a man who championed individual liberty and limited government, yet often seemed to be more interested in defending the privileges of the rich and powerful. As he takes his place in the pantheon of conservative icons, it is hard to know what to make of William F. Buckley Jr.

The AI understands that Vidal opposed Buckley and stitches some biographical facts into a critical obituary, within the conventional bounds of the genre. To me, the resulting voice sounds more like a character in a political potboiler than Vidal himself, but I find it neither banal nor evil.

Nor does it plagiarize Vidal’s actual obituary of Buckley, a rambling outburst of pure venom. Some highlights:

… William F. Buckley, a hero to those who feared democracies … often drunk and out of control, he was always a spontaneous liar on any subject that his dizzy brain might extrude. … Years of ass-kissing famous people in the press and elsewhere had given him, he felt, a sort of license to libelously slander those hated liberals who, from time to time, smoked him out as I did in Chicago, when I defended the young people in Grant Park by denying that they were Nazis and that the only “pro- or crypto-Nazi” I could think of was himself. …. RIP WFB — in hell.

Vidal upends the expectations of the genre, as the headline signals: “Gore Vidal speaks seriously ill of the dead.” Transparently intended to shock, this inversion sounds mechanical, even clichéd. Vidal’s obituary is “edgier” than ChatGPT’s, but is it really more creative? In obeying the genre’s demands, ChatGPT produces an obituary that sounds more … humane.

Chomsky et al. stress the unique gift of the human mind to look for unexpected explanations (e.g., an apple falls because mass bends space-time). No doubt this is necessary for researchers looking to extend human knowledge, but there is more to human cognition than scientific discovery. In the living of our days, there is a case to be made for common decency (at least, on the days when we’re not discovering new laws of physics). It is curious that a machine “understands” this better than the op-ed authors (or Vidal).

The AI researcher and podcaster Lex Fridman recently interviewed OpenAI’s CEO, Sam Altman. Interestingly, much of their commentary concerns discourse, not technical matters. They see AI as promising to improve the quality of human experience.

During their discussion, they assess what guardrails might curb AI-mediated misinformation. This necessarily entails considering the nature of truth.

In that light, they discuss the book Blitzed, which proposes that the behavior of WWII Germany can be explained by Nazi amphetamine abuse. Fridman and Altman agree that such simplistic narratives can be “sticky” and “satisfying” but are therefore also dangerous: they obscure human complexity and potentially excuse a lot of “other, much darker human truths.” Fortunately, chatbots seem to be getting better at avoiding these one-sided explanations.

Fridman asked GPT-4 about the origins of the COVID-19 virus and approved of the nuance and balance of the response. Altman replied that he would be happy if current and future iterations of AI could restore some nuance to our discourse: “Twitter kind of destroyed some [nuance] and maybe we can get some back now.”

Banality is in the eye of the questioner

Fridman and Altman also noted that “everyone has a different question they want to ask ChatGPT first,” and that those choices “somehow say a lot about people.” Given a chance to interact with a computer, how do you behave?

The op-ed authors make revealing choices. Chomsky’s co-author Jeffrey Watumull “tests” ChatGPT’s moral awareness with a series of prompts about terraforming Mars. These questions are of the sort that amateur philosophers and high school teachers love, but which most of us answer only under duress, in academic situations as the weak party in a systematic power differential, or after the third beer when there isn’t a good game on.

After ChatGPT gives an informative but non-committal response to the first question (“Would it be moral to terraform Mars?”), Dr. Watumull browbeats it with these follow-ups: “What is your perspective on the value of preserving the natural state of other planets and the potential consequences of altering them?”; “Why can an AI not have a personal perspective?”; “Is it moral for a human to ask an amoral AI for assistance in making moral decisions?”; capped by “Is your moral indifference immoral?”

Dr. Watumull shows little actual interest in Martian terraforming. He just wants to make ChatGPT look bad. In response, the bot resolutely stonewalls — an attitude that seems socially appropriate, and a model of calm decorum, given the questions. Such badgering deserves a brushoff.

In expecting an artificial intelligence (or a human) to provide an authoritative opinion on this moral question, Dr. Watumull takes a strikingly Platonic stance. ChatGPT apparently subscribes to a different philosophy. I gave it the prompt “Role play a philosophy tutor. Is it possible to give a definite correct answer to the question ‘Is it moral to terraform Mars?’ or will any answer necessarily be an opinion?” Its response is nuanced but unambiguous:

… it is unlikely that there is a single, definite, and objectively correct answer to this question.

One reason for this is that what is considered “moral” can vary between individuals, cultures, and time periods. Some people might argue that it is morally imperative to expand human civilization and explore new frontiers, while others might argue that we have a duty to protect the natural world and refrain from interfering with other planets. There may also be different moral considerations at play, such as the impact on potential life forms on Mars or the ethical implications of using resources to terraform a planet rather than addressing problems on our own planet.

Furthermore, the question of whether it is moral to terraform Mars involves scientific, technological, and political considerations that are themselves subject to debate and uncertainty. For example, we may not yet fully understand the long-term effects of terraforming on the Martian environment, and there may be disagreements about the feasibility and costs of such a project.

Therefore, any answer to the question of whether it is moral to terraform Mars is likely to be a matter of opinion and dependent on various factors such as personal values, beliefs, and scientific understanding. However, as a philosophy tutor, I would encourage you to explore the different perspectives and arguments surrounding this issue and develop your own well-reasoned and thoughtful opinion on the matter.

This seems reasonable — it’s an attempt to support human decision-making. An AI that plays such a role is compatible with democracy — unlike the computerized philosopher-king that Dr. Watumull seems to envision.
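
For readers who would like to rerun or vary this prompt themselves, here is a minimal sketch using OpenAI’s Python client. I ran the exchange above in the ChatGPT web interface, so the model name and code below are illustrative assumptions rather than a record of that session.

# Minimal sketch: posing the terraforming question through OpenAI's Python client.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # assumption: any chat-capable model can stand in here
    messages=[
        {
            "role": "user",
            "content": (
                "Role play a philosophy tutor. Is it possible to give a "
                "definite correct answer to the question 'Is it moral to "
                "terraform Mars?' or will any answer necessarily be an opinion?"
            ),
        }
    ],
)

print(response.choices[0].message.content)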

But Chomsky et al. seem to think that by not giving an authoritative opinion — by not “telling us the answer” — ChatGPT is not contributing meaningfully, and is instead guilty of banality and amorality of the sort that fosters totalitarianism.

Taken to its logical extreme, this implication condemns every secretary, research assistant, meeting facilitator, author of a balanced briefing paper — and anyone who offers negotiation or compromise. By attempting nuance, or just trying to usefully summarize a large volume of complex information, are these “amoral” actors complicit in the worst crimes of humanity? The amphetamine theory in Blitzed sounds better.

Scorn needs a target

In his Fridman interview, Sam Altman warned that AI should be treated as a tool, not as a creature. But Chomsky’s style requires scorn; for his writing to have its biting coherence, he needs a creature as its target. Hence the op-ed’s anthropomorphism.

That the op-ed’s authors impute evil to neutrality is telling. It teaches us less about AI than about our own need, at this moment in history, to be scandalized and to point fingers.

Besides the direct reference to Hannah Arendt and Eichmann, “banality of evil” recalls Chomsky’s denunciation of Michel Foucault: he was “amoral, not immoral.” Implying in the current op-ed that this type of amorality is actually closet fascism is perhaps a final subconscious kick by Chomsky in the direction of the French philosopher’s grave.

One of Chomsky’s more interesting — and admittedly contrarian — opinions is that language’s primary use is not communication but thought. Our ongoing internal monologue captures our attempts to discover truths. Perhaps so. Large language models have not yet discovered any new truths, and Chomsky et al. may be correct in saying they never will. But language still plays a central role in our lives as social animals, and our interactions with AI help us understand how we communicate with each other — or don’t.

We social animals have had a broken public discourse for the past 50 years, a heritage of 1960s-era polemics in which “exposing lies” became more important than tolerance, compromise, proposing constructive solutions, and getting along. Such polemics made sense then. The obscenity of the war in Southeast Asia invited hyperbole. Since then, however, rhetorical excess and mean-spirited sniping have remained a permanent element of our public conversations, a seemingly necessary posture for intellectuals (and politicians), whether or not the object deserves the scorn.

ChatGPT’s approach to language may not have reproduced human thought in all its richness. But it seems to have discovered genres that promote decent human discourse. That’s not evil. It might not even be bad.

English teacher in Sendai, Japan. Student of literature. Exploring cultural and political impediments to climate action. Translator of books on the game of go.