Why Make Students Write in the Age of AI?

Kristina Zarlengo
Dec 20, 2023


By Meg Lamont and Kristina Zarlengo (1)

Patent for a Ouija board, filed in 1891

In the conceptual tennis match over whether Generative AI means the end of the analytic student essay, it has been meaningful for us as English instructors to reflect — together with our students — on what the essays we assign are for. What good are they?

They are not for posterity: we have no expectation that student essays will be published, or even read twice. A sophomore essay on Hamlet rarely adds to our collective wisdom about Hamlet, or power, or family. And writing essays can be discouraging to students: it is hard to know what you think; even when you do, it is hard to confront the gap between what you think and the tools you have for expressing it; and even when you can express what you think, a new gap emerges between what you wanted to express and what your reader understands.

In the English curriculum we teach at Stanford Online High School, students nonetheless write lots of analytic essays based on textual evidence because we think writing helps each student refine their own point of view. And we think developing one’s point of view — about oneself, about the world, about one’s place in society — is vital.

Which college, if any, shall I attend? Whom shall I trust to advise me? Who is the best mayoral candidate? Does my vote matter? Does my job matter? What does it mean to be healthy, to be kind, to be fair, to be successful, to be a responsible citizen, to resist, to be a good parent, to survive trauma, to heal? Part of our purpose as English teachers is to help support our students’ confidence as they take active roles on life’s stage, which means getting better at making tough judgments. We design our analytic, evidence-based essays to prompt students to engage critically with texts, with classmates, with education itself, with the world.

— — —

We have elaborated here on some learning goals behind analytic essays to dramatize how there is no overlap between them and what a student learns when submitting a generative-AI-written essay as their own. In the latter case, the student submits a kind of mash-up of everyone else's point of view; in the former, they submit something which, however jejune or faulty, has already, before we read and grade it, furthered their capacity to think, to know what they think, and to express what they think. The student who submits a generative-AI essay has mistaken product for process: we were not asking for an imagined "right" interpretation of Hamlet, or the right number of words about Hamlet; we were seeking proof that our student had sharpened their own point of view.

— — —

And yet it feels like ignoring Generative AI in the English classroom — let alone prohibiting it uncritically — means inviting it to be there in the wrong ways. Therefore, this spring, we asked lots of open-ended questions of our 11th and 12th grade English students, experimenting with approaches to teaching the literary essay that included direct study of texts generated by ChatGPT. Together with our students, we developed an understanding that the Generative-AI text is product-focused, whereas the kind of analytic essay we are looking for is process-focused.

We did not know for sure that spending precious class time considering texts from ChatGPT would improve student learning or student writing: we ourselves are still trying to figure out the boundaries and promises of Generative AI. Yet direct focus on AI texts proved useful, and not only by guarding students against the feeling that Generative-AI writing is as good as (or better than) their own, which was the most we initially hoped to achieve. More subtly, and thanks to our students' impressive engagement, the approach helped them learn how to define and refine their own points of view.

Foil #1: ChatGPT & Baldwin's "Equal in Paris" (2): Curiously Conflated

Discussing James Baldwin’s essay “Equal in Paris” last spring, one student observed that when he moves to Paris, Baldwin must figure out his social role all over again because Paris’ social norms differ from New York’s: thus, although rigid, norms are inconsistent from one place to another. Another student reflected that the richness of Baldwin’s description of social norms is only possible because of his perspective as a racial minority and a gay man: he must question how he is categorized because social norms in both places misconstrue him. We asked students to sum up Baldwin’s style; they noted he places the reader in his point of view; his writing universalizes the personal.

So far it was exactly the sort of text-based discussion of ideas we have always cultivated. Then we added focus on a Generative-AI text, which we used as a foil. We asked students to use the ChatGPT text to better understand what Baldwin is doing and, reversing the flow, to use Baldwin's text to understand the nature, limits, and promises of Generative AI.

Students discussed ChatGPT’s response to the prompt: “In the style of James Baldwin, write about getting arrested for stealing a hotel sheet in Paris.” Here is ChatGPT’s response:

I was young, black, and poor in a foreign land, and I had just been arrested for stealing a hotel sheet in Paris. The indignity of it all was almost too much to bear. I knew that I was being treated unfairly, that I was a victim of the same racism and oppression that I had faced all my life. And yet, I couldn’t help feeling a sense of shame and guilt, as if I had somehow brought this upon myself. As I sat in my cell, I thought about the many ways in which my life had been shaped by the forces of history and politics. I thought about the legacy of colonialism and slavery, and the ways in which they had left their mark on the world. And I thought about the power of prejudice and discrimination, and the ways in which they could destroy even the most innocent of lives.

But even as I grappled with these larger issues, I also felt a sense of anger and frustration at my own situation. I knew that I was not a criminal, that I had been unfairly targeted because of my race and class. And yet, I had been stripped of my dignity and my freedom, reduced to a mere object of suspicion and contempt. In the end, I was released from custody, my charges dropped. But the experience left a deep mark on me, a reminder of the many ways in which the world can be unjust and cruel. And yet, even in the midst of this pain and suffering, I also felt a sense of hope and resilience, a determination to fight against the forces of oppression and injustice. For even in the darkest of moments, there is always the possibility of resistance and transformation.

Students observed how some sentences could be Baldwin's, but most are like excerpts from a college essay about Baldwin. They said ChatGPT's essay was like a copy of a copy (and copying it would result in even less fidelity). A student noted that something felt very off, as in a parody: the text "feels like it is supposed to fulfill checkbox criteria […] like a geometry proof." Another student observed that whereas Baldwin surprises us, ChatGPT never surprises us. Yet another student reflected that the ChatGPT text accomplishes an apparent goal of generalizing: telling rather than showing, gratifying typical expectations, predicting rather than providing a perspective on the world, as Baldwin does. Another student concluded that the Generative-AI text is clear, concise, and "in control," but its content is obvious.

Importantly, students agreed that the Generative-AI text is curiously conflated: whereas "Equal in Paris" (1955) is all about Baldwin's experiences in a certain place, during certain moments, ChatGPT's text conflates his essay's content with how we have today come to understand Baldwin's work in general, long after his death in 1987. ChatGPT's text is thus inescapably a 2020s text that fails to capture a past perspective from the 1950s. It curiously conflates then with now, what he wrote with decades of critical reception of what he wrote, what Baldwin said with what people have said about Baldwin.

Foil #2: ChatGPT & Woolf's "The Death of the Moth" (3): ChatGPT Positivity

Gaining steam during a second discussion incorporating Generative-AI texts, this time on Virginia Woolf's "The Death of the Moth," students considered a ChatGPT-4-generated text based on this prompt: "In the style of Virginia Woolf, write a piece in which the narrator watches a moth, flying about a room before slowly dying on her windowsill, struggling against its own death, and considers the futility of life and our odd way of attempting to make it meaningful." Here is ChatGPT-4's response:

How peculiar it is to find oneself alone, ensconced in a room where the quietude of the air and the muted light, coalesce in a symphony of stillness. In such moments, when the world retreats to the fringes of consciousness, we become acutely aware of the smallest of intruders, the tiniest of guests. Thus, my attention was drawn, quite involuntarily, to the erratic motions of a moth. […] The fluttering of the moth, so fragile and ephemeral, seemed to signify a deep longing, a desire for something more than the darkness from which it emerged. And so it flew, in its chaotic ballet, seeking the elusive light that shimmered like a distant dream.

As I observed the moth’s flight, its struggles growing more fervent, more desperate, I could not help but feel a pang of melancholy. For there, in the dance of the moth, I saw a reflection of our own human condition, the ceaseless striving for meaning, for purpose, amidst the infinite expanse of time. We too, like the moth, are driven by an inexplicable longing, a desire to find the light, to make sense of the shadows that encroach upon our lives.

[…] I watched as it began to falter, its strength waning, the cruel inevitability of life’s ebbing tide washing over it. The struggle was both tragic and strangely beautiful, an embodiment of the eternal conflict between life and death, the fragile balance that defines our existence.

As the moth lay dying upon the windowsill, its wings outstretched like a broken marionette, I contemplated the futility of its brief life. In the grand tapestry of the cosmos, our own existence can feel as inconsequential as the fluttering of a moth’s wings. Yet, we persist in our search for meaning, driven by a strange compulsion to forge our own paths through the darkness.

[…] Yet, even as these thoughts cast a shadow upon my heart, I could not deny the beauty of the moth’s struggle. In its unwavering determination, it had captured the essence of the human spirit, the unquenchable thirst for life that dwells within us all. Perhaps, then, the true meaning of our existence lies not in the grandeur of our endeavors, but in the quiet courage to face the darkness and, like the moth, to dance our own dance until the very end.

As in their discussion of Baldwin, students noticed that the ChatGPT response loses the sense of "being in time" with the events described by the narrator, such that, in the words of one student, "I do not feel like the moth is dying." But most of their observations homed in on the loss of Woolf's hallmark indirectness, and the way ChatGPT's tendency toward directness gave the piece a "one-dimensional meaning" inimical to Woolf's style. As one student put it, "she never really says 'Hey guys! I'm using a metaphor!'" the way ChatGPT does. Instead, another student noted, Woolf's writing is "more elusive, harder to pin down, and nuanced." That is, Woolf's piece opens up multiple interpretive possibilities, while ChatGPT's response, with "no grasp of indirect meaning," narrows things down to a single interpretation that students found painfully obvious and somewhat condescendingly plain, the very opposite of Woolf's piece.

Students also noted what they labeled "ChatGPT positivity" in the response, a tendency toward an explicit, upbeat moral lesson; we speculated on what in its training had led to this tendency. We had first noticed it when we fed ChatGPT the beginning of a student's anecdote about a ski trip and asked it to finish the story. Reading the anecdote as a human class, we "legit thought for a second [the narrating student] was gonna crash," as one student put it, and were all wincing in anticipation of this outcome. But both ChatGPT 3 and 4 took the same opening and predicted a much more positive outcome than we humans did, one with a notably didactic purpose.

Foil #3: ChatGPT & Didion's "On Being Unchosen by the College of One's Choice" (4): Glib Bromides

Our final discussion on ChatGPT considered Joan Didion’s essay, “On Being Unchosen by the College of One’s Choice” alongside a text ChatGPT generated in response to that exact title:

It can be disappointing and frustrating to be rejected by the college of your choice. It’s natural to feel upset and to question whether you are good enough or whether you made the right decision in applying to that particular school. However, it’s important to remember that being rejected by one college does not define your worth as a person or your potential for success. There are many factors that go into college admissions decisions, and it’s possible that you were simply not the right fit for that particular school.

There are also many other options available to you, and it’s important to keep an open mind and consider other colleges and universities that may be a good fit for you. You may even find that one of these other schools is a better fit for you and offers more opportunities than the school you originally applied to.

It’s also important to remember that college is just one part of your life and your journey. There are many paths to success, and being rejected by one college does not mean that you will not be able to achieve your goals or be successful in the future. Keep an open mind and try to stay positive as you explore your options and make plans for the future.

As students read this response, their criticism of "ChatGPT positivity" gained further momentum. Perhaps because they were in the middle of their own college-admissions processes, they sharply criticized ChatGPT's glib bromides, observing that its writing was "lazy and weak." One student called it "triggering" in its facile dismissal of the authentic angst students felt about the college application process. They returned to ChatGPT's condescending tone; one student added that its text "perfectly encapsulates the failure to understand emotion."

It became a bit of a game in class to get ChatGPT to output anything gloomy or negative in tone, because it wasn't easy. When we instructed ChatGPT to be "negative and poetic" about "On Being Unchosen by the College of One's Choice," it nevertheless produced this final line: "For even in the face of rejection, I had the strength and resilience to keep moving forward, to keep pursuing my dreams, and to never give up on myself." More seriously, students made a connection between ChatGPT's bias toward cheery positivity and Art Spiegelman's comment on the reception of one of our earlier course readings, Maus, where he argued that many people were offended by Maus because they wanted a "friendlier Holocaust," one where clear lessons were learned and the suffering experienced by the author's father was ennobling in some way, an approach Spiegelman rejected as unfaithful to the actual experience of the Holocaust and its aftereffects (5). As a class, students expressed that ChatGPT's cheeriness undermined, or even threatened to invalidate, their own experience of the world, which wasn't always positive, didn't always give a clear moral lesson, and couldn't always be reduced to a clear takeaway going forward.

There were other gaps, too, as one of our rural students noted. After giving ChatGPT a prompt about her main extracurricular activity, she noted wryly that “chatgpt doesn’t know about goat shows.” Indeed, even a human from a major metropolitan area would likely catch the problem with ChatGPT’s response: “As the whistle blew, the goats took off in a flurry of hooves, each determined to prove they were the fastest, but only one would cross the finish line victorious.” Taken one way, this response from ChatGPT to a rural student’s prompt is simply humorous. But it points to the bigger issue of all the lives and experiences that are not well represented in the training corpus for ChatGPT and, as a result, are not represented in its outputs.

Our students speculated that ChatGPT could be seen as an extension of Roland Barthes’ ideas in “The Death of the Author,” in that its outputs are “a reflection of societal standards / patterns / expect[at]ions,” and that viewing ChatGPT in this way could help us notice and identify biases in those societal standards, patterns, and expectations — such as a bias toward cities and against rural communities, toward didactic lessons and against antifoundationalist principles of questioning.

— — —

As witnesses at the début of a speaking library, we are impressed, as anyone should be, by Generative AI’s nearly perfect grammar and by how well it fulfills formal requirements (the 5-paragraph essay, the sonnet, the anthem, the letter of recommendation, etc.). And we are delighted that someone inarticulate may now have access to a tool that allows them to be judged by the content of their ideas rather than the form of their sentences.

However, it is important to mark the boundaries of Generative AI, which, as one of our students put it, “does not write. It calculates.” Our students can go way beyond conforming to expectations and conventions. Each student can do deliberative work that reflects and defines their personal perspective: what they love, what worries them, what needs to change. Rather than their calculations of how to fulfill our expectations or anyone else’s, we hope to see their writing, writing that requires a point of view that can only be arrived at — and only derives meaning — through their own experience and engagement.

Notes:

  1. Margaret Lamont is Assistant Head of School and Instructor of English at the Stanford Online High School. Kristina Zarlengo is Division Head of English and Instructor of English and of Legal Studies at the Stanford Online High School. Other than the illustration (an 1891 patent application for a Ouija board that is in the public domain) and the noted quotations of ChatGPT outputs, we are solely responsible for the opinions expressed here. However, others share credit for any insights, especially our students. All the students in 2022–2023's OE020 English § 1 (Zarlengo) and § 3 (Lamont) generously expressed their views during class discussion; we have generalized attribution to all students in both sections for simplicity and because discussion was deeply collaborative. We also credit Glenn Kleiman, Senior Advisor at the Stanford Graduate School of Education, who encouraged us to write this article. Further credit goes to him, and to our Stanford OHS colleague Anna Pisarello, for wonderfully helpful comments on drafts.
  2. https://www.commentary.org/articles/james-baldwin/equal-in-parisan-autobiographical-story/
  3. https://gutenberg.net.au/ebooks12/1203811h.html#ch-02
  4. https://wowwritingworkshop.com/on-being-unchosen-by-the-college-of-ones-choice/
  5. https://www.vulture.com/article/art-spiegelman-maus-interview.html
