Lo and Behold: Turning ChatGPT from Enemy to Friend

Alex Ames
GMWP: Greater Madison Writing Project
4 min read · Aug 15, 2023
(AI-generated image from fotor.com)

“Why did you do it?” I asked Milo, the freshman spending his study hall in my classroom rewriting his character analysis assignment.

His answer didn’t surprise me: “I just wanted to get an A.”

At the end of this past school year, I felt personally attacked by AI-powered language models like ChatGPT. Milo and many more of the 9th grade students I teach at a public high school in Madison, Wisconsin, had spent the spring turning to AI to “write” the texts I had assigned them: persuasive essays, character analysis paragraphs, poems. Some didn’t understand the assignment, or hadn’t read the book or articles I had given them. Rather than asking for help, or asking for more time–even after all the relationship building we had done during the year–they pasted my prompts into the AI’s chat box and submitted what it spat out. Like Milo, I’m sure they all “just wanted to get an A.”

As a result of this experience, I finished the school year seeing AI as my adversary. Conversations with teachers and other colleagues suggest they see it the same way. In the work I’ve done this summer through the Greater Madison Writing Project, I have heard educators repeatedly list things that AI can’t do (tell personal stories, cite sources, give local examples); I have heard them ask the question, “what can humans do that AI can’t do?” believing that to be the key to creating ChatGPT-proof assignments for their students, their counter-attack in the new battle with our potential AI overlords. I can’t make those assertions, or answer that question, or create those assignments–at least not confidently–because I know AI will only grow in its capabilities. What we say it can’t do today, it will be able to do next week or next month.

Beyond that, the more I have read, written, thought, and listened (to colleagues, to scholars, to my gut), the more I have begun to see AI for what it is: a tool, not unlike the innumerable technologies that have come before it, from the alphabet to the printing press to the telegraph to the internet, tools that have become not our enemies–as foreign and fearsome as they may have seemed at first glance–but rather partners in deepening and broadening our ideas, and spreading them to ever-widening audiences.

At this summer’s conference on Teaching Writing in the Age of ChatGPT, I was asked to write about the following questions: What writing needs human writers? and What writing should AI do?

I think both of these questions are the wrong questions, as they advance the adversarial relationship between humans and AI. As Dr. Jerry Zhu reminded the attendees of that conference, ChatGPT doesn’t think or understand meaning; rather, like a stochastic parrot, it merely samples the body of texts it has access to (essentially the entire internet), cobbling together ideas from everything we humans have already written.

The right question, then–or perhaps the more useful one–is, what writing can humans do with the help of AI? Could it be possible that AI can help us–and, crucially, our students–write texts that would have been harder, more time consuming, or even impossible without it? Put another way: if we can teach our students to ask questions, create prompts, and critically read the responses of an AI-powered language model that has instantaneous access to the entire Internet, and can deliver its information not through millions of links, but through clear writing–which can itself be tailored to the needs of individuals through additional questions and prompts–shouldn’t we?

For all its real and potential flaws, I think we should.

Of course, that assertion leads me to another question, one that points to a lot more work, work that cannot, and must not, be done by one person, or even by a small group of people at any school. It is work that will be subtle and specific, tailored to individual disciplines or even individual students, but will, I think, be the difference between AI becoming a tool for increased learning rather than a means for cheating on assignments–in other words, the difference between a partner and an adversary:

How do we do it?

This September, I will welcome 100 more 9th graders into my classroom. I will spend the rest of this summer–and the upcoming school year, and beyond–thinking, reading, and listening (to colleagues, to scholars, to my gut) about how to teach my students to use these remarkable new tools ethically and purposefully. I’m not naive: I know some students will still turn to AI, as Milo did, to do work for them, particularly if our grading systems continue to value product over process. But I also believe that, if Milo had been taught to use AI as a partner, he might not have turned to it to cheat. It’s my job as a teacher to help him learn how.
