Predictable and Average

Generating Thoughts about AI and Student Thinking

Mark Childs
GMWP: Greater Madison Writing Project
5 min read · Sep 5, 2023


Original Thinking

Fifteen years ago, a student approached me after class looking unusually stressed.

He was usually quite comfortable in English class, with the quick wit and writing talent that enabled him to breeze through class discussions and writing assignments. My class was a good fit for him: I would typically offer students clear direction on assignments, and he would use the instructions to focus his energies, organize his thoughts, and direct his writing. As a reader, I never felt his writing and ideas were formulaic; he simply used the genre requirements, whether for a personal, analytic, or creative piece, to engage with ideas and produce a piece of quality writing.

In fact, observing this student’s successful approach to class helped me articulate my emerging approach to teaching writing in English: start students reading a good writer, have them articulate the elements of that writer’s style and genre, then have them imitate the writer to learn something about writing and something about their own lives. When students do this, reading and imitating writers such as Joan Didion, Ta-Nehisi Coates, Jamaica Kincaid, and Elizabeth Bishop, they invariably improve, and I invariably read a set of engaging, insightful, and moving pieces. Though imitative, this approach enables students to develop their writing skills while giving them a form in which to develop their own ideas about their experiences.

But back to this student, approaching me after class exhibiting unusual anxiety about an upcoming assignment.

“I’m really worried about this literary analysis. The assignment calls for an ‘original’ reading, but how am I, a high school student, supposed to say something about Hamlet that has never been said in 500 years?”

“Oh no, that’s not what the requirement means. You know how I’ve asked you to look at various interpretations of earlier works and argue for your preferred meaning? This just means that I want you to come up with your own interpretation, one that’s original to you. You’re right that I’ll probably have read something similar by other students or literary critics, but as long as you don’t do any background research, you’ll be constructing an original interpretation. I just want you to practice your interpretation skills. So, what’s on your mind about Hamlet?”

With evident relief, the student resumed his normal attitude towards class, we discussed his ideas, built a framework, and off he went to write a successful piece using the literary interpretation essay skills he had learned in class.

Average Understanding

The memory of this student was sparked while watching Dr. Jerry Zhu’s informative presentation explaining generative AI at this summer’s Teaching Writing in the Age of ChatGPT: Summer Symposium. My main takeaway was that large language models are essentially probability machines, enabling a generative AI to draw upon billions of documents to predict the most likely response to a user’s prompt. Visualize a normal distribution curve: right there at the top of the bell-shaped curve sits the AI response, not the exceptionally good or bad response, but the most likely one.

This made sense. It’s why my own experiments with ChatGPT, the generative AI software that exploded into public consciousness, offered useful lists of vacation schedules and lesson plans: lots of people had already produced similar texts, so the AI knew what response would most likely satisfy me.

Or, put another way, the AI is designed to produce average responses.

The word “average” tends to have a negative connotation, but I think that average responses work as the starting point for planning a vacation or a teaching lesson. Lots of people have visited Rome and lots of people have taught The Great Gatsby, so it’s unsurprising that ChatGPT was able to predict that I too would want to visit the Colosseum or discuss the American Dream.
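The “most likely response” idea can be sketched in a few lines of code. This is a toy illustration with an invented miniature corpus, not how a real language model works (real models operate over tokens with billions of learned parameters), but it shows the core move: the “generated” word is simply the most frequent continuation, not the most insightful one.

```python
from collections import Counter

# Invented toy corpus: imagine many teachers writing lesson-plan prompts.
corpus = [
    "discuss the American Dream",
    "analyze the American Dream",
    "explore the American Dream",
    "question the American experiment",
]

# Count which word most often follows "the American" in the corpus.
counts = Counter(line.split()[-1] for line in corpus)

# A probability machine picks the statistically average continuation --
# the top of the bell curve, not the tails.
most_likely = counts.most_common(1)[0][0]
print(most_likely)  # -> Dream
```

Scaled up from four sentences to billions of documents, this is why the model predicts that a Gatsby lesson will discuss the American Dream: most of them do.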

And I must confess that as memorable as my exchange with the student was, I don’t remember his essay on Hamlet. It scored well enough on the IB exam to earn him college credit, but it also earned the average score for that year’s IB English essays. And yes, I’m officially an average IB English teacher: across a decade of teaching students to sit IB exams, my students scored within 0.02 marks of the worldwide average (out of 7 marks).

Predictable Thinking

I suspect there are two main lessons for me as a teacher to consider over the coming year as I think about using AI in classrooms:

The first is that generative AI is designed to predict the average response, the one most likely to be produced by the majority of human writers. As I work with teachers and students, we are all going to have to think carefully about when the situation calls for an average answer: when is using ChatGPT to help brainstorm, revise a draft, shape an outline, or add details an effective use of the wisdom of crowds, and when is it settling for an unhelpful, sometimes deadly, average?

The second is that words such as average and predictable are going to take on new meanings as we grapple with generative AI responses. For example, teachers are going to have to think very carefully about their students’ average level of work as they draw upon generative AI: while early experiments with ChatGPT typically produced mediocre work, whatever learning is taking place inside the black box is steadily increasing the quality of its responses. If a generative AI is simply predicting how humans would likely respond, yet the quality of its responses keeps rising, perhaps AI can in turn help raise the average level of human writing, drawing us into a virtuous cycle in which we raise our own writing and thinking to match the aspirational predictions of what AI thinks humans want.

Formulaic Coda

When I asked ChatGPT whether the AI could learn to think like a human, I was assured that humans “have access to personal experiences or emotions . . . that influence human learning and decision-making in ways that are beyond the capabilities of AI models.” But in turn, I suspect that generative AI models can reveal some insights about humans because they are not limited to or bound by individual experience.

One thread running through the GMWP conference was that AI produces formulaic writing. But just as I think AI might challenge our notions of average and predictable, I wonder if AI might formulate some ways in which we could improve our thinking and writing.

For instance, in an English class where my mini-lesson on varying sentences fell on deaf ears, I had students create a graph charting the number of words per sentence in a sample paragraph of theirs versus a paragraph by Joan Didion. The students, confident that they already varied their sentence lengths, were surprised to see the contrast between their writing and Didion’s. Building upon this insight, the students then constructed the formula (5–25–25–40–100–25–30–15–5–20–5) for a “Didion paragraph,” imitated this structure, and produced their best writing of the year.
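The charting exercise above is easy to reproduce with a few lines of code. This is a minimal sketch, not the classroom method itself: the sentence splitter is a rough heuristic (splitting on ., !, and ?) and the sample paragraph is invented, but it yields the word-per-sentence counts a student could graph against a Didion paragraph.

```python
import re

def sentence_lengths(paragraph: str) -> list[int]:
    """Return the number of words in each sentence of a paragraph.

    Splitting on terminal punctuation is a rough heuristic; it is
    fine for a classroom comparison but will stumble on abbreviations.
    """
    sentences = [s for s in re.split(r"[.!?]+", paragraph) if s.strip()]
    return [len(s.split()) for s in sentences]

# Invented sample of monotonous student prose: every sentence is
# roughly the same length, which is exactly what the graph exposes.
student = ("I like the book. It was good. The author wrote well. "
           "I would read it again.")
print(sentence_lengths(student))  # -> [4, 3, 4, 5]
```

Plotting that list next to the counts from a Didion paragraph (something like 5, 25, 25, 40, 100, …) makes the contrast in variation immediately visible.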

Somehow, this work resulted in a group of students discovering an extremely effective formula for revising a paragraph. Might the “Didion Formula” illustrate ways in which AI can reveal insights into human writing?

In short, I am curious how AI might reshape our writing classrooms and our young writers if we pay attention to the lessons it offers us: predicting our average response and revealing hidden formulas that can accelerate our students’ thinking and writing.
