Power dynamics in the writing classroom: AI and shifts in teaching styles

Laura Dumin
Published in EduCreate · 6 min read · May 22, 2023

[Image: Teacher and students having a conversation]

Spring 2023 saw a swirl of discussions around generative AIs. So many of those discussions focused on the good/bad binary or on “look at how ChatGPT can be used to tweak this or change that” posts. Instead, I want to focus on practical ideas for writing instructors.

For most of us, trying to figure out how to handle student use of generative AIs in writing classrooms may have left us in a cold sweat, worried that everyone was going to cheat. I mean, why wouldn’t they? The technology is right there, the bots are easy to use, and if you close one eye and squint, the writing the bots produce isn’t half bad. Of course, that also means a fair portion of that writing isn’t half good either.

There were a lot of reactionary posts from instructors and institutions alike, banning the bots and vowing to police ALL THE WRITING so that no AI-produced content could get through. But I’m arguing against that approach, both within my own classroom and at my institution, and here’s why. What does it do to our credibility, and to a student’s motivation to do good work, if we start from the assumption that everyone is cheating? How much harm is caused by accusations of using AI writing in unauthorized ways, especially if all we are doing is relying on AI detectors (which vary widely in accuracy) to make our cases for us? Why are we so desperate to find all the cheaters rather than take a step back and teach both AI literacy and why our content matters?

I fully understand that there can be discomfort in the not knowing, and how generative AIs work is new for most of us. But the not-knowing co-learner persona seems to be a good space for instructors to inhabit as we think about what generative AIs can and can’t do well. We can model that learner behavior for our students, giving guidelines on where generative AIs can and can’t be used in our assignments while also being flexible and willing to change if/when we find out more about how our students are actually using these programs. While it can be hard for instructors to switch their paradigm from “bringer of knowledge” to “co-learner,” this can be a strong and viable pathway forward. Foucault and Freire both wrote about power dynamics and whether and how instructors might shift their positioning in the classroom. These conversations are not new, but they seem like a good starting point for tackling concerns around student use and/or misuse of AIs, especially in writing classrooms.

For us to model learner behavior, we first must admit that things are changing. Writing and the production of content are changing, but what knowledge means isn’t. Things are still true or not, real or not, but how we engage with students to explain that will also need to change. We cannot continue with the model of the instructor as the most knowledgeable person in the room, at least where AIs are concerned; we would be fighting a losing battle against TikTok and YouTube content creators. There are just too many programs and too many ways to use them in unauthorized ways in the classroom. So, as we become co-learners, we can learn from our students about the ways that AIs influence their writing behavior, and then we can revise our assignments to meet their needs for content knowledge while incorporating AI output into the final assignments.

This past spring semester, I revamped my courses to purposefully use ChatGPT and any other AIs my students wanted to try out. In my face-to-face courses, we brought in examples of ChatGPT’s output and critiqued it. We discussed AI hallucinations and how we must be wary of what the bots give us: fact-checking is involved, and it helps if we know something about the topic that we have asked the bot to write about. I asked students to use ChatGPT to critique their drafts, giving them human feedback from peers and me along with bot feedback. Then students reflected on the feedback the bot gave them and compared it to the feedback they received from humans. This was an interesting exercise in showing what the bot is good at and can be used for. Some students found the bot critique helpful, or they noted that it reinforced what their peers had said. Others found it less helpful and might not choose to use it again outside of class. I consider both outcomes a success, because generative AIs are tools. They can’t replace humans, but they can assist us with our work.

One assignment I incorporated this semester asked students to have ChatGPT write drafts of their research papers. My upper-division technical writing students found this to be more of a chore than a help, with one student noting that “anyone who could get the bots to write them a decent paper deserved a good grade just for figuring out how to get the programs to work well.” My lower-division students had less frustration with the process. This feedback helps me see where generative AIs can be useful in future semesters, and it is good to see that course level matters for how useful the drafting capabilities might be right now.

There have been other places where I have incorporated AIs, but the theme remains the same. I gave students guidelines and asked them to be transparent about their use of the tools. I asked students to reflect on their experiences so that both they and I could understand what did and didn’t work. I trusted my students to be honest with me, and I was open with them about the capabilities of our current generative AIs. When I didn’t know how something worked, I said, “This is what I have heard or seen from other forums. Go try it and tell me what happens.” And they have. It was fascinating to give students some of the power and to let them know that I trust them to do the right thing. I’ll be adding a few more assignment tweaks in my summer courses and will continue to look for ways to accommodate and include AIs within my classroom.

Now, let me interject that I’m also a realist. If a course matters little to a student other than checking off a box on their graduation requirements, they are less likely to put a lot of energy into the work. This is not new, and I’m not sure that there is an easy way to overcome it. But for the courses that matter in their majors, students can and should be given opportunities to use the different AI tools. We should be transparent with our guidelines, and they should be transparent with their use of AIs. And then we should stand next to them and learn alongside them to see how these programs can enhance what we are already doing. Sure, we may need to rethink our assignments as we accept that AIs are here and students are using them. But this openness to the new tools will only help us all in the end.

Instead of being adversarial or hoping to catch all the cheaters, we can embrace not knowing everything and allow ourselves to learn alongside our students. In this way, we can foster a space of knowledge and critical thinking rather than one of fear of punishment. In the end, this should leave instructors less frustrated with how students are using the programs and should give students room to learn and grow in the safety of a classroom geared toward knowledge and ethical tool use.
