AI detectors: Why I won’t use them

Laura Dumin
8 min read · Jun 3, 2023


I have recently had the opportunity to be part of some conversations about using AI-detection programs to determine whether students used generative AI (referred to as AI from here on) to write their papers. Every time, I say “nope” to the use of these detectors, and I usually get some pushback (ok, that’s an understatement. The pushback is often pretty strong). But let me tell you why I’m no fan of the detectors, or of using AI to tell you whether something was written by AI.

Before we get into the reasons why I’m anti-AI detector, let me ask you a question. Why did you get into teaching? What drove you to become an educator? Were you driven by the idea of helping students to learn new concepts? Did you love your field and want to help others love it too? Did you think that knowledge was worth gaining, even when the learning was hard? Or did you want to punish anyone who made a mistake? Catch every single last cheater on the planet and watch them burn for their sins?

I would wager that most of us got into teaching to impart knowledge, not to be gatekeepers of what “correct” transmission of that knowledge looks like. (Caveat here that plagiarism is in a different category for me than AI use.)

Let me also add here that I am not naïve; I know full well why students cheat and why the knowledge they would be skipping matters. But I am a realist. AI is here, and we need to figure out what that means for our classrooms without turning to punitive measures as our main means of enforcement.

Problems with AI-detector programs

Now let me come back to why I am not in favor of using AI-detector programs on student writing.

1. It creates an adversarial environment in the classroom. Detector use starts from the assumption that everyone will cheat if they can, so the instructor deploys these tools to make sure the cheating doesn’t happen. Once students realize that instructors are focused on catching them cheating, they are likely to disengage from the course and the material. That disengagement can leave students less prepared at the end of the semester, and it makes for a duller semester for us, since students won’t be as willing to go on a learning journey with us.

2. Instructors put too much faith in the programs and might choose to believe them over the word of a student. These programs are not foolproof (see Turnitin’s latest results). They are also easy enough to defeat: use one program to write an assignment and a second program to rewrite it so that it sounds more human. They flag the use of Grammarly, which is an AI program, but I argue that having a program fix your grammar and word choices is far different from having an AI program write your paper for you. And there is an arms race of sorts in which the detectors work for a bit before new tools defeat them again. This is a race that no one wins.

3. We have the potential to inflict real trauma on students by accusing them of cheating when they haven’t. In order for students to learn well, they need to feel secure in our classrooms. They need to feel like we trust them to gain the knowledge that they need to be successful in our classes. If they feel like we are out to get them or if they are constantly worried about being falsely accused of cheating, they are less likely to gain the knowledge that we want them to have and are likely to feel more stress while completing assignments.

4. If you have run your own writing through an AI detector, you might have been surprised by the results. I keep hearing from people who received positive AI scores for work that they wrote without any outside help. That alone points to a flaw in how detectors decide what is and is not written by AI. (The back-of-the-envelope arithmetic after this list shows how quickly even a small error rate adds up.)

5. How many AI detectors must come back with a positive result for you to believe them? How much time are you spending running every essay through multiple detectors? Or, on the flip side, are you trusting just one and letting those results determine which students you accuse of cheating? This seems like a place where we could quickly spiral into policing rather than teaching and allowing students space to experiment with these tools in a low-stakes way. And isn’t the point of education to give students room to try, learn, fail, try again, and get better at a task or topic over time? (This comes back to literacy and guidelines.)

6. AI is quickly integrating into the programs that students and instructors alike use regularly. Google Docs just rolled out its AI writing features, and Ethan Mollick has written a good piece about what they can do. Microsoft Office is rolling out Copilot, which puts AI help at our fingertips, and Windows 11 will soon have a built-in AI assistant of its own. AI is integrated into our searches and browsers, whether through programs like Bing or through add-ons to other browsers. Soon enough, it will be hard to imagine a world where AI tools aren’t available in many of our programs.

7. Along with AI being easily accessible and readily usable, what does it say about us if we check our students’ writing for AI while using it ourselves to help with our own writing and workflow? I know many people who use programs like ChatGPT on a weekly, if not daily, basis to handle some of the more mundane parts of their jobs. And some people, like Mollick, are finding that it does a pretty good job of writing letters of recommendation or general job application letters. AI is quickly becoming an everyday tool. Would we want our own workplace writing to be checked for AI? Many of us would “fail” that check. Sure, students need to learn how to make their ideas understood, but if AI can be a helpful tool on that journey, perhaps it is worth rethinking what it makes sense for us to teach students about writing and idea creation.

8. At the end of the day, I am here to teach. I am here to give students room to grow, shift, change, solidify, or whatever else they might do in relation to the skills and content that I present. I feel like my time is better spent in helping students to learn new things rather than policing everyone on the off chance that I might catch one or two people using AI inappropriately.
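To see why the false-positive problem in points 4 and 5 matters at scale, consider a quick back-of-the-envelope calculation. The numbers here are purely illustrative, not a claim about any particular detector: suppose a detector falsely flags honest work 1% of the time, and an instructor teaches 125 students who each submit four essays. That is 500 documents, so we should expect roughly 500 × 0.01 = 5 pieces of honestly written work to be flagged as AI-generated over the semester. And if an instructor accuses whenever any one of several detectors flags a paper, the number of innocent students facing an accusation only grows. A “positive” result is not proof; it is a probability, and at classroom scale those probabilities become real students.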

What I do instead

So how did I address the possibility of cheating this past spring? I came at it from multiple angles, not being sure what might work or fail. I added a syllabus statement indicating that we would be using AI in some places and not in others. I taught my students what AI could and couldn’t do well. I showed them the AI-generated pictures of world leaders playing at the beach and of the Pope in a puffy jacket. I showed them examples of what ChatGPT wrote and where it hallucinated. I had students use ChatGPT (and other AI programs if they wanted) and then reflect on the output. I gave students guidelines on each assignment (and have clarified those for the summer and beyond) about where they could and couldn’t use AI. In other words, I spent time on AI literacy and on why we still need critical thinking skills to use these tools.

And I’m not the only one doing these sorts of things. We are beginning to see a shift toward educating educators on how AI does and doesn’t work, and how we can incorporate these tools into our classrooms in ways that make sense for us. I have begun offering workshops this summer to help educators think about how they might use AI in their classrooms. There are others offering similar services. It will take some effort to reach everyone who wants to learn AI-related skills, and I am pleased to see many institutions realizing this.

Takeaways

At the end of the day, most of us want to teach, not police. Most of us want to make connections with our students rather than push them away. And most of us want a clear path forward that lets us focus on our jobs rather than adding extra work. While AI detectors might seem like an easy solution, they are likely to cause more harm than good in both the short and long term. As educators, we owe it to ourselves and our students to focus on ways forward that accept that AI is here and that consider how it could or couldn’t be used in our classrooms.

I know that in large courses, it can feel overwhelming to wonder whether students have used AI to help with their assignments. Hopefully some of the ideas presented above show a better path forward than relying on AI detectors. Even adding small bits of AI literacy throughout the semester can make a difference in helping students see where the tools are useful and where they are less helpful.

If you are looking for some further ideas on what that might look like for you or your institution, here are some sites that might be of help:

· A Generative AI Primer (Webb, 2023)

· Artificial Intelligence (Office of Educational Technology, 2023)

References

Choudhary, G. (2023, March 21). Barack Obama and Angela Merkel’s AI-generated beach day goes viral. Mint. https://www.livemint.com/technology/tech-news/barack-obama-and-angela-merkel-s-ai-generated-beach-day-post-goes-viral-details-11679385289640.html

D’Agostino, S. (2023, May 26). Professors plan summer upskilling, with or without support. Inside Higher Ed. https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/05/26/professors-plan-summer-ai-upskilling-or

D’Agostino, S. (2023, June 1). Turnitin’s AI detector: Higher than expected false positives. Inside Higher Ed. https://www.insidehighered.com/news/quick-takes/2023/06/01/turnitins-ai-detector-higher-expected-false-positives

Klee, M. (2023, May 17). Professor flunks all his students after ChatGPT falsely claims it wrote their papers. Rolling Stone. https://www.rollingstone.com/culture/culture-features/texas-am-chatgpt-ai-professor-flunks-students-false-claims-1234736601/

Lufkin, R. (2023, May 31). It’s time to shift the AI conversation from student cheating to training educators. FE News. https://www.fenews.co.uk/exclusive/its-time-to-shift-the-ai-conversation-from-student-cheating-to-training-educators/

McLeod, S. (2023, June 1). Maslow’s hierarchy of needs. Simply Psychology. https://www.simplypsychology.org/maslow.html

Microsoft. (2023). Copilot: A whole new way to work. https://news.microsoft.com/reinventing-productivity/

Mollick, E. (2023, June 3). Setting time on fire and the temptation of The Button [Blog post]. Substack. https://substack.com/inbox/post/119988460

Novak, M. (2023, March 26). That viral image of Pope Francis wearing a white puffer coat is totally fake. Forbes. https://www.forbes.com/sites/mattnovak/2023/03/26/that-viral-image-of-pope-francis-wearing-a-white-puffer-coat-is-totally-fake/?sh=1bf5d00d1c6c

Office of Educational Technology. (2023). Artificial intelligence. https://tech.ed.gov/ai/

Warren, T. (2023, May 23). Microsoft announces Windows Copilot, an AI ‘personal assistant’ for Windows 11. The Verge. https://www.theverge.com/2023/5/23/23732454/microsoft-ai-windows-11-copilot-build

Webb, M. (2023, May 11). A generative AI primer. National Centre for AI. https://nationalcentreforai.jiscinvolve.org/wp/2023/05/11/generative-ai-primer/


Laura Dumin

Professor, English & Tech Writing. Giving AI a whirl to see where it takes me. Also writing about motherhood & academic life. https://ldumin157.com/