AI, Academics, and the Accused Students

Sanindie Silva
QMIND Technology Review
5 min read · Oct 16, 2023

In the rapidly evolving landscape of education, the growing role of AI in academics raises critical questions about the ethical use of technology, with professors grappling to strike a balance between innovation and academic integrity.

The traditional approach to education is undergoing a transformative shift as students increasingly turn to Artificial Intelligence (AI) technologies to enhance their learning experiences, with new products announced daily and gaining immense popularity. AI tools such as ChatGPT, Grammarly, and ScholarAI have become go-to resources for tasks ranging from creating practice problems to editing grammar in essays.

As students navigate the demands of academic life, these AI tools offer a versatile set of capabilities. ChatGPT, for instance, provides a unique space for brainstorming ideas, generating content, and seeking inspiration for assignments.

The potential benefits are multifaceted. These AI tools can help improve writing skills, offer instant feedback, and enable students to iterate and refine their work efficiently. The adaptability of these tools fosters personalized learning experiences, catering to individual needs and learning styles.

However, this surge in AI usage is not without its challenges. One primary concern revolves around the ethical use of AI in academic settings, with questions arising about the boundaries between assistance and plagiarism.

Although most of these tools aim to empower students, some can open the door to academic dishonesty. With easy access to this new technology and little regulation within most institutions, students may feel more inclined to misuse AI tools. While some students leverage these capabilities to overcome writer's block or seek inspiration, others may use them to generate the final piece of content outright. With the acceleration of technological progress, Large Language Models (LLMs) like ChatGPT are now powerful enough to write essays in seconds, with a surprising level of competency. Given the rapid rise of this technology, it's no wonder that university professors are worried about their students using AI to cheat.

With Canadian universities reporting that cheating is on the rise (Dubinski, 2023), and with the technology still new and minimally regulated, it is understandable that professors are taking their own measures to prevent cheating in their classrooms. However, they must ensure they use the right methods to catch cheaters.

A Rolling Stone article from May 2023 discusses an incident at Texas A&M University where several students were accused of using ChatGPT to generate the content of their assignments. The accusations led to disciplinary actions, including failing grades and threats to students' graduation status. While some students admitted to using ChatGPT for their assignments, several others were adamant that they had been falsely accused of academic dishonesty (Klee, 2023).

The professor had pasted the students' work into ChatGPT and asked the chatbot whether it had written the reports, and ChatGPT falsely claimed it had (Klee, 2023). However, ChatGPT does not have this capability: it cannot reliably identify text it wrote itself, let alone text written by another LLM (Novak, 2023).

With the rise of chatbots and LLMs, AI-detection tools have also appeared that claim to accurately distinguish AI-written text from human-written text. Most AI detectors, often used alongside plagiarism checkers, work by analyzing text for statistical patterns or discrepancies that might indicate the content was created using an LLM. However, these tools are unreliable, and numerous complaints have been lodged against their use, such as their bias against non-native English writers and neurodivergent writers.
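To make that "pattern analysis" a little more concrete, below is a minimal, illustrative Python sketch of one such signal: sentence-length variation (sometimes called burstiness), which some detectors combine with model-based scores such as perplexity. The function names and the threshold here are invented for illustration, and real commercial detectors are far more elaborate, but they rest on statistical assumptions of the same general kind, which is exactly where bias against fluent but uniform human writing can creep in.

```python
import re
import statistics

def burstiness_score(text: str) -> float:
    """Toy heuristic: human writing often varies sentence length more
    ("burstiness") than LLM output. Returns the standard deviation of
    sentence lengths in words; lower values are treated as more
    "machine-like" by this kind of heuristic."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

def naive_ai_flag(text: str, threshold: float = 4.0) -> bool:
    """Flag text as 'possibly AI-generated' when sentence-length variation
    falls below an arbitrary threshold. Illustrative only: real detectors
    combine many signals, yet can still misclassify fluent, uniform human
    prose, which is the bias problem discussed above."""
    return burstiness_score(text) < threshold

if __name__ == "__main__":
    sample = ("The results were clear. The model performed well. "
              "The data supported the hypothesis. The method was sound.")
    print(naive_ai_flag(sample))  # True: short, uniform sentences look 'AI-like'
```

Notice that the sketch says nothing about who actually wrote the text; it only measures how "regular" the writing looks, which is precisely why careful, formulaic human prose can be flagged.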

A study published in Patterns by James Zou, an assistant professor at Stanford, compared the accuracy of seven popular GPT detectors on writing samples from native and non-native English speakers (Liang et al., 2023). The study found that the detectors misclassified over half of the non-native English writers' samples as LLM-written. One detector incorrectly flagged 98% of the non-native English writers' work, while accuracy on native English writing samples remained very high.

You might be familiar with the now-viral exchange in which a professor at Purdue University, Rua Mae Williams, was accused of sending an email written by an AI and infamously replied, "It's not an AI. I'm just Autistic."


These misclassifications raise concerns about accessibility and how AI-detection software disproportionately impacts neurodivergent and non-native English-speaking students. This could lead to unfair consequences and potentially exacerbate existing biases against already disadvantaged groups. Both Zou and Williams emphasize that more work needs to be done to address these preexisting biases, both in the AI-detection algorithms themselves and within the bureaucracy of academia.

While it's apparent that a lot of work needs to be done to improve existing AI-detection tools, educators will likely keep using them, especially now that trusted academic tools like Turnitin include AI-detection software.

It's now up to universities to ensure proper procedures are in place to handle disputes about AI use in the classroom. Many universities have started considering how AI will interact with the academic sphere. Several have released statements and policies on AI use in academic work and on how they will handle cases of potential academic dishonesty involving AI (read Queen's University's AI policy). It is of the utmost importance that institutions provide clear communication and policies regarding AI tools.

Ultimately, it is the responsibility of universities and institutions to clearly communicate how AI may be used in classrooms, implement investigation procedures that account for biases in AI-detection tools, and ensure students aren't wrongfully accused of academic dishonesty. However, putting proper practices in place takes time. So, in the meantime, I recommend students keep a detailed history of their work in progress, either by using Google Docs to track their edits or by saving multiple time-stamped copies of their work as it progresses.
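For students who prefer working outside Google Docs, the same kind of evidence trail can be kept by hand. The short Python sketch below (the file and folder names are placeholders, not part of any official tool) copies the current draft to a time-stamped file each time it is run, building a simple record of how the work evolved:

```python
import shutil
from datetime import datetime
from pathlib import Path

def snapshot(document: str, archive_dir: str = "draft_history") -> Path:
    """Copy the current draft to a time-stamped file so there is a
    record of how the work evolved if its authorship is ever questioned."""
    src = Path(document)
    dest_dir = Path(archive_dir)
    dest_dir.mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
    dest = dest_dir / f"{src.stem}_{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 preserves file metadata along with the contents
    return dest

# Example: run after each writing session
# snapshot("essay_draft.docx")
```

Run at the end of each writing session, this leaves a folder of dated drafts that can be shown to an instructor if a dispute ever arises.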

This article was written for QMIND — Canada’s largest undergraduate community for leaders in disruptive technology.

References

Dubinski, K. (2023, January 12). Cheating reportedly on rise in universities and colleges, and AI could make it worse, experts say. CBC News. https://www.cbc.ca/news/canada/london/cheating-on-the-rise-in-universities-and-colleges-and-artificial-intelligence-could-make-it-worse-1.6710298

Klee, M. (2023, May 17). Professor flunks all his students after ChatGPT falsely claims it wrote their papers. Rolling Stone. https://www.rollingstone.com/culture/culture-features/texas-am-chatgpt-ai-professor-flunks-students-false-claims-1234736601/

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7), 100779. https://doi.org/10.1016/j.patter.2023.100779

Novak, M. (2023, May 16). No, ChatGPT can't tell you if a paper was written using ChatGPT. Forbes. https://www.forbes.com/sites/mattnovak/2023/05/16/no-chatgpt-cant-tell-you-if-a-paper-was-written-using-chatgpt/?sh=4c7b34ec70e4
