The issues of “AIgiarism” in higher education

Luke Beales
Published in Digital Society
6 min read · Mar 15, 2024
Photo by Growtika on Unsplash

In late 2022, the release of ChatGPT showed the world that the future had arrived early. AI was no longer a science-fiction trope; it was settling into everyday households, with chatbots generating responses to almost any question put to them. Its effect on education was swift: according to a 2023 survey, 95% of Americans believed that students used AI to assist with their assignments to some degree. In this blog, I will cover how chatbot plagiarism threatens the higher education sector, and the ways universities have tackled, and could tackle, the challenges it poses.

Photo by Christin Hume on Unsplash

Termed ‘AIgiarism’ by venture capitalist Paul Graham, plagiarism of AI-generated content has forced many educational institutions to ban, or at least enforce strict policies on, the use of chatbots such as ChatGPT. The University of Manchester, for example, states that AIgiarism ‘must be treated using the same principles and processes as plagiarism of a person’. One reason for this is equity: cheating may unfairly raise a student’s grades above those of peers who earned their marks honestly. From an individual point of view, meanwhile, fellow Digital Society member Ching Jung Hsu suggests that prolonged use of AI for academic purposes may weaken one’s critical analysis skills, which I would argue stems from reduced practice in examining material for oneself.

Photo by Emiliano Vittoriosi on Unsplash

Nonetheless, from a purely academic view, not only can chatbots such as ChatGPT produce misinformation, but it can also be difficult to separate truth from falsehood because citations for generated material are often missing or fabricated. These drawbacks can be seen in the picture below, where I asked ChatGPT to produce an analysis of how chatbots can affect higher education. The aim was to show that AI could generate a plausible paragraph matching the topic (and style) of this blog.

An example paragraph generated by ChatGPT

Referring to the ‘equitable’ view, while the paragraph may not go into enough detail to compete with a top-performing analysis, it is certainly coherent and critical enough to receive a higher mark than deserved. From the academic stance, however, the fluency of the writing creates a façade of credibility: despite the convincing impression it gives, the lack of citations undermines the validity of the chatbot’s arguments, so prior knowledge is needed to judge the soundness of any given claim. To somebody inexperienced in the field, the paragraph’s claims about algorithmic bias and compromised privacy, for example, cannot be taken as factual without verification from outside sources, and accepting them unchecked risks spreading misinformation.

Photo by Nathan Dumlao on Unsplash

So what has been — and what can be — done about this?

As previously mentioned, restrictions on chatbots have already been put in place, from schoolwide (and even countrywide) bans to controlled usage and harsh sanctions. However, setting rules doesn’t mean they will be followed: enforcement is needed. One way this has been done is through AIgiarism checkers, with companies such as Turnitin, Copyleaks, GPTZero, Sapling, and ZeroGPT currently on the market and being used within the education sector to find those who AIgiarise.
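How might such a checker work under the hood? Most vendors keep their methods private, but GPTZero’s creator has publicly described perplexity (roughly, how predictable a text looks to a language model) as one of its core signals. The sketch below illustrates that general idea only; the choice of GPT-2, the threshold, and the decision rule are my own assumptions, not any vendor’s actual implementation.

```python
# A minimal sketch of a perplexity-based AI-text heuristic: text that a
# language model finds highly predictable (low perplexity) is treated as
# more likely to be machine-generated. Model and threshold are
# illustrative assumptions, not a real detector's implementation.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2: exp of the mean token loss."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

THRESHOLD = 50.0  # purely illustrative; real detectors calibrate this

essay = "The industrial revolution transformed the nature of work."
score = perplexity(essay)
print(f"perplexity = {score:.1f} ->",
      "flag for review" if score < THRESHOLD else "looks human-written")
```

Even this toy version hints at why such tools misfire: polished, formulaic human writing can look ‘predictable’ too, which brings us to the drawbacks below.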

Photo by KOBU Agency on Unsplash

However, such software has its drawbacks. As Dr. Sarah Eaton, a professor at the University of Calgary, states, too many educators see these tools as “a silver bullet [that] can help them do the difficult work of identifying possible academic misconduct”. As it turns out, these AI plagiarism checkers are often incorrect, producing false positives (flagging honest work as AI-generated) as well as false negatives. The University of Manchester is aware of this and, in an AI teaching guidance document, states that “Tools to detect AI-generated content are unreliable and biased and must not be used to identify academic malpractice in summative assessment”.
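To see why even a small error rate matters at scale, a back-of-the-envelope sketch helps. All of the figures below (submission volume, prevalence of AI use, detector error rates) are illustrative assumptions, not measured values:

```python
# Illustrative base-rate arithmetic: even a small false positive rate
# wrongly flags many honest students when most submissions are genuine.
# Every number here is an assumption for the sake of the example.
submissions = 10_000   # essays checked in a year
ai_rate = 0.10         # assume 10% are genuinely AI-written
fpr = 0.01             # detector flags 1% of human-written essays
tpr = 0.80             # detector catches 80% of AI-written essays

ai_essays = submissions * ai_rate
human_essays = submissions - ai_essays

true_flags = ai_essays * tpr        # 800 correctly flagged
false_flags = human_essays * fpr    # 90 honest students accused

precision = true_flags / (true_flags + false_flags)
print(f"Honest students wrongly flagged: {false_flags:.0f}")
print(f"Chance a flag is correct: {precision:.0%}")  # ~90%
```

Under these assumptions, ninety honest students a year would face a misconduct process, from a detector that misjudges human writing only 1% of the time. That is precisely the kind of harm the Manchester guidance warns against.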

Photo by Dom Fou on Unsplash

So, if AI detection software is currently unreliable, what should be done about AIgiarism? Turnitin offers a solution: have a conversation with the student and get to know their writing style and background. While this sounds plausible on paper, it has multiple problems. Firstly, as a mathematics student, I have been enrolled in modules with over two hundred students at a time, taught face-to-face by only one or two lecturers. In such modules, it would be difficult for staff to learn each student’s writing style or background, especially for students with low attendance. Secondly, if the conversation Turnitin refers to is a confrontation over suspected AIgiarism, the student could simply lie, and without reliable software it would be very difficult to prove that a chatbot was used.

Photo by Scott Graham on Unsplash

Thus, my proposal for combating AIgiarism is not to directly identify and penalise those who do it. Or at least not for now: should AI detection software become more reliable in the future, perhaps it will earn a place in higher education. For a present-day solution, however, I believe a more analogue approach would be of use.

A section of ChatGPT’s response to “Prove the Black-Scholes equation”

Changing the assessment style so that it either permits AI use or renders it pointless would mean that chatbot use no longer requires punishment. For example, in mathematical subjects, heavier weighting on proof-based questions may cause chatbots such as ChatGPT to struggle. The picture above shows a section of the response ChatGPT generated when I asked it to prove the Black-Scholes equation; as can be seen, it fails to form an explicit proof, instead directing me to academic papers. Another change in assessment style, one that could benefit a wider range of subjects, is greater use of in-person examinations, where AI cannot be used without resorting to traditional analogue cheating methods.
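For context, the Black-Scholes equation referred to above is the partial differential equation satisfied by the price V(S, t) of a European option on an asset with price S, volatility σ, and risk-free interest rate r:

```latex
% The Black-Scholes partial differential equation for an option
% price V(S,t), with volatility \sigma and risk-free rate r.
\[
  \frac{\partial V}{\partial t}
  + \frac{1}{2}\sigma^{2} S^{2} \frac{\partial^{2} V}{\partial S^{2}}
  + r S \frac{\partial V}{\partial S}
  - r V = 0
\]
```

Deriving it rigorously requires Itô’s lemma and a delta-hedging argument; it is exactly this kind of multi-step reasoning that the chatbot deflected to the literature rather than carrying out.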

Photo by Andrew Neel on Unsplash

In conclusion, despite the opportunities chatbots offer the higher education sector, they present numerous challenges too. Universities and colleges have tried to tackle these issues with AI checkers, but the efficacy of such software is questionable. Fighting AI within the education sector seems somewhat futile without legal intervention; however, if universities learn to work alongside ever-improving AI with tactful assessment design (rather than brute force via counter-software), then perhaps the future of education will not be tainted by AIgiarism.
