ChatGPT: Revolutionizing Education or Eroding Critical Thinking?

Exploring the Dangers of AI Technology and the Need for Ethical Guidelines in Schools

Haley Hougardy
Artificial Corner
4 min read · May 2, 2023


In George Orwell’s 1984, an authoritarian government restricts the use of language so that complex thought is simply unthinkable. As a result, the Ministry of Truth is free to manipulate literature and historical documents to sedate the masses without being questioned. While Orwell’s 1984 is a work of science fiction, emerging technologies like ChatGPT are not, and they represent an ever-evolving threat to critical thought.

Artificial intelligence tools like ChatGPT use advanced natural language processing to imitate human writing conventions. I, like many, use AI to brainstorm content ideas. However, we must address the potential negative impact of this technology on critical thinking in educational contexts. AI developers continue to push the limits of what is possible, but concerns about cheating and the spread of misinformation are mounting. Amid the excitement of futuristic innovation, establishing guidelines for AI use is crucial to prevent the loss of critical thinking among students.

Author-generated AI image from the prompt, “George Orwell’s 1984 on red cover alongside a computer.”
AI-Generated Image from WriteSonic’s PhotoSonic

Many educators are already taking an aggressive stance against ChatGPT. In January 2023, New York City’s Department of Education banned access to the system on all school devices and networks. Educators fear the tool may discourage critical thinking and prevent accurate assessment of students’ writing proficiency. Bans, however, are difficult to enforce: ChatGPT’s chameleon-like output can masquerade as human work. A 2022 study, for instance, found that ChatGPT can produce natural-sounding text nearly indistinguishable from human-written passages.

Some claim that the benefits of unfettered AI outweigh the disadvantages. AI experts argue, for example, that access to the system will increase efficiency and improve student learning outcomes. AI-powered systems could generate customized learning evaluations that cater to individual needs and interests. This would eliminate the one-size-fits-all approach to education and allow teachers to personalize the learning experience for each student. Instead of assigning the same practice problems to everyone, teachers could leverage AI to create unique problem sets that are more engaging and relevant to students’ interests, potentially improving their motivation to learn. Clearly, AI in education has the potential to deliver numerous benefits for teachers and students alike.

The real issue comes when we allow learners direct access to the tool. When young students use AI writing assistants to complete assignments, they are not learning how to write independently, which can lead to a lack of critical thinking and poor writing skills in the long run. Educators must promote the responsible use of AI tools and encourage students to view them as learning aids rather than shortcuts. However, the question remains: can we trust students who may not yet be old enough to vote to use AI responsibly, even with proper training?

Author-generated AI image from the prompt, “Panic-Inducing AI Robot.”
AI-Generated Image from WriteSonic’s PhotoSonic

Academic dishonesty is a widespread issue in educational settings. Studies show that many students admit to cheating on exams or homework and to committing plagiarism. Serious cheating begins in high school, where the pressure to earn good grades is intense. Findings from the International Center for Academic Integrity suggest that most high school students engage in academic dishonesty: of roughly 70,000 respondents, nearly 95% acknowledged violating their institution’s honor code, 64% confessed to cheating on examinations, and 58% admitted to plagiarism.

Cheaters may feel no guilt, and educators often unknowingly reward dishonesty with higher GPAs. Research conducted at Fordham University revealed a surprising relationship between cheating and GPA: students who engaged in cheating had a higher average GPA of 3.41, while those who refrained from cheating averaged just 2.85. Moreover, cheating skyrocketed during the pandemic with the dramatic shift to online learning and testing. Newsflash: ChatGPT has transformed the landscape of academic dishonesty, making it easier than ever before.

Ultimately, the danger is that students will rely on the tool instead of their own critical thinking to boost their productivity. Despite AI’s potential as an educational tool, we must avoid unintentionally using it in ways that harm students’ learning. If we emphasize critical thinking and creativity, students can use AI writing assistants to enhance their productivity while honing their problem-solving skills and developing a deeper understanding of the material. Even so, the need to monitor students’ AI use persists, especially in light of the research on academic dishonesty.

So, what’s the solution? I’m not entirely sure, but I’m interested in hearing others’ thoughts. Programs like Turnitin have started incorporating AI detection software to discourage dishonest students. However, if AI continues to evolve at its current rate, who is to say it won’t outsmart these detection systems? I ran a piece I created with WriteSonic through several AI detection tools, and the results were unsettling: some identified the content as AI-generated, while others deemed it “100% human.” This discrepancy highlights the inconsistent accuracy of AI detection software and underlines the alarming capabilities of advanced AI writing assistants.

Orwell’s 1984 is a cautionary tale of what can happen when individuals lose their critical thinking ability. Although AI holds great promise for futuristic innovation, it is vital to consider the adverse consequences of such advancements. If we’re not careful, we might not notice how much we rely on it, making it more difficult to think critically and make informed decisions. Therefore, it is crucial to establish ethical guidelines for using AI technology in schools to ensure that it supports critical thinking rather than replaces it.

Artificial Corner’s Free ChatGPT Cheat Sheet

We’re offering a free cheat sheet to our readers. Join our newsletter with 20K+ people and get our free ChatGPT cheat sheet.


Haley Hougardy
Artificial Corner

Writer @ mhanational.org + ex-writing instructor. Here to discuss writing/mental health in digestible ways.