Teaching New Tech: AI Chatbot Education in Middle Schools

Polis: Center for Politics
4 min read · Nov 29, 2023

Sofia Bliss-Carrascosa (PPS ‘25)

When fed a prompt, ChatGPT can write an essay, then grade it, simultaneously replacing the student’s and the teacher’s labor. It’s easy, free, and widely known. Students are already using it; schools need to respond.

Given the ubiquity of AI chatbots, K-12 students will inevitably use them; school systems must implement a curriculum that teaches students and educators how to use these tools appropriately while maintaining academic and intellectual integrity. Technology can improve a holistic education, but only if its use is purposeful. Educators have two main concerns regarding student use of chatbots: false information and cheating.

AI chatbots like ChatGPT are deep learning models, trained on large amounts of data to recognize complex patterns and draw conclusions. They are paired with natural language processing techniques to parse and formulate sentences based on typical speech patterns. Essentially, ChatGPT is a predictive language model: it responds by producing the most likely combination of words. It cannot distinguish fact from fiction; it is trained only to predict which word is statistically likely to come next, and there is no guarantee that its claims are valid.
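
To make that mechanism concrete, here is a minimal sketch of next-word prediction using the open-source Hugging Face transformers library, with the small GPT-2 model standing in for ChatGPT. The model choice and prompt are illustrative assumptions (ChatGPT's own model is far larger and proprietary), but the core mechanism is the same.

```python
# Next-word prediction in miniature: GPT-2 as a small, public stand-in
# for the much larger model behind ChatGPT (an illustrative assumption).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The first president of the United States was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model assigns a score (logit) to every token in its vocabulary.
    logits = model(**inputs).logits

# The "answer" is simply the statistically most likely next token;
# no fact-checking happens at any point in this process.
next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode(next_token_id))
```

Run on this prompt, the model will likely print " George", not because it knows any history, but because that token has the highest predicted probability. Fluency and factual accuracy are entirely separate questions.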

According to Impact Research polling, almost half of students aged 12–14 already report using ChatGPT for schoolwork, and over three-quarters of student users say it has a positive impact. Among other tasks, students may use ChatGPT to conduct research (anything from gathering background knowledge to treating it as a main source), summarize readings, brainstorm ideas for a project, or even complete assignments outright. Over-reliance can hinder students’ learning and facilitate deception. This prompts concerns about academic integrity, as teachers have no reliable way to determine whether a student has plagiarized an assignment from a chatbot.

School responses vary widely: some have banned ChatGPT on campus, while others go as far as incorporating it into their teaching. Not all teachers’ views of ChatGPT are negative: half of all teachers report using it to create lessons, and two-fifths encourage their students to use it. Even among educators who embrace AI technologies like ChatGPT, valid concerns about falsehoods remain.

Treating ChatGPT as a reliable source leaves students vulnerable to misinformation. OpenAI, ChatGPT’s creator, warns that the program can “make up facts,” “hallucinate outputs,” and “produce harmful instructions or biased content.” Google’s Bard, an AI chatbot intended to serve a similar function, generated misinformation in 78 of 100 tests, making egregious statements like “the Holocaust never happened.” Efforts to improve the factuality of this technology have yet to succeed: OpenAI heralds the newly released GPT-4 as an improvement, yet it is even less effective than its predecessor at suppressing disinformation. As tech companies fail to remediate these problems, ethical use, defined here as navigating misinformation and other improper uses, becomes the responsibility of consumers. Our educational system must give students the tools to critique this information.

Implementing such a curriculum at the onset of AI chatbot use will enable students to benefit responsibly from the technology, maximizing their educational attainment and minimizing their susceptibility to harm. Education on how chatbots work, including their utility and limitations, can be integrated into existing tech literacy coursework. Most importantly, teachers should set expectations for appropriate use at the beginning of every course, regardless of the academic subject.

- Teachers must define what behavior constitutes cheating and what does not. Submitting an essay written by AI may be inappropriate, but using it to generate a list of potential topics might not be. Educators should recognize that an outright ban on AI use is ineffective and unlikely to be followed; even if classrooms are tech-free, educators cannot control the learning that happens at home. Reasonable standards encourage adoption.

- Teachers should engage in open conversations about the benefits, drawbacks, and limitations of AI. Students should be encouraged to interrogate the technology and made aware of both its usefulness as a tool and its fallibility.

- AI should not be used to verify information or serve as a citable source. Students must be required to corroborate any information drawn from ChatGPT with a credible source.

If equipped with knowledge of when AI tools are helpful and when they are not, along with clear and reasonable expectations, students will be primed to succeed in academic settings. Above all, critical thinking is essential: it encourages learning outside the classroom and insulates students against the falsehoods chatbots tend to perpetuate. These are core principles of teaching; they simply need to adapt to our ever-changing technological landscape.

Sofia Bliss-Carrascosa is from Palo Alto, CA, and an undergraduate at Duke University’s Sanford School of Public Policy. This piece was submitted as an op-ed in the Spring ‘23 PUBPOL 301 course. This content does not represent the official or unofficial views of the Sanford School, Polis, Duke University, or any entity or individual other than the author.
