AI in Education

Let’s face it: it’s an “and” statement, not an “or.” AI could help teachers if used appropriately.

Jenn Sayre
New Writers Welcome
5 min read · Aug 1, 2024


Image of a rested-looking teacher supported by AI grading — generated by the author via Adobe Firefly

As with many technological leaps, life as we know it changed with the emergence of generally accessible generative artificial intelligence, such as OpenAI’s ChatGPT.

Given their novelty, people have mixed feelings about these tools. Some are fascinated by their potential, while others are concerned about the implications, depending on the information they encounter and the people they talk to.

At this point, it is clear that generative AI will accelerate workstreams, increase productivity tremendously, and transform teaching and learning.

Whichever camp we fall into, we must apply it thoughtfully instead of throwing it indiscriminately at every scenario.

Unfortunately, we already see unscrupulous behavior in business and academia involving the surreptitious use of generative artificial intelligence. Tools like ChatGPT offer transformative potential in education by automating tasks for educators and facilitating learning for students.

However, their integration requires careful implementation to prevent misuse and to educate students on AI’s ethical use.

Transformation Calls for Rigor

Consider the era when internet access became widespread, democratizing information access for learners. While this marked a significant advancement, it also brought challenges that underscored the increasing importance of critical thinking. Today, the ability to discern credible sources remains paramount, just as during the Internet’s initial proliferation.

The earlier net positive of internet access also had downsides, notably an upswing in plagiarism due to the ease of accessing and replicating content. Learners seeking shortcuts to demonstrating mastery of a topic found it more straightforward to present others’ information as their own (Bailey, 2019), giving rise to a burgeoning industry of plagiarism-detection platforms like Turnitin.

Educators swiftly voiced concerns that students might misuse AI tools by submitting generated content in their assignments and presenting it as their original work.

Regrettably, these concerns proved valid. An acquaintance of mine faced repercussions in June 2023 when they used ChatGPT to expand a chapter of their graduate thesis, raising ethical concerns in their department. They were fortunate to be given a second chance to rewrite it independently.

Policy Gaps

Notably, no cohesive, standard academic policy on this currently exists, and there is a lack of reliable tools to detect AI-generated content accurately. While some tools (including Turnitin) claim to do so, their accuracy leaves much to be desired (Williams, 2023), and they occasionally even misattribute well-known, human-authored works to artificial intelligence.

The infamous sanctions against attorneys Steven Schwartz and Peter LoDuca and their firm, Levidow, Levidow &amp; Oberman, over inappropriate ChatGPT usage in the Avianca case captivated me as they played out in 2023.

The judge’s ruling highlighted that the real issue was not the use of artificial intelligence per se but rather the ethical lapse of failing to review and verify the content, compounded by the lawyers’ continued insistence on the veracity of the fabricated cases ChatGPT had generated (Merken, 2023).

Students have made the same mistake, underscoring the importance of critically analyzing and cross-referencing AI-generated content to avoid unwittingly propagating false or fabricated information. The risk of students placing unwarranted trust in generated content without their own critical analysis and review is concerning.

If students lack familiarity with a topic, which is a probable scenario, and do not cross-reference with reliable sources, they risk presenting hallucinated ideas as their own, perpetuating misinformation.

Benefits for Educators

It’s not all bad, though.

As with many issues, artificial intelligence is a gray area with tremendous potential when deployed and used thoughtfully. Educators can benefit significantly from integrating generative AI to automate repetitive or tedious tasks.

Consider one of educators’ most common complaints: how much of their time grading consumes. This task, especially where free-form responses or papers are concerned, has historically demanded manual review and analysis. With generative AI, an educator could supply a tool with the assignment rubric, then submit student papers for analysis and markup that directs them to specific areas needing a more in-depth review. One nuance of this use case is that it requires a tool and model that does not continue to train on student content.

Additionally, educators can use AI to identify blind spots in lesson plans relative to the intended curriculum.
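To make the grading use case concrete, here is a minimal sketch of a rubric-guided first pass. It assumes the OpenAI Python client as the backing tool; the model name, rubric text, and prompt wording are illustrative placeholders, and any real deployment would need a vendor agreement that excludes training on submitted student work.

```python
# A minimal sketch of rubric-guided, first-pass feedback on a student paper.
# Assumes the OpenAI Python client; the model name, rubric, and prompts are
# illustrative placeholders, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = """
Thesis clarity (0-4), use of evidence (0-4), organization (0-4), mechanics (0-4).
"""

def draft_feedback(paper_text: str) -> str:
    """Return draft markup for the educator to review -- not a final grade."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You are assisting a teacher. Score the paper against "
                        "the rubric and flag passages that need the teacher's "
                        "close review. Do not assign a final grade."},
            {"role": "user",
             "content": f"Rubric:\n{RUBRIC}\n\nStudent paper:\n{paper_text}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_feedback("The causes of the French Revolution were..."))
```

The design point is that the model only drafts feedback and flags passages; the educator still reviews the flagged areas and makes the final judgment.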

These applications of generative AI address a vast opportunity and could have a significant, positive quality-of-life impact on our educators. The United States faces a shortage of educators (US Department of Education), both because people are leaving the profession and too few are entering it. Technology that improves educators’ quality of life and reduces their time burden could make a massive impact on staffing levels and, therefore, on education quality and student outcomes.

Benefits for Students

Integrating generative AI into education necessitates reevaluating what and how we teach. Ethical uses of artificial intelligence for students include:

  • generating a list of sources to investigate in their research
  • generating flashcard content (a sketch follows this list)
  • using it to help them stay organized
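
As a rough illustration of the flashcard idea, the sketch below asks a model to turn a student’s own notes into question-and-answer pairs. It again assumes the OpenAI Python client, and the model name, prompt wording, and JSON shape are placeholders rather than a recommendation.

```python
# A rough sketch of generating study flashcards from a student's own notes.
# Assumes the OpenAI Python client; model name, prompt, and JSON shape are
# placeholders for illustration only.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def make_flashcards(notes: str, count: int = 5) -> list[dict]:
    """Ask the model for question/answer pairs based only on the supplied notes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Create study flashcards strictly from the notes provided. "
                        'Reply as JSON: {"cards": [{"question": "...", "answer": "..."}]}'},
            {"role": "user", "content": f"Make {count} flashcards from:\n{notes}"},
        ],
    )
    return json.loads(response.choices[0].message.content)["cards"]

for card in make_flashcards("Photosynthesis converts light energy into chemical energy..."):
    print(card["question"], "->", card["answer"])
```

Constraining the model to the supplied notes keeps the exercise grounded in material the student has already studied rather than in content the model invents.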

Beyond developing research, critical analysis, and synthesis skills, students must learn about generative AI’s responsible and ethical use.

Education will need to evolve to encompass skills like prompt engineering and refinement, just as many of us had to develop effective keyword-search skills. Artificial intelligence will become another tool in our toolkit that we must learn to use responsibly, just as we learned to use computers and the internet.

The Bottom Line

In conclusion, generally accessible generative AI has introduced a paradigm shift in education. While it holds significant potential in the form of automated grading support and augmented learning experiences, it necessitates careful, nuanced oversight and education about its ethical use.

Integrating AI into education thoughtfully is crucial to leveraging its productivity gains while ensuring learners still cultivate critical thinking and responsible AI engagement. Combined with informed, ethical use, that responsible integration can significantly enhance the learning process for educators and students alike and better prepare students for a future increasingly shaped by artificial intelligence.

References:

Bailey, J. (2019, January 29). 5 historical moments that shaped plagiarism. Turnitin Blog. https://www.turnitin.com/blog/5-historical-moments-that-shaped-plagiarism

Merken, S. (2023, June 26). New York lawyers sanctioned for using fake ChatGPT cases in legal brief. Reuters. https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/

Williams, R. (2023, July 14). AI-text detection tools are really easy to fool. MIT Technology Review. https://www.technologyreview.com/2023/07/07/1075982/ai-text-detection-tools-are-really-easy-to-fool/
