Navigating AI Use in Education

Brian Li
Published in Neurotech@Davis
Dec 6, 2023

In the rapidly evolving field of AI, large language models are reshaping education. This article explores diverse perspectives on AI integration in academia.

In a recent discussion about opportunities in AI, Coursera co-founder and AI expert Andrew Ng emphasized, “AI is the new electricity… it is general-purpose technology, meaning that it’s not only useful for one thing, but it’s useful for lots of different applications.” The proliferation of AI, particularly large language models (LLMs), has broad applications in professional, academic, and personal settings. For example, ChatGPT, an LLM released by OpenAI in November 2022, exploded in popularity and became one of the fastest-growing apps of all time thanks to its versatility and accessibility.² In addition, OpenAI’s introduction of GPTs (generative pre-trained transformers), shareable chatbots that can be customized for a specific purpose, further amplifies the possibilities of LLMs. Moreover, through frameworks such as LangChain that integrate with OpenAI’s models and other open-source conversational models, users can supply their own custom data for an LLM to query, allowing for enhanced accessibility and flexibility. For example, users can load data from a variety of file formats (.pdf, .pptx, .docx) and from popular apps like Gmail, Slack, and Discord, and have an LLM answer questions about it.¹
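Concretely, that workflow, loading your own documents, indexing them, and asking an LLM questions about them, might look roughly like the sketch below, which uses LangChain's retrieval utilities as they existed around late 2023. The file name, question, chunking parameters, and the choice of FAISS as the vector store are illustrative assumptions, not details from this article.

```python
# Minimal sketch: answering questions over a custom PDF with LangChain + OpenAI.
# Assumes the `langchain`, `openai`, `faiss-cpu`, and `pypdf` packages are installed
# and that the OPENAI_API_KEY environment variable is set. The file path and the
# question are hypothetical placeholders.
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

# 1. Load the user's own data (here, a PDF of lecture notes).
docs = PyPDFLoader("lecture_notes.pdf").load()

# 2. Split the document into overlapping chunks suitable for retrieval.
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# 3. Embed the chunks and index them in a local vector store.
index = FAISS.from_documents(chunks, OpenAIEmbeddings())

# 4. Build a retrieval-augmented QA chain on top of a chat model.
qa = RetrievalQA.from_chain_type(llm=ChatOpenAI(), retriever=index.as_retriever())

print(qa.run("Summarize the key points of lecture 3."))
```

The same pattern applies to other sources: swapping the loader (for example, one of the community data loaders for Gmail, Slack, or Discord) changes where the text comes from, while the indexing and querying steps stay the same.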

With the rise of these conversational models, the academic applications of AI have become a growing concern among educators. According to a recent UNESCO global survey of 450 schools and universities, fewer than 10% have formal guidance concerning the use of generative AI. As a result, the responsibility has fallen to individual departments and classes to set their own policies on the degree to which generative AI is allowed. This decentralized approach raises critical questions about ethical considerations, potential biases, and the standardization of AI usage in academic settings. Furthermore, the lack of a comprehensive policy framework burdens educators and administrators, leaving them with varying degrees of understanding of how to navigate the integration of AI into education.

To understand these concerns, consider three general attitudes toward the use of AI, particularly LLMs, in education. Placing these viewpoints on a spectrum, let the leftmost point represent “considering all AI-generated work as plagiarism” and the rightmost point represent “embracing all AI-generated work.” The third view lies in the middle, permitting some uses of AI while restricting others.

Skeptics of AI-generated work commonly advance two arguments: preserving academic integrity and encouraging critical thinking. Treating all AI-generated work as plagiarism upholds the traditional standards of academic integrity, ensuring that students are responsible for their own understanding and effort. In turn, students are encouraged to engage deeply with the course material rather than relying on LLMs.

On the other hand, proponents of embracing AI-generated work argue that blanket bans hinder the integration of technology into education and disregard the real-world applications of AI tools. By embracing AI-generated work, students become more willing to explore and learn from the benefits of AI, such as personalized learning and timely feedback. Additionally, students who learn how to use AI appropriately and effectively in school will be more familiar with AI tools when they enter the workforce.

The third view suggests a middle ground, acknowledging the complexities surrounding the use of AI in education. Its proponents argue that the benefits of AI can be harnessed without sacrificing the development of critical thinking and problem-solving skills, and they advocate integrating AI in a controlled and purposeful manner. Setting aside the arguments for and against AI-generated work outlined above, opponents of this view question where the cutoff lies before AI use counts as plagiarism. Without universally applicable guidelines for the use of AI in education, schools that allow some AI use will need to evaluate potentially plagiarized, AI-generated work on a case-by-case basis.

Moreover, any tolerance of AI-generated work carries ethical considerations, such as data privacy and algorithmic bias, that must be taken into account.

As AI becomes increasingly popular, students await the establishment of formal guidelines for the use of AI by educational institutions. In the meantime, discussing the benefits, concerns, and ethics of integrating AI into education is paramount, as these decisions will shape how students learn in the future. Moving forward, as with any technology, the impact of AI will vary based on how it is managed, regulated, and consumed.

References

¹ See LlamaHub for an open-source library of data loaders for LLMs, usable with LlamaIndex and/or LangChain.

² Various other LLMs besides ChatGPT, such as Google’s Bard and Anthropic’s Claude, have also increased in popularity.
