Eight Things Students Should Know About AI

Benjamin Klieger
10 min read · Apr 23, 2024


While there are many types of AI, this article will focus on the newest popular category of tools: text-generating AI, also known as Large Language Models (LLMs). As with all tools and technologies, AI can be used for good or bad. It can advance your education if used well, or impede it if used unproductively. If you want ideas for how to use AI productively in education as a student, you can learn from online resources, talk with your teachers, develop your own experience by testing new tools, and even discuss with an AI chat tool such as ChatGPT. This article aims to provide a list of points every student should know about AI, written from a student perspective by a current high school senior.

1. How does AI work?

Newer text-generating AI tools, such as OpenAI’s ChatGPT, Microsoft’s Bing Copilot, and Google’s Gemini, are Large Language Models (LLMs) that learn from a large amount of data in order to produce conversational responses to human prompts. Many have been trained on sources such as much of the text on the internet, Wikipedia, and digitally published books.

While LLMs can generate content that would require reasoning if produced by a human, an LLM is not a machine that is thinking, nor one that is calculating. Instead, LLMs mimic reasoning by producing a response one small piece of text at a time, rather than thinking in concepts like humans do.

When an LLM generates a sentence, it predicts the next token (put simply, the LLM’s version of a word) one at a time. After computing probabilities for a set of likely next words, the LLM selects one, appends it to the output, and then moves on to generate the following word.

Example prediction of the next word by an LLM after “my favorite food is”: pizza 53%, sushi 30%, tacos 5%. (Source: DeepLearning.AI)
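To make this concrete, here is a toy Python sketch of next-word prediction. It is not how any real LLM is implemented; the vocabulary and probabilities below are made up for illustration, and a real model would compute them with a neural network over tokens.

```python
import random

# Toy "model": a made-up table of next-word probabilities for two contexts.
# Real LLMs compute these probabilities with a neural network.
NEXT_WORD_PROBS = {
    "my favorite food is": {"pizza": 0.53, "sushi": 0.30, "salad": 0.12, "tacos": 0.05},
    "my favorite food is pizza": {"because": 0.6, "with": 0.3, "on": 0.1},
}

def generate(prompt, steps=2):
    text = prompt
    for _ in range(steps):
        probs = NEXT_WORD_PROBS.get(text)
        if probs is None:  # our toy table has no prediction for this context
            break
        words = list(probs)
        weights = list(probs.values())
        next_word = random.choices(words, weights=weights)[0]  # weighted random pick
        text = f"{text} {next_word}"  # append the chosen word and continue
    return text

print(generate("my favorite food is"))
```

Each pass through the loop does exactly what the description above says: look up a distribution over possible next words, pick one, append it, and repeat.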

Before the release of ChatGPT, you were likely already familiar with using a language model: the autocomplete feature in iMessage is a language model that predicts the next word. LLMs are *Large* Language Models because they have been trained on a far larger collection of text, giving them greater knowledge and ability than your phone’s autocomplete.

To generate more creative and diverse responses, a randomness factor (or “temperature”) is often introduced, effectively adjusting the likelihood of selecting each word so that choices that would not have been highly probable can still be considered.

Result of adding temperature, a randomness factor, when choosing the next word, shown across several runs. (Source: DeepLearning.AI)
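As a rough illustration of how temperature works (real models apply it to internal scores called logits rather than directly to finished probabilities, so this is a simplification), the sketch below shows how raising the temperature flattens the distribution, while lowering it sharpens the distribution toward the most likely word:

```python
import math
import random

def apply_temperature(probs, temperature):
    # Convert probabilities to log-scores, divide by temperature, and re-normalize.
    # Higher temperature flattens the distribution, so unlikely words appear more often;
    # lower temperature sharpens it toward the most likely word.
    scores = {word: math.log(p) / temperature for word, p in probs.items()}
    total = sum(math.exp(s) for s in scores.values())
    return {word: math.exp(s) / total for word, s in scores.items()}

next_word_probs = {"pizza": 0.53, "sushi": 0.30, "tacos": 0.05}

for temperature in (0.2, 1.0, 2.0):
    adjusted = apply_temperature(next_word_probs, temperature)
    choice = random.choices(list(adjusted), weights=list(adjusted.values()))[0]
    rounded = {word: round(p, 2) for word, p in adjusted.items()}
    print(f"temperature={temperature}: {rounded} -> picked '{choice}'")
```

Run it a few times: at temperature 0.2 the model almost always picks “pizza,” while at 2.0 “sushi” and “tacos” show up noticeably more often.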

This randomness produces more creative output from the LLM. Together, these mechanisms underlie AI models that can accomplish impressive feats.

2. What can AI do?

LLMs can be highly capable in some academic areas; for example, GPT-4 scores within the top ~10% of SAT test takers and attains a near perfect score on the AP Macroeconomics exam [1]. LLMs can take on the role of a teacher, tutor, student peer, and more. They can provide individualized support by explaining complex topics in simpler language, adapting topics to your interests and learning preferences through custom activities, and testing your understanding with mock quizzes. Additionally, LLMs are available 24/7 to help you brainstorm or answer any clarifying questions, as in the examples below.

Example of GPT-4 explaining a topic
Example of Tutor-GPT testing student understanding

LLMs are capable of helping in all class subject areas, from English to mathematics, across grade levels from elementary school to university. The optimal usage of AI may differ depending on the context of the grade level and subject area. For English classes, you may find AI best at being a brainstorming partner, while for mathematics it can explain new concepts.

It is important to note that not all AI tools have the same capabilities. For instance, the difference between GPT-3.5 and GPT-4 is significant: while GPT-4 scored a 5 on the AP Macroeconomics exam, GPT-3.5 scored only a 2. In addition, AI capabilities are expanding and improving rapidly. Besides becoming more accurate at solving text-based problems, GPT-4 can now view and interpret images such as mathematical charts or artwork, and read through files such as class readings.

Differences between GPT-3.5 and GPT-4 performance; green highlights the gains from GPT-3.5 to GPT-4. (Source: OpenAI)

3. What roles can AI take?

LLMs can augment your education in many ways beyond simply answering a question or completing an assignment. This includes guiding your ideas and thought process, assisting with or providing iterative feedback on drafts, making the research process more efficient, and much more. LLMs can take on a multitude of different roles that can be used productively for your learning, some of which are described in the table below. To ensure AI enhances rather than replaces your learning process, using LLMs well requires engagement, reflection on the problem and your prior knowledge, and even some productive struggle with the assignment.

Potential roles of AI in education, from UNESCO, 2023

To ensure AI is building and augmenting your genuine understanding of a topic, its role should extend beyond providing answers or explanations to questions. For all topics, both creative and analytical, improving your skills and knowledge requires engaging with and practicing the problem-solving process.

For instance, the writing process (idea formation, organization of points, structured argument, converting ideas into clear sentences, and revision) offers greater value for learning than the end result alone. The writing process can teach you how to brainstorm, think critically, organize ideas, and convey them in a clear and structured manner. Using AI in a manner that bypasses this process is an impediment to long-term learning, but using AI in a manner that exercises these skills can actually reinforce that learning.

4. How are AI’s capabilities expanding?

New, powerful tools are being developed that expand AI’s capabilities beyond generating text based only on past knowledge. For instance, Perplexity AI equips AI with access to the internet, allowing you to ask questions and receive answers grounded in real-time information with citations.

Recent versions of ChatGPT have access to powerful mathematical tools like Wolfram Alpha and the ability to execute code, enabling them to give accurate math answers by running computations. AI is also being integrated into popular platforms such as Quizlet and Google Docs, where it can take advantage of your previously uploaded materials while fitting into your existing workflow.
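As a rough sketch of the general idea behind this kind of tool use (the ask_model function below is a hypothetical stand-in for a real LLM call, not an actual API), the surrounding program lets the model request a calculation, runs it exactly, and feeds the result back so the final answer is precise:

```python
# Toy illustration of the tool-use loop. ask_model is a hypothetical stand-in
# for a real LLM; real systems would send the question to an AI model's API.

def ask_model(question, tool_result=None):
    if tool_result is None:
        # The "model" recognizes the question needs arithmetic and requests a tool call.
        return {"tool_call": "137 * 2481"}
    # Once the tool result is passed back, the "model" writes the final answer.
    return {"answer": f"137 times 2481 is {tool_result}."}

def run_calculator(expression):
    # Stands in for a code interpreter or a service like Wolfram Alpha.
    return eval(expression, {"__builtins__": {}})  # toy calculator only; avoid eval in real code

question = "What is 137 times 2481?"
request = ask_model(question)                  # model asks for a calculation
result = run_calculator(request["tool_call"])  # the program runs it exactly
print(ask_model(question, tool_result=result)["answer"])
```

The key point is that the language model does not do the arithmetic itself; it delegates the computation to a tool that is guaranteed to be exact.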

In addition to plugging into these new platforms, AI is increasingly becoming multimodal, able to read files and images and understand audio as input.

Asking Perplexity.ai a query in March 2024 about the latest discoveries in the field of biology
Promotional video of Q-Chat, an AI tutor with access to Quizlet notecards, from Quizlet, in 2024
Clara AI, a writing tutor directly integrated as a collaborator in Google Docs
Bruno, an AI that listens to student conversations in group work and can provide feedback to improve the collaborative dynamic

5. What are AI’s limitations?

LLMs make predictions, which means their outputs can be incorrect or subjectively poor. This includes responding with incorrect information, either about the topic or about your conversation history with the AI. For instance, an AI could state that the American Revolutionary War began in 1772, when it actually started in 1775. You could also tell an LLM the name of the main character in a book you are reading, only for the AI to later mistake the name for a different one. These inaccuracies are often called “hallucinations.” Hallucinations can be small, such as a mistaken date, or large, such as fabricated publications or entire events.

LLMs can also produce outputs that reflect biases embedded in their training data, which consists largely of text from the internet, Wikipedia, and digital books. For example, asking about historical events may produce responses that omit or overlook perspectives that are underrepresented in the training data. Responses produced in languages that are less represented may be of lower quality [2].

LLMs also have significant reasoning gaps that most humans do not have. For instance, the same LLMs that attain high scores on standardized tests often find it very difficult to build a rhyme scheme other than AABB, and will incorrectly claim that two words rhyme when that is clearly not the case.

ChatGPT-4 in March 2024 attempting to create an ABAB rhyme scheme. Both attempts are incorrect, and ChatGPT labels “light” and “meet” as rhymes.
Claude 3 Sonnet in March 2024 attempting to create an ABAB rhyme scheme. Claude’s attempt is incorrect, and it labels “hue” and “sight” as rhymes.

These gaps underscore that LLMs process information and produce responses differently from humans: they can accomplish things most humans cannot and make mistakes most humans would not. Other limitations depend on the model. Some AI tools have a finite corpus of knowledge that may not include recent events, or do not retain memory of previous interactions outside of your individual session with the LLM.

How should students respond? These limitations mean you should always critically analyze the output of AI and understand that it can make elementary mistakes on complex problems. Treat AI’s output the same way you would a peer’s: the peer may have more knowledge than you on the topic, but is not infallible.

6. How should you talk to AI?

Most new AI platforms are built with a conversational interface through which you can talk to them much as you would to a human. This is by design, to make interacting with AI feel more familiar, and it is also advantageous: many insights from human conversation carry over to AI. Specifically, AI behaves surprisingly like a human in many ways:

a. Telling an LLM that it will receive a reward or punishment for its level of performance has been shown to increase the quality of the output.

b. Instructing the AI to embody a specific persona, such as a “skilled mathematics teacher” or “experienced writing instructor,” can also improve the output.

c. Providing examples of the kind of output you are expecting is also very beneficial.

d. Asking the LLM to provide a sequential thought process, or “chain of thought”, also improves performance because it simulates giving the AI “time to think” [3].

The pursuit of improving AI’s output by modifying how we speak to it is called prompt engineering. Other suggestions for prompting AI, such as specifying the length of the desired output, are included in OpenAI’s documentation. These tips apply to OpenAI’s ChatGPT and overlap with recommendations for interacting with other LLMs.
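For example, a single prompt that combines a persona, an example of the desired output, and a request to work step by step (the wording below is just one possible illustration, not an official template) might look like this:

```
You are an experienced high school chemistry teacher. I am a student preparing
for a quiz on balancing chemical equations. Here is an example of the kind of
explanation I find helpful: "Start by counting each type of atom on both
sides, then adjust coefficients one element at a time." Explain how to balance
the equation for the combustion of methane in that style, working through it
step by step, and then give me two practice problems of similar difficulty.
```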

You should experiment with these prompting practices and determine what works best for you. While there are many strategies you can employ, a practical approach is to consult prompt engineering guidelines once you hit a task that the AI is having difficulty accomplishing.

7. What are additional considerations for AI use?

When considering how and when to use AI, you should be mindful of your school’s policy on AI usage. These policies will differ from school to school, and may range from a full ban to a full allowance. They may also leave some discretion to your teacher and sometimes even vary depending on the assignment. You should fully abide by these policies and be transparent when you have used AI by citing the model and how it was utilized.

Data privacy is also an important consideration when using LLMs. Many LLM chatbots have data policies under which your conversations may be recorded for training future models or for human review. Some services offer ways to opt out of these programs. Do not share sensitive information or any material you would not feel comfortable with another person seeing.

Like LLMs, platforms that attempt to detect AI writing only provide likelihoods and should not be used for any disciplinary action. Their scores are better understood as the similarity between the given text and what an AI might generate, rather than the chance that a particular text was AI generated. This distinction is important: a human can write in a way that is similar to how an LLM would generate text, and a false positive may result. After all, LLMs are trained to write like humans.

8. What is the future of AI?

AI tools are changing rapidly, and many of the limitations discussed here will improve over time. In addition, future versions of AI will become more capable and open up new possibilities for usage. The principles in this article have been written with this future in mind, and thus much of the content will continue to be relevant. However, it is worthwhile to consider the impact of new developments on this article’s suggestions as they occur.

AI is changing the world, with the potential to impact just about every aspect of our lives. Thus, it is helpful for you to think about how AI will impact the future of work. This is the subject of much debate, but it seems clear that AI will have a notable role in creating new jobs and transforming existing ones. Staying up to date with the current technology is the best way to be prepared for these changes. You should take advantage of opportunities to explore how to use AI effectively and safely. As today’s students, we can shape how AI will change our world.

Thank you to Glenn Kleiman, Barbara Treacy, and Chris Mah, all of whom have supported my work and provided guiding feedback on this article. Thank you to my friends and student peers Harrison, Keshav, and Noor, who provided formative reactions and critiques. Except for the content explicitly generated by ChatGPT, Claude, and DALL-E, all content of this article was created by the human author, who is solely responsible for the opinions expressed. AI tools, specifically ChatGPT-4 and Claude 3.0 Opus, were used for minimal brainstorming and copy-editing only. The author is part of the teams behind Bruno and Clara AI, two projects mentioned as examples of equipping AI with tools.


Benjamin Klieger

Benjamin Klieger is an incoming undergraduate studying Computer Science at Stanford University '28. He researches and develops technology in AI and education.