Student Perceptions of AI: Use, Trust, and Literacy

Researchers at foundry10 investigated how high school students use ChatGPT, how much they trust AI, and how well they understand these tools.

foundry10
foundry10 News
6 min read · Jan 30, 2024


This is Part One of our three-part series on high school students’ perceptions of AI. See Part Two and Part Three for more information.

With the popularization of artificial intelligence (AI) tools like ChatGPT, students have new options for seeking help on assignments. Teachers and administrators are often curious but hesitant about student use of AI, with potential plagiarism and cheating high on the list of concerns. But how are students using ChatGPT? How much trust do they place in its accuracy? And what do students know, and not know, about how to be effective and responsible users?

To better understand student use and perceptions of AI, foundry10’s Digital Technologies and Education Lab conducted focus groups with 33 high school students across the U.S. We asked about their experiences, opinions, and behaviors when using tools like ChatGPT for schoolwork. Their responses highlighted the reality for American teenagers today: a life filled with heavy academic loads and tight schedules where AI can help simplify tasks.

This part of the blog series examines how students utilize ChatGPT. We explore students’ trust in ChatGPT’s accuracy and their basic literacy in AI, noting their reliance on informal learning methods such as social media and peer advice to understand and use these technologies.

Key Findings

Environmental Pressures

Students used ChatGPT to help them navigate significant academic pressure and balance multiple time-consuming responsibilities, such as extracurricular activities and part-time jobs. Students reported being more likely to use ChatGPT for assignments they perceived as tedious or repetitive.

Ways Students Are Using AI

Students used ChatGPT for various tasks, most commonly in writing and reference/research. Some students used ChatGPT instead of doing their work, but many used it as a supplementary tool to advance their learning.

Student Trust in AI

Students cautiously trusted ChatGPT’s output, often verifying its information through outside research. Some students accepted ChatGPT’s output as accurate without question.

Student AI Literacy

Students showed a basic awareness of ChatGPT’s functionality but a limited understanding of the mechanics behind large language models (LLMs). With little formal education on generative AI, students primarily relied on social media, peers, and learning through trial and error.

Environmental Pressures

Students used ChatGPT to cope with academic pressure, repetitive tasks, and balancing multiple responsibilities.

Teenagers today often balance school with demanding extracurricular activities (e.g., sports, arts) and part-time jobs. Some participants discussed taking rigorous courses like Advanced Placement (AP) classes that expect hours of homework on top of an already long school day. Participants often described exhausting schedules with minimal free time, and feeling overwhelmed was a common theme.

For me, school days are just really long. I wake up, go to school for eight hours, I take AP classes, things like that. Sometimes I have practice. I literally come home, do my homework, and go straight to sleep sometimes. I pretty much have no free time on weekdays sometimes. — David (age 16, grade 11, Focus Group 2)

Meanwhile, participants talked about navigating the integration of AI technologies into their learning. For many, tools like ChatGPT seemed like a much-needed opportunity to cut homework time so they could balance extracurricular demands. Several participants acknowledged using ChatGPT for assignments they found repetitive or unnecessary, justifying this as a time-saving measure for tasks in areas where they felt they had already mastered the skills.

Participants grappled with complex issues around plagiarism and the ethical use of AI tools, but they often saw ChatGPT as a way to stay afloat and as a helpful tool to supplement their learning.

Ways Students Are Using AI

Students reported being most likely to use ChatGPT for writing and reference tasks, though their specific methods varied widely.

ChatGPT use was near-unanimous and frequent, with participants reporting using ChatGPT for schoolwork anywhere from multiple times a day to every few weeks. Participants said they most frequently used ChatGPT for homework help, research assistance, and project completion. The most popular domains were writing and reference-related tasks.

Participants discussed using ChatGPT for humanities subjects, especially English, more than non-humanities subjects. Some participants also used ChatGPT for science, math, foreign language, and computer science classes.

It’s important to note that participant use of ChatGPT was largely voluntary: teachers and school administrators had either not prompted it or had explicitly disallowed it. (More on student reactions to school policy here.) While some participants discussed being explicitly instructed to use ChatGPT on an assignment, it was much more common for them to decide independently when and how to use ChatGPT.

Student Trust in AI

Students described trusting ChatGPT’s output but knew it could be inaccurate and often sought to verify its information.

Most participants expressed cautious trust in ChatGPT’s output and felt generally positive about its accuracy. Some participants displayed uncritical trust, including using ChatGPT in ways known to be error-prone, such as asking for quotes from books and research sources. However, many had also noticed times when ChatGPT’s output contained errors, such as inaccurate historical facts.

Another common concern specific to writing tasks was that AI-generated text could read as robotic, unnatural, or too advanced, limiting its utility as a replacement for their writing. As a result of these concerns, many participants described becoming critical users of ChatGPT, working to verify or modify its output rather than recycling it verbatim.

Student trust also varied based on their knowledge of the subject and task type (e.g., one student trusted ChatGPT to help generate ideas but not to research a new topic).

I think most of it is based on intuition. Based on your prior knowledge on the field, you can tell if this is something that wouldn’t look like a student’s work or if it’s just flat-out wrong regarding a topic. So, I think you need to have some sense of knowledge on the field to make an accurate claim of the degree to which you can use it. — Henry (age 17, grade 12, Focus Group 5)

Strategies students used to manage the risk of errors included:

  • Using ChatGPT in domains they were already familiar with, which allowed them to cross-check output against their prior knowledge and critical thinking.
  • Validating or supplementing outputs with outside research.
  • Using common-sense checks (e.g., identifying when ChatGPT’s output contradicts itself) to consider whether the information provided was plausible.
  • Modifying output text to be more accurate or to sound more like their own voice.

Student AI Literacy

Students learned the basics of ChatGPT use through social media, peers, and trial and error, but there were often gaps in understanding.

It was rare for participants to recall formal education or mentorship around AI use. There were a few exceptions: for example, one participant’s teacher gave a lesson on responsible use of ChatGPT, some said their parents guided them in exploring ChatGPT, and participants reported the occasional class activity that integrated ChatGPT. Still, for the most part, they were on their own. Without formal guidance from educators, participants’ main sources of information on AI were social media, peers, and their own experimentation.

Overall, participants showed a limited level of AI literacy. They generally reported that ChatGPT gathers and synthesizes information from the internet, and many viewed it as a sophisticated search engine like Google. While participants sensed that ChatGPT processes existing online data, they had less clarity on how large language models generate new content, the extent of their originality, or why they sometimes produce errors.

I’m pretty sure it gets all of its information off the internet itself so it is basically just a web engine. — Emily (age 15, grade 10, Focus Group 3)

Up Next

Part Two of this blog series will examine participants’ ethical dilemmas and responses to school policy, and Part Three will provide action-oriented recommendations for teachers and administrators.

Read more about the Digital Technologies and Education Lab’s work in this area.


foundry10 is an education research organization with a philanthropic focus on expanding ideas about learning and creating direct value for youth.