Student Perceptions of AI: Ethics and Policy at School

Researchers at foundry10 investigated how high school students understand ethical AI dilemmas and believe tools like ChatGPT should be used in school.

foundry10
foundry10 News
8 min read · Jan 30, 2024

This is Part Two of our three-part series on high school students’ perceptions of AI. See Part One and Part Three for more information.

Today’s high school students are learning in the age of new artificial intelligence (AI) tools like ChatGPT, and while such tools offer time-saving tricks for overburdened teens, their use is controversial. Teachers and administrators are often curious but hesitant about student use of AI, with potential plagiarism and cheating high on the list of concerns.

To better understand student use and perceptions of AI, foundry10’s Digital Technologies and Education Lab conducted focus groups with 33 high school students across the U.S. We asked about their experiences, opinions, and behaviors when using tools like ChatGPT for schoolwork.

Their answers revealed that students are grappling with complex dilemmas around ChatGPT with a high level of nuance — deciding what responsible use looks like, making their own choices about when and how to use ChatGPT, and navigating the consequences of rapidly developing school policy.

Key Findings

Ethical and Practical Dilemmas

Students showed remarkable nuance as they grappled with tensions between ChatGPT’s practical benefits, concerns about its impact on their learning and development, and ethical dilemmas regarding fairness and academic integrity.

School and Classroom AI Policy

Students recognized the importance of regulating AI in schools and called for clear guidelines on its use. They also voiced concerns about the potential negative impact of surveillance technologies, broad technology bans, and the risk of inaccurate AI-detection tools leading to unjust cheating accusations.

Ethical and Practical Dilemmas

Students balanced ChatGPT’s practical benefits with concerns about its long-term impact on their learning and ethical dilemmas around cheating and plagiarism. Participants demonstrated impressive nuance in discussing the benefits and drawbacks of using ChatGPT for schoolwork.

Below, we summarize some of the primary practical and ethical concerns participants grappled with when making decisions about using ChatGPT.

Practical Dilemmas: Threats to Skill Development

  • Long-term consequences of over-reliance on ChatGPT. Participants worried about the long-term consequences of becoming dependent on ChatGPT, including decreased critical thinking and creativity. They feared that the ease of obtaining answers from AI tools might discourage them from forming their own ideas and could harm their development of essential cognitive and analytical skills. Many participants were aware that certain uses of ChatGPT (e.g., generating an AI-written essay to submit as one’s own work) could harm their opportunities for learning.
  • Short-term consequences of over-reliance on ChatGPT. Short-term, practical consequences were also a concern. Participants knew there were some academic settings (e.g., paper tests) where they would need to be able to perform independently from technology.
  • Weighing the value of assignments. Many participants reported being more willing to use ChatGPT to replace their work (e.g., writing an essay for them) when they believed the assignment was not contributing to their learning, whether because they felt they were already skilled or perceived the tasks as repetitive.

Ethical Dilemmas: To Cheat or Not to Cheat?

  • Cheating and the risk of consequences. Many participants mentioned their fears of being accused of plagiarism and their steps to avoid it (e.g., rewording output, not using ChatGPT to generate writing, or running writing through AI detectors). Participants discussed experiencing tension between the convenience and utility of ChatGPT and their desire to avoid cheating and the consequences of cheating.
  • Fairness to other students. Some participants recognized that ChatGPT use by some students and not others could confer unfair advantages. For example, one participant recalled a classmate being praised for a secretly AI-written essay, which they felt was unfair to others in the class.
  • Preparation for future careers. Some participants believed that overreliance on ChatGPT could present an ethical problem in future careers, such as medicine. They worried about the harm that could be caused by professionals who relied on AI to break into fields they were underqualified for. Other participants said they didn’t believe ChatGPT would be useful in college or the workforce. Finally, some saw ChatGPT use as more acceptable for adults than students because of perceptions that adults are no longer learning.

How Often Do Students Use ChatGPT to Cheat?

Some participants in our focus groups directly admitted to using ChatGPT to cheat or plagiarize. Others said that they did not but knew peers who did. The likely pressure of social desirability bias — students may be hesitant to confess to cheating in front of adults — prevents us from drawing conclusions about the prevalence of cheating behaviors in our sample. However, a recent survey suggests that the availability of AI has not increased cheating beyond pre-AI levels among high schoolers.

Given these practical and ethical dilemmas, how do teenagers form opinions on when ChatGPT use in school is acceptable? Some participants said they received nuanced advice on AI from the adults in their lives, but it was more common for them to report being discouraged from using AI in any form. Left without adult guidance, participants made their own rules, expressing a strikingly consistent set of guidelines for what they believe constitutes appropriate and inappropriate use of ChatGPT.

Several participants compared the dilemmas surrounding ChatGPT to navigating responsible use of other forms of assistance, like graphing calculators, SparkNotes, and getting help from a parent.

On paper tests, you’re not going to have electronics near you, you’re not going to have those ideas be generated, or you’re not going to have a template set up for you. You still need to learn how to do it by yourself without it helping you. — Jessica (age 16, grade 11, Focus Group 4)

Participants did not always report perfect adherence to their ethical standards. Several used ChatGPT to write assignments and submitted the output verbatim, or reported that their friends did. But their positions reflect much more nuance than popular narratives sounding the alarm over AI-enabled cheating suggest. While there was variation in participants’ opinions, most agreed that there was a distinction between using tools like ChatGPT to aid their learning versus doing their work for them.

Below, we briefly summarize student-generated examples of responsible and irresponsible ChatGPT use:

Importantly, student-generated guidelines focused both on which sorts of tasks were acceptable to delegate and on the motivation behind asking ChatGPT to do them. Why did you want ChatGPT to write your essay for you? Participants were less supportive of the choice when a student could not have completed the assignment without it than when the student could have written the essay themselves but chose not to.

This subjectivity may not lend itself well to school policy. Still, it reveals an internally consistent aspect of students’ thinking about AI: they worry that overuse could harm important learning, so they tend to oppose uses that directly bypass learning.

You shouldn’t solely depend on ChatGPT for information. It’s just not going to help you in the future. If you keep copy and pasting everything you write you won’t learn anything. — Samira (age 15, grade 10, Focus Group 1)

However, if students perceive that an assignment is not helpful to their learning, or that they have already demonstrated proficiency, they may be more likely to support ChatGPT use even when it falls under the umbrella of cheating or plagiarism.

School and Classroom AI Policy

Students supported the need to regulate AI use in school and called for clear guidelines, but they expressed concerns about the consequences of total-ban policies.

The actual policies in participants’ schools starkly contrasted their nuanced reflections on responsible use. According to participants, many of their schools or districts had enacted a total ban on any use of AI — a policy that participants said had negative consequences for their learning.

My English teacher was saying you should not be caught at all using it. It’s just going to ruin your high school life. — Samira (age 15, grade 10, Focus Group 1)

Participants generally supported some regulation of AI in school, recognizing that some uses are not conducive to fair assessment or learning. But they also pointed to methods of enforcement that sparked concern, like sharp increases in surveillance technology (e.g., screen monitoring, browser history tracking on personal devices) and broad bans on all forms of technology in the classroom (e.g., no computers in English class).

Participants felt some of these measures were an invasion of privacy and worried about the potential impact of broad technology bans on their learning. Participant voices contribute to an ongoing debate about the impact of digital monitoring on students.

I feel like they’re more strict on this thing called GoGuardian where they’ll lock my computer and just see. I know there’s a new update now with Google Docs where it watches how you type, so it makes sure you can’t copy and paste stuff. — Kaylee (age 16, grade 11, Focus Group 1)

Additionally, participants frequently mentioned teachers’ use of programs that claim to detect AI-written content. Many participants believed that all their assignments were being run through these detectors, and some had already adopted countermeasures, like checking their assignments with the same programs before submitting them.

While participants had largely adapted to AI detectors, some expressed fears about their accuracy, reporting incidents where students were falsely accused of plagiarism. Evidence suggests that these students are right to be concerned — AI detectors, while popular among educators as a quick fix to plagiarism fears, are notoriously unreliable.

So for me and my friends, we usually just use it for ideas because we are very scared people, so we’re afraid of plagiarism, don’t want that to happen because then you don’t even get to get the whole percentage. — Trinity (age 14, grade 9, Focus Group 2​​)

Policies on AI use in schools vary greatly, with differences across school districts and even between individual classrooms. While some schools implemented total bans, others had more nuanced policies. This inconsistency led to participants adapting to varying classroom rules, with some teachers supporting ChatGPT use and others strictly opposing it.

Yeah for me some of my teachers allow us to use it but then some of the other ones warn us not to use it because if we use it for the assignment then we might get a zero on it for cheating.​​ — Caleb (age 14, grade 9, Focus Group 3)

Participants expressed a need for clearer and more consistent guidelines from educators and administrators regarding permissible AI use to understand the rules better and explore tools like ChatGPT without fear of violating policies.

Up Next

Part Three of our blog series explores recommendations for school administrators and educators. Read more about the Digital Technologies and Education Lab’s work in this area.

foundry10 is an education research organization with a philanthropic focus on expanding ideas about learning and creating direct value for youth.