Artificial Intelligence in Schools — To Use or Not to Use, and if Yes, What For?

Tibor Prievara
EducAItion

--

There are as many scenarios for how AI could redefine the functioning of schools as there are futurists and AI experts. Some share their joy daily by recommending 4–5 or even 10 new AI tools (which they themselves don’t use). The question always remains: it’s fine that a cool AI solution exists, but we rarely get an answer on how it could feature in the daily school routine. Here at EducAItion, we spend a lot of time thinking about how AI can be meaningfully employed in the service of learning, thus fulfilling its potential to become a fantastic tool. Perhaps what sets us apart slightly from many other experts on the topic is our classroom experience. We bring AI into the classroom, work out the methodology, and on that basis recommend applications that can truly assist teachers in their daily work, making pedagogical work much more efficient. In this short summary, we attempt to define the role of artificial intelligence, specifying various levels of application (or simply attitudes towards AI). As a bonus, we share a link to a collection site where everyone can browse over 15,000 AI applications currently available!

First Level — AI as an adversary

For AI to appear in schools with the teacher’s knowledge and consent, it is crucial that teachers recognize that AI has a place in education. The irony at this level is that mere denial or rejection won’t prevent AI from finding its way into the classroom. It will show up in assignments, essays, projects, or even exams. This is unlike the project method, where if the teacher doesn’t believe in it, there simply won’t be any projects in their classes; problem solved. The first important realization at this level: we must understand that we CANNOT prevent artificial intelligence from entering the school. If that is the case, we must adapt. This can happen at several levels (or in several steps), but denial alone will not yield results.

This level of dealing with AI includes teachers starting to chase AI-written essays and homework. At this point they must engage with the issue, hunting for a plagiarism checker or AI-detector website where they believe they can catch students attempting to cheat. There were particularly tragicomic episodes early in AI’s spread; perhaps you remember the Texas teacher who uploaded students’ submissions to ChatGPT, asked an earlier version of the model whether it had written those essays, and failed everyone involved when ChatGPT readily agreed.

That being said, there have been significant advancements since. Numerous websites promise to detect AI-generated texts. Interestingly, however, one of my students mentioned that an application claiming to ‘humanize’ AI-written texts (i.e., make them undetectable to detectors) was disappointing because it simply filled the writing with nonsense, swapping words and phrases. This is likely because large language models (LLMs) fundamentally calculate probabilities: which words are likely to follow others. Since detectors know this, it’s reasonable to assume that an application trained to check the same probabilities in an uploaded text could determine whether the writing comes from artificial or natural intelligence. The ‘humanizer’ apparently assumes that swapping the most likely words for less likely ones (without changing the context) makes the text harder to identify. However, this leads to a significant loss of text quality.
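As a rough illustration of why such detectors can work at all, here is a toy Python sketch. The bigram probabilities are made up for the example and do not come from any real model; real detectors use a full language model rather than a lookup table. The idea is only this: text assembled from high-probability continuations scores very differently from text whose words have been swapped for rarer synonyms.

```python
import math

# Toy bigram probabilities P(next_word | word). Purely illustrative values,
# NOT taken from any real language model.
BIGRAM_P = {
    ("the", "cat"): 0.20, ("cat", "sat"): 0.30, ("sat", "on"): 0.60,
    ("on", "the"): 0.50, ("the", "mat"): 0.10,
    ("the", "feline"): 0.002, ("feline", "perched"): 0.001,
    ("perched", "on"): 0.05, ("the", "rug"): 0.01,
}
FLOOR = 1e-4  # fallback probability for word pairs the table has never seen

def avg_log_prob(words):
    """Mean log-probability of each word given the word before it."""
    pairs = zip(words, words[1:])
    return sum(math.log(BIGRAM_P.get(p, FLOOR)) for p in pairs) / (len(words) - 1)

def looks_machine_written(words, threshold=-3.0):
    # Text built from the most likely continuations scores above the
    # threshold; 'humanized' text with rarer synonyms falls below it.
    return avg_log_prob(words) > threshold

likely = "the cat sat on the mat".split()        # high-probability phrasing
unlikely = "the feline perched on the rug".split()  # rare-synonym rewrite
```

This also shows why the student found the ‘humanizer’ disappointing: pushing every word towards a less likely choice does lower the score, but at the cost of exactly the stilted, nonsense-adjacent phrasing he complained about.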

It’s also crucial how quickly detectors keep up with the development of AI language models — it’s quite possible that a detector using ChatGPT 3.5 could be fooled by a text written with ChatGPT 4, and so on. Thus, while these tools can be useful, they should be approached with healthy skepticism. If you’re interested in playing with these applications, CLICK HERE to find 28 tools, or here for another 11 if you’re searching for a ‘plagiarism hunter’.

At this level, it’s clear that AI is present in the classroom, just not with the teacher’s knowledge. Another lesson is that you can play cops and robbers with the kids, but this won’t eliminate AI or stop its spread. It is perhaps slightly better to try to understand what we can about how AI works and to consider how we might use it for our benefit now that it exists.

But this leads us to the second level, AI as magic, from which, by the way, even staunch advocates of AI applications eventually graduate, though many still see it as the solution to the problems of public education.

Second Level — AI as Magic

This level belongs to those who immerse themselves (though mainly superficially) in the possibilities of AI applications, feeling that there’s a real ‘game changer’ at play. The allure of MagicSchool AI, where you enter a single sentence and the application immediately spews out a lesson plan, test questions, or simplified texts, grades essays, or helps you figure out what to teach when you have no ideas, is enormous. Every teacher understands the value proposition of these offerings.

Then, if we look a little closer at the QUALITY of the solutions we receive, we must realize that the lesson plans are clichéd, commonplace, simple; something intangible is missing… perhaps creativity, the spark, the real idea? Test questions are often boring, querying only facts, with incorrect answers easily filtered out in a multiple-choice test, or they are downright nonsensical. Fundamentally, this teaches nothing and has very little pedagogical benefit. Ultimately we find that if we arrive ‘with a blank slate’ and keep asking questions, we get almost entirely useless, irrelevant answers, and we leave not with an empty sheet but with one scribbled all over by a 3-year-old.

What are we doing wrong? Exactly what we hold our students accountable for — we want to save work. Based on one sentence, we expect the AI to understand our desires and serve them up with high-quality ideas. However, AI is not capable of this. Of course, we’re not saying that you cannot plan a lesson using artificial intelligence, but we are saying that you can’t skip the professional work — if you like, the thinking, creativity — that goes into it. We’ll discuss what might be a solution at the next level.

Both directions have ‘extreme’ practices at this level. Closer to the first level is what might be called the ‘boredom apocalypse’: on one side, AI is used to inflate a short sentence into a two-page email; on the other, to condense a two-page letter into a one-sentence summary. Long texts are produced unnecessarily just so that someone else, also using an AI application, can shorten them. Then a one-sentence answer is formulated, from which a two-page letter is produced, and this continues until humanity dies of boredom.

Leaning more towards genuinely creative use, I think, are those educators who build up an AI toolkit and try to use it. For example, they can produce visual aids with an image generator or write good and poor model answers for an assignment with ChatGPT; that is, they employ more and more AI tools relatively safely while preparing for classes. They do not belong on the third level because here AI still appears only in preparation and presentation, in the teacher’s lecture, possibly as oral or visual prompts (e.g., the teacher creates an image with DALL-E for students to talk about).

This naturally includes when we start producing tests with applications like Curipod or MagicSchool AI. More advanced users already know what prompting is, as well as iterative prompting (refining and shaping the AI’s response step by step), and spend longer preparing a test series, although this process involves a lot of professional ‘gray areas’:

a) We already know that AI doesn’t think; it calculates probability. Therefore, ‘common sense’ simply doesn’t exist for it.

b) AI doesn’t weigh options, and it spits out brilliant material and complete nonsense with the same incredible confidence. The same is true when it compiles test questions.

c) Since AI doesn’t think, it doesn’t see connections beyond the most obvious or most likely ones. For example, if we’re making an EFL test and ask for a multiple-choice question where an adjective must be inserted, it just picks three other adjectives, and that’s it. It won’t include, for instance, an irregularly formed adverb (unless we tell it to, but then we’re essentially writing the question ourselves), similar-sounding words, etc.

d) Skimming and scanning test questions generated by others and thinking up a test question ourselves require different cognitive processes. As much as we might believe we’re ‘reviewing’ what the AI wrote, my experience is that we’re far more likely to accept a fundamentally not bad but not good, merely average, test question than if we had written the alternatives and distractors ourselves.

e) Finally, are we sure we want to entrust our students’ assessment to mid-level Silicon Valley programmers? Can their algorithms reflect students’ knowledge back more intelligently, effectively, and usefully than we can? Are we happy, in other words, to ‘black-box’ our accountability?
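To make the iterative-prompting point concrete, here is a minimal Python sketch of the workflow. The message list follows the role/content format common to chat-style LLM APIs; the actual model call is omitted, and the helper names (`initial_prompt`, `refine_prompt`) are our own illustration, not any product’s API. The key is that the second turn is where the teacher supplies the pedagogy the model lacks (points a and c above): which confusions the distractors should actually probe.

```python
# Sketch of iterative prompting for an EFL gap-fill item.
# Helper names are illustrative; the send step is left out on purpose.

def initial_prompt(sentence, answer):
    return (f"Write a multiple-choice question for the gap in '{sentence}'. "
            f"The correct answer is '{answer}'. Give three distractors.")

def refine_prompt(feedback):
    # Iteration: the teacher states what the first draft missed.
    return "Revise the distractors: " + feedback

messages = [{"role": "user",
             "content": initial_prompt("She sings ___.", "beautifully")}]
# ...the model's first reply would be appended here as a {"role": "assistant"} turn...
messages.append({"role": "user",
                 "content": refine_prompt(
                     "include one irregular adverb as a trap "
                     "and one similar-sounding word")})
```

Note that by the second turn we have effectively specified the question ourselves, which is exactly the trade-off described in point c: the AI drafts, but the professional thinking is not skippable.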

Third Level — AI as a Tool

At this point, artificial intelligence is deliberately employed in pedagogically sound ways in schools, with its opportunities placed in the service of learning. Here it also appears in students’ hands, used for project tasks and integrated wonderfully into daily routines, but it doesn’t replace learning; it serves it.

What is it good for? Firstly, it can help establish equal opportunities. I have a student who writes much better essays with AI than without, and before you say anything, it’s not that AI writes the compositions for him; it helps him compensate for his rather severe dysgraphia. It helps interpret texts and rewrite sentences, and it can be a great assistant for a dyslexic child. (IMPORTANT: of course, how and for what we use it, and what learning processes the students go through with AI, still matter.)

Common to the above examples is that

a) students can also use AI applications, since it’s difficult to use any tool for skill development if students can’t access it (a soccer ball is a great tool, but if it’s on the shelf in the equipment room, the students won’t learn to play soccer better). Similarly, if they can’t use AI tools, they won’t learn how to use them effectively.

b) they learn not only WITH artificial intelligence but also ABOUT artificial intelligence. Many educators don’t engage with AI because they don’t understand it themselves, and understandably don’t dare bring it into the classroom either. There are undoubtedly important ethical dilemmas along the path of AI usage: where does the data we input into an AI go; who stores it and what do they use it for; can we know exactly what happens inside such an LLM (no, but then nobody really does); and so on. If we approach these issues consciously, do our homework, and understand the comprehensible parts of how it operates, we can help our students make responsible decisions (e.g., about how much personal data they share with ChatGPT).

c) this relates to the pedagogically conscious use of AI: teaching our students that it’s not about typing a question, getting the right answer, and going home. Preparing an essay with AI (if done correctly) won’t necessarily take much less time than writing the same composition from scratch. The value proposition is that with AI we can write a much more demanding essay, provided we go through the iteration process. Both teachers and students need to change and adapt; the concept of ‘author’ is transforming, finding new meaning somewhere near that of ‘editor’.

d) AI becomes an integral part of lesson plans, i.e., it is used in situations that would be difficult or impossible without it (e.g., instead of reading a textbook conversation with a firefighter about his daily routine, we feed the textbook text to an AI as a prompt, and the students’ task is to talk to it, ask questions, and gather information; a variation is teaching the founding of the United States by having kids hold an imaginary conversation with George Washington).
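The ‘talk to a firefighter’ activity above can be sketched in a few lines of Python. The passage text and the helper name are illustrative assumptions; the message list uses the role/content shape common to chat-style LLM APIs, and the actual model call is omitted so any chat-capable tool could consume it.

```python
# Minimal sketch of the roleplay activity: the textbook passage becomes a
# system prompt, and the students drive the conversation with questions.
# Passage text and helper name are made up for illustration.

TEXTBOOK_PASSAGE = ("A firefighter's shift starts at 8 a.m. with an "
                    "equipment check...")

def persona_messages(passage, student_question):
    system = ("You are the firefighter described in the text below. Answer "
              "the student's questions in the first person, staying "
              "consistent with the text.\n\n" + passage)
    return [{"role": "system", "content": system},
            {"role": "user", "content": student_question}]

msgs = persona_messages(TEXTBOOK_PASSAGE,
                        "What is the first thing you do on your shift?")
```

The design point is that the teacher still curates the source material (the textbook text), so the activity stays anchored to the curriculum even though the conversation itself is open-ended.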

The third level, then, is when we find a place for currently available AI applications in pedagogical processes so that students can use them consciously, not instead of learning but in support of it. In other words, AI is no longer an enemy or a magician but a very useful assistant, capable of performing certain tasks (pedagogical ones included) more efficiently than we ever could.

Disclaimer: There will likely be a 4th and 5th level of integrating artificial intelligence, but at the moment of writing this article, we do not see them yet. We may need to completely rethink the above classification by September.

Finally, we promised 15,000 AI applications, CLICK HERE to access the huge database.

--
