AI In Education — What Could Possibly Go Wrong?

Keith Rispin
Published in PEAI
Mar 11, 2023

An analysis of ChatGPT's short list of negatives.

A while back, I asked ChatGPT: What could be some possible negative outcomes of AI in Education? The response wasn't anything new or especially insightful. Still, it did show that ChatGPT knows about the mistakes we have already made when implementing technology in education. All but two of the six responses I got were things we have already encountered with edtech, and we will undoubtedly be doomed to repeat them IF educators do not become more involved with the coming AI revolution in education. In this article, I will respond to the six negative outcomes of AI in Education that ChatGPT came up with; why educators need to be involved in the development of educational AI is a topic for a later date.

ChatGPT Prompt: What could be some possible negative outcomes of AI in Education?

AI-enabled technologies can reduce the amount of human interaction in education, decreasing creativity and critical thinking.

How AI affects the amount of human interaction will depend on how and where AI is implemented in the education system. In my previous article, I suggested that AI will first enter the education system to handle summative assessment. My rationale was that this kind of assessment is essentially a collection of data points generated by the student, measured against expected learning outcomes or standards. This would be a relatively simple task for an AI, and much of the technology it would require is already in use. The second and most enticing part of this entry point is that assessment is the LEAST favourite task of a teacher's job. It is usually done off the clock, after school and at home, and it consumes ridiculous amounts of time. If teachers could offload this part of the job to an AI, most would be happy to do so. It would free up time for them to plan for the part of the job they do enjoy: working with students. In this use case, AI could allow for more human interaction in the classroom and improved learning outcomes.

As for a decline in creativity and critical thinking? In the scenario I am considering, AI is not generating student content but measuring student-generated content for evidence of critical thinking, creativity, and original thought, all against a database of appropriate grade-level standards and learning outcomes. Taking assessment out of teachers' hands and allowing them to focus on teaching and learning should improve outcomes such as critical thinking and creativity.
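To make the idea of "data points measured against standards" concrete, here is a minimal sketch of what that measurement might look like in its crudest possible form. The outcome codes, evidence terms, and scoring rule are all invented for illustration; a real assessment AI would lean on trained models, not keyword counting, and nothing here reflects an actual product.

```python
# A toy illustration only: each learning outcome becomes a measurable data point.
# The outcome codes, evidence terms, and scoring rule below are invented for this
# example; a real assessment AI would use trained models, not keyword matching.

from dataclasses import dataclass


@dataclass
class Outcome:
    code: str                  # hypothetical curriculum-standard identifier
    description: str
    evidence_terms: list       # words/phrases treated as evidence the outcome was met


def score_submission(text: str, outcomes: list) -> dict:
    """Return a 0-1 coverage score per outcome based on evidence found in the text."""
    lowered = text.lower()
    scores = {}
    for outcome in outcomes:
        hits = sum(1 for term in outcome.evidence_terms if term in lowered)
        scores[outcome.code] = round(hits / len(outcome.evidence_terms), 2)
    return scores


if __name__ == "__main__":
    outcomes = [
        Outcome("SS10-1", "Analyzes cause and effect", ["because", "led to", "as a result"]),
        Outcome("SS10-2", "Supports claims with evidence", ["for example", "according to", "data"]),
    ]
    essay = "Confederation happened because the colonies feared annexation; as a result, ..."
    print(score_submission(essay, outcomes))   # e.g. {'SS10-1': 0.67, 'SS10-2': 0.0}
```

The point is simply that each learning outcome becomes a measurable data point; how well it gets measured is exactly where teachers need a say.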

AI tools may increase the digital divide as AI-based tools and resources can be expensive and may only be available to some students, particularly those from low-income backgrounds.

ChatGPT comes up with one primary reason the digital divide will increase with the introduction of AI in education, but there are actually three reasons it will continue to grow as AI enters our learning spaces.

Absolutely, 110%: AI tools will increase the digital divide IF implementation is done in the same manner in which education technology has been implemented up until now. ChatGPT is mainly concerned with those from low-income backgrounds, and it is correct. Not all families can afford the technology that powers twenty-first-century learning. The opportunity gap between those with access to these learning technologies and those without is significant and will only grow as AI becomes more commonly used.

The second reason a digital divide exists is geography: the difficult or expansive landscapes that isolate remote communities. In Canada, reliable access to the internet is not universal across the land, yet internet access is a must for any student who hopes to take advantage of twenty-first-century learning opportunities. If we can't serve all students equally today with a foundational service such as internet access, there is no hope of ensuring equitable access for all when AI enters the scene.

The map below illustrates the internet service in the part of the world where I live. The darker the orange-coloured dots, the better the coverage. It is important to remember that connectivity does not always equate to reliability. I live 20 minutes from downtown Vancouver, and I am at the edge of where reliable internet access can be had. Five hundred meters further east, I am in a black hole with no internet access (not that that is a bad thing). It simply goes to show that a crucial part of twenty-first-century learning is not available to all. If today's connectivity issues can't be resolved, there is little hope that we will prevent the widening of the digital divide once AI becomes a mainstay of public education. That said, governments at all levels are trying to improve coverage nationwide; see the CRTC for more information.

High-Speed Internet Coverage British Columbia — CRTC

The third and final issue that creates and perpetuates the digital divide is the complete lack of a system for coordinating, planning, and implementing educational technology across the country. This problem isn't unique to Canada; the United States and other countries face it as well.

In Canada, ten provinces and three territories have jurisdiction over their own education systems. Within these politically distinct regions, there are approximately 300 school districts and countless schools and students. As it currently stands, no single decision-making body coordinates the equitable implementation of education technology across the country. As a result, there can be significant differences in the implementation of edtech between provinces and territories, between school districts, and even between schools within a district. Essentially, the digital divide in Canada is a function of political structures that fiercely protect provincial jurisdiction and autonomy above all else. The result is that the same twenty-first-century learning opportunities are not available to all Canadian students.

In contrast, countries such as Korea and Finland have robust central planning and implementation structures that ensure the equitable rollout of edtech, and they have had these decision-making bodies in place for decades. Most recently, the European Union established The Digital Education Action Plan.

The Digital Education Action Plan (2021–2027) is a renewed European Union (EU) policy initiative that sets out a common vision of high-quality, inclusive and accessible digital education in Europe, and aims to support the adaptation of the education and training systems of Member States to the digital age — European Commission Website

Some jurisdictions understand that a coordinated plan for the equitable implementation of education technology is essential to ensure the digital divide does not become a chasm. With the arrival of AI on the scene, this kind of centralized approach will become even more critical in the years ahead. Countries more concerned with protecting political jurisdiction will only create more inequity among learners.

AI-based tools can be challenging to control and may present a lack of control for educators and students.

ChatGPT's concern about a lack of control reaches far beyond educators and students. If we have lost control of AI in an educational setting, it will mean we have lost control of AI in all aspects of our lives. Although we have yet to talk about what a superintelligent AI is, if the time ever comes when we have lost control of an AI in education, it will NOT simply be an education problem.

"Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound."
— Nick Bostrom, Superintelligence: Paths, Dangers, Strategies

The kind of control educators should be concerned with is deciding what type of AI should be used in an educational setting. If educators do not get involved, people outside the education system will make these decisions for us, and as a colleague of mine used to say…

“The further away from the classroom that a decision about education is made, the worse that decision is” — Bill Ferguson

AI is coming, and if we do not start getting involved in how it will be used, we will have already lost control.

AI-based tools can be biased and can lead to inaccurate learning results.

One would hope that objectivity would be a baseline requirement of any educational AI, but it is doubtful it will remain so in many parts of the world. An educational AI (EDAI) in Florida will be far different from an EDAI in Finland. It is unavoidable that bias will be coded into the AI depending on the jurisdiction in which it operates. This is understandable, since learning is a social construct; an AI will reflect the biases of the society it serves. Only in jurisdictions where political and social agendas are not the priority can an AI serve the community objectively.

Privacy Concerns: AI-enabled technologies can raise privacy concerns as data is often collected from students and can be used for various purposes.

Student data and privacy are a primary concern with today's educational technology, and ChatGPT is correct; they will remain a concern in the era of AI in education. If there is one thing that could stop AI from entering the education system, it is the rules around privacy.

Siloing an AI in every school district nationwide, where student data could be stored securely, is unlikely to be economically or functionally feasible. Any AI in education would have to be cloud-based, which presents a significant problem for ensuring the privacy of information. For AI to enter the education system, it would have to find a way to protect student and teacher privacy. Here are some possibilities.

  • The first and most likely way to ensure privacy would be to scrub all personal data from anything uploaded to the AI so that there would be no way to identify a student by their submitted work.
  • The second would be to create some kind of hard break between the identifier for the submitted work and the student's identity. You could do this in myriad ways, but it would be inefficient and time-consuming (one rough approach is sketched after this list).
  • The third would be using unhackable technology to store all personal information. Unfortunately, a filing cabinet is the only technology I am aware of that is immune to cyber-attacks.
  • Fourth, legislators may throw their hands in the air at some point and say, "screw it! Everyone’s data is already compromised; any effort to protect privacy is futile."
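The first two options above amount to de-identification and pseudonymization, techniques already in everyday use outside education. Here is a minimal sketch, assuming a per-district secret that never leaves local infrastructure; the field names, regular expressions, and ID format are placeholders I have invented for illustration, not a description of any real system.

```python
# A minimal sketch of the first two ideas above: scrub obvious personal details
# from submitted work and replace the student's identity with a one-way pseudonym
# before anything is sent to a cloud-hosted AI. The secret, field names, and
# regular expressions are placeholders invented for this example.

import hashlib
import hmac
import re

DISTRICT_SECRET = b"keep-this-on-premises"  # hypothetical per-district secret; never uploaded


def pseudonymize(student_id: str) -> str:
    """Derive a stable pseudonym; without the district secret it cannot be reversed."""
    return hmac.new(DISTRICT_SECRET, student_id.encode(), hashlib.sha256).hexdigest()[:16]


def scrub(text: str, known_names: list) -> str:
    """Redact known names plus common personal identifiers from submitted work."""
    for name in known_names:
        text = re.sub(re.escape(name), "[REDACTED]", text, flags=re.IGNORECASE)
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)        # email addresses
    text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)  # phone numbers
    return text


if __name__ == "__main__":
    submission = "My name is Jane Doe (jane@example.com). Confederation happened because..."
    payload = {
        "student": pseudonymize("jane.doe:example-district:2023"),
        "work": scrub(submission, known_names=["Jane Doe"]),
    }
    print(payload)  # only this de-identified payload would ever leave the district
```

The design choice that matters is that re-identification requires the district-held secret, so a cloud-hosted AI only ever sees scrubbed work tied to a pseudonym.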

AI could lead to a decrease in job opportunities for teachers as AI takes over more of their job functions.

It would appear that ChatGPT thinks AI will probably take over direct instruction, and there are certainly those who would be drooling at the prospect of unemploying teachers. The cost savings could be immense in the long term, but if the pandemic taught us anything about education, it is that kids learn better in a classroom with a teacher. Sure, as much fun as it may be to imagine some hybrid Terminator/Kindergarten Cop figure standing in front of a classroom of kids, it is improbable, at least at this point in the evolution of AI. We will have to go through a few iterations of educational AI before we hit that point, if ever.

We will see attempts to replace teachers with AI relatively quickly, but I doubt they will succeed. A good teacher is hard to replace; therefore, I think teachers' jobs are still reasonably secure. As I mentioned earlier, the successful use of AI in education will be in the area of assessment, simply because this work is time-consuming and usually done on a teacher's own time. Whether things play out this way remains to be seen.

So what now ¯\_(ツ)_/¯

The six negatives ChatGPT came up with are great. They provide us with a concrete set of issues we should address as we plan for the arrival of AI in the education setting. That is, IF we prepare for its arrival. It has been my experience that the K-12 education system tends to react to the arrival of new technologies rather than plan for it. This is not the teachers' fault, however; they are doing the job they have been asked to do. Keeping up with rapid technological change has become an added burden that should not be handled on a teacher-by-teacher, school-by-school, or even district-by-district basis. To plan on such a micro level is a waste of time, energy, and resources. Steps need to be taken provincially and/or federally so that AI is implemented in a controlled and well-planned manner across the country.

