Teaching AI Without Talking About Power Misses the Point
A response to Prof. James O’Sullivan — and a call to move from demystification to resistance.
I listened to James O’Sullivan’s recent Substack post on AI literacy in education (read aloud to me in the best Irish accent Siri could manage).
It landed.
In fact, I found myself nodding more than I expected. Maybe it is because we are both Irishmen named James. Maybe it is because we both know the existential struggle of explaining the apostrophe in our surnames to a login form. Maybe it is because he speaks with a clarity that refuses to romanticize the machine.
But while his tone hits that sweet spot between poetic and practical — somewhere between Seamus Heaney and Roddy Doyle — I cannot sit this one out without pushing back.
We agree that AI should not be mystified. We agree that students should be taught to question it, prod it, and refuse to treat it like gospel. But here is where I push: it is not just about getting students to “understand” AI. It is about recognizing that AI already understands them — because it has been trained to. And it is already being embedded in schools in ways that make understanding secondary to compliance.
This is not theoretical. This is happening.
We Are Teaching AI Before We Are Teaching Students
In the United States, schools are being swept into AI adoption at a staggering pace. Districts are not easing in — they are being saturated. Adeel Khan, founder of MagicSchool AI, recently announced that the platform is now used by educators in nearly every U.S. school district. Over 10,000 schools. Active in 160 countries. All this in just 18 months.
That is not organic growth. That is saturation by strategy.
And it is not just teachers choosing tools. It is venture-backed acceleration. Khan celebrated a $45 million Series B funding round led by private equity and tech venture capital firms, while launching a Data Analysis Tool that scrapes school data through natural language processing. Meanwhile, educators are being told the mission is about empowerment — about making teachers “magic.” But read between the lines: teachers may be the face, but AI is driving the agenda.
I wrote about this recently: we are failing AI literacy because we are buying into a version of it that serves vendors more than students. A version that focuses on control, productivity, and behavioral analytics. Not understanding. Not empowerment.
Prof. O’Sullivan makes a compelling case that AI literacy should be about context, not magic. I agree. And to his credit, he nods at the deeper tension — the way AI already knows students, the way it slips into classrooms before anyone gets to ask if it should be there. But then he almost lets the point go. As if the fire has already spread, so we might as well talk about fire safety. I am not ready to go along with that.
I want to know who lit the match. Who is holding the lighter now. And why so many school leaders are being handed branded hoses that spray marketing instead of water — while being told the building is under control.
When tools like Copilot, Gemini, SchoolAI, and MagicSchool AI are integrated into learning without thoughtful scaffolding, students are not just passive users. They become training data, labor, and product — all at once.
This Is Not About Fear. It Is About Pattern Recognition.
The narrative that critics of EdTech are technophobic is tired. It is also intellectually dishonest. I do not fear AI. I fear the way it is being sold to schools. I fear the pressure placed on educators to become procurement officers overnight. I fear how quickly districts default to shiny tools in hopes of solving deep, systemic issues.
And let me be clear — Prof. O’Sullivan is not part of the hype machine. He is not selling anything. He is asking the right questions and pushing back against deterministic narratives. That matters.
And it is here where I think we most agree — especially when it comes to how educational systems prize optimization over originality. In Ireland, the Leaving Cert rewards students for mastering a formula, not for thinking critically. I have written before about how this environment primes students to write what machines can mimic — structured, predictable, optimized prose. That should concern us more than AI’s capabilities. If students are trained to perform to a rubric, is it any surprise when generative models outperform them?
Prof. O’Sullivan gestures toward this same discomfort — the idea that the real issue is not the machine itself, but what our systems reward. Where we begin to differ is what we do next. I am not content to stop at demystification. I want students to go further. I want them to question who benefits from these systems, who gets excluded, and who gets to decide what “literacy” even means.
AI Literacy Without Power Analysis Is Just Compliance Training
There is a reason why the dominant models of AI literacy being promoted to schools feel so hollow. They focus on functionality, not freedom. They train students to use the tools, not challenge the systems. They offer guardrails, not agency.
In one of my Medium pieces, I argued that we are designing AI literacy to make education compliant, not smarter. That is still the case. What often gets labeled “AI education” is really just exposure — watching a tool work, seeing a demo, reading a definition. For example, I completed all five MagicSchool AI certification courses in under 15 minutes — without ever logging in or using the platform. That says more about the training than it does about the tool.
Very little of this equips students to intervene. To resist. To build differently.
We need AI literacy that makes students dangerous thinkers, not docile users.
Where We Find Common Ground — and Where We Can Build
Prof. O’Sullivan makes a powerful call to humanize AI. To situate it. To strip away the illusion of inevitability. That framing is essential. His reminder that “we cannot make better decisions about technology without better conversations about it” is spot on.
So let us push for those conversations. But let us also refuse to let them be sanitized.
Let students ask who funds the tools. Who sets the limits. Who benefits. Let them critique the platforms that shape their school day. Let them design alternatives rooted in their experiences. And let us stop pretending that integration is progress if the terms are dictated from the outside.
We can — and must — teach the technical. But we should not stop there. We need to lift the hood, yes. But we also need to ask why the engine was built in the first place, who it leaves behind, and where it refuses to go.
The way we talk about AI in education will shape the way we teach it. If we treat it like magic, we will mystify. If we treat it like software, we will standardize. But if we treat it like a political, social, and ethical terrain, we will start to give students the tools to navigate it — and challenge it.
Prof. O’Sullivan and I may start from different places, but I believe we are walking toward the same horizon: a future where students are not just prepared for AI — they are prepared to shape it.
That future cannot be built on lessons alone. It will require permission to question, trust in student agency, and a willingness to share power. If we are serious about AI literacy, we cannot afford to treat those as optional.