Addison Maille
12 min read · Feb 23, 2024

There’s a lot of talk about the future of AI in learning and education: virtual assistants on a tablet or some other device, guiding and teaching kids with a curriculum personalized to each student as they move through their K-12 education and possibly beyond. But is this futuristic vision more fact than fiction? I fear it’s the latter. Understanding why, and what we can do to avoid potential pitfalls along the way, will be critical going forward.

Let’s start by looking at what people overwhelmingly use AI and other computer-enhanced functions for. When it comes to smartphones, only a minority of people actually use them to get smarter through audiobooks, podcasts, and looking things up on the internet and YouTube. The much larger majority do not. What’s more, as the age of the smartphone user drops, think millennials and younger, the odds of using the device to learn appear to go down, not up.

More and more evidence suggests that young people use technology to avoid learning rather than to augment it. It doesn’t have to be this way. My children have used many videos to learn skills, from drawing to doing tricks with a yo-yo. There’s no first principle inherent in technology that would stop someone from using it to improve their learning. The problem arises when the learning uses have to compete with the technology’s entertainment value. Unless the person who possesses the technology has the wisdom to use it for learning rather than entertainment, they will rarely use it that way. This is precisely the wisdom that most young people lack.

When we give young people technologies that could augment human skills, they tend to see them as technologies that replace the need to learn. This is why young people are increasingly bad at skills such as math, driving, and even basic socializing. Rather than doing the hard work to master these critical skills, they bypass them to the extent technology allows. They let calculators do the math, let driving-assist modes monitor the traffic around them, and rely more on social media than on actual in-person interaction for their social skills. This is not only a bad start to a learning revolution; it sets a precedent that will cause them to think of technology from an entertainment and convenience perspective first, and of its learning potential second, if at all.

Then there’s the loss of opportunities to learn that AI represents. As we lose more and more jobs to AI, we will be left with fewer and fewer real-world opportunities to learn skills. And this is no longer theoretical: Silicon Valley giants like Meta, formerly Facebook, and Alphabet, formerly Google, have already begun massive layoffs due to AI. In Alphabet’s case, the layoffs came despite the company being very profitable. They simply didn’t need as many programmers and other computer professionals because of the efficiencies gained through their own AI. And this is only the beginning of the shift.

If we don’t learn the basic skills, which is what entry-level jobs have taught in modern societies for more than a century, then we won’t be any good at higher-level skills. All the talk of AI doing the drudgery while humans focus on more creative endeavors assumes that we have learned the basics. We used to understand the need to master basic skills before moving on to something more advanced; hence the expression, you gotta learn to walk before you can run. We have to learn to count before we can do math. We have to learn to write small pieces such as paragraphs and essays before we can write longer-form articles, books, and so on. For every basic skill set we remove from education and/or entry-level jobs, the higher-level creative work that requires those basics will no longer be available to future generations. In such a future, we won’t be using AI to augment our skills. We will be dependent upon AI for those skills.

What AI represents is something novel in human advancement. Every advance in the history of Western culture has expanded the set of skills people can learn. There have always been more specialties and skills to learn in industrial cities than there ever were on the farm. Through every increase in human understanding, from the transitions of the Stone Age to the Bronze Age to the Iron Age, through the invention of the printing press, the Industrial Revolution, and the digital revolution, the number of skills that humans could and needed to learn kept growing.

AI represents the first real possibility that, due to technology rather than a loss of technology, the pool of skills on offer for humans to learn will actually get smaller and shallower. I don’t care what correlations any historian tries to draw about AI. When human learning plummets, as has happened whenever a sophisticated civilization falls to a much less sophisticated one, the loss of skills lessens the human condition. From the Dark Ages to the loss of the Library of Alexandria, for better or worse, when our learning starts declining, so do we.

Now is usually when the avalanche of techno-optimists chimes in with claims that we can fix these problems and make these AI-driven personal assistants incredibly robust, accurate, and superior in every way despite all past difficulties. Yes! Of course! This utopia will succeed… right… Oh shit!

Computers tend to be good at creating compulsions, not motivation. Humans, at least some of us, are oddly good at motivating other humans. Our parents, teachers, siblings, and friends can all play pivotal roles in pushing us, in a variety of ways, to better ourselves academically, professionally, and so on. But computers, so far as I can see, are not good at motivating us to be the best versions of ourselves. What they are good at is leading us into compulsions, or what you might call addictions.

Digital content is great at being addictive but far less so at being motivational. Porn, video games, social media, online gambling, and many other examples don’t have strong track records of motivating us to be the best versions of ourselves. In an almost painfully on-the-nose example of this phenomenon, the vast majority of motivational content on the internet involves learning, the very thing most of us don’t use the internet for. Motivational books, podcasts, speakers, and the like tend to be the corners of the online world where we learn the most. And even they have well-acknowledged downsides, which can be summarized as a motivational treadmill that never leads to real action: just the cathartic release that makes us feel like we’ve made progress by consuming the content without taking any reciprocal action.

Technology is far more likely to create what’s known as a race to the bottom of the brain stem. It fiercely drives us to be the worst versions of ourselves in service of whatever compulsion we’ve acquired. While there may be a few successes here or there, they are few and far between. Most people can’t name any computer game, porn site, or gambling site that’s well known for bringing out the best in its heaviest users. And if a digital app or piece of material isn’t troublingly addictive, it will usually get usurped by something that is.

While all this damage to our learning is happening, the infrastructure we once relied on for education will crumble even further than it already has. Teachers will turn into little more than student monitors, and the most capable teachers will find work in other industries. As AI starts replacing teachers, we will lose what little expertise the field still has. As we re-engineer education, the people who still remember how to do it the old way will be filtered out in favor of cheaper, less-skilled labor, as we’ve seen in other service-oriented jobs. This has happened in practically every profession that’s ever been automated.

However, automation itself isn’t the problem. There’s a strong argument that automation is a good thing in industries whose products and/or services are orders of magnitude simpler; fast food and manufacturing widgets are classic examples. While I mean no disrespect, the skills those industries require are far simpler than the skills required to effectively teach groups of children. While there might be a few highly technical jobs dealing with AI glitches, the rest will be little more than hall monitors, if schools are even needed. If we lose the best teachers, it will take much longer to rebuild what we’ve lost should the AI assistants fall short of expectations. And, as COVID-19 taught us, it only takes a year for kids to miss critical developmental windows. If we take multiple years to rebuild our systems, relying on subpar AI and hall monitors to fill the gaps, we will make what happened with COVID-19 look like sound policy.

The implementation of AI will also centralize power in whichever companies can make some version of AI work for an education model. They will likely crowd out competitors through the usual anti-competitive tactics that invariably end in some version of market capture. All of this will happen against the backdrop of curricula still decided at the district and state level, a process long plagued by corruption because so few people pay attention to the billions of taxpayer dollars these institutions control. Which brings us to the worst problem of AI, one people are only now beginning to recognize.

These AI learning assistants will be the most direct portal to young minds that technology, and by extension the companies that create it, has ever had. These are literally the future consumers of our economy, placed directly in the hands of profit-seeking motives. The competition to orient them toward this or that ideology, product, or service will be overwhelming, to put it mildly. We will be given the usual assurances by unscrupulous politicians who don’t even understand the systems in the first place. Meanwhile, the actual levers of control will be in the hands of people whose names we don’t know, who may or may not be paid by the same companies that provide the AI. And if they don’t work for them today, rest assured that the revolving door between politics and industry means a lucrative contract for their brother or a highly profitable position will be in their not-so-distant future.

If you think AI couldn’t possibly be that corrupt, just look at what happened with Google’s Gemini. This was a program designed to produce whatever images were asked of it via a text prompt. It literally would not produce an image of a white male. The images it produced, from Vikings to the Founding Fathers to popes, were all women and/or people of color. This may seem like a small oversight or a mild attempt at indoctrination, but it’s just the tip of the iceberg when it comes to how companies could alter AI to capture the minds of young people who don’t have the life experience to see what’s really being done. And remember that the slippery slope isn’t a fallacy when a particular group keeps taking whatever latitude it’s afforded to push the line further and further in its preferred direction. When there are clear ideological and/or financial gains to be had by taking ground inch by inch, the slippery slope ceases to be a logical fallacy and becomes a reality.

So how do we safeguard against all this? We must understand the process innovations should go through in order to be adopted. For any idea, product, or service to prove its worth at scale, it must accomplish three things. First, there must be a working prototype. And I don’t mean some vague program that can talk to me through a computer or some other electronic means. I mean a working prototype that can do all the cool things the techno-optimists are claiming it will do, in a way that is at least similar if not identical to what they are claiming. That means it has been demonstrated in the real world to be highly effective across all the grade levels, socioeconomic conditions, skill acquisitions, and other criteria for which its makers claim it is effective. And there must be an accurate accounting of what, within education and learning, it is and is not good at, so that educational institutions can plan accordingly.

Second, it must be shown to be scalable: the mass manufacturing of whatever interface the technology needs must be proven feasible. While this typically isn’t a problem in the digital era, it must nonetheless be properly demonstrated, not just given empty assurances. This is usually done by building on some existing technology platform that doesn’t require radical retooling. And if the platform is incredibly novel, then the burden of proof must be high enough to match its novelty.

Third, the true financial cost must be known. Too many times, a company or institution claims it can produce a product at a price point that seems very appealing. But, by some magical process, the actual rollout price balloons to two or three times the original figure and beyond. Some version of unforeseen difficulties is invariably used to justify the hike, while the company supposedly wallowing in cost overruns still manages to post record profits and historic bonuses. This story has played out many times, from vaccines to Wall Street banks to military hardware and beyond.

Because every company trying to make this kind of technology knows how lucrative these large contracts can be, they will have an even greater incentive to blur the line on all three of these benchmarks. They will be incentivized to under-test their prototypes while overhyping the results, to exaggerate their manufacturing schedules, and to understate the true sticker price. And while contracts can put safeguards in place for all these problems, the history of public-private partnerships is that they almost never do. The public invariably gets stuck with the bill in the short term while our children pay with lowered learning curves in the long run.

Grossly exaggerated benefits, unforeseen cost overruns, and massively underdelivered results might as well describe nearly all of educational technology. Apart from the shift from chalkboards to dry-erase boards, the vast majority of technology upgrades have failed spectacularly. Smart boards and high-def projectors mostly just reduced the work required for teachers to show students movies and TV shows; they were no longer burdened with wheeling in a VHS or DVD player from the library. At a cost of thousands of dollars per classroom, this seemed excessive to me 20 years ago when I witnessed it as a teacher, and it still seems excessive now. Kids with tablet computers still manage not to learn how to write with any skill but always manage to find a game or ten to play. And whatever the technology, you can bet that less than half the teaching staff has any clue how to use its more advanced features, which are invariably where the greatest benefits were supposed to come from in the first place. Technology in education overwhelmingly seems to be a racket rather than a benefit.

So let’s hold our applause until they can do what most industries have to do with new product and service rollouts. Make them produce a working prototype with powerful evidence as proof of concept. Ensure they can manufacture it at scale. And demand a price that’s actually commensurate with the learning benefit the students receive. Until then, I strongly suggest we not go down yet another rabbit hole where schools alter their best teaching practices to make way for yet another technological boondoggle that enriches everyone but the teachers and students.

Nearly all public districts, as well as private schools, have the power to force these companies to prove the worth of their products and services before purchasing them. If these miracle AI assistants really do outperform teachers by a large margin without any appreciable drawbacks, I will happily endorse giving them their due spoils. But the decades-long habit of granting ever greater rewards before they’ve been earned has got to stop. And if the well-being and education of our children isn’t a great enough reason to hold these companies accountable, then what is?


I am a learning enthusiast trying to improve humanity’s understanding of how learning works.