AI is not our friend — it’s the Tower of Babel

Organized ramblings on the many, many dangers of AI

Thomas Jenkins
The Coastline is Quiet
5 min read · Jul 12, 2023

I’ve encountered every possible reaction under the sun to ChatGPT’s emergence around the beginning of the year. As an educator, the one I’m most familiar with is revulsion at its use in the classroom (a sentiment I share completely). But I know there are many others who view it as a potentially invaluable teaching tool, or a welcome assistant for those looking for new ways to phrase ideas.

The technology behind ChatGPT is impressive, in its own way. By now we’re all familiar with the reality that it has far more in common with predictive text on an iPhone than with any sentient robot from a science fiction movie. And it’s likely that these language models will become more and more advanced over time. I’m not confident in my own ability to accurately distinguish between human- and AI-produced writing.

But the more that time passes, the more I’m convinced that the AI revolution will be nothing but bad. I struggle to see any real future where the mass proliferation of AI and AI-adjacent technology is actually a benefit to humanity. I understand the potential applications, I really do. But given every risk ahead, AI’s unchecked growth seems far more likely to be a trap made of our own hubris and sin.

Education — the first domino to fall

Let’s start with education, perhaps the most obvious negative use case of AI. In the Spring 2023 semester, students around the world began to use ChatGPT to write their assignments, take their tests, and generally stymie their instructors’ attempts to get real work out of them. ChatGPT is a temptation to plagiarize that is orders of magnitude more appealing, accessible, and effective than anything the field of education has seen before. With just a few clicks and keystrokes, it’s possible to produce clean (if exceedingly bland) content.

I’m still grappling with the full implications of ChatGPT in education. I’m rethinking how I assign papers and trying to decide if having students handwrite their work in the classroom is worth the time cost it would demand. It’s a challenge that I know is playing out in countless other teachers’ minds as well. ChatGPT launched in the middle of a school year — August will bring the world’s first full school year with the tool available.

On the one hand, many students’ willingness to use this tool is a symptom of the broader commodification of education in the United States. Students know they have to get a high school diploma and college degree to access many of the better-paying jobs in our economy, so they (rationally) view education as a stepping stone rather than an end in itself. Given that mindset, cheating of this nature fits into a mold that’s understandable (though not any more justifiable).

And yet, the reality of the American education system does little to excuse blatant cheating. I confronted ChatGPT to see how it would respond to my assertion that it posed serious risks to academic integrity. It gave me a canned response along the lines of “students and teachers need to work together, etc., etc.” that I found entirely unconvincing. Of course, I didn’t expect the language tool to give me anything actually useful. I view written work as an essential ingredient in student assessment and can’t imagine conducting a class without it. As the summer winds down, I’m mentally preparing myself to sift through a significant number of AI-written assignments.

Everything else

Stepping outside the field of education, things don’t look much brighter. The race to AI is led by some of the biggest companies in the world — Microsoft and Google. And this revolution is taking place in a world where fewer and fewer companies control technology and entertainment. Recently, the entire internet cheered as Meta (Facebook) took a bite out of Twitter’s core audience, while Microsoft plowed ahead with its purchase of Activision. These companies are only getting bigger by the day, and their dominance of the technology sphere looms over all of this.

In journalism and media, the potential future is even darker. Next year is an election year — it doesn’t take a genius to foresee a tidal wave of AI-written fake news articles about any subject imaginable. ChatGPT will often try to extricate itself from controversial topics, but a quick Google search uncovers myriad ways to get around its guardrails. Where will this technology be in a year’s time? Is there any future where it doesn’t have a negative effect on public discourse, which was already in pretty bad shape?

There are plenty of voices calling for caution in the field of AI, but it seems unlikely that they will be heeded. AI is the next big thing, the next target for research and planning. The smartphone revolution has clearly passed its defining days, while other tech trends like entertainment streaming have started to mature. For these massive corporations with ludicrously large budgets, AI is the next golden goose.

The Tower of Babel

Human history is littered with examples of hubris and foolishness, but the one I’m drawn to in this instance is the Tower of Babel. Contrary to what many people assume or believe, the goal of the tower wasn’t to literally reach to Heaven (a goal that the people who built it would have recognized as impossible). Instead, it was a monument to human greatness and a direct violation of God’s command to disperse throughout the earth.

God’s confusion of humanity’s language as the Tower went up was a strike against humanity’s foolish and vain attempts to avoid his will. By mixing the language everyone spoke, he forced them to find their own groups and spread out from Mesopotamia. The vast dispersion of humanity followed this moment — the greatness of mankind had tried and failed to resist God’s commands.

If there’s a “Tower of Babel” moment coming for the AI Revolution, I won’t pretend to know what it could look like. Humanity is sprinting down a rabbit hole that it doesn’t really understand. To be clear, I’m not arguing that God is about to smite humanity for its adoption of AI (though that’s certainly possible). What I’m arguing instead is that when humanity charges full-steam-ahead with no consideration other than progress, profit, or some other measure of success, the result is nearly always some kind of disaster.

One exuberant proponent of AI argued that “AI is quite possibly the most important — and best — thing our civilization has ever created, certainly on par with electricity and microchips, and probably beyond those.” I’d argue that this is not the case. Instead, AI has much more in common with a long-decayed ziggurat somewhere in the fields of ancient Mesopotamia.

The views expressed are mine alone and do not represent the views of my employer or any other person or organization.