Is ChatGPT Good for the World? And Who Will We Hold Accountable If It’s Not?

Tom Gammarino
6 min read · Dec 19, 2022


[Image: A little girl wearing a backpack holds the hand of a robot. Photo by Andy Kelly on Unsplash.]

When OpenAI unleashed ChatGPT on the world a couple of weeks back, it drew a clear line between the world we used to live in and the one we live in now. It might even count as a genuine paradigm shift, offering a new set of norms not just to the engineers of Silicon Valley but to everybody who feels they have a stake in human culture, which I think means every last one of us.

I asked ChatGPT earlier today whether it was good for the world, and it begged off: “I do not have the ability to evaluate the impact or usefulness of specific products or technologies in the world.” However, when I asked it to write a 500-word essay on the topic, it gave me a fairly nuanced response and admitted that “While a technology may have immediate benefits, it’s important to think about how it may affect society over the long term.”

This is what the creation says, but what about the creators? Just how much thought did OpenAI — the company that produced ChatGPT (and which was founded with backing from billionaires Elon Musk and Peter Thiel, among others) — put into considering the knock-on effects of their new tech before making it available free to everyone with an internet connection? My hunch: not nearly enough.

To be clear, I think AI is awesome. I’ve had a love affair with intelligent robots since seeing The Empire Strikes Back in the theater. And for the past twelve years, I’ve been teaching a popular science fiction class to high school students, quite a few of whom have told me that our AI unit was their favorite; no doubt that has something to do with my own enthusiasm for the wonders of technology. All of which is to say, it isn’t with generalized technophobia that I greet the arrival of ChatGPT. It is, however, with a creeping sense of dread.

ChatGPT and the various AI art generators that have proliferated lately are proofs of concept. They’re not perfect, but given the accelerating rate of technological change, I don’t doubt such programs will decisively outstrip humans in many areas of creative endeavor in due course. The question facing the technogentsia is no longer Can we do it? but Should we keep doing it?

As a teacher, I worry less about my students cheating on writing assignments (though I’ve written about that here) than I do about them depriving themselves of a singular tool for self-improvement. Make no mistake, writing is hard. We struggle to write an essay or a story just as we struggle to lift weights at the gym. Forklifts can bench-press better than we can, and rocks can sit longer and stiller than the greatest Zen masters, but we don’t let them do that work for us because we all know that the value is in the doing.

Even now as I write this essay, I’m engaged in the push-pull of figuring out what I think about this very complex topic. Sometimes the journey sends me outward to learn something I didn’t know; sometimes it sends me inward to question my deepest values. In any case, there’s no way I will come out of writing this essay the very same person who began it. And that’s what I want for my students (not to mention the rest of us): those opportunities to deepen, grow, transform. Of course, those opportunities won’t ever go away completely, but every English teacher I know is wringing their hands about what sorts of at-home writing assignments we can still give in good faith.

Indeed, what makes me most suspicious of OpenAI’s stated goal of building AI systems that are “safe and beneficial” is that, as far as I can tell, they invested none of their billions of dollars in preparing society, and in particular educators, for what was coming. A few days ago, MIT Technology Review ran an interview with Sam Altman, OpenAI’s CEO, which ended with his saying, “We want to educate people about what’s coming so that we can participate in what will be a very hard societal conversation.” To which I want to say: 1) Where were they months and years ago? and 2) Let’s be frank: that conversation will not be hard for Altman and his ilk; it will be hard for those whose lives their new toy has permanently upended.

As Stephen Marche observed in his recent take on ChatGPT in The Atlantic, the chasm between humanists and technologists has been growing for decades, and they desperately need to begin talking to and learning from each other again. Take the sad example of Sam Bankman-Fried, the crypto billionaire, who notoriously admitted that he sees no value in reading books and who now faces eight criminal charges, including wire fraud and conspiracy. One wonders if a few thousand hours engaged in literature and philosophy might have led to a different outcome for him and all of those he defrauded.

What worries me more about ChatGPT than the obvious detriments to education and employment are the unknown unknowns. How many of us foresaw the way social media would silo us from our neighbors? Did the creators of deepfake technology, who were interested in inserting celebrities’ faces into porn, ever imagine Putin would use it to justify an invasion of Ukraine? Any technology is value-neutral in and of itself, but for tech companies to disavow responsibility for the downstream effects of their products seems analogous to gun companies producing AK-47s and washing their hands of any responsibility for dead schoolchildren.

How will this technology be weaponized by the worst of us? What misinformation campaigns await? What political scandals? What frauds? Is there anyone out there who believes the quest for a better search engine is worth all that? I don’t want to ignore the many wondrous possibilities afforded by these technologies, but I do wonder what safeguards we can put in place now to give us our best chance of harnessing our new superpowers for the common good. Perhaps it’s time for our tech gurus, governments, artists, and teachers to put their heads together and create a guiding document along the lines of the UN’s Universal Declaration of Human Rights. This world we’re entering is uncharted territory, and we’d be well served by a compass.

Most of us have this felt sense that technological “progress” is some impersonal, inevitable force, coterminous with time itself. Of course, strictly speaking, this isn’t true. If programmers pulled the plug on these projects today, their programs wouldn’t continue working on themselves, at least not yet. (Some techno-utopians mark recursive self-improvement as the moment we will have passed into the quasi-religious wet dream of the Singularity.) I don’t know if it’s possible, or even desirable, to put the genie back in the bottle at this point, but I do think it’s incumbent on our tech elite to show some mettle and lead their faithful with a meaningful sense of care for those whom their technologies will displace or deprive. I also think it’s time some more of us left that church and returned to the precious world that still exists outside of its facsimile on our screens.

It’s a truism that we vote with our dollars, but I think it’s even truer to say that we vote with our attention. We will not get the future we hope for in some vague, idealistic way; the future we will get depends on where we choose to place our attention in this moment, and this one, and this one… So it’s high time we ask ourselves what kind of future we really want.

If you enjoyed this piece, please consider clapping, commenting, sharing, or buying the author a coffee.


Tom Gammarino is an author and teacher. He writes about those places where art and science intersect. Learn more at tomgammarino.com.