The immediate promise of AI is to make us worse people

Joshua Clingo
8 min read · Aug 15, 2024


Ironic (made from iron) AI render of itself as a dumpster fire

I grew up excited about the future, especially a future full of robots and flying cars and full-body VR experiences—and I’m still naively excited about all of these things… however, I have some concerns about the robot thing. Particularly, the AI that would drive them.

What we have today is an amazing set of pattern recognizers and generators—it isn’t quite intelligent in the William James functional sense. To him, intelligence is the ability to solve a variety of problems, which differs from what we have right now with our token-generating, pattern-recognizing systems. The key difference is that these systems do not actually problem-solve in the same sense that we and other living things do, as they lack a central functional self that is interested in problem-solving as a matter of survival and flourishing. This is not to say that developing a system that has something like a self is impossible, but that what we’re building is not doing this at all, leaving us with a simultaneously stunningly competent and incompetent set of tools. They can pass some of our most technical exams with aplomb and navigate mazes and rapidly recognize objects and patterns of all sorts—all while giving absolutely zero guarantee that they won’t completely falter or bullshit (the correct technical term) their way to an answer. In James’ old-timey words:

We may then, we think, consider it proven that the most elementary single difference between the human mind and that of brutes [here, AI] lies in this deficiency on the brute’s part to associate ideas by similarity — characters, the abstraction of which depends on this association, must in the brute always remain drowned, swamped in the total phenomenon which they help constitute.

In other words, the limitation of these systems is that they are themselves patterns, relatively simple and sanded-down representational simulators of real-world interactions. They are inside a matrix of our own design.

My point is not to yap about what AI is not but to straightforwardly point to its fundamental limitations, as these have made its usefulness extremely strange.

Today, I received an email announcing that I’ve been successfully opted into Google’s Gemini AI agent, a cutting-edge tool with billions of dollars of investment and millions of hours of brilliant human capital behind it. Here’s the list of recommended uses for this monumental achievement:

  • Writing a heartfelt letter to a friend for having done you a favor
  • Organizing a trip from travel receipts
  • Doing your homework for you
  • Writing a personalized introduction to a potential employer

It would be funny if it weren’t so sad.

Do I even have to give a commentary? I don’t, but I will anyway.

I mean, come on. If your friend does you a favor, please oh please do not make an AI-generated thank-you note. Take them out to lunch or buy them a commemorative keychain or do literally anything else. Or maybe write your own damn letter. Maybe that’s just me being old-fashioned, but I thought the point of thanking someone was to let them know you care, not that you have successfully brought them a can of Campbell’s Heartfelt Word Soup from the pantry. And your homework? I wish you could see me and all the other educators seething right now. A cover letter? It’s literally the only expression of humanity you are supposed to show when applying for jobs remotely, and you decided to bust out the Campbell’s again.

But hey, it’s kind of nice that they can reach into your personal email and give you a personalized map, right? I know you know how to do this yourself but it might save you a few seconds. That’s a good thing… but wait, maybe it isn’t. Maybe, just maybe, one of the best parts of traveling is anticipating and researching the places and things you want to see, and building up your hopes and dreams about your one and only visit to the Land of Ice is something you should do for yourself, not outsource.

It’s easy to be cynical about current AI capabilities when they are so cynical about the value of human experiences. Let’s blame someone!

Who do we blame? Developers? Tech execs? Ourselves for wanting these things? AI for being so well-positioned to address these non-problems?

Yes.

As is often the case (thanks, complexity!), it takes a village to raze a village. We shall spread the blame across the bread of existence.

Developers (throwing myself under the bus) have an annoying tic where they always try to reduce everything down to function. Does it work? Ship it. Since these things work, we ship them. They also have the equally annoying quirk where they see everything as a problem to be solved. Students are struggling with their reading? Here’s a tool that will make it so you don’t need to read at all. Hell, isn’t the only requirement that you pass the class with a good grade? We’ll give you the exact tools you need to do this. What’s this about the value of learning and learning to learn and enjoying it? Not sure what that has to do with getting a good grade. Let’s try to stay on track, kid.

Execs are, well, execs. They do whatever it takes to drum up funding and keep the company alive and growing. AI is red-hot right now and there are all sorts of great ideas for how we can use it so it only makes sense to do this as much as we can. Funnily enough, I almost blame execs less than anyone else here because I have low expectations of their moral compunctions.

Blaming ourselves? Now, that’s easy. We’re all just looking to do our best, and these tools offer straightforward shortcuts to some of our end-goals. This is one of the oldest human weaknesses, however—an over-emphasis on results over process. Thanking your friends, connecting with employers, reading and analyzing literature, and planning trips are all things that require time and effort. However, even though they are all technically means to an end, the end is in no way meaningful if the means along the way are removed. Struggle and effort are what make it all worth it, and all these uses of AI promise to take that from us. I blame us for failing to recognize this, though I also recognize that allowing yourself to struggle can itself be a privilege that not everyone has. (If you do, I advise you to struggle and to enjoy it!)

Last, we can blame AI, for the reasons already outlined. What we have is best positioned to solve only the dumbest of non-problems because it turns out that the realest and deepest of problems are not nearly as amenable to automation. This is good news for the future of many people who value their careers. And conveniently, the careers that people tend not to value overmuch are the ones where AI can strip those miseries away (though this raises the question of where those people will go, lacking experience in careers where the means are meaningful—we’ll save that question for another day).

The central clickbait is that AI promises to make us worse people. Though I highlighted how it cuts out the meaningful journey, that really only addresses how it will worsen our situation.

How will AI make us worse? The most obvious way is a direct consequence of cutting out the meaningful journey—it will reify an already insidious, cynical belief that we are largely valuable because of the value we produce for everyone else. It does matter that we can benefit others, but it also matters that our personal experience of this is itself valuable. Even if I am “wasting time” enjoying hobbies or, heavens forbid, looking up maps of Iceland in anticipation of a trip there, these things are all valuable to me. And that value—my anticipation—can and will manifest itself to others around me. This soft value would never register in any obviously quantifiable way but it’s these soft values that make life worth living.

Current AI also promotes an anti-education narrative. Instead of cramming language-learning when I travel, I use convenient apps to do that for me. It’s nice to be able to function well in a foreign place, but the lack of pressure on me to think and consider and struggle forms a barrier between me and what I could be learning. And that’s in a learning-for-fun context. I happen to work as an educator at a major university, and let me tell you, it was already hard enough to get students to be present and engaged in the material—AI has deeply accelerated the dissociation process. The old meme went “Is this going to be on the test?”, to which we would always softly sigh and mutter something about study guides. Now we’re lucky if there’s any question-asking at all.

Last I checked, the majority of students are using AI on their homework and remote exams. That’s irksome but what is far more concerning is that, when confronted with this as a violation of our rules, the response is a collective “Aw shucks, I didn’t know I’d get caught—I even used an AI-checker to make sure it didn’t know” (this response actually happened several times last semester, almost verbatim). This kind of cynicism and disinterest in learning is completely supported by the AI tools we’re building. Is this making us worse people? Yes. Learning is good.

I won’t go into the weirdness of having AI write heartfelt letters but I will at least say that the fact that this is presented as the first of four main options is damning of someone or someones. And since it’s well outside the scope of this conversation, I also won’t go into the by-far-most common use of AI, which is good, old-fashioned social media algorithms—or, as I always call them, bad, new-fangled social media destroyers of humanity algorithms.

Phew, that’s kind of negative for me! Lots of “get off my lawn” talk. To cleanse myself of this, I’ll briefly say that I still think AI has great promise. In healthcare in particular, we’re going to be able to improve so many different things (detection, personalized help, devices, robots) that it’s wild to think about the coming decades. Believe the hype there. More importantly, I think you should be optimistic about AI doing things that humans either can’t do well (fine detection, processing complex data, precision, data science/scientific modeling) or shouldn’t be doing because it’s not meaningful for anyone (email wrangling, trash sorting, bomb-sniffing). But the current suite of tools very much represents the worst possible set of dehumanizing things we could or should be doing. Beware what it does to you, and don’t fall into the trap of making your life so convenient that all you end up with is a bunch of meaningless ends.


Hello, this is me. So who is me? Me is a Cognitive Scientist who happens to like writing. I study meaning in life, happiness, and so on and so forth, forever.