Bewitched by AI

Showing is Not Creating

Sean McClure
NonTrivial
10 min read · Sep 18, 2023


I also discuss this topic on a NonTrivial Podcast episode. Find it on Apple, Spotify or wherever you listen.

AI ɪs Gᴇᴛᴛɪɴɢ Iᴍᴘʀᴇssɪᴠᴇ

Most people would agree that AI is getting impressive. Since 2011–2012 we have seen deep learning win ImageNet competitions, showing that models are almost on par (in a narrow sense) with the human ability to recognize objects. Techniques like facial recognition are now standard software, and the “AI winters” of the ’80s are in our rearview mirror.

ChatGPT is the latest AI to strike wonder in us, with its ability to hold conversations and produce all manner of creative output. Today’s AI can write software, hold philosophical debates, summarize text, produce poems and craft stories. AI can now make things most people thought were solely in the domain of human cognition.

Embracing machine learning has meant taking on a fundamentally different computing paradigm, moving us away from traditional engineering and classical statistical methods. We can no longer create state-of-the-art software by explicitly instructing a machine with rules. The features we need in today’s software require outputs that arise from complexity, not some deterministic set of instructions.

There is no researcher or engineer today who can peel back the layers of deep learning and describe exactly how it works. Our inventions are no longer the simplistic machines of the industrial revolution. Sure, we still must program the overarching rules a machine will follow, but these are just the scaffolding we put in place; it is a highly iterative and convergent process that arrives at today’s best solutions.

We are bridging that divide between simplistic machines and genuine complexity; a testament to a new paradigm of science and engineering, bringing us both excitement and fear. The “alchemy” that makes technologies like deep learning possible forces us to give up a lot of control.

But regardless of one’s comfort level with this lack of control, we are heading into new territory by embracing this new paradigm of computing. We now have a technology that competes with humans in everything from producing music and making art to summarizing text, writing poetry, crafting stories and producing software. AI, for better or worse, is ushering in a new era of transformative automation, and if we’re all being honest, it’s quite impressive.

Bᴇɪɴɢ Iᴍᴘʀᴇssᴇᴅ ɪs ɴᴏᴛ Pʀᴏɢʀᴇss

Being impressed is not really an objective measure of progress. Of course we are impressed for a reason; we wouldn’t be fascinated if there wasn’t something right about it. But being impressed is not the same as making progress. Being impressed by AI does not mean we are creating anything. Think of how easy it is for people to be impressed by nonsense. Laymen are often impressed when they read some jargon-loaded article on a so-called scientific breakthrough, only to have real scientists argue that nothing new is being reported.

This shows us that being impressed has as much to do with us as it does with the thing or person we’re looking at. People make a great deal of assumptions about what they’re seeing and hearing. People tend to assume another’s authority or intelligence based on little more than fancy words or flashy displays. Our ignorance about the situation often leads to undue fascination rather than an objective measure of progress.

Imagine looking at a piece of digital art showcased on social media. It looks good. It’s impressive. But is it good art? How do we know? Do we need a professional artist to weigh in on this? Why are we impressed by the art? If it’s just about enjoying the art it doesn’t matter, but if we’re making a statement about genuine creativity it does.

Take a piece of summarized text produced by a tool like ChatGPT. You read the summary and say to yourself “well yes, that sounds like something a human would write. That’s a good summary.” But did you read the entire original text? How do you know that’s a good summary? What makes a good summary? Too many people assume something productive has happened when in fact it may be nothing new. To be clear, there is new engineering happening (this kind of automation has never before been possible) but that’s not what I’m talking about here. Has anything new been introduced into the world when a piece of text is summarized?

AI is producing a genuine type of creativity. AI can put words together that make sense, and seem to follow some kind of narrative structure. There is little reason to think humans are doing anything different. From a computer science standpoint AI is impressive. This is a real engineering feat. But in terms of genuine creativity, is AI producing something valuable?

If AI writes a story, is the story a good one? Perhaps we need a few hundred years to pass to answer this question. Imagine a novel comes out and becomes popular, but then 3 weeks later nobody talks about it. Was that a good novel? Perhaps only by standing the test of time can a story be assessed as being good or not.

What exactly are we being impressed by when we look upon outputs produced by AI? Again, I’m not talking about the engineering feat, but rather the creative output that is produced.

Being impressed is not an objective measure of progress. Something else must play the role of determining progress. Perhaps it’s the generation beyond our own who can assess the survivability of what has been made. Maybe it’s the content’s utility; how well someone can use what has been made to improve the quality of their lives. Maybe it’s the propensity for a creative output to find its way into our economy.

Being impressed is not an objective measure of progress.

AI is getting impressive in terms of what it can produce: true. We have embraced an engineering paradigm that sheds the shackles of outdated determinism and rules-based engineering. We have entered the realm of building things that converge on results in ways nobody fully understands, and this is a good thing. This is the direction engineering should be headed in. But this does not mean people are producing genuine value with AI, as a tool.

At the end of the day, a human can still write a poem, write a story, summarize legal text, make art and compose music. Sure, we do it far slower than a machine, but measuring progress in terms of mere increased efficiency is highly problematic. Defining efficiency as producing the same outputs in less time is too unsophisticated to capture reality. If someone logs 8 hours at work, what does that mean? Adding value is what matters, and anyone can fill up an 8-hour day with nonsense. Thinking of how people interact as a purely transactional affair ignores what makes communication effective in the first place. Organic, messy interactions, what humans evolved for, carry far higher information content than anything efficiency can measure.

We all know we get our best ideas in the shower, or on those long walks full of distraction. The working parts of our lives depend on those hours when we don’t work.


Back to AI; yes it’s impressive. It’s a technological feat that produces outputs, which at face value are genuine accomplishments. But being impressed is not enough to declare progress. What are we doing with the new tool? What are we building with AI that AI cannot do on its own?

Doing something faster is a “dumb” definition of efficient. If the law firm can now summarize legal documents more quickly, okay; now what? If you replace the slide rule with the digital calculator, and can now do the same calculations faster, the point is to use that extra free time to do something new.

When we look at what most people are producing with AI we are seeing the raw outputs of AI itself, not novelty or innovation. We are getting bewitched by AI because it can do what humans can do, but faster. But that’s not us creating anything. There’s a difference between showcasing what AI can do out of the box and actually building something novel. Whatever we do from a technological standpoint, if it’s truly innovative, we should be able to use it as a tool to produce something that has yet to exist.

Nᴏᴛ Eɴᴏᴜɢʜ Pᴇᴏᴘʟᴇ Aʀᴇ Rᴇᴀʟʟʏ Cʀᴇᴀᴛɪɴɢ ᴡɪᴛʜ AI

I argue that not enough people are actually creating with AI. What most people show is a display of AI outputs. In the early stages this makes sense. ChatGPT came out just last year, and what better way to learn the tool and get people excited than to show what it can do. But we get it. We get that it can make simple software, compose music, summarize text and produce stories and poems. But so can people. There is innovation in the AI, but people need to take the next step.

The purpose of a tool is to create new things. If you are given a piece of technology that automates something only humans could do a year before, the onus is on you to build something new with it. That’s the point. Not to just do the same thing humans were doing but quicker.

Until the day comes when the robots get their own rights, AI is a tool. We’re supposed to use tools to create new things. Yes, AI is impressive as an engineering feat, but being impressed is not a sign of progress.

Making an image that nobody has seen before does not equate to creating something new if the art has been automated. Some will be quick to say “but wait a second, that’s what artists do all the time. If somebody comes up with a new painting, they show it off, and it’s something nobody has seen before.” But you have to realize that the artist before AI, or any artist today not using AI, is not using a tool that is automating the creation of art. Artists who don’t use AI are taking it upon themselves to produce something new that their tools don’t produce by themselves. The paintbrush, palette and canvas do not create a painting. The original synthesis arrived at by the human, by the creator, is done solely by them. That’s what makes their creation new.

So, if you’re an artist not using AI, your work is new. But if you’re embracing the AI tool, then your work is not new unless you bring about something AI cannot do on its own. An artist using AI needs to create more than just a piece of digital art. They must take the next step; one that sits directly above the automation. Otherwise, it’s not really creating.

The paintbrush, palette and canvas do not create a painting.

Sure, tools like ChatGPT are not plug-and-play. It’s not like you just press a button and these tools spit out good work. There are entire courses and books on so-called “prompt engineering.” To some extent there is an “art” to prompt engineering. But at the end of the day, prompt engineering is still just producing what AI outputs on its own. The synthesis is still arrived at by AI.

If you run a law firm, and your lawyers can now summarize legal documents automatically, I don’t think you should just do business as usual, but quicker. For one, the time spent summarizing legal documents might be where the majority of value comes from. Only dumb definitions of efficiency would suggest summarizing faster will itself lead to better outcomes. If you didn’t create the summary, then you’re going to miss critical learnings. The effort one puts into creating a summary amounts to compressing information, and through that struggle important knowledge is imprinted on whoever did the summarizing; the struggle to summarize text might just be what makes one an effective lawyer.

Now this doesn’t mean law firms shouldn’t summarize text. If you want to use AI to summarize text, by all means, but don’t just summarize text. Don’t just produce raw outputs faster. In addition to potentially missing what matters, merely summarizing doesn’t create anything new. You have to take the next step. You have to decide what your next level of compression is. What’s the next struggle?

AI ᴀs ᴀ Sᴛᴀʀᴛɪɴɢ Pᴏɪɴᴛ

We need to see AI as a starting point for something we create, which itself needs to exist at a new level of aggregation. This is why creativity is a synthesis. Creativity is not just collecting things into some new combination; the combination takes on new properties that the pieces do not have on their own.

This is how we can use AI to build things that AI cannot do on its own. This is how creativity has always worked. AI does not change that. We don’t have to know what the new synthesis is, we just have to start building and see what precipitates. The recent example of making the video game Flappy Bird with AI is 1) cool and 2) nothing new in terms of creativity. This is an output created by AI. A genuine feat of engineering, yes, but not something creatively new, because humans can make Flappy Bird. Once Flappy Bird can be created without humans, Flappy Bird becomes the starting point, not the creation. To create things with AI we must view it as a tool that provides a new starting point, not something to merely showcase raw outputs.

It’s not enough to just say we can do something. What do you do with it? So much of the hype in AI could be done away with if we moved from being impressed to building genuine utility.

If we really want to transform society using technologies like AI, it cannot be just about efficiency. We’ve got to blend what we have into something new, and then perhaps we can use AI to do more than just impress.



Independent Scholar; Author of Discovered, Not Designed; Ph.D. Computational Chem; Builder of things; I study and write about science, philosophy, complexity.