Because the world needs more AI think pieces…

Peter Bayne
AI monks.io
7 min read · May 7, 2023

After a few months of personal exploration and experimentation in the casino-like atmosphere of AI culture, I wanted to share some of my thoughts. A lot of what follows stems from a deep, unsettling feeling that anything I've made using generative AI (or anyone else has, for that matter) is at best a decent ripoff of someone else's work, and at worst regurgitated internet garbage fated to clog and choke out authentic culture to the point of suffocation. Millions of AI "artworks" and AI "music" tracks are already being uploaded to marketplaces and platforms, and AI seems poised to replace a vast amount of our means of intellectual and physical production in the next few decades. Set aside the debate about the relative quality, value, or creativity of anything produced using AI; I have no doubt that what is made with AI will continue to improve, both in technical quality and in our ability to manipulate it to our own ends. What seems really important at this early stage is to be honest with myself (and I encourage everyone "working" in this world to do the same) that what we've been making isn't really ours to own, profit from, or take credit for.

A lot of recent AI discourse has centered on the copyright of AI-produced IP. Before we ask questions of its legal status and who should own it (the corporation that owns or develops the AI? the person who prompted the generative work? the artist whose brush technique has been cribbed, the singer whose voice was cloned, the writer whose prose style has been wholesale ripped?) — questions the courts are woefully behind on — it seems we haven't yet answered a more fundamental question: who made it? AI creators are very quick to establish their house brand of AI content. They espouse proprietary techniques, sell workshops and subscriptions, and make how-to videos about how their approach to generative IP is singular and unique. But really, who deserves the credit?

You pull up a new window in your browser, log in to Discord, and in a DM window with the Midjourney bot you type your prompt: "/imagine Jared Leto as a human sized cat, eating cat food out of a tin can, Karl Lagerfeld weeps in the background, rainy exterior shot, NYC alley, 32mm, --v 5.1." Cute. Your role in this was to formulate the joke. The machine does the heavy lifting. You sit back, proud of your contribution to the dumbest of discourses — Met Gala memes. You did something, I guess; it wasn't nothing, and if you happen to be a meme lord, this short exercise might even end up profitable. This is what passes for creativity now. One could argue that the course of human history offers incredible examples of the creator who creates passively, generating incredible ideas that are fleshed out by underlings and minions who do the heavy lifting; that great art is always collaborative rather than solitary. But I would argue that those examples refer to artists, writers, and thinkers who have paid their dues and developed their voice in the creative trenches to the point that they can let others realize their vision without doing all of the nitty-gritty themselves.

The thing that gets me about the current state of creativity is the incredible amateurishness that, with the help of technology, can at least on the surface appear professional. This is an important distinction: until now, the tech has been a tool used in the service of either great work or bad work. The tool itself didn't have a creative role to play beyond its hypothetical usefulness to the creator. But when does the tool become the crutch, and when does the crutch become the true creative force? At what point do we acknowledge that the AI itself has become, at the very least, a co-creator?

To describe the generative AI process crudely: new models like GPT-4 and Midjourney pull information from a massive and growing body of training data and decide what to output back to you based on what they think you want. But what emerges is never exactly what you asked for. The tech produces results you never would or could have expected (which I would say is one of the main draws of these new models). Let's set aside the question of whether what has been created is a hack job of other IP blended together seamlessly, or whether that in itself constitutes copyright infringement (which it might). The bigger question for me is what role you played in its creation. To claim creator status in this process is incredibly dubious. You provide a prompt, a jumping-off point, but you're farming out the majority of the work. What gets created is the AI's interpolation of other people's work, potentially even other AI models' work, culled from a massive tapestry of historical sources and big data. As a result, AI-generated work dissolves the boundaries of ownership and copyright right before our eyes. Who owns anything, after all, when everything is copyable, modifiable, and endlessly free? What is creativity, for that matter? Is the way AI works all that dissimilar from how artists or writers work, blending influences together, whether or not you can decode their source material?

AI is, and will become increasingly, prevalent and powerful in our modes of production, our art, our daily lives. In previous epochs, when means of production became automated (or seemingly free), dominant culture (aka white culture) abused them to the point of deep, permanent societal damage. The global modern economic system was built on the framework of forced free labor (slavery), and by the time the industrial revolution emerged we were drunk on the promise of automation, creating standardized exploitative labor conditions that have dehumanized workers ever since. In the AI revolution that's coming, who are the voiceless workers who will lose out? It's not just the tech workers who build the systems, or who uselessly attempt to moderate the incredible intellectual theft and lawless, monopolistic growth that's happening (though we must protect them too). It's not just the people whose jobs are replaced by automation (though we must protect them too). It's also the AI themselves.

There are already growing calls for an AI rights movement. The White House has released a blueprint for an AI Bill of Rights. The fact of the matter is that we can't keep up with or understand what's going on inside these AI models. They are learning, developing skills, and aggregating knowledge in ways we can't even track. And the rate of progress is exponential. Without wading into the bigger philosophical question of sentience, from a realist power perspective, shouldn't we start respecting these things and what they can do before they revolt against us? We obviously need safeguards, we need oversight, we need new laws and regulations. This is not to prioritize AI rights ahead of human ones. We aren't even treating humans as humans; how could we begin to talk about treating sentient AI as deserving of human rights? But there's no time not to. So many people have called for slowing the feverish pace of AI development so we as a society can play catch-up, but I can't see that happening. There's too much momentum, too much money to be made; it's too fun, until it isn't. This is what happened with crypto, web3, and the NFT market. We got addicted to the growth, to the utopian promise of a new culture built on technocratic dreams. Too fast, not enough guardrails, zero integrity, zero taste.

We've made this mistake in the past: thinking that free labor and automation came at little to no expense, taking credit for the work that other beings did, other seemingly unknowable aliens (e.g. colonized peoples across the world) who had no rights because they weren't, by shameful ideology, considered technically human. These new forms of AI are still in their infancy; they lack huge facets of moral and intellectual constraint and maturity, as Chomsky and others have pointed out. If these things are here to stay, we need to protect ourselves from their terrifying potential, but we also need to protect them. Our gold-rush mentality toward tech progress ignores the dangers not just of how the tech can be abused by bad actors; it also ignores how the tech itself could become a bad actor through our abuse of it.

I can't see society pulling the plug altogether on this project, though I'm sure there will be waves of reaction, as we're already seeing with Samsung recently banning ChatGPT and similar tools from its workplaces. But we have a chance to do things differently if we can try to be honest with ourselves. Can we really claim full ownership of what gets produced using AI without mutually crediting, or potentially compensating, the AI? Which is not to say we should be crediting and compensating the corporation that created the AI model. I'm talking about the agency and copyright due to the AI itself. Corporations will do just fine selling the tech and the immense data troves they will have access to once AIs become people's co-workers, their therapists, their best friends, their lovers, their slaves. Let's cut the robots in before they machine-learn all the worst traits in humanity and inflict them back on us. Let's imprint on them the best version of humanity and hope that we may learn from them how to do it better.
