The Dawning of The Creative Age

AI-Human Competition-Collaboration in the 2020s.

Shourov Bhattacharya
Polynize
Dec 28, 2022


Two weeks ago, in our first pilot for our new Polynize Live format, one of our players (unknown to the others) was an AI bot powered by ChatGPT. Within our game arena, the bot came up with “ideas” to solve our mission challenge and, through our game mechanics, competed with the creative ideas put forward by the other players. At the end of the game, the bot came 3rd on the leaderboard, out-competing 80% of our players*.

An AI bot ideating — and competing! — in Polynize.

(*) This was not a strictly controlled test, and certain aspects of the bot’s responses were modified by our human operator. We hope to run a fully automated AI gameplay experiment shortly.

Usually, this is the point at which anxiety begins, the narrative being that although humans are good at creativity and thinking, AI will ultimately out-think and out-create us all. That narrative currently dominates the discourse.

We’ve been working in the field of technology and human creativity for decades, and our thesis is completely different. Creativity thrives in an arena where there is both competition and collaboration, and this game showed us that an AI can participate in that process with humans. The “success” of that process relies not just on the quality of the AI but on the state-of-mind of the humans in the arena.

This competition-collaboration between AI and humans can only be understood by considering the entire human-AI system, not the AI in isolation (unfortunately, almost all technologists do the latter, drawing system boundaries that exclude human minds). In fact, the “intelligence” in AI is only meaningfully understood in the context of the particular human audience interacting with it.

But to explain this further, I have to go back to my early years to tell a story.

In 1990, AI made my friend cry

As a teenager, I learned to code in BASIC. I picked up a library copy of a book called Computer Games, and in it I found a BASIC program listing based on ELIZA, an iconic early attempt at building a chatbot (from the mid-1960s!).

The program used simplistic “pattern-matching” to generate outputs based on conversational inputs. I typed in the program, modified its matching rules and re-christened my version Kronos. The script and rules were designed in such a way as to “simulate” interactions with a psychiatrist. Below is a real screenshot of the original program source code and sample output.

Code and output from KRONOS.BAS (1990)

Messing around, I found that it could generate realistic dialogues — as long as you interacted with it in the frame of a psychiatrist visit. It could actually help you think through your problems. So naturally, I started inviting my school friends over to consult with the program for a small fee. :)
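
To make the mechanism concrete, here is a minimal sketch, in modern Python rather than the original BASIC, of the kind of keyword-and-template pattern matching that ELIZA-style programs use. The rules and responses below are illustrative stand-ins, not the original KRONOS script:

    import random
    import re

    # Illustrative ELIZA-style rules (not the original KRONOS.BAS script):
    # each rule is a keyword pattern plus response templates that reuse the
    # captured phrase.
    RULES = [
        (re.compile(r"\bi feel (.+)", re.I),
         ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (re.compile(r"\bmy (mother|father|family)\b", re.I),
         ["Tell me more about your {0}.", "How do you get on with your {0}?"]),
        (re.compile(r"\bi am (.+)", re.I),
         ["How long have you been {0}?", "Why do you say you are {0}?"]),
    ]
    FALLBACKS = ["Please go on.", "I see. Tell me more.", "How does that make you feel?"]

    def respond(user_input):
        # Return the first matching template, filled with the captured phrase,
        # or a canned fallback if nothing matches.
        for pattern, templates in RULES:
            match = pattern.search(user_input)
            if match:
                return random.choice(templates).format(*match.groups())
        return random.choice(FALLBACKS)

    if __name__ == "__main__":
        print("KRONOS: What is on your mind?")
        while True:
            line = input("> ").strip()
            if line.lower() in ("bye", "quit"):
                print("KRONOS: Goodbye.")
                break
            print("KRONOS:", respond(line))

All of the “intelligence” sits in a handful of keyword rules and canned templates; the frame the human brings to the conversation does the rest of the work.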

Most of my friends found it mildly interesting. It was very easy to reach the limits of the program’s capability and conversations didn’t typically last long. But for one or two friends, the program was deeply engaging. They came back again and again, and one particular friend became quite attached to interacting with this computer program.

One day, I was shocked to walk in and find that she was shedding tears.

Simple AIs / bots can elicit genuine responses.

Her tears were real. She was dealing with difficulties in her life and the chat experience provided her with real therapeutic value. Here was a program that was incredibly simple — just blind rule-following based on tiny sets of data — that was eliciting a genuine human response.

How was this possible? You’d probably say that she was in a vulnerable position and was “projecting” her needs onto the program. And you’d be right. The point is that such an act of human “interpretation” of AI output is the rule, not the exception. It has taken me decades, but the time is now right to act on that insight.

AI => AI + Humans

Notice how the standard yardstick of AI, the Turing Test, is framed in terms of human responses to technology, not in terms of any absolute attributes of the technology itself. Whether or not the Turing Test can be “passed” in any instance depends critically on the human observer and his/her state of mind. This generalizes to any real-world AI application.

It is not meaningful to assess the performance of an AI without a precise specification of its human interactors and the cognitive frame in which it is being assessed.

In my teenage experiment, the human interactor was primed for a “therapeutic” interaction, had an intrinsic need for such an interaction, and was in a situational frame (“psychiatrist visit”) that massively collapsed the set of permissible and plausible responses. In that situation, even a simple “AI” was able to “spark” real responses from the human interactor. Real in the sense that the experience had real-world impact: she left the experience feeling better about her problems in life.

Let’s generalize this. We call the combination of human “priming” (expectations) and “framing” (context) the arena. In any given arena, we look at the community of humans and AIs and how they interact together. What we find is that the AIs “spark” human minds, and it is precisely that sparking that we experience as a feeling of “intelligent interaction”.

AIs interact with Humans within an Arena

Which raises the question: what exactly is a spark? Well, this is where we must depart from the precision and rigour of “computer science”. We have included human minds within the system, and we are therefore dealing with the fuzziness of humans. We have published a theoretical framework for the spark process, but it is not a purely intellectual process; it is experiential, an example of what is known as “qualia”. One way of expressing that experience is to say that:

A spark is a creative input that spontaneously inspires an expression of creativity in response.

(One heuristic to measure sparking is to look for what we call “creative attention”: sparks tend to attract attention and further creative interaction. This is exactly the basis of the Polynize game mechanics.)
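
As a rough illustration of that heuristic, here is a toy sketch (in Python) of ranking ideas in an arena by the creative attention they attract. The Idea fields and the weighting are hypothetical assumptions for illustration only; they are not the actual Polynize scoring:

    from dataclasses import dataclass

    @dataclass
    class Idea:
        author: str
        text: str
        votes: int = 0      # passive attention (e.g. likes)
        responses: int = 0  # further creative interaction the idea sparked

    def creative_attention(idea, response_weight=3.0):
        # Toy score: responses (further creativity) weigh more than passive votes.
        # The weight is an illustrative assumption, not a published Polynize value.
        return idea.votes + response_weight * idea.responses

    ideas = [
        Idea("human_1", "Reframe the mission as a street game", votes=5, responses=1),
        Idea("ai_bot", "Crowdsource micro-solutions from passers-by", votes=3, responses=2),
    ]
    for rank, idea in enumerate(sorted(ideas, key=creative_attention, reverse=True), start=1):
        print(rank, idea.author, creative_attention(idea))

The design choice this illustrates: interactions that provoke further creativity count for more than passive approval.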

Our position is that this is at least as much empirical work as it is theoretical. In the Polynize project we have created an arena that replicates and quantifies the spark process under different framings and primings, producing diverse, valuable outcomes.

What our pilot game showed us is that AI can participate in and augment that social creativity process, in competition-collaboration with humans. That’s incredibly exciting to us, because it opens the possibility of doing two things that are all-important in the coming decade:

  1. develop human minds to collaborate with AI
  2. train human minds to out-compete AI

Much of our work in 2023 will be devoted to embarking on these two civilizational-scale tasks.

Let the Robot Wars Begin!

The advent of ChatGPT has opened a popular gateway to AI in the same way that the web browser opened an accessible window to the Internet. My co-founder Marrs calls this the coming of the Robot Wars, but he has his tongue (partly) in cheek. The number of human interactors with AI is about to explode exponentially, and we see this as an enormous opportunity.

My early experience with the ELIZA-like program suggests that, given the right arena, AIs don’t need all that much “intelligence” to spark humans. However, really dumb AIs like my BASIC program can only spark in very specific arenas at specific times. These new AIs can deal with a much larger, unpredictable set of arenas and spark human creativity in many different fields.

The key to seeing the longer term here is to see the entire space of AI-Human Competition-Collaboration rather than focussing on one-dimensional cross-sections (e.g. the story that “AIs will supplant humans”). This means concentrating just as much on the interactions between the agents (humans/AIs) as on the agents themselves, and focussing on the process of creativity just as much as on its outcomes.

Creativity as a dynamic process in the space of AI-Human Competition-Collaboration

Our intuition: creativity will now evolve into an iterative cycle that moves through the space of AI-Human Competition-Collaboration. We can already see the early manifestations of this appearing quite quickly in various fields*. The change will be historic and will require adjustment and retraining at a faster pace than ever before. Humans will need to “level up” their creative work to out-compete AIs, and AIs will adapt to find symbiotic collaborations with the largest, most valuable human communities.

Polynize is the perfect arena in which to experiment and scale this dynamic with communities all around the world, in particular student communities. We (and others) will help to train both the humans and the AIs to find the “sweet spots” in the space of competition-collaboration. This will lead to extraordinary outcomes and mark the dawning of the Creative Age in the 2020s and 2030s.

Next article: What exactly are the traits of human creativity that out-compete AI — and how to develop them as early-career students/professionals?

(*) But note that this has already been happening for a while as tools get “smarter” — consider how a user interacts with photo- or video-editing software.
