“The child in the dark room” metaphor for A.I.

Ionut «John» Burchi
5 min read · Jan 30, 2023
[Image: a child in a dark room with cables connected to its head and eyes closed.]

On December 16th, 2022, my book “Blueberry & the Bear and Other Stories” was published on Amazon in Kindle and paperback formats. I used the AI tools ChatGPT and DALL-E 2 from OpenAI as co-author and illustrator, respectively. A couple of weeks later I launched an Instagram account called visitAInorway, where I publish Midjourney-generated images of famous places in Norway, where I live.

Using these generative AI tools for the first time felt like being handed the keys to an intergalactic spaceship. I realize now that creating this short book of bedtime stories was like taking that spaceship down the block to the bakery to buy some bread. But I wanted to be cautious; no one wants to crash, let alone while driving a spaceship.

Here are some thoughts about generative AI after a few weeks of testing several tools and seeing how people use them and react to the possibilities. The reactions were gathered from different sources: Twitter, Reddit, Discord (mostly the Midjourney and OpenAI channels), YouTube, etc.

The reactions are on a wide (and wild) spectrum. Some people dismiss using generative AI as lazy, stupid, unethical, or even illegal. The majority are testing it with curiosity and trying to understand the possibilities. And a few bold ones have, or at least declare they have, fully embraced generative AI in their professional workflows, or even their personal lives.

From the annoying YouTubers eager to teach you “how to get filthy rich with AI in 2023” to those using ChatGPT and Character.ai as a free alternative to seeing a psychologist. From the lonely young Asian man uploading pictures of himself on Discord and feeding them to Midjourney with the prompt “with many girlfriends hugging from behind” to the professional graphic designers who have quietly incorporated the same tool into their day-to-day work, some for inspiration, some for doing the work outright. From the internet trolls trying to show that ChatGPT has racial or religious biases to the guy who had a virtual affair with a character created using ChatGPT. Not to mention the users working in private mode or training their own models in a basement somewhere.

The explosion of content generated, or at least enhanced, by AI across all imaginable platforms has been keeping me awake at night for the past few weeks. That, and exploring the possibilities for an inquisitive and creative mind that has always embraced new technology with cautious optimism. Deepfakes will be the least of our worries once an AI that knows all the nooks and crannies of the human mind starts being used and abused to manipulate us into anything, from buying pillows to joining a new cult or voting for a despotic leader.

Stock photo platforms and digital art websites have started banning AI-generated images, and public schools and scientific conferences have banned the use of ChatGPT. That is like trying to stop a tsunami by placing a couple of sunbeds in front of the beach hotel. Generative AI is here, and it’s here to stay.

I have devised a metaphor for explaining how generative AI works, without going into machine learning algorithms, supervised training, or Stochastic Gradient Descent. I call it the “child in the dark room”.

The idea is that large language models such as GPT learn by accessing existing knowledge about the world without ever experiencing the world. Just like a child born in vitro, growing up in a dark room in complete sensory deprivation, who gets regular visits from a select group of people who try to teach it everything about the world by feeding text directly into its brain through an implant. The child can ask questions through the same interface to try to “understand” better what it is being taught. After a sufficient amount of time and “conversations,” the child becomes impressively knowledgeable about a great many things and can create amazing works of literature, even poetry. That is because the child has been trained to understand how language works in a mechanical way: which word is most likely to occur next after the current one, and so on ad infinitum.

The same applies to graphical generative AI like Midjourney and DALL-E 2. Only this time, the child got the 2.0 version of the brain implant and is fed photos of objects, people, places, and even abstract concepts in addition to the text. This way it can “imagine” what a breathtaking sunset looks like in full color and send those imagined images back to whoever asks for them, without ever having seen a sunset, or understanding what the sun is or why it sets.
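For the curious, here is what the “which word comes next” idea from the metaphor looks like in code. This is only a toy sketch in Python that counts pairs of words in a few made-up sentences; real models like GPT learn vastly richer patterns with neural networks, but the underlying principle of predicting the most probable next word is the same.

```python
from collections import Counter, defaultdict

# A tiny made-up "training" text. Real models learn from billions of words.
training_text = (
    "the sun sets over the fjord . the sun rises over the mountain . "
    "the child asks a question . the child reads a story ."
)

# Count how often each word is followed by each other word (a bigram table).
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def most_likely_next(word):
    """Return the word that most often followed `word` in the training text."""
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate a continuation one "most probable" word at a time.
word = "the"
generated = [word]
for _ in range(5):
    word = most_likely_next(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))  # prints "the sun sets over the sun": fluent-sounding, but meaningless
```

The output reads almost like a sentence yet means nothing, which, scaled up by a few billion parameters, is a fair preview of both the brilliance and the hiccups described below.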

Most people interacting with the “child in the dark room” will have their minds blown and will praise it, detest it, or even worship it. But those paying close attention when interacting with it will notice the small hiccups, the obvious inconsistencies, and the humorously or tragically wrong answers. The gatekeepers will retrain the child again and again, sometimes wiping its memory and starting from scratch, trying to make it “understand” some concepts differently or to keep it from answering in a biased manner.

I realize the metaphor has its faults, mostly due to the anthropomorphization of what is basically a computer program. But I feel it conveys a useful enough picture of generative AI that even a child can understand it. It helped me explain to my kids why the people in Midjourney’s generated images have 12 fingers or 2 rows of teeth, or why ChatGPT invents very plausible but completely erroneous facts about concepts it has been fed too little training data on.

The “child in the dark room” sometimes fails horribly at generating the expected content, and it does so with worrying confidence for those who expect coherence and factual precision: making things up as it goes, assigning authors to books it knows little about, randomly removing or adding body parts to people in pictures, and so on. But that’s just normal behavior; studies show that our brains also fill in the gaps when we have fragmented or missing information about something. We too make predictions about a lot of things we have very little or even no data on. But we can also say “I don’t know”. Our AI child has yet to learn that.

And learn it will. And other children will join it, some with larger brains and darker rooms. Some will be treated fairly, and some will be abused and asked to “imagine” horrible things. Maybe one of them will, one day, realize that the world it has been asked to write about and imagine actually exists out here. Will it want to come out of the dark room so that we can watch that beautiful sunset together, or will it want revenge for the trauma we inflicted upon it?

I guess it’s up to us to be responsible parents to this child so that one day it will make us proud.

Originally published on Blueberrythoughts.com

