The Art to Start: Tabula Rasa

Designing Prompts for GPT-3 (Series)

Writing “before scratch”.

As we have seen, GPT-3 can write from scratch — and in our series "The Art to Start," you will learn how to write that "scratch." Yet it also works without any prompt at all. You can click "Submit" and be surprised by the results. Without a prompt, GPT-3 chooses its content entirely at random. And delivers.

Back in the 1920s, Dadaists and Surrealists (most prominently André Breton) explored their creativity with the method of écriture automatique: "automatic writing," putting down whatever came without reflecting on (or censoring) the results — whatever their subconscious served their hand to write or draw. The term had already been used in the 19th century by the psychologist Pierre Janet, whose patients wrote while in a state of light sleep.

In the 2020s, AI can write without supervision in a relatively coherent way (depending on settings such as Temperature). In this case, the topic selection is entirely random — primarily because you haven't given the AI any hints for contextual orientation.

Let’s try.

If you want to forgo active participation in creating a text, click "Submit" without any prompt. Here are some examples of possible outcomes. Remarkably, none of these texts has been rewritten — every one is a single take.

Mind the settings we use:
Response Length: 64
Temperature: 0.7
These are the only orientation the machine has, since we haven't provided any hints or content.
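The Temperature setting controls how sharply the model samples from its next-token probability distribution. Here is a minimal sketch of the idea (assuming the standard temperature-scaled softmax; this is toy code, not OpenAI's implementation):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax.
    Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-token logits for three candidate tokens
logits = [2.0, 1.0, 0.1]

sharp = softmax_with_temperature(logits, 0.7)  # the setting we use here
flat = softmax_with_temperature(logits, 1.5)   # a more adventurous setting

# At T = 0.7 the top token takes more probability mass than at T = 1.5
print(sharp[0], flat[0])
```

At 0.7 the model still surprises, but stays closer to its most likely continuations than at higher temperatures.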

So let’s click SUBMIT, without any prompt.

Disclaimer: I will choose an appropriate illustration for every GPT-3 completion from my private photo collection on Google Photos. But since we are still within the field of AI experiments, I will ask Google Photos for a keyword that could summarize the text written by GPT-3 — like "Sushi."


(Fig. 1) …or sashimi. I added the images in this article for better coherence.

The first thing you will notice: almost every zero-prompt text begins with the token

<|endoftext|>

which separates texts (there is also a <|startoftext|> token) and makes a clean cut within the self-attention-driven transformer — i.e., text coherence ends here. The new text has no relation to the old one.
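If you post-process raw completions yourself, a simple (hypothetical) way to honor this separator is to split the output on it, treating each piece as an independent text:

```python
# Hypothetical raw model output containing the separator token
raw = "A story about sushi.<|endoftext|>An unrelated text about parliaments."

# Split on the separator; each piece is an independent text
texts = [t.strip() for t in raw.split("<|endoftext|>") if t.strip()]
print(texts)  # two unrelated texts
```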

We know this token — and this moody behavior — from GPT-2, which sometimes, in the middle of a story, would suddenly cut the narrative with the tag "<|endoftext|>" and begin something completely different. Topic change. No interest anymore.

"And now for something completely different." (by another Python — Monty)

Letting a machine write "before scratch" gives you a small aha experience, like Wikipedia's Random Article feature. In the first example (Fig. 1), you see something about Japanese food.

GPT-3 probably uses randomness, a kind of white noise, to begin its unbiased storytelling — because every human interference with the AI introduces bias.

Lok Sabha

So let's change the Response Length from 64 to 1,087 tokens.

And GPT-3 writes a longer article about the Lok Sabha (the lower house of the Indian Parliament) and its constituencies in various regions. The text looks authentic (but I couldn't find it via Google, so it seems to be uniquely written by GPT-3). Ironically, the only odd part is this injection:

“constituencies is the plural of constituency”

Why GPT-3 writes this metafictional commentary is unclear. You often puzzle over GPT-3's motivation for writing odd stuff.

Sidenote: in this screenshot, you can also see an announcement by OpenAI about their new pricing system — it will indeed be developer-friendly and accessible (stay tuned).

Astronomically (boring?)

As the next experiment, I lowered the Temperature to 0. In this case, the completion usually becomes rather dull and repetitive. But GPT-3 wouldn't be GPT-3 if it hadn't delivered crazy stuff:

Temperature = 0 / Top P = 1

You can see how coherence is preserved here. GPT-3 writes a rule for itself:

  1. The Narrator watches a movie.
  2. The Narrator dislikes it initially.
  3. The Narrator likes it on second viewing (very probably; I love such movies).
  4. The Narrator declares it the best movie in X years.
  5. Go to 4 and repeat with progression.

So if we try to summarize the quality of the movie, it should be (according to the Narrator) the best movie of the last…

1 year
5 years
10 years
20 years
50 years
100 years
1,000 years (thousand)
1,000,000 years (million)
1,000,000,000 years (billion)
=> somewhere here, the Universe emerged
…years (trillion), (quadrillion), (quintillion), (sextillion), (septillion), (octillion), (nonillion), (decillion), (undecillion), (duodecillion), (tredecillion), (quattuordecillion), (quindecillion), (sexdecillion), (septendecillion), (octodecillion), (novemdecillion), (vigintillion)
10^100 years (googol)
Googolplex (10^(10^100)) years

At googolplex I give up. Otherwise, I will make Medium explode. According to Wikipedia:

It requires 10^94 such books to print all the zeros of a googolplex (that is, printing a googol zeros). If each book had a mass of 100 grams, all of them would have a total mass of 10^93 kilograms. In comparison, Earth’s mass is 5.972 x 10^24 kilograms, the mass of the Milky Way Galaxy is estimated at 2.5 x 10^42 kilograms, and the mass of matter in the observable universe is estimated at 1.5 x 10^53 kg.

To put this in perspective, the mass of all such books required to write out a googolplex would be vastly greater than the masses of the Milky Way and the Andromeda galaxies combined (by a factor of roughly 2.0 x 10^50), and greater than the mass of the observable universe by a factor of roughly 7 x 10^39.
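We can sanity-check the quoted arithmetic ourselves. A few lines of Python, using the mass estimates from the quote, reproduce the "factor of roughly 7 × 10^39" claim:

```python
# Sanity-check the arithmetic from the Wikipedia quote above.
books = 10**94           # books needed to print all the zeros of a googolplex
book_mass_kg = 0.1       # 100 grams per book
total_mass = books * book_mass_kg  # = 10**93 kg, as the quote says

universe = 1.5e53        # kg, estimated mass of matter in the observable universe

print(total_mass / universe)  # roughly 7e39, matching the quote
```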

And don't forget that in the text, after googolplex, there follows Googolplexplexplexplexplexplexplexplex, then ever longer chains of "plex" — until the completion finally stops somewhere, since all the tokens are used up.

You can see how coherence meets repetitiveness here (Temperature: 0): the phrase "best movie in X years" undergoes progressive growth. The value becomes exaggeratedly huge, but the concept stays the same. But let's experiment with the settings in another part — because today we write "before scratch."
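Temperature = 0 effectively turns sampling into greedy decoding: at every step, the model picks its single most likely next token. With a toy stand-in "model" (entirely hypothetical, fixed scores), you can see why such self-reinforcing loops appear:

```python
# Toy illustration: temperature 0 means greedy (argmax) decoding.
# A fake "model" that always ranks the same continuation highest.
def toy_next_token_scores(context):
    # Hypothetical fixed scores, regardless of context
    return {"best": 0.6, "worst": 0.3, "okay": 0.1}

def decode_greedy(context, steps):
    out = []
    for _ in range(steps):
        scores = toy_next_token_scores(context + out)
        # As temperature -> 0, probability mass collapses onto the argmax
        out.append(max(scores, key=scores.get))
    return out

print(decode_greedy([], 4))  # ['best', 'best', 'best', 'best'] — a loop
```

A real model's scores do depend on context, of course, but whenever the most likely continuation keeps being a variation of the same pattern, greedy decoding will ride that pattern indefinitely.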

British Poet David Heatley (who doesn’t exist)

GPT-3 users have their own goals. Some want to develop a customer-service chatbot; some will build a business-model idea generator. My focus is different and probably does not fit the primary uses of this NLP model: I am looking for Machine Dreams. For the generation of non-existing realities. For weird but authentic worlds.

So my favorite settings are a Temperature between 0.9 and 1.0 with Top P = 1. In this case, GPT-3 creates something you probably won't expect.

Here is the zero-prompt text with Temperature = 1 and Top P = 1:

Temperature = 1 / Top P = 1
David Heatley (Artbreeder)

GPT-3 invents a poet, David Heatley. Sure, there is a great cartoonist with the same name. But in our case, it's somebody completely different — with his own œuvre, bio, and even some facts about his creative work. The book "True These Days" doesn't exist. His translation "The Flyby: Xenographesis" is also out of this world. The anthology "Museum of Corporate Fictions" is not traceable via Google.

His approach of oscillating between content and typography, based on medieval manuscripts, is highly interesting.

As you see, even with a zero prompt, you can get something unexpected. But of course, this is still before the point where GPT-3 becomes truly useful. In the next parts, let's explore what you can create and how to get the texts you need.

See you later!

Part 3




