Every writer fears it. Every writer knows it. Blank-page-o-phobia. Some even try to give it a scientific name, like Atelodemiourgiopapyrophobia. You are facing the empty sheet of paper. Tabula rasa. You don’t even know how to start. Don’t feel alone; most writers have this experience. There are many ways to break through this writer’s block, but it’s easier said than done. And once your mind is liberated, the text flows through your brain and soul and finds its place on the paper.
With GPT-3 it is quite different. As long as you don’t begin, it cannot start. An NLP model needs a call to action, some orientation; it wants to understand what you want to get. It needs your PROMPT. And then your Submit.
Essentials of the Playground.
Here it is again, just like in the header (unoriginal me): the GPT-3 Playground. This is where you should begin exploring the creative potential of GPT-3, even before you have a concrete idea for its implementation. It is also usually the right place for artists and writers who want to generate one-off texts.
Here you will find UX simplicity, and you really don’t need more:
- Prompt/Completion window
- Preset bar
- Submit button.
Preset bar — useful if you have designed presets you want to reuse or share.
Settings — here, you can control the nature of the text:
- Engine — GPT-3 provides four engines, which vary in output quality, latency (response speed), and cost. “Ada” is the quickest one. “Davinci” is the most sophisticated engine, but you sometimes have to wait longer for the text to render.
My choice: “Davinci”
Note: with semantic search (I’ll write about this one day), you can get relevant and quick results even with Ada.
- Response Length (1–2048) — length of the completion, in tokens (roughly 1 token per word; tokenization varies from engine to engine).
My choice: 700–1000
Note: your prompt counts against the same 2048-token limit. The longer the text you put in, the better grounded the output becomes, but the shorter it has to be (everything must fit into 2048 tokens).
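The token budget is simple arithmetic. A minimal sketch, with a made-up prompt length for illustration:

```python
# The 2048-token context window is shared between prompt and completion.
CONTEXT_WINDOW = 2048     # total tokens per request

prompt_tokens = 1200      # assumed (hypothetical) length of your prompt
max_response = CONTEXT_WINDOW - prompt_tokens

print(max_response)       # tokens left for the completion
```

So with a 1200-token prompt, the Response Length setting can be at most 848.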
- Temperature (0.0–1.0) — controls randomness, between dull repetition and chaotic fiction. The higher the value, the more random the texts become, while still staying coherent thanks to the Transformer’s self-attention.
My choice: 0.9. At this value the text is not boring, but still not too repetitive.
Note: try the same prompt with various temperatures.
I warmly recommend an analysis by AlgoWriting of the temperature setting in GPT-3, “GPT-3 Temperature Setting 101”. Along with the prompt, temperature is one of the most important settings, and it is worth spending some time on it.
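To build intuition for what the slider does: in sampling-based text generation generally, temperature divides the model’s raw scores (logits) before they are turned into probabilities. A minimal sketch (this is the standard technique, not OpenAI’s exact internal code, and the logits are made up):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw scores into probabilities.
    Low temperature sharpens the distribution (near-greedy picks);
    high temperature flattens it (more randomness)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]   # hypothetical scores for three candidate tokens

cold = softmax_with_temperature(logits, 0.2)   # top token dominates: "boring"
hot = softmax_with_temperature(logits, 1.0)    # probabilities even out: "creative"
print(cold, hot)
```

At 0.2 the first token gets nearly all the probability mass; at 1.0 the other candidates get a real chance, which is exactly the boring-vs-chaotic trade-off described above.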
As you may have noticed, these are the parameters I use at the moment; I am already overwhelmed by the quality of variation you get from Length and Temperature alone. But you have even more ways to control the text (which still doesn’t exist) with the following settings:
- Top P (0.0–1.0) — controls the probability mass sampled from, and thus the diversity of the completion (nucleus sampling)
- Frequency penalty (0.0–1.0) — tracks how often tokens have already been used and penalizes repetition accordingly. The higher the value, the lower the chance of repeated patterns in the completion.
- Presence penalty (0.0–1.0) — by increasing the value, you make new topics more likely to appear in the completion.
- Best of (1–20) — generates x variants server-side and shows you the best one. Warning: it consumes more of your credits than the single displayed completion suggests, so use it wisely.
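Top P deserves a quick illustration. The idea behind nucleus sampling (the standard technique this setting is named after; the token probabilities below are invented) is to keep only the smallest set of candidate tokens whose cumulative probability reaches `top_p`, then renormalize and sample from that set:

```python
# Sketch of nucleus (Top P) filtering over a hypothetical token distribution.
def nucleus_filter(probs, top_p):
    """probs: {token: probability}. Returns the renormalized nucleus."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, cumulative = [], 0.0
    for token, p in ranked:
        nucleus.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break                     # enough probability mass collected
    total = sum(p for _, p in nucleus)
    return {token: p / total for token, p in nucleus}

probs = {"cat": 0.5, "dog": 0.3, "axolotl": 0.15, "teapot": 0.05}
print(nucleus_filter(probs, 0.8))     # only "cat" and "dog" survive
```

With `top_p = 0.8`, the long tail of unlikely tokens (“axolotl”, “teapot”) is cut off entirely, which is how Top P trims diversity without touching the temperature.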
These settings are also good candidates for saving as presets that work best for you, or for experimenting with the same prompt under various parameters.
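A preset is essentially just a bundle of these parameters. For orientation, here is how my Playground choices map onto a direct API call with the (historical) `openai` Python package; the prompt text is a placeholder, and the call itself is commented out because it needs an API key:

```python
# My Playground settings, expressed as API parameters (values from this article).
preset = {
    "engine": "davinci",
    "max_tokens": 700,         # Response Length
    "temperature": 0.9,
    "top_p": 1.0,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "best_of": 1,              # raise with care: it multiplies token usage
}

# Usage (requires an API key, so not run here):
# import openai
# completion = openai.Completion.create(prompt="Once upon a time", **preset)
print(preset)
```

Saving a preset in the Playground does the same thing: it freezes this parameter bundle so you can reapply or share it.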
The last part is essential when you generate highly structured text, like a chat. In the example on the left, you can see the settings of the default chat preset.
Stop sequences tell GPT-3 where to stop and jump to a new line.
Inject Start Text — this is the AI’s part (shown as “AI” in the preset above): GPT-3 writes until it decides to stop, jumps into the next line, and sets up your part:
You can see it as a precise definition of the characters.
In this case, GPT-3 wrote “Hello, may I ask…”, jumped to the next line, and set up your part: “Human:”. Now it’s your turn. After you write your text and click “SUBMIT” (or press Ctrl+Enter), it will continue as “AI:” plus GPT-3-written content.
Note: if you delete the Stop sequences, GPT-3 will keep writing the dialog on its own, using the “characters” defined in Inject Start/Restart Text, but unsupervised.
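What the Playground does with these three settings can be sketched in a few lines. This is my reconstruction of the mechanism, not OpenAI’s actual code, and the model reply is faked so the example runs standalone:

```python
# Reconstruction of how a chat preset assembles the prompt:
# the stop sequence cuts the model off before it writes your part,
# and the injected texts re-add the speaker labels each turn.
stop_sequences = ["Human:"]          # model stops before impersonating you
inject_start_text = "\nAI:"          # appended before the model's turn
inject_restart_text = "\nHuman: "    # appended after the model's turn

def take_turn(transcript, human_line, fake_model_reply):
    """One Human -> AI exchange. A real implementation would send `prompt`
    to GPT-3 and cut the completion at the stop sequence; here
    `fake_model_reply` stands in for the completion."""
    prompt = transcript + inject_restart_text + human_line + inject_start_text
    return prompt + fake_model_reply

transcript = "The following is a conversation with an AI assistant."
transcript = take_turn(transcript, "Hello, who are you?",
                       " I am an AI created by OpenAI.")
print(transcript)
```

Delete the stop sequence, and nothing interrupts the model: it happily writes both the “AI:” and “Human:” lines itself, which is exactly the unsupervised-dialog behavior described in the note above.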
I used this method for the unsupervised dialog “AI Bots on the Run”. Which became a short movie:
As you can see, there are various control options you can use to steer the still-unknown text in a specific direction and give it a precise character.
Using Show Probabilities, you can get insight into the generated content with all its token probabilities; you look into the Matrix, so to speak:
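Under the hood, this view is built on log probabilities (the API exposes them via its `logprobs` option). Converting them into the percentages you see is one exponential away; the example values below are invented for illustration:

```python
import math

# Hypothetical logprobs for three candidate first tokens of a completion.
token_logprobs = {"Hello": -0.1, "Hi": -2.5, "Greetings": -4.0}

# A log probability lp corresponds to the probability e**lp.
token_probs = {tok: math.exp(lp) for tok, lp in token_logprobs.items()}

for tok, p in token_probs.items():
    print(f"{tok}: {p:.1%}")
```

A logprob of -0.1 is roughly a 90% probability, while -4.0 is under 2%, which is why the Playground highlights likely tokens in green and unlikely ones in red.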
I advise you to watch this short video presentation of the GPT-3 Playground by Andrew Mayne:
But which texts can we generate now?
This is the topic of the next part… No, wait: I will talk about NOTHING in the next chapter. Or rather: what happens with the tabula rasa?
See you next time.