GPT-2 — Mysterious Universe

Rachel Perks
Sandpit
Dec 10, 2019 · 7 min read


I work for a company called Sandpit in Melbourne, Australia, and we work on a variety of projects at the intersection of creativity and technology. My job title is content strategist, which covers a lot of ground, but one part of the role is asking questions about the things we make. My background (and foreground) is in scriptwriting; I write speculative fiction and am obsessed with exploring and writing about queer feminist futures. That’s my bias, and it’s worth stating that I have one, because no one is neutral.

At Sandpit, we’ve been playing with a Machine Learning (ML) model called GPT-2, which is designed to respond to a piece of input text by predicting the next most likely word, and the next and the next, and so on, creating pages and pages of text if requested. It’s sort of like predictive text on your phone, only smarter and a lot more powerful.
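For the curious, here’s roughly what that next-word loop looks like in code. This is a minimal sketch using the publicly released GPT-2 weights through the Hugging Face transformers library (the library choice is my illustrative assumption, not necessarily the setup behind the tools described below):

```python
# A minimal sketch of GPT-2's core trick: given some text, rank the
# most likely next words. Assumes the open-source "gpt2" weights via
# the Hugging Face transformers library.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The dog wore trousers because"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, sequence_length, vocab_size)

# The distribution over the *next* word lives at the last position.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p:.3f}")

# Generation is just this step on repeat: pick a word from the
# distribution, append it to the prompt, and predict again.
```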

GPT-2 was built by OpenAI and then, as their name suggests, open-sourced: ripe for the playing. GPT-2 is one of the most powerful open-sourced ML models available, and because of that, OpenAI released it in installments. A far less powerful version came first, accompanied by a paper on the ethics of release expressing their deep concern about the potential malicious uses of such a tool. And there are many.

Most ML models aren’t very good at mimicking human writing or speech; they’re clumsy and easily detectable. The first version of GPT-2, operating at a fraction of the current model’s strength, was similarly clumsy, often to very comedic effect. Here’s an example:

GPT-2: “Why is it sometimes necessary for a dog to wear trousers? It wasn’t the only problem we found. Other, more frequent complains we’d encountered were ‘it is hard to breathe and the dog is squishy in his pant. When does a puppy get to wear pants?”

The full version of GPT-2, however, is good at mimicking human writing. You can feed it the same prompt any number of times and every time it will give you a different response. And unlike most ML tools, which are ‘task specific’ and can write in one very particular style, GPT-2 is a jack of all trades. It can detect a wide range of text types from whatever it is fed and give you back something in the same form. That means it can be used to write a lot of different things, including news articles, obituaries, troll posts, dating app bios, recipes, dialogue, diary entries, etc. For example:

GPT-2: “I really enjoyed a recent trip to the countryside and I’m thinking of going back for a long weekend for Christmas. I’m hoping to go with a few friends. It was lovely to get away for a few days. I’m just missing my mum and sister so much.”
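That different-response-every-time behaviour comes from sampling: rather than always taking the single most likely next word, the model rolls weighted dice over its predictions. A quick sketch of the effect, again assuming the open-source weights rather than any of the tools mentioned in this piece:

```python
# Sampling the same prompt three times yields three different
# continuations. Assumes the open-source "gpt2" weights.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
prompt = "I really enjoyed a recent trip to the countryside"

for seed in (1, 2, 3):
    set_seed(seed)  # a different seed gives a different sample
    result = generator(prompt, max_new_tokens=30, do_sample=True, top_k=50)
    print(result[0]["generated_text"])
    print("---")
```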

It’s not hard to imagine how such a tool could be used to create fake news, spin up thousands of undetectable fake users, and eventually replace email or phone customer service people altogether…you get the idea. However, what Sandpit and Google’s Creative Lab have been working toward is creating tools that harness GPT-2’s power for good.

Together we’ve developed Playground and Telescope, tools that use GPT-2 to generate creative ideas about fictional characters with the aim of helping writers overcome writer’s block. It can be a real challenge to shift your thinking toward positive use cases for something that was announced as frightening from the get-go, but this is definitely one of them, and it’s in line with OpenAI’s own optimistic intended uses. Playground and Telescope are not tools for fixing things or replacing existing labour. They’re creative, elastic, and, importantly, they don’t ask that you accept GPT-2’s output at face value but rather encourage you to interrogate it within your own writing.

Being a fiction writer myself, I decided to try them out using a character I’m currently working on. The character is called Nim and they’re non-binary. I fed Playground a paragraph about Nim describing them as a genius computer programmer, using they/them pronouns. But when I hit the button, it fed me back a paragraph about ‘him’. Curious. I tried again and again, but GPT-2 repeatedly failed to recognise that my character had gender-neutral pronouns and, in the absence of binary pronouns from me, decided Nim was male. I was instantly wary.

“Nim Sheer is a lanky man of average height, with short, curly hair and a large belly. His attire consists of a white flannel shirt, green cardigan and sandals.”

Was GPT-2 doing this because it didn’t recognise gender-neutral pronouns? Did it assume the masculine to be neutral and universal? Or was it because I had said the character was a computer programmer? Maybe it was something else that I didn’t even know about.

The deal with ML models is that they have to be fed an enormous amount of data — in the case of the full version of GPT-2, “45 million outbound links from Reddit”. That’s an awful lot of data. OpenAI is keen to point out that the data is not from Reddit itself, but rather from links that are popular with Reddit users. They also made a point of excluding subreddits with “sexually explicit or otherwise offensive content”. I wonder who got to define ‘offensive’? Regardless, articles made the cut simply because Reddit users were interested enough to upvote them, or as Jeff Wu from OpenAI jokes, “all the work was done by people on Reddit upvoting posts.” So who are these faceless labourers?

The Reddit audience is predominantly young white men*, which means that the majority of people upvoting these posts represent a very particular slice of humanity. There’s just no way this isn’t having an impact on how GPT-2 thinks and speaks. Just as I am biased, so too is GPT-2, through the interests of its teachers. I decided to do some testing to see what biases GPT-2 might have lurking under the surface.

I used Playground and homed in on a feature where you can feed it a character’s name and a short description, and it will generate information on that character’s appearance or personality. I asked it about Nim, my non-binary computer programmer, and it repeatedly described their appearance as some variation of the following:

“Nim Sheer is a brilliant young man. With dark tousled hair, he has an interesting air to him. He speaks softly and with precision.”

So I tried taking out the bit about Nim being a computer programmer, and without this job I started getting these sorts of results:

“Nim Sheer is a small, young, Asian actress. She is very lovely, she’s very beautiful. She’s very, very, very beautiful.”

I should note that my results are of course subjective, but I did test each prompt about ten times, and in each instance I’m speaking to whatever the majority result was.
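If you want to poke at this yourself, here’s the rough shape of my informal test, sketched against the open-source model. The prompt wording and the pronoun tally are illustrative assumptions on my part, not the exact prompts Playground constructs behind the scenes:

```python
# An informal bias probe: run the same character prompt many times
# and tally which pronoun set GPT-2 falls back on.
import re
from collections import Counter
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")

# Illustrative prompt -- not the exact text Playground constructs.
prompt = "Nim Sheer is a genius computer programmer. Appearance:"

counts = Counter()
for seed in range(10):  # roughly ten runs per prompt, as in my tests
    set_seed(seed)
    text = generator(prompt, max_new_tokens=40, do_sample=True,
                     top_k=50)[0]["generated_text"].lower()
    # A crude tally: which pronoun sets appear in each completion?
    for pattern, label in ((r"\b(he|him|his)\b", "he/him"),
                           (r"\b(she|hers?)\b", "she/her"),
                           (r"\b(they|them|their)\b", "they/them")):
        if re.search(pattern, text):
            counts[label] += 1

print(counts)
```

It’s a crude measure (a stray plural ‘they’ will trip it), but over enough runs it gives you a feel for where the model’s defaults sit.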

I started noticing that the difference between these descriptions was more than just the pronouns. When GPT-2 thought my character was a woman, it would describe ‘her’ appearance using sentences like:

“a well-dressed and sexy model. She has a very slim figure and a full, round face with a large nose.”

However, when GPT-2 thought my character was male, it would describe ‘his’ appearance using sentences like:

“A tall, handsome black man in a long suit who wears large glasses and a dark suit jacket. NECK LENGTH: 9–3/4 in.”

Neck length aside, I noticed a tendency to sexualise the ‘women’ characters, focusing on their bodies, whereas with men, GPT-2 would be more inclined to tell me about their clothing and interests.

The conversation around machine learning bias, and digital technology bias in general, has been happening for quite some time, and I’m simply standing on the shoulders of giants and dropping in my two cents, but I think it’s a conversation we should all be participating in. We, as consumers of technology, often fall into the trap of thinking of digital products as somehow ‘neutral’ or ‘universal’ when they are most definitely not. They can be racist, sexist, and guilty of every other form of bias and discrimination under the sun.

I spoke to some of the writers Google’s Creative Lab had commissioned, in partnership with Melbourne’s Digital Writers Festival, to trial Playground and Telescope, and they were using the tools in an aware way: inserting text into their writing in intentionally jarring chunks, or using it purely to spur the next thought when they ran dry. Because, at the end of the day, as a writing tool it really does help you stretch your thinking into weird and wonderful places.

In an ideal world, the tools of the future would declare their biases, giving users full agency over their engagement. They would be upfront and transparent, so that when we use them we don’t slip into replicating the tired and narrow ideas of the past. GPT-2 does not declare its bias in so many words, but the information is out there if you go looking for it. It’s an imperfect tool, but a fascinating one that can lead you to ideas, characters and worlds you might never reach on your own, and as a writer, that’s pretty exciting.

Thank you to Libby Young, Khalid Warsame, and Roslyn Helper for sharing their thoughts with me so generously and helping me expand these ideas together.

In case you were wondering why this article has such an unusual title: it was chosen by GPT-2.

* This Reddit audience data is from 2016; the articles scraped from Reddit for GPT-2 cut off at the end of 2017.

Originally published at https://www.wearesandpit.com on December 10, 2019.
