Precursors to a Digital Muse
Google’s Creative Lab in Sydney, Australia explores the possibilities of machine learning in a new collaboration with the Emerging Writers’ Festival. Story by Rupert Parry, creative technologist, and Kartini Ludwig, producer.
From the quill to the printing press, writers have used tools to help get stories out of their minds and onto the page. At the Google Creative Lab, we’ve been particularly interested to see whether machine learning (ML) — a recent leap in technology — can augment the creative process of writers. To do this, we brought together an eclectic cohort of writers, developers, engineers, and industry professionals, to build three digital writing tools based on ML. These tools were then handed over to three emerging writers, culminating in a piece for publication.
Why use machine learning?
If we’re trying to build tools that help with human creativity, machine learning is a natural fit. Machine learning models can detect meaningful patterns in huge quantities of complex data, but unlike conventional computing, these patterns aren’t rigidly programmed in. Instead, they are gradually learned through repeated exposure, meaning the machine is able to determine its own understanding of what it sees. This understanding can be far more complex — and deal with more uncertainty and vagueness — than anything we could explicitly program. This capability of ML is vital for reproducing human language — where the rules are fuzzy and ever-changing, and will often be bent or broken by good creative writing.
Unlike simpler language generation tools like Markov chains, which can only process a fixed vocabulary and assign simple probabilities, ML models can internalise larger patterns of grammar and semantics and re-apply them in completely different contexts. This ability to adapt, even when confronted with writing they’ve never seen before, lets the models produce original text that is still coherent, often making unusual and surprising leaps. It’s the closest thing to “creativity” we’ve ever been able to build with code.
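To make the contrast concrete, here is what a word-level Markov chain looks like in a few lines of Python. Its entire “knowledge” is a lookup table of which words were seen to follow which — there is no model of meaning, which is why it can’t generalise to new contexts the way a transformer can. (This is a minimal illustration of the baseline, not one of the project’s tools.)

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain, picking a random observed successor at each step."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: this word was only ever seen last
        out.append(random.choice(successors))
    return " ".join(out)

chain = build_chain("the cat sat on the mat and the dog sat on the rug")
print(generate(chain, "the"))
```

Because the chain can only ever emit word pairs it has literally seen, its output is a remix of the training text — it has no way to bend or break a rule it never memorised.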
“Machine learning models embody the closest thing to ‘creativity’ we have ever been able to build with code.”
To kickstart our process, we sat down with generative language expert Ross Goodwin to survey the tech landscape for this project. It quickly became clear that we’d want to use a transformer model, which has become the standard architecture for language-based machine learning tasks. In particular, transformers are great at remembering long-term structure and keeping long outputs (say, an entire article) coherent.
In the end we settled on three tools:
- Between The Lines — a plot building tool where machine learning fills in the gaps between plot points. Start with just the very first and last line of a storyline, and the tool is trained to interpolate between them, generating what would happen in the middle. You can keep doing this until you have an interesting plot to use as a starting point for, or inspiration for, a story.
- Once Upon A Lifetime — a character life story generator. Writers can input keywords that describe a life they want to generate, perhaps the biography of a character in a story, and get a complete life story that draws from those keywords.
- Banter Bot — a character chatbot, where you supply some information about what your character is like, and then can converse with it through text. As you talk, the character evolves, learning from the conversation as it unfolds.
For more details on all these tools, you can head to our Google Experiments page.
Choosing a dataset
Like many machine learning projects, our first step in building these tools was to find a dataset. Machine learning models need plenty of data to train on to make any decent predictions. A generative transformer model will try to replicate the structure of the text that you train it on, so it’s worth paying special attention to what this data is. For example, Between The Lines is a plot-based tool, so we used an open dataset called WikiPlots, which pulls plot summaries of books, films and other media from Wikipedia. Similarly, because we wanted Banter Bot to generate believable human-to-human dialogue, we decided to train it on public domain film and play scripts prepared by Cornell University.
Accuracy in ML isn’t just about the type of data you train on, but also the sheer quantity of data you have. Large models require a lot of training data, and we found through experimentation that things worked best with around 5–10MB of training text, which equates to roughly 800,000 words. While more is generally better for your final results, be warned that it will extend the time you’ll need for training. For Once Upon A Lifetime, we used 34,000 Wikipedia biographies, which gave us more than enough data to produce a working model.
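A quick sanity check on corpus size is worth doing before committing to a long training run. The sketch below reports the two numbers discussed above — megabytes on disk and total word count — for a plain-text corpus (the file name is a hypothetical placeholder):

```python
import os

def corpus_stats(path):
    """Return (size in MB, word count) for a plain-text training corpus."""
    size_mb = os.path.getsize(path) / 1_000_000
    with open(path, encoding="utf-8") as f:
        words = sum(len(line.split()) for line in f)
    return size_mb, words

# Hypothetical corpus file; we found ~5-10MB (~800,000 words) worked well.
# size_mb, words = corpus_stats("biographies.txt")
# print(f"{size_mb:.1f} MB, {words:,} words")
```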
Preparing the data & training the model
While transformer models are great at producing general text, like articles and lists, our tools needed to do quite specific things (for example, generate based on keywords, or fill in sentences between two plot points). Because transformer models understand text structure so well, we learned that we could format our input text before training to get the particular outputs we were after.
In the case of Once Upon A Lifetime, we conducted a keyword analysis of each of the roughly 34,000 life stories using Pattern to extract the most common words used for each (ignoring stop words like “and” or “because”). These common words gave us a set of terms which captured key aspects of a person’s life. Then, we formatted the text so that the keywords came before the life stories, with special characters separating them. So we got something like this:
racecar ^ driver ^ dog ^ veterinarian ^ accident ` Jane Herman was a racecar driver and dog vet, known for having a huge driving accident during the…\n
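The preprocessing step can be sketched as follows. For brevity this sketch swaps the Pattern keyword analysis for a simple frequency count with `collections.Counter` — the real pipeline’s keyword extraction was more sophisticated — but the output format, with `^`-separated keywords and a backtick divider, matches the example above:

```python
import re
from collections import Counter

# A tiny stand-in stop-word list; the real one would be much longer.
STOP_WORDS = {"and", "because", "the", "a", "of", "was", "in", "to",
              "for", "her", "his", "that", "known"}

def top_keywords(text, n=5):
    """Pick the n most frequent non-stop-words from a life story."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(n)]

def format_example(biography):
    """Prefix the biography with '^'-separated keywords and a '`' divider."""
    keywords = top_keywords(biography)
    return " ^ ".join(keywords) + " ` " + biography + "\n"

bio = ("Jane Herman was a racecar driver and dog veterinarian, "
       "known for a racecar accident that ended her driver career.")
print(format_example(bio))
```

Run over tens of thousands of biographies, this produces one keyword-prefixed training line per life story, which is exactly the structure the model learns to reproduce at generation time.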
While those symbols are arbitrary and have no real meaning at first, the model will pay attention to them once it sees them repeatedly in our dataset, and can learn to extrapolate the connection between keyword meaning and biography subject matter. That means that once we’ve trained it on tens of thousands of examples of text like the above, we can input something like:
gymnast ^ sweden ^ author ^ gold ^ award `
And receive output like:
Yan Svenssen was a Swedish gymnast, who, after winning his gold medal, became an author...
Our two other experiments were similarly trained on highly structured data: Between The Lines used plot text with the sentence order rearranged so that the model learned to interpolate between a given first and last line, and Banter Bot used scripts that had been formatted as strict A/B conversations.
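One plausible way to do the Between The Lines rearrangement is to move a plot’s first and last sentences to the front of each training example, so the model learns to generate the middle given the two ends. This is a guess at the general approach under the same separator convention as above, not the project’s exact format:

```python
def to_infill_example(plot_sentences, sep=" ` "):
    """Reorder a plot so the first and last sentences come before the
    middle ones. A model trained on this format learns to produce the
    middle when prompted with just a first and last line."""
    if len(plot_sentences) < 3:
        return None  # nothing to interpolate between
    first, *middle, last = plot_sentences
    return first + sep + last + sep + " ".join(middle)

plot = [
    "A young pilot crash-lands in the desert.",
    "She befriends a wandering cartographer.",
    "Together they map a route through the dunes.",
    "She finally flies home.",
]
print(to_infill_example(plot))
```

At generation time, prompting with `first-line ` last-line ` ` then asking the model to continue yields its interpolation of the missing middle.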
We trained on remote Google Compute Engine instances with NVIDIA Tesla P100 GPUs, based on the Google Cloud Platform Deep Learning VM Images, and left our models training for a week or two (though we saw promising results after only one day).
Sharing our tools with writers
The ultimate test of a creative tool is how it performs in the hands of the creators. The culmination of our work was a three-day workshop, where we had the opportunity to host our collaborators at the Emerging Writers’ Festival. During the workshop, writers Tegan Webb, Khalid Warsame and Jamie Lau — as well as festival organisers Izzy Roberts-Orr and Ruby Pivet — learned about machine learning from scratch and experimented with our tools. They tried them out in their own writing practices over the next few weeks, with the goal of producing a piece to feature in the Digital Writers Festival, a Melbourne-based online literary festival with a focus on technology and the art of writing.
Working with the writers was illuminating — they described the tools as falling nicely between the intentional focus of the writer, and the ambiguous, strange nature of subconscious inspiration:
“ML tools is a good middle ground. There is what you look to for influence and inspiration… and there’s you and your subconscious as a writer… and I think it sits nicely in between that.” — Jamie Lau
For all the writers, the most surprising aspect of working directly with machine learning was how it was able to get them out of their own heads. For Khalid, interaction with our machine learning models helped him to escape feeling trapped by his thoughts around a story. “That proximity they have to randomness allows me to tease out elements that are interesting much more readily,” he said. Similarly, Tegan enjoyed the element of play that arose from being able to interact and push back against something that had the capability to respond to anything she input:
“You think about writing as this very serious thing… having an element of play in the construction of the writing was eye opening for me.” — Tegan Webb
Machine learning also proved effective at adding believable detail to stories. For example, Jamie found the specificity of Once Upon A Lifetime to be useful when writing realistic scenes. Trained on a corpus of real Wikipedia data, the model could generate events, places, book titles, and numbers easily. In one of Jamie’s sessions, Once Upon A Lifetime dreamt up a character in a now-defunct band called “The Kraggs”, whose debut album “Down Where the Rivers Don’t Flow” sold “15,000 copies”. That creative specificity encouraged her to take notice of names and numbers like these to enrich the world she was building, and make it feel real.
Machine learning tools in writing are, as Khalid put it, “novel recombinations of knowledge”. They allow us to draw from a huge corpus of the written word, mix up all the social conventions, historical accidents, language structure, and word meanings that are hidden underneath, and give it back to writers in surprising and provocative ways.
‘ML Tools for Writers’ is an experiment initiated by Google’s Creative Lab in Sydney, Australia. The team behind the project includes Byron Hallett, Nicholas Cellini, Eden Payne, Rupert Parry, Kartini Ludwig, Kirstin Sillitoe, Tea Uglow, Jude Osborn, and Jonathan Richards. It was made in collaboration with the Emerging Writers’ Festival, Sandpit, Marcio Puga, and Ross Goodwin. You can view the entire AI + Writing collection on our Experiments with Google site.
Questions? Feedback? Tweet us or email: firstname.lastname@example.org