Feed the machine with articles to get a provoking question.

Making of Fabricating Alternatives

Tom Power
Imagination of Things
5 min read · May 14, 2020


The creative code community has the potential to redirect and reuse the open-source AI code that companies like Google and Amazon originally produced for their commercial products (think Amazon Alexa or Gmail’s auto-complete). At Imagination of Things, we did just that with our project, Fabricating Alternatives: an AI-powered card deck for inspiration and provocation in ideation sessions, games, and moments of creative block. By orchestrating different NLP (Natural Language Processing) methods such as semantic similarity, part-of-speech tagging and text generation, we have created a tool that can ask provoking questions to aid a creative.

Natural Language Processing, or NLP, is at its core the field of research into programming computers to understand human language. The field goes all the way back to Alan Turing, whose Turing Test hinges on a computer responding to questions in human language. In the end, your computer only understands binary numbers and mathematics; NLP frameworks bridge that gap by mapping the world of mathematics onto the world of human language. After many years of work, accelerated by the demands of advertisers, voice assistants and data scientists, NLP has grown into a thriving area of research. New algorithms and papers are released almost every month, pushed on by the ever-nearing dream of being able to talk plainly to your computer.

At the beginning of our design process for Fabricating Alternatives we were inspired by the release of GPT2 by OpenAI. GPT2 is an unsupervised model trained on over 40 GB of internet text. From this data it has constructed a viewpoint on how we communicate with each other in human language, and it can be used to generate coherent sentences from a prompt. So coherent, in fact, that OpenAI was initially afraid to release the full model for fear of allowing people to spread fake news and false information with it. Struck by the creative potential of the model, I saw its ability to generate coherent text as an ideal partner in the creative process.
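To give a flavour of what that looks like in practice, here is a minimal sketch of prompting the vanilla GPT2 model through the Hugging Face transformers library. It is illustrative rather than our production code, and the prompt is just an example:

```python
# Minimal sketch: prompt the vanilla GPT2 model for a few completions.
# Illustrative only; the prompt and sampling settings are examples.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "What if cities were designed around their rivers?"
samples = generator(prompt, max_length=60, num_return_sequences=3)

for sample in samples:
    print(sample["generated_text"])
```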

From this initial inspiration I began to sketch out an architecture for the project, choosing to develop the backend in Python given its rich array of NLP and machine learning packages. The process starts with the user’s intention: what is the topic or problem they are trying to tackle? The user writes a short paragraph describing it, and a Python package called sentence-transformers transforms this description into a vector embedding. In this case a BERT-based model is used to turn the text into a vector of real numbers. BERT, a transformer model related to GPT2, has already learned from its training how to represent language as vectors.
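As a rough sketch of that first step, assuming a pretrained BERT-based sentence-transformers model (the model name and topic text below are illustrative, not necessarily what runs in production):

```python
# Sketch: embed the user's topic description with sentence-transformers.
# The model name and example text are assumptions for illustration.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("bert-base-nli-mean-tokens")

topic_description = (
    "We want to rethink urban mobility so that streets serve people, "
    "not just cars."
)

# encode() returns one fixed-length vector of real numbers per input string
topic_embedding = model.encode([topic_description])[0]
print(topic_embedding.shape)  # e.g. (768,)
```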

Once this vector is calculated, the user is asked to supply up to three URLs to articles that they feel elucidate the topic they are exploring. Beautiful Soup 4 is used to scrape all of the text from these articles, and NLTK, the Natural Language Toolkit, which has many utility functions for parsing and organizing raw text, is used to shape that text into a form useful for our application. The raw text is first split into a list of sentences, and a vector embedding is calculated for each sentence. The list is then ordered by semantic similarity to the description submitted earlier: we measure the distance between each sentence’s vector and the topic description’s vector and sort by that distance. The shorter the distance, the more semantically related the sentence is to the topic description. The top 20 sentences are taken from the list and subjected to part-of-speech tagging, which categorizes the words in each sentence into types like noun, verb or adjective.
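Put together, that step looks roughly like the sketch below. It is kept self-contained for clarity (so it repeats the embedding model setup), and the URLs, model name and helper names are illustrative:

```python
# Sketch of the article step: scrape text with Beautiful Soup, split it into
# sentences with NLTK, rank sentences by semantic distance to the topic
# embedding, and POS-tag the 20 closest ones. Names and URLs are examples.
import nltk
import requests
from bs4 import BeautifulSoup
from scipy.spatial.distance import cosine
from sentence_transformers import SentenceTransformer

nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

model = SentenceTransformer("bert-base-nli-mean-tokens")
topic_embedding = model.encode(["Streets that serve people, not just cars."])[0]

def scrape_text(url):
    html = requests.get(url, timeout=10).text
    return BeautifulSoup(html, "html.parser").get_text(separator=" ")

urls = ["https://example.com/article-1", "https://example.com/article-2"]
sentences = []
for url in urls:
    sentences.extend(nltk.sent_tokenize(scrape_text(url)))

# Smaller cosine distance = more semantically related to the topic description
sentence_embeddings = model.encode(sentences)
ranked = sorted(
    zip(sentences, sentence_embeddings),
    key=lambda pair: cosine(topic_embedding, pair[1]),
)
top_sentences = [sentence for sentence, _ in ranked[:20]]

# Part-of-speech tagging sorts each word into noun, verb, adjective, etc.
tagged = [nltk.pos_tag(nltk.word_tokenize(s)) for s in top_sentences]
```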

At this point the user’s initial input has curated an organized bag of words related to the topic sentence. To augment the bag of words further, GPT2 is used to generate text based on the topic sentence. This GPT2 instance is not the vanilla model, however. We have fine-tuned it using the wonderful work of Max Woolf, who has written a great Colab notebook on how to finetune GPT2. In our case we assembled a corpus of fiction and non-fiction that has inspired our work, and GPT2 learns from this text to produce writing in a similar vein. This adds a speculative flavour to the output of the model and allows us to infuse that flavour into the bag of words that has been assembled.
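Max Woolf’s notebook is built around his gpt-2-simple package. A fine-tuning run in that style looks roughly like the sketch below; the corpus file, run name, step count and sampling parameters are illustrative, not our production settings:

```python
# Sketch: fine-tune GPT2 on a custom corpus with gpt-2-simple and sample
# from it. File names and hyperparameters are illustrative.
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")  # fetch the small base model

sess = gpt2.start_tf_sess()
gpt2.finetune(
    sess,
    dataset="speculative_corpus.txt",  # hand-picked fiction and non-fiction
    model_name="124M",
    steps=1000,
    run_name="fabricating_alternatives",
)

# Generate speculative text seeded by the user's topic sentence
samples = gpt2.generate(
    sess,
    run_name="fabricating_alternatives",
    prefix="What if streets belonged to people instead of cars?",
    length=80,
    temperature=0.8,
    nsamples=5,
    return_as_list=True,
)
```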

From this bag of words, questions are generated for the app using an ad-lib structure. For example, we have the ad-lib “What if a [adjective] [noun] was a [past-tense verb] [noun]?”, where we take words at random from our bag and fill in the grammatical categories. 1,000 samples are generated for each ad-lib. These questions are then filtered against the user’s topic sentence once more, keeping the top 50 that correlate most closely with the topic at hand.
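The mechanics are simple enough to sketch. The word buckets, template and topic text below are placeholders, and the re-ranking reuses the same embedding-and-distance trick as before:

```python
# Sketch of the ad-lib step: fill a template with random words from the bag,
# generate many candidates, then keep the ones closest to the topic.
# The word buckets, template and topic text are illustrative.
import random
from scipy.spatial.distance import cosine
from sentence_transformers import SentenceTransformer

bag = {
    "adjective": ["floating", "borrowed", "quiet", "feral"],
    "noun": ["street", "library", "river", "neighbour"],
    "past_verb": ["forgotten", "rewilded", "shared", "rehearsed"],
}

def adlib_question(bag):
    return "What if a {} {} was a {} {}?".format(
        random.choice(bag["adjective"]),
        random.choice(bag["noun"]),
        random.choice(bag["past_verb"]),
        random.choice(bag["noun"]),
    )

candidates = [adlib_question(bag) for _ in range(1000)]

# Re-rank the candidates against the topic description and keep the top 50
model = SentenceTransformer("bert-base-nli-mean-tokens")
topic_embedding = model.encode(["Streets that serve people, not just cars."])[0]
candidate_embeddings = model.encode(candidates)
ranked = sorted(
    zip(candidates, candidate_embeddings),
    key=lambda pair: cosine(topic_embedding, pair[1]),
)
top_questions = [question for question, _ in ranked[:50]]
```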

These sentences, along with the bag of words, are then delivered to the app and displayed as question cards. The bag of words lets the user craft the questions to their liking: tapping on a word swaps it for another from the bag. Now it’s up to the user to curate and collect the cards that appeal to them.

By saving their cards to the insights deck, the user activates the final part of the system. For every two cards that are saved, GPT2 is used to answer the combined questions. This gives a machine insight: due to the nature of GPT2, the answers often have a contextual relationship to the input problems, which offers an intriguing machine viewpoint on the questions that have been highlighted.
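Continuing with the assumptions of the fine-tuning sketch above, the insight step could look something like this: two saved questions are concatenated into one prompt and the fine-tuned model completes it:

```python
# Sketch of the insight step: combine two saved question cards into a single
# prompt and let the fine-tuned GPT2 model answer it. The run name, cards
# and sampling settings follow the illustrative fine-tuning sketch above.
import gpt_2_simple as gpt2

sess = gpt2.start_tf_sess()
gpt2.load_gpt2(sess, run_name="fabricating_alternatives")

card_1 = "What if a quiet street was a shared library?"
card_2 = "What if a feral river was a borrowed neighbour?"

insight = gpt2.generate(
    sess,
    run_name="fabricating_alternatives",
    prefix=card_1 + " " + card_2,
    length=80,
    temperature=0.8,
    nsamples=1,
    return_as_list=True,
)[0]
print(insight)
```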

Curious to try it out? Play with our beta version here and let me know if you are curious to learn more.
