Designing With Unpredictable Technology

Will Turnage (he/him)
May 27

Text chat, voice interfaces, and omniboxes have enabled a generation of products powered by language AI. This technology can be mind-blowing when it works, and it can be equally frustrating when it goes awry. Product teams are used to designing within constraints and restrictions, but it’s a different story handling a product experience built on a technology known for being inconsistent. This raises the question:

Is your team prepared to design with an unpredictable technology?

The transformer architecture behind modern natural language processing (NLP) was introduced in 2017, and since then the language AI space has reinvented itself every year. First, BERT reshaped the field in 2018, then OpenAI's GPT-2 and GPT-3 dominated headlines in 2019 and 2020. Most recently, Google upped the stakes by announcing its next-generation conversation technology, LaMDA.

Folks building products with any of these systems are typically focused on one of four problem areas (see the sketch after this list):

  • Text Classification
  • Information Extraction
  • Text Summarization
  • Text Generation
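
To make those four areas concrete, here's a minimal sketch using the open-source Hugging Face transformers library. It's one option among many, and the default models it pulls down are purely illustrative, not what any particular team ships:

```python
# Illustrating the four focus areas with Hugging Face `transformers` pipelines.
# Each pipeline downloads a default model; swap in your own for real products.
from transformers import pipeline

# Text classification: e.g. sentiment or routing on a customer message
classifier = pipeline("sentiment-analysis")
print(classifier("The checkout keeps failing on my phone."))

# Information extraction: pull entities (people, places, orgs) out of text
extractor = pipeline("ner", aggregation_strategy="simple")
print(extractor("Send the order to Maria in Brooklyn by Friday."))

# Text summarization: condense a long support ticket into a sentence or two
summarizer = pipeline("summarization")
ticket = ("Customer reports that the app crashes every time they try to apply "
          "a discount code at checkout, on both wifi and cellular, since the "
          "latest update. They have reinstalled twice with no change.")
print(summarizer(ticket, max_length=40))

# Text generation: continue a prompt, with all the unpredictability that implies
generator = pipeline("text-generation")
print(generator("Thanks for reaching out. To reset your password,", max_length=40))
```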

You can make a language-based demo fairly quickly, but soon you’ll start experiencing some unpredictable outcomes. Sometimes, language AI produces unintentionally hilarious or horribly offensive results. When your results vary that wildly, it’s easy to imagine a scenario where an errant reply causes you to lose customers. What seemed at first like an innovation leap starts to feel like a risk to your overall customer experience.

For a few years now, the team at I&CO and I have worked with language-based AI across industries ranging from fashion to home cooking to enterprise support, and while we are still just scratching the surface, we've gained a few valuable insights:

1. Feedback and reinforcement must be built into every user interaction

Any system needs to know how well or how poorly it is performing, and that reinforcement is crucial for improving language models. For many of our chat-oriented products, we've structured the experience around a conversational feedback loop called G.R.A.S.: Greetings, Repairs, Acknowledgements, and Signposts. This cycle not only creates a pleasant experience for users, it also gathers the data we need to improve our customer intent classifications.
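Here's a rough sketch of how a repair turn can become labeled training data. Everything in it (the classify_intent placeholder, the repair phrases, the feedback log) is hypothetical and greatly simplified; the point is the shape of the loop, not the specifics:

```python
# Sketch: acknowledge what we understood, and treat a user "repair" as a
# signal that the previous intent classification was wrong.
import json
import time

REPAIR_PHRASES = ("that's not what i meant", "no, i said", "wrong", "not that")
FEEDBACK_LOG = "intent_feedback.jsonl"

def classify_intent(text: str) -> tuple[str, float]:
    """Placeholder for whatever model or API actually classifies the message."""
    return "order_status", 0.62

def handle_turn(user_text: str, previous: dict | None) -> dict:
    intent, confidence = classify_intent(user_text)

    # Repair: the user is correcting us, so flag the previous turn for relabeling.
    if previous and any(p in user_text.lower() for p in REPAIR_PHRASES):
        with open(FEEDBACK_LOG, "a") as f:
            f.write(json.dumps({
                "ts": time.time(),
                "text": previous["text"],
                "predicted_intent": previous["intent"],
                "label": "incorrect",  # queued for review and retraining
            }) + "\n")

    # Acknowledgement + signpost: reflect back what we understood and what happens
    # next, which gives the user a natural opening to repair us if we got it wrong.
    reply = f"Got it, you're asking about {intent.replace('_', ' ')}. Let me check on that."
    return {"text": user_text, "intent": intent, "confidence": confidence, "reply": reply}
```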

2. Your language AI technology stack will change frequently

Organizationally, many companies treat new technology as a project that is built, launched, and left to run on its own. Language-based AI is changing so rapidly that you should expect to keep replacing your core technology and infrastructure. If you don't, you take on the risk and cost of maintaining a home-grown system that underperforms newer options in the marketplace.
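One hedge against that churn is keeping product code decoupled from any particular model or vendor. The sketch below shows the shape of that separation; the class and method names are ours and purely illustrative:

```python
# Hide the language model behind a narrow interface so that swapping a
# home-grown system for a newer hosted one is a contained change.
from typing import Protocol

class IntentClassifier(Protocol):
    def classify(self, text: str) -> str: ...

class HomeGrownClassifier:
    """Wraps the model your team trained and hosts today."""
    def classify(self, text: str) -> str:
        return "order_status"  # stand-in for a real prediction

class HostedModelClassifier:
    """Wraps the third-party API you might migrate to next year."""
    def __init__(self, api_key: str):
        self.api_key = api_key
    def classify(self, text: str) -> str:
        return "order_status"  # stand-in for an API call

def route_message(text: str, classifier: IntentClassifier) -> str:
    # Product code depends only on the interface, not on any vendor.
    return classifier.classify(text)
```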

3. Text generation can empower creativity, but it’s your responsibility to protect your users

Newer NLP systems like GPT-3 can generate text, and that can unlock all sorts of creativity for your users. Startups like Hidden Door are using GPT-3 to help kids write stories and build out their worlds and narratives. Tech companies like Keys are using GPT-3 to generate empathetic replies for dating apps. These companies have identified markets where unpredictable text is actually what users want.

However, large language models like GPT-3 are not safe, and many researchers consider them deeply flawed. GPT-3 is trained on huge volumes of text from books, Reddit, and Twitter, text that is full of biased, hateful, and obscene language drawn from a narrow subset of the world's population. As a result, responses often make broad generalizations about genders, races, and religions. Language is so fluid and context-dependent that it's nearly impossible to filter out everything offensive. If you're developing a product with text generation capabilities, the AI will undoubtedly generate something inappropriate for your customers at some point. It's crucial that you create an experience where users feel safe to communicate, and safe to report and/or block any harmful content they encounter.
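There's no complete technical fix, but a rough sketch of that protective layer might look like the following. The pattern list and the reporting function are hypothetical placeholders; in practice you'd combine filtering, a trained moderation model, and human review:

```python
# Sketch: screen generated text before it reaches the user, and make
# reporting a first-class action rather than an afterthought.
import re

# Hypothetical denylist; a real system would pair this with a moderation model.
BLOCKED_PATTERNS = [re.compile(p, re.IGNORECASE)
                    for p in (r"\bbanned_term_a\b", r"\bbanned_term_b\b")]
FALLBACK = "Sorry, I can't help with that one. Could you try rephrasing?"

def is_safe(text: str) -> bool:
    # No filter catches everything, which is exactly why reporting has to exist too.
    return not any(p.search(text) for p in BLOCKED_PATTERNS)

def respond(generated_text: str) -> str:
    # Screen model output before it ever reaches the user.
    return generated_text if is_safe(generated_text) else FALLBACK

def report_message(user_id: str, message_id: str, reason: str) -> None:
    # Hypothetical: hide the message for this user right away, persist the report,
    # and queue it for human review.
    print(f"report filed: user={user_id} message={message_id} reason={reason}")
```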

In short, language AI isn’t disappearing anytime soon. Each iteration gets us closer to a “natural” conversation, and to stay a leader in this space you have to learn how to manage and leverage the unpredictability.
