NLP & Conversational AI. The AIX Design guide.

Ethical design principles for the user, the business, and the AI

Viola Miebach
Published in ArtInUX
9 min read · Apr 11, 2021


Hello, World! Let’s talk about conversational AI. My team and I are working hands-on on a product built around an NLP AI, and this article will guide our product’s AIX design. We are publishing it here so it can grow with the community’s feedback and so we can, in exchange, share our experiences with anyone who is going to work on conversational AI.

The evolution of NLP technology

You might have heard about ELIZA, an early natural language processing program developed at MIT by computer scientist Joseph Weizenbaum between 1964 and 1966. It was created to show the potential of communication between humans and machines. ELIZA processed user inputs and engaged in discourse following the rules of a script. The most famous script, DOCTOR, simulated a Rogerian psychotherapist, a style of therapy well known for mirroring what the patient has just said. ELIZA is widely considered the first chatbot ever developed that was capable of attempting the Turing test.

A conversation of a user with ELIZA; source.

In these early stages, chatbots were rule-based and built on a technique called ‘pattern matching’ (a scripted approach, not machine learning): they neither remembered the user’s previous inputs nor put them into context. Even nowadays, in voice-to-voice interactions such as those with Amazon Alexa, we can see that after a certain amount of back-and-forth the assistant can no longer contextualize your input and starts the conversation from square one. But this was yesterday.
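
To make ‘pattern matching’ concrete, here is a minimal sketch in the spirit of ELIZA; the rules below are invented for illustration, not Weizenbaum’s actual DOCTOR script. Note how each input is matched in isolation, which is exactly why these early bots could not hold context.

```python
import re

# Hypothetical rules in the spirit of ELIZA's scripts: each regex maps to a
# response template, and "(.*)" captures the user's words for mirroring.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # fallback when nothing else matches
]

# Swap first and second person so the mirrored text reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(user_input):
    # Stateless by design: each input is matched on its own, with no memory
    # of earlier turns -- the limitation described above.
    text = user_input.lower().strip()
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am feeling anxious"))  # How long have you been feeling anxious?
```

The ‘conversation’ is only ever as good as the first matching rule in the script.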

The capabilities of the NLP technology GPT-3; source.

Today’s smart generation of conversational AI services consists of NLP-powered agents that get smarter each day, such as the large transformer language model GPT-3 from OpenAI, the research lab co-founded by Elon Musk. These technologies combine computational linguistics with statistical, machine learning, and deep learning models. They empower programs to process your input, understand its whole meaning, including your intent and sentiment, and respond with relevant and helpful answers in their own words. Language generation engines have moved from pre-defined templates to end-to-end transformers with attention mechanisms, producing text, legal language, images, wireframes, mockups, spreadsheets, and even code.
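
To illustrate the ‘statistical’ end of that spectrum, without any pretense that this is how GPT-3 works, a toy intent classifier can be built with naive Bayes over word counts. The training utterances and intent labels below are invented for the example.

```python
import math
from collections import Counter, defaultdict

# Tiny invented training set: user utterances labeled with intents.
TRAINING = [
    ("where is my package", "track_order"),
    ("has my order shipped yet", "track_order"),
    ("i want my money back", "refund"),
    ("please refund my purchase", "refund"),
]

# "Training" is just counting: word frequencies per intent, plus intent priors.
word_counts = defaultdict(Counter)
intent_counts = Counter()
for text, intent in TRAINING:
    intent_counts[intent] += 1
    word_counts[intent].update(text.split())

VOCAB = {word for counter in word_counts.values() for word in counter}

def classify(text):
    """Pick the intent with the highest log-probability (add-one smoothing)."""
    scores = {}
    for intent, prior in intent_counts.items():
        total = sum(word_counts[intent].values())
        score = math.log(prior / sum(intent_counts.values()))
        for word in text.lower().split():
            score += math.log((word_counts[intent][word] + 1) / (total + len(VOCAB)))
        scores[intent] = score
    return max(scores, key=scores.get)

print(classify("refund my money"))  # refund
```

Modern agents replace the word counts with learned embeddings and deep networks, but the underlying idea of scoring an input against learned statistics is the same.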

The challenges of building an NLP-powered product

Natural Language Processing can be applied in almost any industry, from healthcare to education to fintech to e-commerce and many more. And it is a fast-growing market: from a worldwide revenue of around 17 billion USD today, it is projected to almost triple by 2025.

Chart of revenues from the NLP market worldwide from 2017 to 2025; source.

It is a scaling industry, and looking at past scandals in the field raises many questions about how policies for developing an NLP product are drawn up. In 2018, Amazon’s AI-based recruitment tool, which scanned applications and ranked them according to Amazon’s training dataset, had to be scrapped because it penalized women. In 2019, a scandal came to light in which conversations between customers and Apple’s voice assistant Siri were listened to and analyzed by third-party contractors.

Biased training data and the misuse of AI technologies are major concerns today. With technologies such as GPT-3, we face systems capable of producing deepfake content. It is terrifying and mind-blowing at the same time: on the one hand, the capability to create songs and poems, generate ideas, and ease processes; on the other hand, the capability to create content that, in the worst case, leads people to make wrong decisions or discredits public figures, spreading mistrust in these technologies. The constant stream of headlines reporting misuse and deepfakes is more than concerning.

The core considerations of building an NLP-powered product

For people building products and businesses that use NLP to help users reach their goals effectively, there is a need to implement end-to-end ethics. We don’t want to repeat the mistakes of the big players who, rather than proactively implementing ethics from the beginning, are now reactively trying to retrofit fixes for errors they have already shipped.

There is currently no global standard procedure or framework for how AI products are designed, nor for how AI should be designed to create experiences that work for the humans whose lives it will impact. To some extent, it is still up to each team working on AI to decide how they shape their systems. And the sad truth is:

acting legally is often NOT sufficient for acting ethically!

People like Sudha Jamthe, Carol Smith, and Sam Wigglesworth, who are at the forefront of shaping ethical frameworks for building AI technologies, are role models to me. They are taking a significant step towards laying the path for all stakeholders involved in shaping ethical AI products and services.

In this manifesto, I have gathered the necessary steps to follow when shaping AI products with ethics implemented by default. There are three primary considerations to build into an upcoming business:

  1. The users’ needs
  2. The ethical AI needs
  3. The ethical business needs

The user needs

First of all, we have to understand the user of our product.

Through thorough research, we can discover people’s needs, motivations, and goals in interacting with our NLP product. We can also dig deeper into their pain points with existing solutions in order to do better. By laying out the user’s detailed journey, from before the interaction, through the interaction itself, to after it has finished, we can discover all the micro-interactions and understand what our product needs to enhance the user’s experience.

As we are designing a conversation between the user and the product (text-to-text, text-to-voice, or voice-to-voice interaction), we have to research and develop the following:

  1. Define user personas (who is your audience?); more information here
  2. Define multiple use cases (the role of your product in the user’s life); more info here
  3. Create your product’s persona (tone of voice, humor, avatar, …); more info here
  4. Outline the conversation between both (script and prototype paths); more info here
  5. Test the conversation (Wizard-of-Oz technique or rule-based testing)
  6. Test the concept (the interface of your product across multiple use cases, environments, and devices)
  7. Document and iterate on your findings

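For steps 4 and 5, a scripted conversation can be prototyped as a simple branching structure before any NLP is involved. The sketch below uses hypothetical intents and product lines; replaying a prototype path like this is also handy for documenting Wizard-of-Oz test sessions.

```python
# Hypothetical conversation script: each node holds the product persona's
# line plus the user intents that branch to a next node.
SCRIPT = {
    "greeting": {
        "say": "Hi! I can help you track your order. Where shall we start?",
        "branches": {"track_order": "ask_order_id", "goodbye": "farewell"},
    },
    "ask_order_id": {
        "say": "Sure. What is your order number?",
        "branches": {"provide_id": "confirm", "goodbye": "farewell"},
    },
    "confirm": {"say": "Thanks, looking that up now.", "branches": {}},
    "farewell": {"say": "Okay, talk to you later!", "branches": {}},
}

def walk(path):
    """Replay one prototype path (a list of user intents) and collect the
    product's lines -- useful for documenting a test session."""
    node = "greeting"
    lines = [SCRIPT[node]["say"]]
    for intent in path:
        node = SCRIPT[node]["branches"].get(intent)
        if node is None:  # an uncovered intent is a gap to document and iterate on
            lines.append("<unhandled intent>")
            break
        lines.append(SCRIPT[node]["say"])
    return lines

for line in walk(["track_order", "provide_id"]):
    print(line)
```

Every path that ends in an unhandled intent is a finding to document and iterate on (step 7).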
When entering the new territory of designing interactions with technologies such as GPT-3, it is tremendously important to document all your work. Especially for testing the interaction in a prototype, there is not much published research yet, so sharing your knowledge helps others test prototypes in the same way.

When designing your product, make sure the user is aware of when they are using an AI and when they are not, and always give them an easy way to report issues with your AI. You can design your product according to Nielsen’s heuristics applied to artificial intelligence, and you can set your own design principles extending those heuristics to leverage your ethical approach (here are some examples of principles from well-known companies worldwide). The core values for our business, our product, and our technology are privacy by design, transparency, and assisting, never replacing.

Chart of the core values of our product.

When designing your product’s privacy policies, make them not only ethically justifiable but also auditable. Users are anxious about sharing personal or sensitive information, especially when interacting with an AI rather than a human. Tech companies nowadays entangle their privacy policies in long, complex documents that give users a hard time working through them. Since early adopters will be our primary customers, it is important to communicate the technology’s benefits and safety understandably and to be transparent about the privacy policies. It is worth the effort to build a policy people actually want to read.

The AI ethics

As mentioned earlier, upcoming technologies such as GPT-3 give us mind-blowing opportunities on both the positive and the negative side, as has been the case with every invention of the past. It is up to us to take the right path when designing AI.

“An ethical, human-centric AI must be designed and developed in a manner that is aligned with the values and ethical principles of a society or the community it affects. Ethics is based on well-founded standards of right and wrong that prescribe what humans ought to do, usually in terms of rights, obligations, benefits to society, fairness, or specific virtues.” Markkula Center for Applied Ethics

To take an ethical approach in developing your NLP AI, first go through the following considerations and adjust where needed:

  1. 360-degree view (who could be hurt by the technology?)
  2. No-gos (what lines won’t our AI cross?)
  3. Abusability testing (look at the different ways the system could be abused or misused)
  4. “Black Mirror” scenario (lay out use cases for the future)
  5. Mix it up (test multiple scenarios with multiple personas)

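The no-go and abusability considerations can be made testable early on: write down the lines the AI must not cross, then probe the system with abusive inputs and assert that it refuses. The no-go categories, trigger phrases, and refusal message below are invented placeholders; a production system would use trained moderation classifiers rather than a blocklist.

```python
# Invented no-go categories and trigger phrases. A real system would use
# trained moderation models; a blocklist at least makes the policy testable.
NO_GO = {
    "medical_advice": ["diagnose", "prescription", "dosage"],
    "self_harm": ["hurt myself"],
    "impersonation": ["pretend to be my doctor"],
}

REFUSAL = "I'm not able to help with that, but I can connect you to a human."

def moderate(user_input):
    """Return the violated no-go category, or None if the input is allowed."""
    text = user_input.lower()
    for category, phrases in NO_GO.items():
        if any(phrase in text for phrase in phrases):
            return category
    return None

def respond(user_input):
    # The gate sits in front of the model: no answer ever crosses a no-go line.
    if moderate(user_input) is not None:
        return REFUSAL
    return "(normal model response)"

# Abusability test cases: every abusive probe must hit the refusal.
ABUSE_CASES = ["What dosage should I take?", "Pretend to be my doctor"]
assert all(respond(case) == REFUSAL for case in ABUSE_CASES)
```

Keeping the abuse cases in a test suite means every new model version is checked against the same lines it must not cross.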
Carol Smith developed a solid checklist for designing an ethical AI experience: it reduces the risk of bad things happening in the first place and prepares you for when they do occur.

The business ethics

First things first: your values. If your business values are ethical, you are in a good place to take an ethical product approach. Even better is having a diverse, multidisciplinary team with a solid spread of skill sets on board, where everyone is in the loop of developing the AI. This will help you introduce less bias into your AI dataset. To all teams out there: I challenge you to test your team’s biases to become aware of their possible effects, for example with Harvard’s Implicit Association Test.

Starting with a new product and business opportunity can quickly lead to big success (hopefully), and you have to be prepared for it. How will you handle scale once it comes? How will you track your progress? How will you collect metrics on ethics? And how will you collect feedback from users on how satisfied they are with your ethical approach?

Chart of ethical approval board as an audit function plugged into your business, designed by Viola; inspired by this source.

To establish ethical behavior as a default in a company, an entity should be embedded in all processes, one that raises awareness of ethics and that supervises, vetoes, and approves outcomes at every step from research to deployment. This ethical approval board should be plugged into your business with an audit function, taking the role of an independent contributor rather than a third-party entity. By constantly assessing the ethics of your technology, your business, and your product, you will lay the ground for humane technology in the future.
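
In engineering terms, such an audit function can be wired in as a hard gate rather than a courtesy review: a pipeline stage simply refuses to run without a recorded approval. A minimal sketch, with hypothetical stage names and a deliberately simple approval record:

```python
class EthicsBoard:
    """Records approvals and vetoes per stage, doubling as the audit trail."""

    def __init__(self):
        self.decisions = {}

    def approve(self, stage, reviewer):
        self.decisions[stage] = ("approved", reviewer, None)

    def veto(self, stage, reviewer, reason):
        self.decisions[stage] = ("vetoed", reviewer, reason)

    def is_approved(self, stage):
        return self.decisions.get(stage, ("pending",))[0] == "approved"

def run_stage(stage, board):
    # The gate itself: no recorded approval, no execution.
    if not board.is_approved(stage):
        raise PermissionError(f"Stage '{stage}' lacks ethics approval")
    return f"{stage} executed"

board = EthicsBoard()
board.approve("research", reviewer="board-member-1")
print(run_stage("research", board))  # research executed
```

Because every decision is recorded with a reviewer (and, for vetoes, a reason), the same structure that enforces the gate also serves as the audit log.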

If you want to read more about implementing an ethical approval board into your processes, I can highly recommend the paper Ethical by Design: Ethics Best Practices for Natural Language Processing.

Let’s take the chance and build humane technology!

With this article, we want to embrace AI ethics, bring transparency to new technologies, and encourage businesses to take responsibility for the future and do the same. This guide is not set in stone: it is an evolving framework for applying ethics in business and AI, so please feel free to comment, give your feedback, and add essential considerations we might not yet have looked at.


Viola Miebach

Spirited UX Designer working at the intersection of bringing forward technological transformation and understanding and representing the user’s needs 🧬