Artificial Intelligence image by geralt on Pixabay

Co-authored by David Mueller, Richard Walls, Katherine Brown & Vignesh Harish

For an AI solution to be successful, it must be trustworthy. Users must be able to trust that the solution was designed with their experience in mind. Because AI is trained on sample data selected and provided by human designers, there is always a possibility that a solution will inherit human biases. In the case of a chatbot, training data that reflects the communication habits of one particular group, at the expense of others, creates a biased solution that can undermine users’ trust.

Many chatbots engage in unstructured interactions that can be extremely difficult to assess for bias. Often, there is no real record of who is using a chatbot or whether they regard the interaction as successful. Even when a chatbot solicits feedback from users, there is rarely any way to determine whether a negative response results from bias or some other problem. For instance, even a chatbot that obtains a favorable rating 80% of the time can be biased if its 20% unfavorable results are disproportionately concentrated in an underrepresented community of users. …
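The arithmetic behind that last point can be made concrete by disaggregating feedback by user group rather than looking only at the overall rating. The sketch below is a minimal illustration, assuming a hypothetical feedback log with made-up group labels and counts (not data from any real chatbot): overall satisfaction is 80%, yet nearly all unfavorable responses come from the smaller group.

```python
from collections import Counter

# Hypothetical feedback log: (user_group, rating) pairs.
# Counts are illustrative only: 80 of 100 ratings are favorable overall,
# but the unfavorable ones cluster in the minority group.
feedback = (
    [("majority", "favorable")] * 76
    + [("majority", "unfavorable")] * 4
    + [("minority", "favorable")] * 4
    + [("minority", "unfavorable")] * 16
)

def unfavorable_rate_by_group(log):
    """Return each group's share of unfavorable ratings."""
    totals = Counter(group for group, _ in log)
    negatives = Counter(group for group, rating in log if rating == "unfavorable")
    return {group: negatives[group] / totals[group] for group in totals}

rates = unfavorable_rate_by_group(feedback)
# Overall the chatbot looks fine (80% favorable), yet the per-group
# view shows a 5% vs. 80% unfavorable rate.
print(rates)  # {'majority': 0.05, 'minority': 0.8}
```

A headline satisfaction score hides this kind of disparity entirely; only a breakdown by group surfaces it, which is why collecting (or responsibly inferring) some notion of who is being served matters for bias assessment.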
