Detecting Biases in AI Search Interactions

Álvaro Ibáñez Coedo
Published in Empathy.co
Oct 24, 2023

Companies, especially those operating across different geographical subdivisions or even across different countries, often struggle to connect with the communities they serve. The reasons for this are varied: differences in language, dialect, culture, and custom lead to biases and make it challenging to discover and understand the preferences of each specific community, and of every individual within it.

Furthermore, there is the difficult task of building a better understanding of communities through the analysis of search interactions while avoiding unethical practices, such as using individual information extracted from users in very opaque ways via cookies or IP registration.

With the explosion in popularity of generative AI, the public is more conscious of how developers’ biases impact AI judgements, as stated by Leonardo Nicoletti and Dina Bass in Humans Are Biased.

A tweet from Ángel Maldonado (CEO & Founder of Empathy.co) on the subject. However, we cannot trust every individual involved in AI to be this concerned.

To overcome these difficulties, we began developing the GeoMap project. This online map makes use of Empathy.co’s holon-based search experience to display search interaction information for both abstract entities (such as companies, brands or divisions) and concrete, geographical ones. As a result, stores are better able to understand their customer base and detect specific biases associated with search trends.

As you can see, the map displays a set of circles accompanied by filter buttons with information about the different KPIs. By default, each circle shows only the number of queries, representing the number of searches per query in each defined entity.
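To make this more tangible, here is a minimal TypeScript sketch of the kind of per-entity record each circle could be drawing from. The field names, layer values and label helper are assumptions for illustration, not the GeoMap project’s actual data model.

```typescript
// Hypothetical per-entity record behind each circle on the map.
// Field names and layer values are illustrative assumptions.
interface EntityKpis {
  entityId: string;                          // e.g. "Texas", "Brand A"
  layer: "brand" | "division" | "us-state";  // visualisation layer
  queries: number;                           // searches recorded for the entity
  clickPct: number;                          // % of searches with a click
  addToCartPct: number;                      // % of searches with an add-to-cart
  noResultsPct: number;                      // % of searches returning no results
}

// Default view: each circle is labelled with the query count only.
function defaultCircleLabel(entity: EntityKpis): string {
  return `${entity.queries} queries`;
}
```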

Key Performance Indicator filters

Selecting KPI buttons on the map modifies the information displayed, providing additional insight at a glance. In the example shown below, information about the percentage of clicks, items added to cart, and no-results instances is displayed. One, several or all buttons can be selected to adjust the data shown on the map.

Having information about the different KPIs can help detect unusual search trends for a particular community or a certain time of the year. For instance, if a specific query (e.g., “soda”) yields a very high percentage of no results, merchandisers could infer that it doesn’t return the search results expected by that particular group of people, perhaps because they use a different word for the same product. This helps erase geographical and linguistic bias.

Not only can information about the different KPIs be displayed, it can also be filtered. By adjusting the range of values for each KPI, the map adapts to display only the circles whose data falls within the defined range, as in the sketch below. This makes detecting the aforementioned search trends easier and helps reduce the same kinds of bias.
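As a rough illustration, the snippet below sketches how such a range filter could work over a simple per-entity data shape. The field names, filter API and sample values are assumptions, not the project’s actual implementation.

```typescript
// Assumed per-entity data shape; values are illustrative only.
interface EntityKpis {
  entityId: string;       // e.g. a US state, brand or division
  queries: number;        // searches recorded for the entity
  clickPct: number;       // % of searches with a click
  addToCartPct: number;   // % of searches with an add-to-cart
  noResultsPct: number;   // % of searches returning no results
}

// A KPI filter is a closed range over one numeric KPI.
type KpiKey = Exclude<keyof EntityKpis, "entityId">;
interface KpiRange { kpi: KpiKey; min: number; max: number; }

// Keep only the entities whose values satisfy every active filter,
// so the map draws circles for those entities alone.
function filterByKpiRanges(entities: EntityKpis[], ranges: KpiRange[]): EntityKpis[] {
  return entities.filter((e) =>
    ranges.every(({ kpi, min, max }) => e[kpi] >= min && e[kpi] <= max)
  );
}

// Example: surface entities where "no results" is suspiciously high.
const sample: EntityKpis[] = [
  { entityId: "US-CA", queries: 1200, clickPct: 38, addToCartPct: 12, noResultsPct: 6 },
  { entityId: "US-MI", queries: 900, clickPct: 21, addToCartPct: 7, noResultsPct: 44 },
];
const suspicious = filterByKpiRanges(sample, [{ kpi: "noResultsPct", min: 40, max: 100 }]);
console.log(suspicious.map((e) => e.entityId)); // ["US-MI"]
```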

Note: The KPI buttons and filters depend upon the available data. The limited selection shown in this article is for the sake of simplicity.

Geographical layer filters

In order to display data for diverse communities, we determined it would be most effective to group the data into different layers. Accordingly, we developed three visualisation layers for this demo: the non-geographically-localised brand and division layers (representing the divisions of a company), and the geographically-localised US states layer.

By separating the data into layers, entities of the same type and/or comparable size can be visualised alongside each other, facilitating comparison and helping to further reduce geographical and linguistic biases.
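A minimal sketch of that grouping idea follows, reusing the same kind of simple entity records as above; the names and layer values are illustrative assumptions only.

```typescript
// Assumed entity record; the three layers follow the demo described above.
interface MapEntity {
  entityId: string;
  layer: "brand" | "division" | "us-state";
  queries: number;
}

// Bucket entities by layer so only comparable entities are drawn together.
function groupByLayer(entities: MapEntity[]): Map<MapEntity["layer"], MapEntity[]> {
  const layers = new Map<MapEntity["layer"], MapEntity[]>();
  for (const entity of entities) {
    const bucket = layers.get(entity.layer) ?? [];
    bucket.push(entity);
    layers.set(entity.layer, bucket);
  }
  return layers;
}

// Rendering one layer at a time keeps brands, divisions and US states
// from being compared against entities of a very different scale.
const demo: MapEntity[] = [
  { entityId: "Brand A", layer: "brand", queries: 5400 },
  { entityId: "Texas", layer: "us-state", queries: 310 },
];
console.log(groupByLayer(demo).get("us-state")); // [{ entityId: "Texas", ... }]
```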

Search and synonyms

Finally, there is the search and synonyms functionality. With this feature, after searching for a query, the circles automatically update to show the KPI values for that query. Furthermore, synonyms for that query are also shown below the search bar.

This feature also contributes to reducing biases by enabling merchandisers to detect other words that refer to the same concept in other communities and to compare KPI values across those words for each circle or entity. This is especially useful for combating linguistic bias.
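To illustrate the idea, here is a hedged sketch that ranks a query and its synonyms by their no-results rate within a single entity. The terms, sample data and function names are hypothetical, not the project’s actual API.

```typescript
// KPI values for one search term (the query or one of its synonyms).
interface TermKpis {
  term: string;
  queries: number;
  noResultsPct: number;   // % of searches for this term with no results
}

// KPI values per search term, keyed by entity (e.g. a US state).
type TermStatsByEntity = Record<string, TermKpis[]>;

// Rank the given terms by how often they fail inside one entity,
// highlighting which wording that community actually uses.
function rankTermsByFailure(stats: TermStatsByEntity, entityId: string, terms: string[]): TermKpis[] {
  return (stats[entityId] ?? [])
    .filter((t) => terms.includes(t.term))
    .sort((a, b) => b.noResultsPct - a.noResultsPct);
}

// Example: "soda" and two hypothetical synonyms in one entity.
const stats: TermStatsByEntity = {
  "US-MI": [
    { term: "soda", queries: 420, noResultsPct: 47 },
    { term: "pop", queries: 1100, noResultsPct: 3 },
    { term: "soft drink", queries: 160, noResultsPct: 9 },
  ],
};
console.log(rankTermsByFailure(stats, "US-MI", ["soda", "pop", "soft drink"]));
// "soda" ranks first: the catalogue likely doesn't match how this community searches.
```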

Conclusion

We are all equal, but we speak and express ourselves differently. This new data visualisation tool will help companies get closer to the communities they serve and combat biases. Query data combined with Empathy.co’s holon-based search experience means merchandisers and analysts can quite literally see what their shoppers mean.

What else do you want to see?

As mentioned throughout the article, the tool shown here is a prototype. Any feedback, comments or suggestions will be considered and greatly appreciated. Let us know what other elements you’d like to see included. We want to hear from you!

A note from the authors

This project was a collaborative effort with Alejandro Lorenzo Ocaña during our time as Academy Fellows.
