Biased digital assistants — the Yin and Yang of AI

Next Visions
#NextLevelGermanEngineering
Aug 29, 2018

According to Wikipedia, “in Chinese philosophy, yin and yang describes how seemingly opposite or contrary forces may actually be complementary, interconnected, and interdependent in the natural world, and how they may give rise to each other as they interrelate to one another”.

One force that is heavily discussed in the AI space today is bias. Many articles treat bias in AI as a problem that needs to be resolved, but in our recent project implementing a digital business assistant, we identified a true dualism of bias that deserves to be called the yin and yang of AI.

How can Artificial Intelligence be biased?

Bias is often described as a negative character trait: a disproportionate weight in favor of or against a thing, a person, or a group compared to another. In science and engineering, this form of bias is well known as a systematic error. Statistical bias can result, for example, from an unrepresentative population sample.

While the first wave of AI tried to understand and implement the structure of human cognition, the new wave of AI we are witnessing today is predominantly based on applying statistical methods to large quantities of data. We no longer try to explain to a computer what a cat looks like; we let the computer learn from a large sample of cat pictures what a cat should look like and then categorize a new picture based on what it has learned. This statistical approach to AI makes it vulnerable to statistical bias in its input data sets.
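To make that vulnerability concrete, here is a minimal sketch with synthetic numbers (not data from the project described here): a model that only ever sees an unrepresentative sample reproduces that sample's skew, no matter how much data it is given.

```python
# Minimal sketch (synthetic numbers): a systematic error caused by an
# unrepresentative sample — the statistical form of bias described above.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: heights in cm, true mean around 170.
population = rng.normal(loc=170, scale=10, size=100_000)

# A biased sampling procedure that only reaches the taller part of the
# population (think of collecting data at a basketball club).
biased_sample = rng.choice(population[population > 175], size=1_000)

# A "model" trained on that sample (here just a mean estimate) inherits
# the bias of the sampling procedure, regardless of how much data it sees.
print("true mean:   ", round(population.mean(), 1))
print("biased mean: ", round(biased_sample.mean(), 1))  # systematically too high
```

The same mechanism applies to any model learned purely from data: if the training sample is skewed, the learned behavior will be skewed with it.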

As the data we are using to train AI is created by humans, it obviously contains known and unknown biases that reflect culture and society.

Is bias a bad thing by nature?

To avoid biased AI solutions, you could try to explain why your models behave as they do and use that understanding to identify and remove potential biases. However, not every bias is a bad thing. There is a form of bias that we value and for which we use a different term: experience, or domain knowledge. A factory worker “knows” that a certain sound coming from a manufacturing plant is alarming, while other sounds indicate normal operations. This bias is wanted and is actually crucial for our “sound detective” AI project.

There we have it — the yin and the yang of AI bias. On the positive side, the bias related to experience is what we actually expect from AI. On the negative side, discriminating bias is what we want to prevent.

The situation gets even more tangled as soon as politics comes into play. While a bias that leads to unfair treatment of a social group is unwanted and considered the dark side of AI, a different unequal treatment of a social group may be wanted from a political perspective in order to overcome a historic imbalance in society. A model may be deliberately altered to steer against a development in society that is politically unwanted.

Photo by Ming Jun Tan on Unsplash

How can we handle this dualism of bias in AI?

The struggle around good versus bad bias mirrors our human struggle with good and bad behavior. There is no technical solution to this problem: evolution has provided humans with a conscience to differentiate right from wrong. As long as we have not created systems with a conscience (and given the potential implications, I would question whether we ever want to), we need to handle AI responsibly. From a humanistic perspective, we need to make sure that AI bias does not harm or discriminate against human beings or groups of them. From an economic perspective, we also need to ensure an adequate handling of bias, as only a system that people can trust will be adopted.

Identifying and understanding biases is hard, because we do not understand our own biases in the first place. However, when we work with others, we become aware of our own biases as well as theirs. Intercultural teams can help identify cultural biases that might have crept into our AI models. Diverse teams may help identify social biases and may help introduce biases that are wanted for political reasons. Interdisciplinary teams, in turn, can help reflect the experience and domain knowledge in our models that generates the value of AI.

There is a bright and a dark side of AI, and we need to define for ourselves what we consider yin and what we consider yang with regard to AI bias. This process of examining our values and our biases will reflect back on us, potentially making AI a technology that helps us evolve, not only through its use but by gaining a higher level of consciousness altogether.

Ingo Brenckmann

Ingo Brenckmann is Associated Partner at Porsche Digital Lab Berlin. Please find more about inspiring men & women on Twitter, LinkedIn and Instagram.


There’s more to Porsche than sports cars // #NextVisions is a platform about smart technologies and the people that drive our digital journey.