Addressing bias in artificial intelligence

A new ISO/IEC Technical Report addresses bias in AI decision-making systems, writes Antoinette Price

IEC e-tech · 3 min read · Jan 28, 2022

[Image: abstract futuristic geometric human head. Image by Oleg Gamulinskiy from Pixabay]

Artificial intelligence (AI) technologies and systems are used increasingly in daily life and across diverse industries to streamline processes and enhance products and services. They enable machines to learn from experience and adjust to new inputs in order to perform specific, human-like tasks.

However, much work still needs to be done to ensure that people trust artificial intelligence. One way to achieve this is through standardization. A newly published Technical Report, ISO/IEC TR 24027, for example, considers bias in relation to AI systems, especially with regard to AI-aided decision-making. It is the work of the IEC and ISO joint committee, SC 42, which develops international standards that address the whole AI ecosystem, including societal and ethical concerns.

“Ensuring a trustworthy and ethical AI system is vital to realizing the promise of AI and eliminating barriers to its adoption across application domains”, said Wael William Diab, who chairs SC 42. “Understanding, addressing and managing unintended bias is key to enabling a trustworthy ethical system. The recently published work complements the portfolio of horizontal AI standards SC 42 is developing and is part of our ecosystem approach to AI standardization.”

Dealing with bias in AI systems

In AI systems, bias can arise from a wide variety of sources. Some people believe addressing it is as simple as ensuring datasets are diversely sourced, but there are many nuances to this. Bias can originate from human sources, such as labelling decisions made by crowd-sourced workers. It can also result from engineering decisions about how multiple components interact with each other, or from decisions about how data is prepared.

Another issue is that some data may be legally protected from being collected for privacy or other reasons. This can make it difficult to see whether people with such protected characteristics are being treated fairly or not. Bias resulting from AI systems affects people differently, depending on the context. The new TR describes these effects as either positive, neutral or negative.
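One common way to surface this kind of disparity, where group data is lawfully available, is to compare outcome rates across groups. The sketch below is illustrative only and not taken from the TR; the group labels, data and the demographic-parity-difference metric are assumptions drawn from common fairness practice.

```python
# Minimal sketch: compare approval rates across demographic groups to
# surface potential disparate treatment. Data is hypothetical; a real
# audit must also respect data-protection constraints on group data.

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
rates = selection_rates(decisions)

# Demographic parity difference: gap between the highest and lowest rate.
# A large gap is a signal to investigate, not proof of unfairness.
parity_gap = max(rates.values()) - min(rates.values())
```

A check like this only works when group membership can be observed, which is exactly why legally protected or uncollected attributes make fairness hard to verify.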

A positive effect, for instance, could be an AI system for hiring that introduces a bias towards one gender over another in the decision phase to compensate for societal bias inherited from the data, which reflects certain historical underrepresentation in a profession.

An example of a neutral effect could be an AI system for processing images for a self-driving car system that systematically misclassifies mailboxes as fire hydrants. Of course, this statistical bias will only have a neutral impact if the system has an equally strong preference for avoiding each type of obstacle.

Negative examples include AI hiring systems favouring candidates of one gender over another, or voice-based digital assistants that fail to recognize people with speech impairments. These can have the unintended consequence of limiting the opportunities of those affected. Such examples can be categorized as unethical and compromise the trustworthiness of the AI-based system.

How ISO/IEC TR 24027 can help

Bias is a complex issue, and its definitions and terminology span ethics, statistics and engineering. The aim of the TR is to introduce the topic and address its treatment to ensure a trustworthy system: it establishes terminology and language around bias, describes the different sources of bias, and presents methods and techniques that can be used to mitigate bias-related issues.

It is important for bias to be treated throughout the life cycle of an AI system. There are many ways to do this, starting at the inception stage, by ensuring that the goals of the project are themselves unbiased and compliant with regulatory and internal policy requirements. At this point, it is useful for the team to make certain that it has the appropriate trans-disciplinary expertise. Selecting and documenting data sources is also very important.

The design, data preparation, training and tuning, as well as deployment stages of the life cycle each carry a lot of different risks. The TR enumerates these and gives pointers on how to detect and treat the issues.

The TR describes measurement techniques and methods for assessing bias, with the aim of addressing and treating bias-related vulnerabilities. It covers all phases of the AI system life cycle, including data collection, training, continual learning, design, testing, evaluation and use.
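As an illustration of what an evaluation-phase bias measurement can look like, the sketch below computes the gap in true-positive rates between two groups (often called the equal-opportunity difference in the fairness literature). This is a hedged example under assumed data, not the TR's own catalogue of measures; consult the TR for the metrics it actually specifies.

```python
# Hedged sketch: true-positive-rate gap between two groups on a labelled
# test set, one common evaluation-phase bias metric. Data is illustrative.

def true_positive_rate(samples):
    """samples: list of (actual, predicted) booleans -> TPR."""
    predictions_on_positives = [pred for actual, pred in samples if actual]
    if not predictions_on_positives:
        return 0.0
    return sum(predictions_on_positives) / len(predictions_on_positives)

# Hypothetical (actual, predicted) outcomes per demographic group.
group_a = [(True, True), (True, True), (True, False), (False, False)]
group_b = [(True, True), (True, False), (True, False), (False, True)]

tpr_gap = abs(true_positive_rate(group_a) - true_positive_rate(group_b))
# A large gap suggests the model misses qualified cases in one group
# more often than in the other.
```

Running such a metric at each life-cycle phase, rather than once at deployment, is what makes continual treatment of bias possible.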
