AI Bias: In Conversation With Experts

Lauren Toulson · Published in CARRE4 · 7 min read · Apr 19, 2021
Collage created by me, with images from Unsplash

AI bias, thankfully, is becoming an increasingly prominent topic of discussion in the technology industry — among academics and ethicists, and now the wider public. Digital Bucket Company, with a vision of leaving behind a fairer, more sustainable world, organised a global AI Summit, bringing together experts to discuss the current state of AI and how we will move forward to tackle the big issues.

The Summit was live-streamed on YouTube, where you can watch it here, and opened with a focus on algorithmic bias, discrimination and inequality.

Introducing himself, Márcio Burity, a diplomat at the UN, opens the discussion by saying that while AI has become a tool that is essential to humanity, the more we use it, the more we start to recognise its issues.

Following this, techUK's Head of Data Analytics, Katherine Holden, adds that if we are to scale AI and benefit from it, we need to get it right:

AI is going to disrupt every sector of society, but if it’s not diverse it is not ethical — Katherine Holden

This is something that will be further explored in the conversation below.

The conversation focuses on the idea of mitigating risk and the difficulty of achieving a system that is 100% bias-free. With diverse inputs and multi-stakeholder feedback at every stage of development, we can work towards less biased AI, says For Humanity’s Director Ryan Carrier.

And to create change, Lobna Karoui, AI Ethics Expert at the European Commission, emphasises the need to build bridges between AI experts, policymakers and all decision makers.

Here’s the conversation

What issues regarding bias most urgently need addressing?

Katherine: Anywhere an algorithm has an application that will significantly impact an individual’s life absolutely must be pushed to the top of our agenda.

Márcio: Lack of inclusivity and lack of diversity are both behind the racial and gender bias that we see in algorithms today. Addressing inclusivity and diversity is crucial.

Our audience asks: How can we move from complaining about bias to evidencing that diversity is necessary?

Ryan: Bias and discrimination are loaded words; we need to break them down to understand how to tackle these issues. One issue we face when evidencing what diversity is necessary is deciding which benchmarks to use. For instance, with hiring, are we hiring to meet the global male-to-female ratio, which is almost 50:50, or do we hire based on the ratio within the degree, like computer science, which has a significantly higher proportion of men?

We need to find the sweet spot that improves equity but doesn’t contribute to reverse bias — Ryan Carrier
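
To make the benchmark question concrete, here is a minimal, hypothetical Python sketch of how the same hiring outcomes can look skewed or fair depending on the benchmark chosen. All of the decisions and benchmark figures below are illustrative assumptions, not data from the Summit.

```python
# Hypothetical sketch: how the choice of benchmark changes whether the
# same hiring outcomes look biased. All numbers are illustrative.

# 1 = hired, 0 = rejected, for each applicant in each group.
decisions = {
    "men":   [1, 1, 0, 1, 1, 0, 1, 0],
    "women": [1, 0, 0, 0, 1, 0, 0, 0],
}

total_hired = sum(sum(d) for d in decisions.values())
hire_share = {group: sum(d) / total_hired for group, d in decisions.items()}

# Two candidate benchmarks for a "fair" share of hires:
benchmarks = {
    "global population (~50:50)":       {"men": 0.50, "women": 0.50},
    "CS degree cohort (assumed 80:20)": {"men": 0.80, "women": 0.20},
}

for name, target in benchmarks.items():
    print(f"Benchmark: {name}")
    for group, share in hire_share.items():
        gap = share - target[group]
        print(f"  {group}: share of hires {share:.2f}, "
              f"target {target[group]:.2f}, gap {gap:+.2f}")
```

Under the population benchmark this model looks heavily skewed towards men; under the degree-cohort benchmark the same numbers sit close to the target — exactly the choice-of-benchmark problem Ryan describes.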

How do we ensure an algorithm enables equal opportunity for all sectors of society?

Katherine: The algorithm isn’t accountable for equal opportunity; the accountability lies with the people behind it — the designers, the leaders and the board.

Humayun: Bias and ethics need to be something thought about right at the beginning — it needs to be at the forefront of learning.

We first have to define what a fair outcome looks like — Humayun Qureshi

Lobna: Building on Katherine’s point about accountability falling to leadership: technology creation involves a lot of subjective decisions, and it’s a long process. We need to mitigate risk at every step of the decision process, which means holding leadership accountable for mitigating that risk.

But who decides which notions of fairness to prioritise?

Ryan: Individual people don’t know what’s fair and unfair; they might see plenty of things that seem unfair to them, but they cannot see them from a societal perspective. We need an external body to tell us what is fair, to decide on those principles.

Our audience asks: Data is often called the oxygen of AI. With defective data comes defective AI?

Márcio: AI can only do what we train it to do. If the data is not right, the outcome won’t be either. If we are talking about fairness,

we also need inclusivity and democratic discussion — not just experts, but experts working to include the public in the discussion — Márcio Burity

Apple did this when it addressed public concern over the default yellow emojis by creating a range of skin tones.

Can we have AI systems that operate on the basis that the advantages outweigh the risks?

Katherine: There is no perfect solution, and there will always be trade-offs. Some risks are higher than others, like automated recruitment decisions. Identifying bias is really difficult; often it isn’t visible until something has already gone wrong.

Ryan: We might start out with an AI that makes worse, biased decisions, but with the correct transparency, oversight and mitigation we could actually end up with better, less biased decisions. We need to allow this process to happen.

Lobna: Bias is human. If we have no bias, we are robots. Bias depends on the product and the societal context, and whether that situation is fair or not. It will always be difficult to remove 100% of bias, but we will need co-operation from leaders, external regulators and customers to be sure we can reduce risk.

Data can also be used to investigate and understand bias — for instance, using 100 years of data on women in the workplace and in recruitment to understand gender bias in hiring.

We can use AI to change how we see the world and reduce the gender gap — Lobna Karoui

Humayun: It is concerning to think that a little bit of bias is okay. Yes, it depends on the context, but that line of thinking is dangerous.

If it harms one person, it’s not good for society as a whole — Humayun Qureshi

The host asks: Do we have a place for positive bias?

Márcio: Decision makers will play an important role — as soon as we find a way to reduce bias, we need to decide whether that is the right, good decision.

Ryan: If we know what bias is, as Humayun said, we will be in a position to say that we don’t want that. But we don’t know. We know when something is unacceptable because it’s below the line, but if we go too far we get into reverse discrimination. We need to find a reasonable range where we no longer suffer from discrimination but don’t fall into reverse discrimination. Using audits that require transparent decision making, we can explain why we no longer need to remediate bias, because we have found that ‘sweet spot’.

With that kind of transparency, we create the possibility for stakeholder feedback on these decisions — Ryan Carrier

Our audience asks: Should bias be regulated like accessibility, or even become part of accessibility regulation itself?

Ryan: We should build an infrastructure of trust through compliance by design: auditing the entire design process means requiring multi-stakeholder feedback at the design level, the risk-analysis level and the output level, all of which is documented for full disclosure. We advocate for this audit to be carried out annually, covering the criteria of ethics, bias, privacy, trust and cybersecurity.

Our audience asks: In the future, how can we assure the society consuming AI that it is dealing with ethical, trustworthy outputs?

Humayun: We will need to have the right ethical frameworks and regulation to make ethical AI systems that can be trusted.

Lobna: We also need regulation that can accelerate policy changes in this area. Secondly, we need to ensure we have education about this technology globally, so that everyone can play their part. Finally, diversity and inclusion will be very important.

Lastly, our audience asks: Is anyone looking at self-reinforcing feedback, which can lead to increasing bias?

Katherine: For this, we need to go back and look at the data used to train the models, and focus on getting data quality right.

Ryan: If you haven’t been able to mitigate bias, like that of self-reinforcing bias, then you are non-compliant. The process stops there.
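
To illustrate the kind of loop the audience is asking about, here is a minimal, hypothetical Python sketch of a hiring model that only learns from the candidates it accepts. Every name and number in it is an assumption for illustration, not anything presented at the Summit.

```python
# Hypothetical sketch of a self-reinforcing feedback loop: a model that
# only observes outcomes for the candidates it hires can never correct
# an initial bias against a group it stops hiring from.
import random

random.seed(1)

true_success = {"A": 0.6, "B": 0.6}  # both groups truly perform equally
belief = {"A": 0.60, "B": 0.50}      # the model starts slightly biased against B
counts = {"A": 10, "B": 10}          # pseudo-observations behind each belief
HIRE_BAR = 0.55                      # hire only if believed success clears this bar

for step in range(20):
    for group in ("A", "B"):
        if belief[group] >= HIRE_BAR:
            # Hired: observe the real outcome and update the belief.
            outcome = random.random() < true_success[group]
            counts[group] += 1
            belief[group] += (outcome - belief[group]) / counts[group]
        # Not hired: no outcome is observed, so the belief never updates.
    print(f"step {step + 1}: belief A = {belief['A']:.2f}, belief B = {belief['B']:.2f}")
```

Group B starts just below the bar, is never hired, and so its belief is never corrected — the initial bias locks itself in, which is why Ryan treats unmitigated self-reinforcing bias as an automatic compliance failure.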

Finally…

Humayun: On behalf of Digital Bucket Company and our audience, we really want to say a huge thank you to our speakers for joining us from the UK, New York and around Europe. It has been a really rich discussion.

The Key Takeaways:

  • Inclusivity and diversity are key to tackling bias — not only in the training data but also in the decision process. We need experts, leaders and the public, from all backgrounds, to work together.
  • We need to be able to define what we are aiming to achieve — What is fairness? Are we going to use fairness benchmarks that reflect the world as it is, or use AI to build the world we aspire to?
  • Bias and ethics need to be at the forefront of education — Globally.
  • We need a transparent design and decision process which can allow for multi-stakeholder feedback to mitigate emerging biases.

You can watch the full event here on YouTube; the discussion covered in this article starts at 17 minutes and ends at 1:48.

Lauren is an MSc student studying Digital Culture at LSE, and writes about Big Data and AI for Digital Bucket Company. Tweet her @itslaurensdata