Responsible AI: How Do We Build Systems That Don’t Discriminate?

The main points from BIMA’s breakfast briefing, 27th February 2020

Catriona Campbell
The Startup

--

This morning, it was my pleasure to host BIMA’s breakfast briefing, Responsible AI: How Do We Build Systems That Don’t Discriminate?, the latest in its Age of AI series, in partnership with Microsoft UK.

At The Curtain Members Club in London’s trendy Shoreditch, I was joined by three guest speakers, each an expert in the field in their own right:

After a giggle or two — clarifying Josie doesn’t have coronavirus (phew) and making everyone jealous by informing them my mother dated Sean Connery in the 60s (swoon) — we got to what we came for.

Each of our wonderful speakers brought their own fascinating insights as we considered how to reduce bias in AI systems. Before I run through these, here are a few common themes that arose during the session, and the key takeaways:

--


Behavioural psychologist; AI-quisitive; EY UK&I Client Technology & Innovation Officer. Views my own & don't represent EY’s position. catrionacampbell.com