How do we ensure that our AI systems are fair?

Five key insights from Timnit Gebru’s talk on Fairness in AI at MLSS 2019, London

Saasha Nair
5 min read · Jul 25, 2019

“Woah, Timnit Gebru is speaking at your summer school! Definitely attend the session, she is an amazing speaker!”

Topic of the talk at MLSS 2019; Video of the talk can be found here

In all honesty, I feel that a large part of the research community does not really focus on presentation skills, and as a result some amazing research dies quietly because it remains inaccessible to a wider audience. So, when I heard my friend rave excitedly about Timnit Gebru, I was sceptical of what to expect. When I entered the lecture theatre on July 24, 2019, for a 2-hour session by Timnit on ‘Fairness and Transparency in AI’, part of the Machine Learning Summer School 2019 (MLSS) hosted by University College London, I went in not expecting much.

Marc Deisenroth (one of the organisers of MLSS 2019 London) introducing Timnit Gebru

For those not familiar with Timnit (LinkedIn; Twitter), she is a research scientist on the Ethical AI team at Google, and prior to that worked in the Fairness, Accountability, Transparency and Ethics in AI group at Microsoft Research. She is an advocate for inclusion and diversity in technology as a way to keep biases from creeping into AI/ML systems, and is the co-founder of Black in AI.

Importance of considering intersectionality, with an example of how the effects show up in current commercial face recognition software (sorry for the weird angle :P)

As the popularity and acceptance of machine-learning-based solutions increases, so does the need to ensure that these solutions do not end up marginalising certain groups of people. Though I might be a bit biased, given my own interest in topics relating to safety and fairness in AI, I loved this talk and learnt a lot from it. Timnit’s session at MLSS 2019 London did a great job of making me question my own assumptions and biases. Listed here are my key insights from the talk.

  1. The world is diverse, but our data is not. A major focus of work on fairness in AI today targets de-biasing algorithms and ensuring that the output of the system is not skewed. But machine learning, in its current state, comes down to garbage-in, garbage-out: if the data fed into the system is skewed, that will be reflected in the output as well. Thus, in the trade-off between spending time on assembling a dataset representative of the diversity of the world around us versus spending time chasing that extra 1% of accuracy, one should focus on the former.
  2. Don’t just think about making systems fair, but question whether the system being “made fair” is even ethical in the first place. With the explosion of interest in fairness, the ML community for the most part currently focuses on testing systems to ensure that no particular group of individuals is disadvantaged by the developed solutions. But Timnit raised the point that before even thinking about de-biasing a system, the first question we should ask ourselves is whether it is ethical to build the solution at all. For example, she questions the ethics of Automatic Gender Recognition systems, which take a reductionist approach by using visual cues to classify a person’s gender into a binary, asking whether it is even our place to decide someone’s gender based on visual cues, and how doing so would negatively affect certain groups of individuals.
  3. Out in the wild, unintended use cases can emerge for your models and datasets. While building an ML-based solution, or releasing a dataset to the public, one should always remember that humans are creative creatures, and so their systems might end up being used in ways that were never originally intended. To combat this, Timnit suggests improving transparency by accompanying each release with thorough documentation, or Datasheets, listing details about the data used to train the system, the metrics used to test it, the groups of cases it has been tested on, the use cases the release is not recommended for, and so on (a rough sketch of what such a datasheet might look like follows this list).
  4. Automation bias is real. As the world gets more automated, our dependence on automated aids increases. This leads to automation bias, wherein individuals come to over-rely on automated systems, even in complex decision-making contexts, and fail to notice the errors and biases those systems exhibit, thereby unconsciously mirroring those biases themselves.
  5. Intersectionality matters. Intersectionality, a term I was not familiar with before this session, is a framework that models individuals or groups of individuals as being affected by multiple interconnected factors, such as race, gender, class and so on. As an example, Timnit talks about her work from her postdoc days at Microsoft, where she and Joy Buolamwini (LinkedIn; Twitter) studied intersectional accuracy in commercial face recognition software. They noticed that the error rate in identifying faces of people with darker skin tones was in general higher than for those with lighter skin tones, but when looking specifically at darker-skinned women, the error rates increased dramatically (a minimal sketch of this kind of disaggregated evaluation also appears after this list).
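
To make point 3 a little more concrete, here is a rough, hypothetical sketch of the kind of structured documentation a datasheet could capture for a model or dataset release. The section names and questions are paraphrased for illustration only; the actual datasheet template Timnit and her collaborators proposed is considerably more detailed.

```python
# A minimal, hypothetical datasheet for a dataset release. Section names are
# paraphrased for illustration; the real "Datasheets for Datasets" template
# poses many more questions than are shown here.
datasheet = {
    "motivation": "Why was the dataset created, and by whom?",
    "composition": "What do the instances represent, which groups are covered, "
                   "and in what proportions?",
    "collection_process": "How, when, and with what consent was the data gathered?",
    "recommended_uses": "The tasks the dataset was designed and evaluated for.",
    "out_of_scope_uses": "Use cases the creators explicitly advise against.",
    "evaluation": "The metrics and subgroups on which any accompanying model was tested.",
}

# Render the datasheet as a plain-text report that can ship with the release.
for section, prompt in datasheet.items():
    print(f"{section.replace('_', ' ').upper()}\n  {prompt}\n")
```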
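
Similarly, for point 5, here is a minimal sketch, with made-up data and illustrative column names, of the kind of disaggregated evaluation that surfaces such intersectional gaps: rather than reporting a single aggregate number, the error rate is computed separately for each combination of skin tone and gender.

```python
import pandas as pd

# Hypothetical evaluation log: one row per test image, with demographic
# annotations and whether the face recognition model got it right.
# All data and column names here are made up for illustration.
results = pd.DataFrame({
    "skin_tone": ["darker", "darker", "darker", "darker",
                  "lighter", "lighter", "lighter", "lighter"],
    "gender":    ["female", "female", "female", "male",
                  "female", "female", "male", "male"],
    "correct":   [False, True, False, True,
                  True, True, True, True],
})

# A single aggregate number hides the problem...
overall_error = 1 - results["correct"].mean()
print(f"Overall error rate: {overall_error:.1%}")

# ...so report the error rate for every intersectional subgroup instead.
by_group = (
    results
    .groupby(["skin_tone", "gender"])["correct"]
    .agg(error_rate=lambda c: 1 - c.mean(), n="size")
)
print(by_group)
```

On these made-up numbers, the overall error rate looks modest, while the darker-skinned female subgroup carries almost all of the errors, which is exactly the pattern a disaggregated view is meant to expose.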

Timnit concluded the session by emphasising the need to push for standardisation in AI/ML. Just as other disruptive technologies, such as automobiles or medical drugs, created upheaval in their nascent stages but over time came to be seen as commonplace, governed by established industry standards and certification, we as ML enthusiasts need to work together with people from diverse backgrounds to ensure safe and ethical AI systems.

All in all, you know those talks that give you goosebumps as you realise there are ramifications of your work that you had not even considered before? This talk was one of them! If you ever get a chance to attend a talk by Timnit, please do, not just for the content but also for the energy she brings to it (a recording of the session at MLSS 2019 can be found here). Her passion for her work clearly carries over into her presentation, which was extremely inspiring. Thank you Timnit Gebru, Marc Deisenroth and Arthur Gretton for the lovely experience! :)

Originally published at https://saashanair.github.io on July 25, 2019.

