Ethical Futures of AI workshop #1

lex fefegha
Published in The Comuzi Journal
3 min read · Feb 22, 2018

On the 23rd of January, we linked up with the team at Fearless Futures to run an all-day workshop on understanding bias and privilege, and on identifying how we could encode inclusion into the design and development of AI systems.

We had 15 attendees who were passionate and intrigued about the debate around AI ethics, from organisations such as Google DeepMind, Microsoft, BBC R&D/News Labs, Mars Chocolate, Doteveryone and several universities. We started the day by getting to know everyone who attended and developing ground rules so that all of us could be open and share our views.

A workshop of this nature, uncovering bias and privilege, can be a little uncomfortable. However, our excellent facilitation partners at Fearless Futures made the workshop an enjoyable and educational experience for all.

We explored comfort zones, reflected intensively, individually and as a group, on the concept of privilege, and worked through scenarios and games demonstrating how bias can shape our view of the world as researchers, designers and technologists working in this space.

We wrapped up the day by introducing the workshop participants to a case study of algorithmic racial bias, which led to a discussion about what accountability looks like in AI and who should be held accountable when an AI system is involved in an ethical dilemma.

Key Takeaways

1. Poorly considered or designed features can no longer be allowed to slip into the development process just because you are working quickly under an agile methodology. Furthermore, the concepts of “fail fast” and “fail often” that are synonymous with this framework need to be reconsidered.

You may have failed fast with a product, but while it was live, how did it affect its users, and to what extent? Did your AI product or service make an inaccurate, irreversible decision about an individual based on false information?

This especially needs to be considered in industries such as banking, insurance and healthcare, in which AI technology is being touted as the next “Big Thing”.

2. Assess the quality of the input data. Bad sources of data can be just as much of a hindrance as unconsciously biased programming, with the AI product or service amplifying or compounding the bias.

This was seen in the ProPublica case study of racial bias in AI, where a criminal justice algorithm assessed the risk and likelihood of defendants reoffending. Bad data plus unconsciously biased programming can further socially exclude those already on the fringes of society.
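To make the point concrete, here is a minimal sketch (in Python, using pandas, and not from the workshop itself) of the kind of disparity check ProPublica’s analysis performed: comparing how often people who did not reoffend were nonetheless flagged as high risk, broken down by group. The column names and toy data below are hypothetical, not the real COMPAS data.

    import pandas as pd

    def false_positive_rate_by_group(df: pd.DataFrame) -> pd.Series:
        """For each group, the share of people who did not reoffend
        but were still labelled high risk by the model."""
        did_not_reoffend = df[df["reoffended"] == 0]
        return did_not_reoffend.groupby("group")["predicted_high_risk"].mean()

    # Toy data for illustration only (hypothetical columns and values).
    toy = pd.DataFrame({
        "group":               ["A", "A", "A", "B", "B", "B"],
        "predicted_high_risk": [1,   0,   1,   0,   0,   1],
        "reoffended":          [0,   0,   1,   0,   0,   1],
    })
    print(false_positive_rate_by_group(toy))

A large gap in false positive rates between groups is one signal that the model’s errors fall more heavily on one group, which is exactly the pattern ProPublica reported.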

3. Open up design and development teams to a more diverse and representative group of people. Wide peer review and testing can help quickly assess intent versus impact: intent is the reality we want to achieve, but impact is what actually happens.

Building on the two previous points, it is important to change our thinking: consider not only the primary implications for target end users, but also the secondary and tertiary implications for those who may be affected by such AI products and services, experiencing the consequences of a decision over which they had no control.

The world of emerging technology is fast-moving, but we must engage in good practice and think harder about the world we are building for. It will take time, and we need more conversation among technology companies, technologists, designers, the public and policymakers.
