Toward ethical, transparent and fair AI/ML: a critical reading list

Over the past five years there has been a great deal of enthusiasm about AI, and specifically about machine learning and deep learning. As we continue to deploy AI models in the wild, we are forced to re-examine the effects of knowledge symbolisation, generalisation and classification on the historical, political and social conditions of human life. We also need to remind ourselves that algorithms don’t exercise power over us. People do.

This reading list is intended for engineers, scientists, designers, policymakers and anyone interested in machine learning and AI. It’s an open-ended document that examines machine learning as a sociotechnical system and contextualises its critical discourse. For suggestions and comments, please tweet @irinimalliaraki or drop me an email at e.malliaraki16@imperial.ac.uk

These sections aren’t in any particular order. There is overlap and interaction between the topics, so feel free to jump around as much as you want; reading “out of order” could lead to interesting connections.


CRITICAL AI

Must

Optional


AI ACCOUNTABILITY & GOVERNANCE

Must

Optional


AI TRANSPARENCY, EXPLAINABILITY & BIAS

Bias

Must

Optional

Transparency and Explainability

Must

Optional


AI FAIRNESS

Must

Optional



AI ETHICS

Must

Optional


AI AND LABOUR

Automation and inequality

Must

Optional

Discrimination


AI AND SOCIAL IMPACT

Must

Optional


AI POLICY & LAW



AI AND DESIGN

Must

Optional


AI AUDITING & SECURITY

Must

Optional


PEOPLE & ORGANISATIONS

There are plenty of research groups and initiatives, both in academia and in industry, that are starting to think about the relevance of ethics and safety in AI: