AI, AI on the wall — Who’s the Fairest of them all?

Audrey Lobo-Pulo · Published in Phoensight · 6 min read · Sep 20, 2020
Photo by Lukáš Dlutko from Pexels

“A world perfectly fair in some dimensions would be horribly unfair in others.” - Kevin Kelly

“Fairness” in Artificial Intelligence (AI) applications — both as a concept and a practice — is the focus of many organisations as they deploy new technologies for greater effectiveness and efficiency. Machines are faster at processing large amounts of information and are seen as ‘more objective’ than humans, which appears to make them an obvious choice for progress, and for seemingly impartial actors in ‘fairer’ decision-making.

Yet algorithm-based decisions have not come without their share of controversies — Australia’s recent ‘robo-debt’ government intervention, which wrongly pursued thousands of welfare recipients; the UK’s ‘A-Levels fiasco’, in which graduating students’ grades were downgraded based on historical data; its controversial visa application streaming tool; and concerns about Clearview AI’s facial recognition software for policing are all raising new questions about the role of these technologies in society.

Risk assessments are part of the fabric of modern society, but what we are dealing with here is not just ‘scaling up’ human capacity for decision-making without the unwanted human biases and errors — we are also extolling the ‘virtues of objectivity’ under the guise of ‘fairness’ (which is inherently subjective!) and failing to recognise the many inter-relationships that are being unraveled through the use of these algorithms in our daily lives.

And it is these inter-relationships that are holding together the systems we find ourselves in.
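
Consider how subjective ‘fairness’ itself is: even its mathematical definitions pull against each other. The sketch below is a minimal illustration with invented toy numbers (‘demographic parity’ and ‘equal opportunity’ are two standard criteria from the algorithmic fairness literature, not anything drawn from the cases above): a single set of decisions that one definition calls fair and the other calls unfair.

```python
# A minimal sketch, with invented toy numbers, of how two standard
# fairness criteria can disagree about the very same set of decisions.

# Each record: (group, truly_qualified, model_approved)
decisions = [
    # Group A: 4 qualified of 8; the model approves 4 of 8
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", True, False),
    ("A", False, True), ("A", False, False), ("A", False, False), ("A", False, False),
    # Group B: 2 qualified of 8; the model approves 4 of 8
    ("B", True, True), ("B", True, True),
    ("B", False, True), ("B", False, True),
    ("B", False, False), ("B", False, False), ("B", False, False), ("B", False, False),
]

def approval_rate(group):
    """Demographic parity compares this rate across groups."""
    rows = [d for d in decisions if d[0] == group]
    return sum(d[2] for d in rows) / len(rows)

def true_positive_rate(group):
    """Equal opportunity compares approval rates among the qualified."""
    qualified = [d for d in decisions if d[0] == group and d[1]]
    return sum(d[2] for d in qualified) / len(qualified)

for g in ("A", "B"):
    print(f"Group {g}: approval rate {approval_rate(g):.2f}, "
          f"qualified approval rate {true_positive_rate(g):.2f}")

# Output: both groups are approved at 0.50 (demographic parity holds),
# yet qualified applicants are approved at 0.75 vs 1.00 (equal
# opportunity fails). The same decisions are 'fair' by one definition
# and 'unfair' by the other.
```

Which of these definitions is the ‘right’ one is a value judgement that no amount of computation can settle.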

Issues around ‘trust’ when it comes to AI are multi-faceted — explaining how these technologies arrive at a particular decision, and how reliable they are, is just part of the story! Understanding how our societal systems will change as a result of AI goes to the ‘trust’ that lives within the relationships between people and their changing world.

What we have really created is a way to disrupt these old structures and the perceptions on which they were founded. AI and other decision-making algorithms are forcing us to revisit the moral underpinnings of how we think about ‘fairness’ and its role in our society and where the trust really lies...

Facing up to AI

“Against the infinity of the cosmos and the silent depths of nature, the human face shines out as the icon of intimacy” — John O’Donohue

The use of facial recognition in AI has received much attention in the media, particularly when it comes to human rights and privacy. In a recent article, the New York Times covered some of the many risks of using facial recognition technology, including: its reliability and limitations; how it’s implemented and used; and the legal and moral challenges faced by society in navigating this ethical minefield.

Copyright © Audrey Lobo-Pulo (CC BY-NC-SA), 2020

Calls for more transparency in AI applications, though critically important in understanding hidden biases and uncovering the underlying values in the algorithmic design, only scratch the surface of deeper societal issues around justice and fairness.

To view ‘transparency’ in this context as a window into the decision-making process, from data to output, is to miss how interfacing AI with society is altering our current systems.

No amount of ‘band-aid solutions’ to either the algorithms or the underlying data will be sufficient in addressing what are inherently system-wide problems.

AI transparency, and the toolkits and guidelines for building trust in these technologies, do not go far enough in providing insights into how our systems are being affected across many different contexts: social, economic, financial, political and educational, amongst others.

Take, for example, using AI to analyse a job applicant’s facial movements to determine their suitability for employment in a particular industry, or the claim that AI is able to predict a job applicant’s propensity for ‘job-hopping’, both of which evoke sentiments similar to early eugenics!

While the algorithms and underlying data may be the focus of much scrutiny, and questions of ethics and human rights come to the fore — what’s been largely missing is a deeper understanding of why the problems these technologies seek to address are occurring in the first place, and how these ‘automated solutions’ actually affect the resilience and performance of the industries involved.

Algorithmic transparency alone cannot comprehensively examine the inter-relationships within these contexts, or how they are changing as a result of these technologies. Why? Because historical data and a rules-based ethical framework cannot accommodate a continually evolving world, especially when not everything can be measured, and much in our world is ‘trans-contextual’, learning and responding to the changing conditions around it.
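
A toy example makes the point about historical data concrete. The sketch below is entirely synthetic (the ‘cutoffs’ and the drift are invented for illustration): a single decision threshold is fitted to yesterday’s data and then applied after the world has quietly moved on.

```python
# A minimal sketch, on synthetic data, of why rules fitted to historical
# data can quietly fail once the world changes ('concept drift').
import random

random.seed(0)

def label(score, cutoff):
    """Ground truth: an outcome occurs when the score clears the cutoff."""
    return score >= cutoff

# Historical world: the true cutoff separating outcomes is 0.5.
history = [(x, label(x, 0.5)) for x in (random.random() for _ in range(1000))]

# 'Train': pick the threshold that best fits the historical data.
best = max((t / 100 for t in range(101)),
           key=lambda t: sum((x >= t) == y for x, y in history))

# The world drifts: the true cutoff moves to 0.7, but the rule does not.
today = [(x, label(x, 0.7)) for x in (random.random() for _ in range(1000))]
accuracy = sum((x >= best) == y for x, y in today) / len(today)

print(f"learned threshold: {best:.2f}, accuracy after drift: {accuracy:.1%}")
# The frozen rule keeps deciding by yesterday's standard: roughly one in
# five of today's cases is now misclassified, with no bug in the code at all.
```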

At the heart of the matter lie many questions about how these technologies shape our society, what powers of control we have in directing these changes, and how we perceive and think about fairness. Do we really want to go down the path we’ve prescribed for ourselves?

Good, Better, Best…

Photo by Mikołaj Idziak on Unsplash

“We can be fully human without being in complete control of our world” — Douglas Rushkoff, Team Human

The business of decision-making is fraught with judgement — choosing between various alternatives, discriminating between different features and weighing up multiple possibilities — all in the hope that any actions taken as a result of these decisions will achieve the desired outcome.

Underlying this human desire to predict, and therefore ‘control’, outcomes towards a pre-determined future lies the implicit assumption that these decisions will shift the system (be it an organisation or a society) to a ‘better’ state.

These ideas are not new — one historical example being the desire to ‘improve humanity’, which gave birth to “eugenics” (improving the genetic composition of the human race). First originating in the time of Plato (around 400 BC) and later developed in the early 1900s under the inspiration of Darwinism, the word eugenics literally means “good creation”.

Contrary to popular belief, Darwinism does not fully explain the phenomenon of evolution — Mendel’s research into heredity and variation in peas, along with William Bateson’s interpretation of Mendelian principles, suggested that it could not explain ‘new species’. Interestingly, at around the same time, the English philosopher G. E. Moore, in his Principia Ethica (1903), contended that “good” could not be defined.

These insights are important when using AI technologies to ‘select’ features or, in the example of recruitment and employment, to choose humans for a particular job or task. What this means is that what is ‘good’ and what is ‘fair’ is not only open to interpretation — but even if these could be agreed on, the outcome that’s been engineered may not be as robust as we thought!

In his book, “Out of Control”, Kevin Kelly talks about how “a little touch of randomness… actually creates long term stability”. So what might appear to be sub-optimal choices could actually be critical elements in ensuring the resilience of systems through diversification! Moreover, Kelly emphasises the importance of ‘symbiosis’ (mutually beneficial interactions) in relationships, noting that in “one mutual relationship, evolution could jump past a million years of individual trial and error”.
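
A toy simulation can illustrate Kelly’s point. In the sketch below (the crops, payoffs and probabilities are all invented for illustration), betting everything on the single ‘best’ option has exactly the same expected payoff per season as spreading across several plots, yet compounding punishes the concentrated strategy and rewards the diversified one.

```python
# A minimal sketch, with invented payoffs, of Kelly's observation that
# diversity stabilises a system even when it looks sub-optimal per step.
import random

random.seed(7)

GOOD, BAD = 2.0, 0.01   # a crop either doubles or almost entirely fails
P_GOOD = 0.7            # 70% of seasons are good for any one plot

def season():
    """Multiplier for one plot in one season."""
    return GOOD if random.random() < P_GOOD else BAD

def grow(diversified, seasons=100, plots=10):
    wealth = 1.0
    for _ in range(seasons):
        if diversified:
            # Spread wealth over independent plots: average their outcomes.
            wealth *= sum(season() for _ in range(plots)) / plots
        else:
            # Bet everything on one plot: one bad season hits everything.
            wealth *= season()
    return wealth

print(f"all-in wealth:      {grow(diversified=False):.3g}")
print(f"diversified wealth: {grow(diversified=True):.3g}")
# Both strategies have the same expected multiplier per season
# (0.7 * 2.0 + 0.3 * 0.01 = 1.403), but compounding punishes variance:
# the all-in strategy collapses while the diversified one grows steadily.
```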

What AI applications are as yet unable to capture are elements of ‘mutual learning’, or ‘symmathesy’ as Nora Bateson terms it, which depend on the many contexts and the responses within an environment. It is within these ‘learnings’ that evolution and adaptation occur.

In our earlier example of AI recruitment, determining the ‘optimal’ facial features, which supposedly pre-determine personality traits, misses the inter-relationships and learnings that happen within an organisation. Not only that, it also limits the opportunities for innovation and growth within the organisational ecosystem.

AI technologies may be able to optimise for what we think are ‘best case scenarios’, but in doing so may be dismissing key attributes and features that are essential for our long-term viability — whether as an organisation, an industry or a nation.

Phoensight is an international consultancy dedicated to supporting the interrelationships between people, public policy and technology, and is accredited by the International Bateson Institute to conduct Warm Data Labs.
