AI and Ethics: possibly one of the most important debates of our time, and it gets boiled down to a bizarre GCSE-like maths problem.
The test goes:
“There are three cars speeding along…
In the Renault is a young single mother, with two jobs and her rent is paid.
In the Chrysler is an aging judge, a philanthropist and a pillar in the community.
In the Ford is a convicted criminal: a real bad egg.
The cars are on course to crash and one individual will perish.
If you, me or AI had to decide, who would meet their maker?”
This is the introduction to a panel discussion on the Ethics of AI. The crowd is asked to stand in an area labelled with a car manufacturer. As a participant observer, I’m in the Ford corner, along with, I’d guess, a 50% majority. The Judge and the Mother split the remainder roughly equally.
So who was “right”?
No one knows. The moderator didn’t continue the experiment. We were shuffled back to our seats, slightly deflated. That was it.
I sat there thinking, “It’s weird that we have such conflicting opinions, even though we’re all part of the same cultural clique: young(ish), London-based, tech enthusiasts”.
Although no “correct” answer was hypothesised by the moderator, Peter Hotchkiss (UX/UI Manager at Clarksons), it opened a critical vortex in the Ethics of AI discussion. If we can’t agree on the facile death of a fictional criminal over a fictional mother — and we are cut from the same cultural cloth — how is AI going to make decisions, globally?
Indeed, what cultural values system will AI be born into?
Surprisingly for the panel, what seems at first glance to be an impossible question — What culture will AI belong to? — actually has a more straightforward answer.
AI will learn a moral code from existing human values. Most likely from its designers: so, (mostly) white, privileged men, educated in the West.
That said, panelist Charlotte Stix, a Research Associate on the AI Policy and Responsible Innovation Project at University of Cambridge, says we should hold on to the notion that human values are transient and destined to change.
“My worldview and values base are very different to my grandmother’s,” she said.
Stix calls for a global values system to be created to feed into the development of AI. This shouldn’t be a rigid doctrine such as the UN Universal Declaration of Human Rights, because anything set in stone cannot change (easily).
Rather, Stix argues, “we have to give an algorithm the tools to reason when it faces an ethical dilemma.”
That way, the value system is in a constant state of flux, much like our own.
Algorithms As Decision Makers
“Algorithms will make more and more decisions about us: legally, educationally and in terms of employment,” stated panelist Ivana Bartoletti, Head of Privacy & Data Protection Practice at Gemserv.
Fellow panelist Seyi Akiwowo, founder of Glitch!, a UK not-for-profit tackling online abuse, concurs. She speaks about how young black women’s car insurance premiums in Newham are going up because the crime rate in the district is on the rise. The women are being legally judged, and financially penalised, on their postcode.
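The mechanism behind this kind of postcode penalty can be sketched in a few lines. This is a deliberately naive pricing rule, not any real insurer’s model; all postcodes, crime rates and numbers here are invented for illustration:

```python
# Illustrative sketch (invented figures): how pricing keyed on district
# crime rates penalises individuals for where they live, regardless of
# their own driving record.

BASE_PREMIUM = 600.0

# Hypothetical crime rates per district (incidents per 1,000 residents).
CRIME_RATE_BY_POSTCODE = {
    "E15": 110,  # higher recorded crime
    "SW7": 35,   # lower recorded crime
}

def quote_premium(postcode: str, claim_free_years: int) -> float:
    """Naive insurer logic: load the premium by district crime rate,
    with a small discount per claim-free year (capped at five)."""
    rate = CRIME_RATE_BY_POSTCODE[postcode]
    loading = 1.0 + rate / 200.0               # crime-rate surcharge
    discount = min(claim_free_years, 5) * 0.02  # personal-record discount
    return round(BASE_PREMIUM * loading * (1.0 - discount), 2)

# Two drivers with identical five-year clean records:
print(quote_premium("E15", claim_free_years=5))  # → 837.0
print(quote_premium("SW7", claim_free_years=5))  # → 634.5
```

Note that the two drivers’ own records are identical; the entire gap comes from the district-level variable, which is exactly the proxy effect Akiwowo describes.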
So how can we prevent this level of bias?
Bartoletti believes algorithm development should be made public, so that researchers can scrutinise tech companies and, hopefully, guide safer, more holistic AI development. French president Emmanuel Macron is leading the way.
In March 2018, Macron “guaranteed that all AI algorithms that his government creates will be open to scrutiny to mitigate the threat to democracy”. I wonder where the UK, Russia, and USA are on that front?
Global Problem Needs A Global Solution
The ‘problem’ [with this debate] is that we are in London.
Well, not just London. Any wealthy metropolis where these debates happen — and where AI algorithms are being developed — cannot be representative of the whole world, and that is the critical issue at hand.
The pragmatic solution is to make AI a truly global project and debate.
That means we need AI developers in Africa, Latin America, the Middle East, Asia-Pacific and everywhere else. And the best developers from these regions shouldn’t have to ply their trade in Western tech ecosystems for job security and better wages, but in thriving local ecosystems of their own.
The logical way an ethical AI can begin to develop is through the focused nurturing of startup ecosystems in emerging countries the world over. Then smaller nations will be able to enter the AI expansion project and — crucially — feed their local, culturally specific values into the global AI system.
Of course, even then there will still be cultural discrepancies. In London we couldn’t even agree on which hypothetical stereotype should perish in their car! However, the evolution of developing-world startup ecosystems could democratize Western tech hegemony and help create an AI for all.
Either that, or we start giving ethics lessons to Harvard grads!
Craig writes for Calcey Technologies, a boutique software product engineering agency with roots in Silicon Valley that lends its software development muscle to start-ups and scale-ups around the world. Calcey’s team of 100 engineers, based at its development centre in Sri Lanka, serves multiple startups in London and is keen to engage with more, particularly those applying AI to disrupt industries. Calcey’s clients also use it as an R&D centre for proof-of-concept projects when productising new ideas. Calcey’s client portfolio includes well-known names such as PayPal and Stanford University, alongside exciting startups.