3 Lessons from the Ethics panel at the Artificial General Intelligence Forum, Biotech Campus, Geneva, January 22, 2019
The speakers were Prof. Jean-Henry Morin from the Institute of Information Sciences at the University of Geneva; the ethicist Dr. Johan Rochel, Co-Founder of Ethix.ch; the filmmaker and futurist Piotr Reisch; and Jérôme Berthier, Head of AI, Big Data and Innovation Lab at ELCA.
I was eager to moderate the session, not because of my expertise in AI (I have none), but rather because of my skepticism of it (I wrote about this here).
Here are 3 lessons from the panel:
Lesson #1: Bias and “Deep Fakes” are among the best-known fears about AI
To mitigate bias in AI, it is important to understand what an AI system actually “sees” or “understands”. Fundamentally, AI systems are only as good as the data we put into them, and avoiding “bad” data that contains implicit racial, gender, or ideological biases is necessary to keep AI systems “fair”.
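A minimal sketch of the “garbage in, garbage out” point above: a model that simply learns per-group averages from historically biased labels will faithfully reproduce that bias. The dataset, groups, and hire rates below are invented purely for illustration.

```python
import numpy as np

# Hypothetical historical hiring data: two equally qualified groups,
# but group "B" was hired far less often in the past.
rng = np.random.default_rng(1)
groups = rng.choice(["A", "B"], size=10000)
hired = np.where(groups == "A",
                 rng.random(10000) < 0.60,   # group A: ~60% hire rate
                 rng.random(10000) < 0.20)   # group B: ~20% hire rate

# "Train" the simplest possible model: predicted hire probability per group.
model = {g: hired[groups == g].mean() for g in ["A", "B"]}
print(model)  # the model dutifully learns the historical disparity
```

No amount of clever modeling fixes this afterward; the bias is already in the training labels, which is why data curation comes before algorithm design.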
“Yes, AI can be racist” (Rep. Alexandria Ocasio-Cortez)
In addition, generative adversarial networks (GANs), which pit two dueling neural networks against each other, can create extraordinarily realistic but completely made-up images and videos (“deep fakes”). Algorithms can decide what news and information surfaces on social media, and can amplify misinformation, undermine healthy debate, and isolate citizens with different views, instilling fear and mistrust through “fake news” (think Cambridge Analytica).
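The adversarial idea behind GANs can be sketched in a few lines of toy code. This is a deliberately minimal 1-D version (not a real image model): the “generator” is an affine map of noise, the “discriminator” is a logistic regression, and each is updated against the other. All parameter names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Discriminator: D(x) = sigmoid(w * x + c), scores "how real" a sample looks.
w, c = 0.1, 0.0
# Generator: G(z) = a * z + b with z ~ N(0, 1), tries to mimic N(4, 1).
a, b = 1.0, 0.0

lr = 0.01
for step in range(2000):
    real = rng.normal(4.0, 1.0)           # one real sample
    z = rng.normal()
    fake = a * z + b                      # one generated sample

    # Discriminator ascent on log D(real) + log(1 - D(fake))
    s_real = sigmoid(w * real + c)
    s_fake = sigmoid(w * fake + c)
    w += lr * ((1 - s_real) * real - s_fake * fake)
    c += lr * ((1 - s_real) - s_fake)

    # Generator ascent on log D(fake): try to fool the discriminator
    s_fake = sigmoid(w * fake + c)
    grad_fake = (1 - s_fake) * w          # d log D(fake) / d fake
    a += lr * grad_fake * z
    b += lr * grad_fake

samples = a * rng.normal(size=1000) + b
print("mean of generated samples:", samples.mean())
```

The same two-player dynamic, scaled up to deep networks and image data, is what makes deep fakes both so convincing and so hard to detect.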
Designing a “fair” system devoid of racism, sexism, and ideological zealotry is not, however, only a technological challenge. It requires transparency, collaboration, and diversity. Steering away from a “black box” mentality can help avoid photo mis-identifications (think Google Photos), inaccurate answers from voice assistants (think Siri), and poor medical advice (think Watson).
We need to open the “black-box” and develop AI in an open, transparent and diverse community.
Lesson #2: We should fear “Natural Stupidity” more than Artificial Intelligence
In the attempt to align machines with human ethics, we discussed how AI should be developed, deployed, and used with an “ethical purpose”. AI must be grounded in, and reflective of, fundamental rights, societal values, and the ethical principles of Beneficence (do good), Non-Maleficence (do no harm), Autonomy of humans, and Distributive Justice.
The panel acknowledged that while AI brings substantive benefits to individuals and society, it can also have a negative impact through aberrant uses such as:
- Identification without consent (counter to GDPR)
- Mass Citizen Social Scoring without consent (as practiced in China)
- Lethal Autonomous Weapon Systems (LAWS, as sought by the US and UK)
To avoid such abuse it is necessary to work within an ethical framework, such as the one developed by the European Commission’s High-Level Expert Group on AI (AI-HLEG).
Many conditions necessary for Trustworthy AI (transparency, immutability, self-sovereignty, incentives) can be enhanced by the concomitant use of AI and blockchain.
Lesson #3: Blockchain is actually a form of AI
One of the panelists theorized as follows:
“Imagine in the future a code that will run by itself, improve itself and get smarter and harder to beat by the minute…”
“…Imagine in the future a code, that as it gets more and more intelligent, it will require more and more energy and resources, to the point that it will strain planetary energy supply…”
“… That code already exists: it is called blockchain…”
Instead of thinking of blockchain as a necessary part of Trusted AI, isn’t blockchain itself a form of Trusted AI?
Blockchain enhances data security, enables safe collaboration among previously non-cooperative parties (it opens the “black box”), and establishes peer-to-peer networks, effectively eliminating intermediaries and reducing cost and time.
Blockchain can provide the accurate, verified data needed to reduce bias and “deep fakes”. It can thereby help AI become a reliable source of information and knowledge for retailers, businesses, financial institutions, health and educational organizations, scientific researchers, non-profits, and governments.
But before we worry about how to protect ourselves from conscious, self-aware AI machines, let us concentrate on following ethical guidelines and on protecting AI machines from ourselves and our natural human stupidity.