Embedding Ethics Into AI: How to Keep the Social Contract Alive
Despite our strong belief in AI’s potential to improve our society and do good, we can’t deny its very real dark side. Whether intended or not, ethical violations are prominent, relating largely to privacy, bias and a lack of transparency in the decision process.
Controversial examples of the ‘AI vs Ethics’ debate are taking place all around us. China’s widespread implementation of facial recognition technology is one such case; the most recent announcement is that telecommunications carriers now scan the faces of everyone signing up for internet or mobile service. It has even led the US government to blacklist China’s facial recognition industry, whose technology the US sees as “abusive.”
The threat is so real that a Justice League has been formed to fight the potential evil of AI. This band of uncaped crusaders is called the Algorithmic Justice League, and they’re on a mission to increase awareness of algorithmic bias, to provide a way to report coded bias and to develop a process for requesting a bias check during the design, development and deployment of any coded system.
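To make the idea of a bias check a little more concrete, here is a minimal sketch of one common fairness measure, the demographic parity gap: the difference in favorable-outcome rates between groups. The group names, decisions and interpretation below are illustrative assumptions on my part, not the League’s actual process.

```python
# Minimal sketch of a demographic parity check (illustrative only).
# Assumes a list of (group, decision) pairs, where decision is 1 for a
# favorable outcome (e.g. a loan approval) and 0 otherwise.
from collections import defaultdict

def demographic_parity_gap(records):
    """Return the largest gap in favorable-outcome rates between groups."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        favorable[group] += decision
    rates = {g: favorable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical decisions produced by a system under review.
decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 1), ("group_b", 0), ("group_b", 0)]

gap, rates = demographic_parity_gap(decisions)
print(rates)               # favorable-outcome rate per group
print(f"gap = {gap:.2f}")  # a large gap flags the system for closer review
```

A single number like this is obviously not a full audit, but it illustrates the kind of check that could be requested at each stage of design, development and deployment.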
But are these initiatives enough to keep the social contract intact?
The General Data Protection Regulation (GDPR), the EU privacy law that went into effect in 2018, is the toughest regulatory framework we’ve seen to date. But even that attempt is failing to deliver on its intended purpose of protecting the populace. In fact, it may be doing more harm than good: adverse outcomes observed so far include damage to the EU economy, harm to European tech startups and a failure to increase trust among tech users.
Many countries worldwide have even implemented their own national artificial intelligence strategies, with areas of focus ranging from scientific research to private- and public-sector adoption to ethics and inclusion. As the list of countries with AI strategies continues to expand, it’s clear that AI policy and regulation are a hot topic.
So we come back to the question: how can we guide the ethical and responsible use of AI by society leaders?
The group called fAIr LAC, an alliance between the public sector, the private sector and civil society that works to ensure the ethical application of artificial intelligence, is holding internal discussions on the following solutions:
- Regulation of the use cases
- Regulation of the algorithms
- Regulation of the quality of the data used (clean and unbiased; see the sketch after this list)
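As a rough illustration of what regulating data quality might look like in practice, the sketch below audits a training table for missing values and for how evenly a protected attribute is represented. The column names, the 10% threshold and the use of pandas are my own assumptions for the example, not a standard proposed by fAIr LAC.

```python
# Rough sketch of a data-quality audit (illustrative assumptions throughout).
import pandas as pd

def audit_training_data(df: pd.DataFrame, protected_column: str) -> dict:
    """Report missing values and group representation for a training table."""
    missing_share = df.isna().mean()  # fraction of missing values per column
    group_share = df[protected_column].value_counts(normalize=True)
    return {
        "missing_per_column": missing_share.to_dict(),
        "representation": group_share.to_dict(),
        # Flag groups that make up less than 10% of the rows
        # (an arbitrary example threshold, not a legal standard).
        "underrepresented": group_share[group_share < 0.10].index.tolist(),
    }

# Hypothetical training data.
df = pd.DataFrame({
    "income": [40_000, None, 55_000, 62_000, 48_000],
    "gender": ["f", "f", "m", "m", "m"],
    "label":  [1, 0, 1, 1, 0],
})
print(audit_training_data(df, protected_column="gender"))
```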
But given the speed at which the industry moves, the most widely agreed-upon approach seems to be the creation of a “regulation sandbox.”
A regulation sandbox is one of the latest buzzwords in the world of AI; it can be defined as a type of “lab environment” in which to test a regulatory process. What is the objective of a regulation sandbox?
The best answer to this is:
“The risk management of disruptive technologies is one of the main reasons for the existence of a sandbox. Another one is the dynamic learning process in the sandbox which allows the regulator to be one step ahead and to perceive more accurately, the legislative challenges, as well as to react to those challenges quicker.”
We have yet to see regulation sandboxes in action in the realm of AI, so while the intention sounds promising, the real-world results remain to be seen.
What it comes down to is that technology is not neutral: humans give every algorithm and every line of code its intention and purpose.
As leaders and doers in this industry, we are the ones responsible for reviewing and ensuring the ethics of the AI products and projects we deploy.
Not only do we need to make sure privacy protections are ironclad, but we also need to take every precaution against bias at every stage of research, development and deployment, and keep transparency at the forefront. It’s a tall order, but we must take on the challenge. The survival of the social contract depends on it.