Why Do We Need an Ethical Framework for AI?

Jean-Francois Gagné
Element AI
Jun 18, 2018

Next week I’m headed to Brussels to be confirmed as a member of the European Commission’s High-Level Expert Group on AI (you can check the link for the mandate and the full list of members). Non-European representatives are incredibly rare in the EC’s HLGs, and I’m honored to have been accepted, as we’ll be acting as the steering committee for the European AI Alliance’s work. It is an incredible initiative supporting broad collaboration across domains of expertise, industry, society and nationality. I applied to join because Europe has been setting the example for creating the much-needed frameworks for cooperation and regulation, both locally and as a bloc. It is critical that we do not overcomplicate these frameworks, and that we create something the rest of the world can build on.

Ethics is a topic of conversation everywhere in the AI community. Many organizations are flaunting the ethical standards they’ve created or revamped, trying to show that they are on the right side of history. But while it’s clear to most people that machines should follow ethical rules, I don’t think we’ve done a good job of explaining the limitations of implementing those rules, or why we still need to develop an ethical framework for machines. After all, don’t we already have ethical frameworks to use? Yes, we do, but they govern the behavior of people in society, not machines automating our world. A productive conversation about regulating AI will depend on us figuring out how to translate our stated values, whatever they may be, into a language that machines can understand.

How we currently shape our ethics

As people, we are born into a framework, a training system that starts with our parents teaching us their values and shaping the fundamental structures of our behavior. After just a few years of development, we mix in another, broader set of instructions at school. There we are taught how to engage in social relationships, learning stories about what’s right and wrong, starting with simple nursery rhymes and evolving into detailed histories of the ongoing debate of Right vs. Wrong.

Eventually our values are more or less set in stone and we become full adults responsible for applying them, though the training is not yet done. Our businesses and institutions impose long-standing agreements on how those values are applied in day-to-day life. Through codified rules and objectives, we have a long list of explicit ethics: what one should do as a citizen. But embedded throughout society we also have checks and balances on behavior, from subtle cues to outright whistleblowing, that enforce implicit ethics we have not yet formalized.

We have this gray area because some things are still up for debate, whether the behavior is as old as time or newly possible thanks to technology. We don’t always see how actions can accumulate harmfully or have knock-on effects that are bad for society. Thankfully, we have this robust system of checks and balances that keeps the debate going and acts as a guardrail against runaway behavior while we figure things out, an extension of the role of our parents and teachers. Our overall ethical framework as people is ultimately a dialogue; it is constantly evolving, updating with new generations of people and the continuing debate of Right vs. Wrong.

The void of an ethical framework for machines

When we create models of the world to automate tasks, we isolate those tasks from our framework of evolving values. We use AI to encode models of the world by training machines on data. This is very useful because it produces models we as people could not fully specify ourselves (otherwise we would have coded them by hand). These models are becoming exponentially cheaper and more accurate, but also more complicated and harder to understand as they continuously improve through feedback loops of more data. We cannot comprehend all the possibilities, and therefore cannot preemptively set all of the rules needed for their behavior.

This would be OK if we were able to set guardrails, but right now we don’t have those either. While the machine’s model of the world may capture the ethics of the moment when the training data was collected and the intent was set (consciously or unconsciously), it can then run without any further dialogue, effectively operating in a void of any ethical framework. That is because the language of our ethical framework as people (social relationships, institutions, words) is not the same as the language the machine operates in (data).

If we want to apply the power of these tools to certain areas, we will need to introduce new levels of hygiene to our data, and even to our ethics as people. A hospital can perform incredible feats of healing, but it requires a sterile environment to do so. We can perform great feats of societal cohesion with AI, but we will need to practice good hygiene with our data, regularly scrubbing it for bias or for behavior that should never be automated (a minimal sketch of one such check follows below). It is by engaging with the feedback loops of training data that we will be able to create levers that extend our ethical framework into the machine’s model.
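
To make “scrubbing for bias” concrete, here is a minimal sketch in Python of one such hygiene check: measuring whether outcomes in a training set differ sharply across a sensitive attribute. The column names, the toy data and the 0.2 threshold are all hypothetical illustrations for this sketch, not prescriptions from any standard.

```python
# A minimal sketch of one "data hygiene" check: auditing a training set
# for outcome disparity across a sensitive attribute before training.
# Column names ("gender", "approved") and the threshold are hypothetical.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Rate of positive outcomes within each group."""
    return df.groupby(group_col)[outcome_col].mean()

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Largest difference in positive-outcome rate between any two groups."""
    rates = selection_rates(df, group_col, outcome_col)
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    # Toy data standing in for a real training set.
    data = pd.DataFrame({
        "gender":   ["f", "f", "f", "m", "m", "m", "m", "f"],
        "approved": [0,    1,   0,   1,   1,   1,   0,   0],
    })
    gap = demographic_parity_gap(data, "gender", "approved")
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.2:  # Arbitrary illustrative threshold, not a standard.
        print("Warning: large outcome disparity; review before training.")
```

A check like this doesn’t settle the underlying debate about what counts as fair; it simply surfaces the pattern in the data so the human dialogue can happen before a model is trained on it.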

We must extend our ethical dialogue as people to machines. It is by adding more and more of these touchpoints throughout a machine’s development and use that we can come to speak the same language and be sure machines will respect our laws and values. This conversation is going to be very challenging, both with the machines and amongst ourselves, as we determine how to build the new framework. It will force us to become more conclusive about some debates we’ve allowed to stay gray for too long.*

This beginning in Europe is encouraging, though. We are off to the right start by bringing to the table experts who have deep knowledge of our institutions, laws, social relationships and debates. As technologists, we will need to do our part to build the means of translation and not avoid the hard questions that are certain to come.

////

*While AI has lots of potential for automating harmful bias, it can also highlight bias in a powerful way. Right now, the lack of explainability in algorithms used in the justice system prevents them from rationalizing biased decisions, letting the pattern of bias speak more plainly. This has helped fuel the debate on overall bias in the justice system and put the brakes on the deployment of algorithms while these very difficult conversations (hopefully) get worked out.

Originally published at www.jfgagne.ai on June 18, 2018.
