Tech Giants and Vatican Sign AI Ethics Pledge

Bradley Ramsey
Supplyframe
5 min read · Apr 2, 2020

As artificial intelligence becomes more complex and capable, ethics will play a crucial role in deciding how we move forward, but like most ethical quandaries, it’s not that simple…

In February 2020, both IBM and Microsoft signed something called the “Rome Call for AI Ethics.” This pledge was drafted by the Pontifical Academy for Life, which was founded in 1994 with the intention of aiding the world in the pursuit of ethical guidelines as we develop new sciences and technologies. It also happens to be an institution of the Catholic Church.

This is how the Vatican became involved with the initiative. What we have here is a shared agreement on how things should proceed, but very little on how to actually make it happen. Even so, it’s a discussion that deserves more time and attention. It’s our duty as both engineers and pioneers to set the proper precedent for the continued development of artificial intelligence.

Why Ethics Matters in AI Development

You’re sick of living in an apartment, so you decide it’s time to buy a house. Your bank has recently implemented a machine learning algorithm to assist with approving mortgage applications. You know your credit score is good, your income is secure, everything is in place. And yet, the system rejects you.

You ask the bank to investigate, because there’s no reason why this should have happened. They agree that it seems odd, and upon further inspection, it becomes obvious that the algorithm is somehow discriminating against specific candidates.

You demand an answer, as do others who reveal their own experiences. Finding one may not be easy. Depending on how the algorithm was created, you may never know why it’s skewing in this direction, simply because it’s too complex to parse.

In other cases, it could be possible to pinpoint the issue as something oddly specific, like the algorithm denying applicants with a prior history of living in certain areas. This is similar to an example proposed in the paper The Ethics of Artificial Intelligence, by Nick Bostrom and Eliezer Yudkowsky. It’s also not an unrealistic notion.
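That kind of proxy discrimination is easy to reproduce with a toy model. The sketch below uses entirely synthetic data and hypothetical ZIP codes: historical loan decisions encode human bias against one area, and a naive model "trained" on that history learns to deny qualified applicants from that area even though it never sees a protected attribute.

```python
import random
from collections import defaultdict

random.seed(42)

def past_decision(zip_code, score):
    # Hypothetical historical labels that encode human bias: qualified
    # applicants from ZIP "A" were frequently denied anyway.
    if score < 650:
        return False
    if zip_code == "A":
        return random.random() > 0.7  # most qualified "A" applicants denied
    return True

history = [(random.choice("AB"), random.randint(550, 800)) for _ in range(20000)]
labeled = [(z, s, past_decision(z, s)) for z, s in history]

# A naive "model" trained on that history: estimate the approval rate
# among qualified applicants in each ZIP, and approve only when the
# learned rate exceeds 50%.
stats = defaultdict(lambda: [0, 0])  # zip -> [approved, qualified]
for z, s, approved in labeled:
    if s >= 650:
        stats[z][1] += 1
        stats[z][0] += approved

def model(zip_code, score):
    approved, qualified = stats[zip_code]
    return score >= 650 and approved / qualified > 0.5

# Two applicants with identical finances, differing only by past address:
print(model("A", 720))  # denied purely because of ZIP history
print(model("B", 720))  # approved
```

Nothing in the model's code mentions race, gender, or any protected class; the bias rides in entirely on the training labels, which is exactly why it can be so hard to spot after deployment.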

Whether it’s facial recognition exhibiting racial and gender bias, or recruiting tools penalizing resumes from women, there’s no shortage of AI stories that end with the stark realization that these things are not as unbiased as we thought, and that’s an issue with the AI we have now.

Ethics can and should inform technology. It offers checks and balances to ensure we never forget the human element in what we do. Let’s take a deeper dive into this “Rome Call For AI Ethics,” and see if it offers a suitable place to start.

Promises are Good, Actions are Better


While such a call for ethics from a religious body would usually receive less attention, it was the involvement of both Microsoft and IBM that gave this story legs across multiple audiences.

During the signing event, the term “algor-ethics” was used to denote the ethical use of AI. In summary, it encompasses six principles:

  • Transparency: Artificial intelligence systems should be explainable.
  • Inclusion: Taking into account the needs of humans from all walks of life.
  • Responsibility: The engineers and designers at the heart of AI development should proceed responsibly and with open transparency.
  • Impartiality: Avoiding bias in both the creation of the AI and in training it how to act.
  • Reliability: AI systems should work reliably and within expected parameters.
  • Security and privacy: AI systems should work securely and respect the privacy of users.
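Of these, transparency is perhaps the easiest to make concrete. A minimal sketch, assuming a hypothetical rule-based scorer rather than an opaque learned model: every decision is returned together with the specific rules that produced it, so a rejected applicant can be told exactly why. The feature names and thresholds here are invented purely for illustration.

```python
def score_application(app):
    """Return (approved, reasons): an explainable yes/no decision."""
    reasons = []
    if app["credit_score"] < 650:
        reasons.append("credit score below 650")
    if app["debt_to_income"] > 0.43:
        reasons.append("debt-to-income ratio above 43%")
    approved = not reasons  # approve only when no rule fired
    return approved, reasons

approved, reasons = score_application(
    {"credit_score": 700, "debt_to_income": 0.50}
)
print(approved)  # False
print(reasons)   # ['debt-to-income ratio above 43%']
```

A deep neural network can't produce reason codes this cleanly, which is precisely the tension the transparency principle runs into: the most capable models are often the least explainable.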

Artificial intelligence systems require training data to become autonomous in their decision-making. Even at this early stage, we hit a roadblock: everyone has their own personal biases, whether they are conscious of them or not.

This simple fact, combined with the reality that companies aren’t going to be transparent when their IP is at stake, shows us just how difficult these principles will be to enforce.

A few solutions have been tossed around. One option is for the industry to regulate itself through the creation of internal advisory committees that incorporate cross-functional leaders and strategies that engage with stakeholders, customers, employees, and regulators.

A group like this would need to oversee the development of AI from the inception of its design, all the way into development, deployment, and its continued use.

This is honestly just the beginning, though. Much in the way that parents are expected to instill values and ethics into their children, those children will someday grow up to become their own people with their own learned values and perceptions.

AI, in a lot of ways, is no different, albeit on a much larger scale. As engineers across the industry, we need to be the ones who enforce and ensure ethics from the very beginning.

With regard to the biases that we all have (conscious or unconscious), Renée La Londe, CEO and founder of iTalent Digital, offers a potential solution:

“There is only one way to ensure that AI truly considers every segment of society, including the most vulnerable. And that is to ensure that the group of individuals building and shaping AI is representative of the entire ‘human family,’” she said. “Because of this, it is imperative that we attract people from all walks of life into AI development. Otherwise, unconscious (and conscious) biases will be baked into the technology, which could put certain segments at risk.”

Diversity in AI development is a powerful tool to ensure that we’re instilling the right values into these technologies that will ultimately think for themselves when released into the wild.

In reality, though, this entire initiative will only work if companies embrace ethics from the ground up. It’s a good start, and a nice idea, but it’s going to take a lot more than vaguely worded principles to enforce and ensure that artificial intelligence is used in the proper ways.

Down The Rabbit Hole

The thing about ethics and AI is that we could spend the next several thousand words going deeper into all the various ways it could be abused or mistreated.

What about surveillance? Healthcare? The list goes on and on. While we don’t have all the answers here, we are in a position to start asking the right questions.

I myself am always up for a rousing philosophical discussion to pair with my morning coffee, so feel free to share your thoughts on this topic in the comments!


Bradley Ramsey
Technical Writer at Supplyframe. Lover of dogs and all things electronic.