What are we doing to make AI more ethical?

In which a slightly less uninformed woman dives deep into the world of AI ethics.

Solveig Neseth
daios
6 min read · Jul 20, 2022


Photo by Kevin Ku on Unsplash

For context, I am an opera singer with virtually zero experience or expertise in the world of AI. To further understand the purpose behind this blog series, you can read the epilogue here.

Last month, I had a lengthy and deep conversation with daios’ AI Ethics Lead, Dr. Thomas Krendl Gilbert. If you happened to read that conversation, and are anything like me, you were likely overwhelmed by the information Tom had to share. It addressed a lot of heavy philosophical issues around the ethical pursuit of AI building, nearly all of them difficult to reconcile. It’s taken me a while to digest the conversation, but now I have some follow-up questions. Many of Tom’s ideas pointed to a need for experts and laymen alike to take greater responsibility for the ways we move forward with building AI. My biggest hang-up: how is anyone going to incorporate such heavy ideas into a digital system?

To see what current advocates for AI ethics are doing to tackle this, I spoke with AI engineer and UCL PhD candidate Adriano Soares Koshiyama, co-founder and CEO of Holistic AI. Adriano had a long, winding professional path that led him to where he is today. While he professed his greatest influence in AI ethics to be his co-founder and COO, Emre Kazim, Adriano took his first dip into the AI ethics pool many years ago through the semi-related field of AI safety. Please do not ask me to differentiate between the two; I have asked my colleagues at daios many, many times and still do not believe my understanding is reliable enough to educate the masses. Suffice it to say, the two fields have similar goals but different approaches and beliefs about automation. You can think of AI safety as Adriano referred to it: a “weird cousin” of AI ethics.

But Adriano was never entirely satisfied by the safety approach, and he was eventually led further in the direction of AI ethics while working for a company called MindX, where he developed machine learning models to score game-based assessments.

“I built the model and my friend, Kiki, who was the psychologist there at the time, said, ‘that’s great, the models are amazing, but you need to do an Adverse Impact Assessment.’” For those who have never heard of this, do not fear: adverse impact studies are exactly what you’d expect. It boils down to someone studying the data to see whether a certain group of people or demographic has been misrepresented, overlooked, or otherwise discriminated against. Adriano explained to me that in all his experience up to that point, he’d never been prompted to do anything regarding adverse impact and data. “I thought, wow, someone taught me something I had never thought about.” He then elaborated, “to this day, data scientists receive zero training for things like this. All training is to maximize accuracy and minimize the loss function.”
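To make that concrete, here is a minimal sketch of the kind of check Kiki was asking for. It applies the US EEOC’s “four-fifths rule,” under which a group whose selection rate falls below 80% of the highest group’s rate is flagged as potentially adversely impacted. The column names and data are invented for illustration; a real assessment would pay far more attention to sample sizes and statistical significance.

```python
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Selection rate of each group divided by the rate of the
    most-selected group. The four-fifths rule compares this to 0.8."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical assessment results: 1 = passed the game-based screen.
scores = pd.DataFrame({
    "gender": ["f", "f", "f", "f", "m", "m", "m", "m"],
    "passed": [1, 0, 0, 1, 1, 1, 1, 0],
})

ratios = adverse_impact_ratio(scores, "gender", "passed")
flagged = ratios[ratios < 0.8]  # groups falling below the 4/5 threshold
print(ratios)
print("Potential adverse impact:", list(flagged.index))
```

Nothing about this is hard to compute, which is rather the point: the gap Adriano describes is one of training and habit, not of technical difficulty.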

Well, that’s a disturbing reality. Though not quite a surprising one, I suppose. I thought back to my conversation with Tom and remembered how he had to design his own Ph.D. Unfortunately, many degree programs just aren’t offering courses related to AI ethics, and, as Adriano went on to suggest, it’s an issue that affects engineers at every professional level.

“It’s not uncommon that you will walk into a meeting with a CTO and ask, ‘do you know how many AI systems your company employs?’ And they will often have no idea.” Adriano explained that, in the early days of Holistic AI, they worked toward building relationships with companies in order to offer tools to help them police their systems more ethically. “We believed for a long time that trust by verification could work. We thought, ‘Let’s engage with some companies, audit some systems, and provide assurances for these systems.’ The more we engaged this way, however, the more we noticed that the readiness stage of several organizations is still very low. Companies not only have no idea how many systems they have, they also likely don’t have any system to track bias, no policies around mitigating it, nothing.”

What this meant for Holistic AI was that, for now, they’re focused on developing tools and frameworks to help data scientists build these systems and protocols within their own companies. “Our mission at Holistic AI is to solve trust in AI.” According to Adriano, his company has researched two approaches to accomplishing this. The first works through the way the system gets designed: developing new, holistic metrics for assessment that go beyond the typical metrics for bias and accuracy that already exist. “But then you need additional machine learning models to compute those metrics, which can, in turn, cause more reliability problems to occur.” The second approach is policing systems. “In this approach, you need to constantly monitor the system to make sure that it doesn’t deviate. Most of the work we’ve done so far has been with this approach.” Hopefully, companies like Adriano’s can help influence the broader communities that employ automated systems to start taking more steps to ensure ethical AI behavior.
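Adriano didn’t walk me through Holistic AI’s actual tooling, so treat the following as a rough guess at what “constantly monitoring a system so it doesn’t deviate” might look like in code: a sliding window over recent model decisions that raises a flag when the gap in positive-decision rates between groups widens past a tolerance. Every name and threshold here is an assumption for illustration.

```python
from collections import deque

class BiasMonitor:
    """Flag when the gap in positive-decision rates between groups
    drifts past a tolerance, over a sliding window of decisions.
    (Hypothetical sketch; not Holistic AI's actual tooling.)"""

    def __init__(self, window=1000, tolerance=0.1):
        self.records = deque(maxlen=window)  # (group, decision) pairs
        self.tolerance = tolerance

    def log(self, group, decision):
        """Record one model decision (1 = positive outcome, 0 = negative)."""
        self.records.append((group, decision))

    def drifted(self):
        """True if the spread between group selection rates exceeds tolerance."""
        totals, positives = {}, {}
        for group, decision in self.records:
            totals[group] = totals.get(group, 0) + 1
            positives[group] = positives.get(group, 0) + decision
        if len(totals) < 2:
            return False  # nothing to compare yet
        rates = [positives[g] / totals[g] for g in totals]
        return max(rates) - min(rates) > self.tolerance

monitor = BiasMonitor(window=500, tolerance=0.1)
# In production, every live model decision would be logged:
monitor.log("group_a", 1)
monitor.log("group_b", 0)
if monitor.drifted():
    print("Alert: group selection rates are diverging; trigger a review.")
```

Even a toy check like this hints at why the second approach dominates their work so far: it bolts onto an existing pipeline without redesigning or retraining the model itself.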

When I initially approached this series, my intention was to educate myself and others like me about the AI ethics community and all the changes these experts believe need to occur within machine-learning engineering. The more people I talk to, however, the more I realize that it’s not just the plebs like me who have a lot of catching up to do. The entire field of AI engineering seems to lack proper protocols and mechanisms for incorporating more ethical practices into these systems.

I asked Adriano what other progress has been made, hoping he would have some relieving anecdotes I could include here. Much like Tom, Adriano told me that, while the AI ethics community has managed to raise the profile of these issues, progress has since stalled. Adriano referred to the EU AI Act as one example. “The EU AI Act is not really an ethical piece. It’s a governance piece, which is very different. Lawyers wrote it, not data scientists, and, unfortunately, they pretty much boiled everything down to privacy. But the issue is actually way beyond privacy, too.” What’s redeemable about the EU AI Act, though? It takes a risk-based approach, which Adriano likes. “I don’t like that it was just two risks, but it’s just the initial iteration. There may be more to come in the future.”

Finally, some optimism. I’m all for tragic no-win scenarios when it comes to operatic storytelling, but when it comes to real-life issues, there’s only so much dread a woman can take. I wonder, though, if governance is really the most effective way to introduce ethical issues into the machine-learning world. I posited this to Adriano using the low-hanging fruit that is facebook (excuse me, ~mETa~). Do we need government bodies to intervene in order to incentivize major companies, whose entire business models seem so fundamentally unethical, to incorporate ethical systems?

“That’s a more foundational question,” Adriano replied. The social network business model is ‘nudging,’ and, to be entirely fair, that’s essentially what advertising has always been — ‘nudging’ people to consume a particular product or idea. The thing that made facebook so overwhelmingly successful in this is its platform’s potency. “In regards to incentivization, though, I can see there’s sometimes a bottom-up revolution. Public pressure. For example, facebook had to shut down its Facial Recognition System. Whether or not relying on public pressure long-term is a good approach, though, is difficult to say.”

It seems (to steal a phrase from Tom) that the immediate future requires many elements working in concert. New tools, governance, industry incentives, and public pressure are all integral to getting AI ethics from where it is to where it wants to be: a fixture of machine-building. It’s anyone’s guess at this point, though, which combination of these elements will prove most effective. Is it better to have external organizations enforce protocols and compliance, or do we develop these checks within large corporate organizations to enable self-policing? Will a wider public culture shift be needed to incentivize these changes, or will they grow from within the existing industry?

Time will tell.
