What I got wrong about AI ethics

And what I would never have guessed.

Solveig Neseth
daios
4 min read · Aug 15, 2022


This piece serves as a midway reflection on my interview series, in which I talk with experts to learn more about AI and AI ethics.

Photo by Nigel Tadyanehondo on Unsplash

Let us take a brief pause, shall we?

When I began this series, my goal was to learn more about the field of AI ethics and how it impacts society and individuals. I went in with a particular idea of how it would go, and, to be honest, it has not gone to plan.

So far, we've heard from two experts about some of the issues and challenges in the field of AI ethics. And listen: I'm completely exhausted. I've already covered so much information, and each tangent feels as though it deserves its own full-length essay. So where do we go from here? What do we digest? Where do we focus?

I don't know about you, but for me the hardest part of all of this is conceptualizing the work in a tangible way. What I do as a craftsperson, for instance, feels tangible and effective. I know what practicing looks like. I know what it feels like to sing. I know how to read through and study a score, how to hit my marks on a stage, and how to communicate with other performers and a maestro in performance. Do I know what it looks like to build even a rudimentary AI model? No.

And the reality is, I don't really want to know how to build one. It's not my job to know, and, unfortunately, my interest in learning to program as a hobby is essentially non-existent. The problem, though, is that I can see exactly how that kind of complacency permeates and creates the very ethical issues present in current machine learning systems. If I'm not the one driving ethical action, and the people building the models aren't even taught to consider it, where the hell are we supposed to go from here?

It’s difficult to write about these things. They’re theoretical. They’re difficult to put into practice. And if, as I’ve mentioned before, complex problems deserve complex solutions, who in their right mind is going to volunteer for this?

The good news is that there are people out there thinking about these problems and working on solutions. The less-good news is that it could be a long time before we see any standardized, impactful solution. We’re living in a technological age where things are moving faster than my friend Jeremy when he plays as Baby Mario in Mario Kart (annoying). And we can barely keep up. So how do we remain stalwart in the face of this onslaught of technological advancement?

In truth, I really want to be able to say more. I wish I had more answers. I wish talking to these experts provided some element of closure or direction. But the reality has been the opposite: the more I delve, the deeper the pool becomes. Perhaps that's okay, but it certainly feels insurmountable. So, to help digest all the information I've gathered thus far, I'll narrow it down to the few points that feel most poignant.

There is much more to the way AI systems impact us than just using our personal data.

As we learned from Dr. Thomas Krendl Gilbert, data is only one component of an AI system. There is also a model (the way the system interprets the data) and the system's ultimate goal (how that interpretation is used, and to what end). There is a lot of talk about data these days, but, as Tom put it, does securing the data even matter if the goal of the system itself is harmful? This is yet another example of the complexity of these issues. Furthermore, consumers and users of these systems rarely have a say in how they are deployed. The "choice" of whether or not to accept a website's cookies is not a way to decide how to engage with the system; it's a "pay or don't play" participation scheme that pretty much just sucks. We can't work to secure data while throwing our hands up at harmful implementations of the other parts of a system. Those parts work in tandem, and all of them need to be addressed.
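I said I don't want to build a model, and I stand by that, but even I can follow a toy sketch of Tom's decomposition. Everything below is invented for illustration (the Python names, the data, the scoring rule); it is not how any real system works, just a way to see that data, model, and goal are three separate pieces.

    # A toy sketch of the three components of an AI system.
    # All names and values are hypothetical, invented for illustration.

    # 1. Data: what the system observes.
    applicants = [
        {"income": 52_000, "zip_code": "10001"},
        {"income": 31_000, "zip_code": "60617"},
    ]

    # 2. Model: how the system interprets the data. A crude hand-written
    #    rule stands in for a trained model here.
    def model(applicant):
        return 1.0 if applicant["income"] > 40_000 else 0.2

    # 3. Goal: how the model's output is used, and to what end. The same
    #    data and model can serve very different goals.
    def approve_loan(score):
        return "approve" if score > 0.5 else "deny"

    for applicant in applicants:
        print(approve_loan(model(applicant)))

Notice that you could keep the data perfectly secure and the model perfectly accurate, and the system could still cause harm if the goal itself is harmful. That's Tom's point.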

Programmers aren’t usually required to learn ethics.

As we learned from Adriano Soares Koshiyama, many of these ethical priorities aren't even part of a programmer's education. What's more, companies often lack protocols for detecting and correcting bias in the AI systems they already employ. The solution is an obvious one, but it's also a long haul: incorporating ethical perspectives into the training and methodology of machine learning will take time, and, as the field progresses, it will become more necessary than ever.
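What might such a protocol even look like? As one hedged example (not Adriano's method, and not any particular company's process), here is a minimal sketch of a common fairness check, the "four-fifths" disparate impact ratio, run on made-up outcome data:

    # A minimal, hypothetical example of one bias check: the "four-fifths"
    # disparate impact ratio. Outcomes below are invented for illustration
    # (1 = approved, 0 = denied).

    def selection_rate(outcomes):
        """Fraction of positive outcomes for a group."""
        return sum(outcomes) / len(outcomes)

    group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # outcomes for one demographic group
    group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # outcomes for another

    ratio = selection_rate(group_b) / selection_rate(group_a)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # the conventional four-fifths threshold
        print("Potential adverse impact; the system warrants review.")

A check like this is only a starting point, of course, but even a handful of lines like these would be more of a protocol than many companies currently have.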

AI ethics as a field is still in its infancy.

Ultimately, the reality is that AI ethics is a new addition to the machine learning world. A lot of people are working on the issues of bias and data misuse, but much of the work is still to come. Many of the potential solutions remain theoretical. Some businesses are re-working pre-existing technology to address ethical issues, while others are dreaming up entirely new approaches to combat bias (think daios 🙌). Only time will tell where we go from here, but, rest assured, it's going to be a wild ride.

Stay tuned for my next installment, where I interview Dr. Qian Yang, assistant professor of Information Science at Cornell.
