The continuing search for ethical AI

Łukasz Leszewski
Innovation at Scale
4 min read · Oct 28, 2021

As the use of artificial intelligence (AI) increases, there is also a growing debate about how we can deliver ethical AI — or, at least, avoid the obvious ethical problems with the technology.

There are several important issues in developing ethical AI. We know, for example, that there are huge problems with the ‘black box’ nature of AI. As these algorithms learn, they also change, and we do not always know exactly what an algorithm has learned, or whether that was the right lesson. Sometimes a conclusion that is obvious to an algorithm is neither obvious nor correct to human intelligence.

Protecting against problems

It is therefore important to build in safeguards, but this may be easier said than done. Cezary Głowiński, Chief Data Scientist and Head of AI at Algomine in Poland, agrees that the sheer complexity of AI is part of the challenge.

“As algorithms have developed, they have become more and more complicated, and the decisions they make are not easy for people to understand later. In the past, it was easy to draw a decision tree: you could look and see what the algorithm had done, and whether you agreed. Now, though, decisions are not that transparent. Even with more complicated models, however, you still need to know how they work, and particularly which features are important in the algorithm’s decisions.”
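As a minimal sketch of what that inspection can look like, assuming a scikit-learn-style workflow (the dataset, depth limit, and printed rules here are illustrative, not from the interview): a shallow decision tree is small enough that both its feature importances and its full decision logic can be read directly.

```python
# A sketch of the 'simpler model for explainability' idea: a shallow
# decision tree whose feature importances and rules are human-readable.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
# Limiting the depth trades some accuracy for a model whose every
# decision path fits on a screen.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# Which features drive the model's decisions?
importances = sorted(
    zip(data.feature_names, model.feature_importances_),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in importances[:5]:
    print(f"{name}: {score:.3f}")

# The full decision logic, printed as human-readable rules.
print(export_text(model, feature_names=list(data.feature_names)))
```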

The conclusion Cezary has come to is that it may be better to use simpler models in order to improve explainability. However, he stresses that monitoring remains crucial.

“You have to monitor the operation of algorithm-based systems on an ongoing basis, because data and business conditions both change. If your model is going to support business processes effectively, you have to keep checking that it is working well. It is in the nature of models that they will eventually age, and their quality will decline. I think monitoring and refreshing models is one of the most important parts of putting models into production.”
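In practice, that monitoring can start very simply: track live accuracy over a rolling window of recent predictions and flag the model for a refresh when it slips. The sketch below assumes labelled outcomes arrive after the fact; the window size and threshold are illustrative assumptions.

```python
# A sketch of ongoing model monitoring: accuracy is tracked over a
# rolling window of recent predictions, and the model is flagged for
# a refresh once quality declines below a threshold.
from collections import deque

class ModelMonitor:
    def __init__(self, window_size=500, min_accuracy=0.85):
        # Rolling window of True/False outcomes (prediction correct or not).
        self.outcomes = deque(maxlen=window_size)
        self.min_accuracy = min_accuracy

    def record(self, prediction, actual):
        self.outcomes.append(prediction == actual)

    def needs_refresh(self):
        # Wait until the window is full, so a few early errors are not decisive.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.min_accuracy

monitor = ModelMonitor()
# In production, call monitor.record(...) as ground-truth labels arrive,
# and check monitor.needs_refresh() on a schedule.
```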

Algorithm-based models need to be easy to understand, so that businesses can react when something in the environment changes or the model becomes less accurate. That means building in a feedback mechanism.

“When we do something, it is good to have feedback. We want to know whether it works or not, so that we can implement some sort of corrective action.”
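One common feedback signal is input drift: comparing the data the model sees in production against the data it was trained on, and raising an alert when the environment shifts. This sketch uses the Population Stability Index (PSI); the 0.2 alert threshold is a widely used rule of thumb, not a hard standard.

```python
# A sketch of drift detection with the Population Stability Index (PSI):
# a large PSI means the live data no longer looks like the training data.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between a training-time sample and a live sample of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_counts, _ = np.histogram(expected, bins=edges)
    actual_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions; clip to avoid log(0) and division by zero.
    expected_pct = np.clip(expected_counts / len(expected), 1e-6, None)
    actual_pct = np.clip(actual_counts / len(actual), 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
training_sample = rng.normal(0.0, 1.0, 10_000)
live_sample = rng.normal(0.4, 1.0, 10_000)  # simulated environment shift

if psi(training_sample, live_sample) > 0.2:  # common rule-of-thumb threshold
    print("Input drift detected: review the model and consider retraining.")
```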

Cezary believes that this issue of correctability is currently the most important ethical issue in model design. However, he also notes that as AI becomes more advanced, other ethical issues may arise.

“When I was working in insurance, I didn’t really come across issues about the ethics of models. However, I think in the near future, ethics will have to be addressed in artificial intelligence algorithms. If we are talking about systems that start to be autonomous, then we are talking about people ceasing to fully control the system. In other words, at some point the algorithm is going to make decisions. For example, if you are talking about autonomous vehicles, then algorithms are going to make decisions with potentially very serious consequences, and we need a debate about this.”

A question of autonomy?

Perhaps the issue with ethical AI is therefore a question of autonomy. In other words, considering all the ethical aspects is far more important where an AI-based algorithm is relatively autonomous. Where humans are making the final decision, there are already ‘checks and balances’ in place, because it is possible to override the algorithm.
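Those checks and balances are often implemented as a human-in-the-loop gate: the model acts on its own only when it is confident, and everything else is escalated to a person who can override it. A minimal sketch, in which the confidence threshold and the review queue are illustrative assumptions:

```python
# A sketch of a human-in-the-loop gate: confident predictions are acted
# on automatically, uncertain ones are escalated for human review.
CONFIDENCE_THRESHOLD = 0.9

def decide(model_probability, case_id, review_queue):
    """Return an automated decision, or defer the case to a human."""
    if model_probability >= CONFIDENCE_THRESHOLD:
        return "approve"
    if model_probability <= 1 - CONFIDENCE_THRESHOLD:
        return "reject"
    # The grey zone keeps a human in the loop: a person makes the final call.
    review_queue.append(case_id)
    return "escalate_to_human"

queue = []
print(decide(0.97, "case-001", queue))  # approve
print(decide(0.55, "case-002", queue))  # escalate_to_human
print(queue)                            # ['case-002']
```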

This suggests, therefore, that there is a hierarchy of need for ethical scrutiny. We can, and perhaps should, concentrate our ethical thinking on the areas that matter most, which would generally be those where the algorithm is more autonomous. However, Cezary suggests that there is another factor to consider: the application of the algorithm, and particularly the level of its consequences.

“I think ethical issues are much less important in situations that are more controlled by humans, and also where the decisions do not have such human consequences. That’s not to say that other algorithms cannot still have very dramatic consequences. Just consider an algorithm that supports investments. If that is wrong, then there is a risk of significant financial losses. However, I think this is probably less important than algorithms that have ‘life and death’ consequences. It does very much depend on both autonomy and the application of the algorithm.”
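One way to picture this hierarchy is as a simple triage over the two dimensions Cezary names, autonomy and consequence. The tiers, labels, and scoring below are illustrative assumptions, not an established framework:

```python
# A sketch of ethical triage along the two dimensions discussed above:
# how autonomous the system is, and how severe its consequences are.
AUTONOMY = {"human_decides": 0, "human_reviews": 1, "fully_autonomous": 2}
CONSEQUENCE = {"minor": 0, "financial": 1, "life_and_death": 2}

def scrutiny_level(autonomy, consequence):
    score = AUTONOMY[autonomy] + CONSEQUENCE[consequence]
    if score >= 4:
        return "maximum: full ethical review before and during deployment"
    if score >= 2:
        return "elevated: documented oversight and clear override paths"
    return "baseline: standard monitoring"

print(scrutiny_level("fully_autonomous", "life_and_death"))  # maximum
print(scrutiny_level("human_decides", "financial"))          # baseline
```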

Łukasz Leszewski
Innovation at Scale

I support organizations from many sectors in effectively solving data management problems, so that decision-makers can make fast decisions based on accurate information.