Ethical algorithms

Ralf Sohl
Published in NoA Ignite
Dec 2, 2019

Data ethics is an increasingly important agenda, gaining ground across more companies and throughout our industries.

Through tangible regulatory frameworks, including the GDPR in Europe, and in Denmark especially, we have gained an overview of what we collect, what consent we have, and what the data may be used for.

I was privileged to attend a guest presentation by Søren Juul Jørgensen at Stanford University. Stanford describes Søren Juul Jørgensen’s work in this sentence: “Søren is working on ethics, law and tech and how organizations may be ethically compliant in their work with tech and data with a group of European and Silicon Valley-based companies”.

What dilemmas?

Marketing is one aspect, but consider all the master data we hold on our customers: that is where we as companies must map our ethics and create even greater transparency. When it comes to algorithms for recommendations, assumptions, and automation, there are some genuinely interesting paradoxes.

In both insurance and pensions there are great ethical dilemmas, and in the coming years pricing will be challenged, or at least viewed differently, depending on who you are. Should life insurance differ depending on your gender? Men live shorter lives on average, so should it be cheaper for them? Today there are, of course, insurance policies that differ according to where you live, but what if life insurance were also arranged like that? Technically this is not a problem, but ethically it raises issues that companies should take a position on.
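To underline how technically trivial this is, here is a hypothetical sketch of a premium calculation where gender-based pricing is a one-line change. All factor names and rates are invented for illustration; no real insurer or regulation is modeled here:

```python
# Hypothetical premium sketch: location-based pricing exists today;
# adding a gender factor is technically a one-line change.
# All factors below are invented for illustration.

LOCATION_FACTOR = {"city": 1.1, "rural": 0.9}   # already common practice
GENDER_FACTOR = {"M": 1.15, "F": 1.0}           # the hypothetical addition

def premium(base, location, gender=None):
    p = base * LOCATION_FACTOR[location]
    if gender is not None:  # the one-line change that raises the ethical dilemma
        p *= GENDER_FACTOR[gender]
    return round(p, 2)

print(premium(100, "city"))        # location-based pricing only
print(premium(100, "city", "M"))   # location- and gender-based pricing
```

The point of the sketch is simply that the barrier is not engineering effort; the decision to include or exclude a factor like gender is purely an ethical and legal one.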

I believe that we as consumers lack transparency about why, or when, an algorithm is giving us a result. We want to decide case by case whether what we are experiencing is consistent with our ethics. I would like an insurance policy that reflects the life my family and I live, if it gets cheaper ;-) , not the other way around. Conversely, I am also inclined to choose brands that take social responsibility, so I am willing to pay a premium for the ethics of a product (if one can speak of that in an insurance product).

Ethics and morals

Søren talked about a study from MIT (back in 2017) which showed that across borders and cultures people do not share the same assumptions about this topic, which makes it all the more interesting: https://www.media.mit.edu/publications/moral-machine-perception-of-moral-judgment-made-by-machines/.

Here they tested how people think a self-driving car should act when there is danger on the road. In one of the examples, the car has only three options: drive into a concrete barrier, drive into a family with children, or drive into a larger group of elderly people at a pedestrian crossing. Gruesome, yes, but nonetheless an interesting study outcome.

In November 2019, it emerged that the algorithm behind the Apple Card evidently takes gender into account, which has created a lot of debate: https://www.theverge.com/2019/11/11/20958953/apple-credit-card-gender-discrimination-algorithms-black-box-investigation. In that case, the bank behind the card has so far denied that the algorithm was built with a gender bias, so it must have taught itself one.
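How a model can “teach itself” such a bias is easy to sketch: even when gender is excluded from the inputs, a feature that merely correlates with gender, combined with historically biased training labels, will reproduce the disparity. The following is a minimal, entirely synthetic illustration; the data, the proxy feature, and all numbers are invented and have nothing to do with the actual Apple Card model:

```python
import random

random.seed(42)

# Entirely synthetic data. Gender is NEVER given to the model, but
# "proxy" is a feature correlated with gender (think of a spending
# pattern), and the historical credit limits used as training labels
# already encode a bias against the group with higher proxy values.
people = []
for _ in range(1000):
    gender = random.choice("FM")
    proxy = random.gauss(4.0 if gender == "F" else 1.0, 1.0)
    limit = 100.0 - 5.0 * proxy + random.gauss(0.0, 2.0)  # biased labels
    people.append((gender, proxy, limit))

# Fit limit ~ a * proxy + b with closed-form 1-D least squares.
n = len(people)
mean_x = sum(p for _, p, _ in people) / n
mean_y = sum(l for _, _, l in people) / n
a = (sum((p - mean_x) * (l - mean_y) for _, p, l in people)
     / sum((p - mean_x) ** 2 for _, p, _ in people))
b = mean_y - a * mean_x

def predict(proxy):
    return a * proxy + b

# Average predicted limit per gender: the gap reappears even though
# gender was never an input -- the model has "taught itself" the bias.
avg = {g: sum(predict(p) for gg, p, _ in people if gg == g)
          / sum(1 for gg, _, _ in people if gg == g)
       for g in "FM"}
print(avg)
```

Running this, the fitted model assigns systematically lower limits to the group whose proxy values are higher, which is exactly why “we never use gender as an input” is not, on its own, evidence that a model is unbiased.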


In his presentation, Søren showed how his team is, at the time of writing, preparing material on how to embed work processes in organizations so that these dilemmas are resolved along the way as a product or business plan is developed. I think this is an important and very good initiative that I will follow, and I think we need to put even more focus on the topic now.
