AI: Will Predictive Models' Outliers Be the New Socially Excluded?

Christophe Bourguignat
Mar 23, 2015 · 5 min read

There is a rising debate about recent Artificial Intelligence (AI) advances: are they a peril for humanity? What are the risks?

Elon Musk and Stephen Hawking are the most iconic whistleblowers on this topic.

For the first time, we are even starting to see a few protesters in the streets, asking for “morality in computing”.

Without talking about the Apocalypse and the “end of the human race”, let’s admit that, at the very least, predictive models are everywhere, be it for good (disease detection, crime fighting, fraud detection, better user experience, …) or for something else…

Concerns

Let’s imagine a world where every single decision regarding citizens is driven by a predictive model. A world where machines automatically recommend (not to say decide) which school children must attend, which job an unemployed person has to accept, which families are eligible for social aid, and so on.

On average, the machine will make good decisions, as it will have been designed to do so. However, because a prediction is always a missed target, there will be errors.

Now what would happen to the few citizens who fall within the predictive models’ error margin? These minorities would be left behind, excluded from the system, and systematically discriminated against.
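To make the concern concrete, here is a minimal sketch (my illustration, not from the article) using purely synthetic data: a model optimized for overall accuracy can look excellent on average while being wrong almost every time for a small, atypical subgroup.

```python
# Minimal sketch with synthetic data: overall accuracy looks great,
# but the small "outlier" group is misclassified almost systematically.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Majority group: 9,500 citizens, one feature that cleanly separates the classes.
X_major = rng.normal(size=(9500, 1))
y_major = (X_major[:, 0] > 0).astype(int)

# Minority group: 500 citizens for whom the same feature means the opposite.
X_minor = rng.normal(size=(500, 1))
y_minor = (X_minor[:, 0] <= 0).astype(int)

X = np.vstack([X_major, X_minor])
y = np.concatenate([y_major, y_minor])

model = LogisticRegression().fit(X, y)

print("overall accuracy :", model.score(X, y))              # ~0.95, looks fine
print("majority accuracy:", model.score(X_major, y_major))  # ~1.00
print("minority accuracy:", model.score(X_minor, y_minor))  # ~0.00, systematically wrong
```

The aggregate metric hides the damage entirely; only a per-group breakdown reveals who ends up on the wrong side of the error margin.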

John Foreman even anticipates dangerous hidden feedback loops if machine learning models are applied at scale without care. Automated decisions will alter the future, and coming models will be trained on data that has been polluted by the actions of previous models.
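A toy simulation of such a loop (my own illustration, not John Foreman’s example, with invented numbers) could look like this: two districts have identical true crime rates, but crime is only recorded where patrols are sent, and each round the patrols are reallocated according to the previously recorded figures.

```python
# Toy feedback loop: the recorded data reflects where we looked,
# not the underlying reality, so the initial bias never corrects itself.
import numpy as np

rng = np.random.default_rng(1)

true_crime_rate = np.array([0.10, 0.10])  # two districts, genuinely identical
patrol_share = np.array([0.60, 0.40])     # a small initial imbalance in patrols

for round_ in range(5):
    # Crime is only recorded where patrols are present.
    recorded = true_crime_rate * patrol_share * rng.uniform(0.9, 1.1, size=2)
    # "Retraining": next round's patrols follow the recorded-crime distribution,
    # so the system keeps confirming its own previous decisions.
    patrol_share = recorded / recorded.sum()
    print(f"round {round_}: patrol share = {np.round(patrol_share, 2)}")
```

The data that could correct the imbalance is never collected, which is exactly what makes these loops hard to see from inside the system.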

A prediction is always a missed target

Can An Algorithm Be Racist?

It’s a provocative way of asking the question, but behind it we can find real-world cases demonstrating the ethical blindness of algorithms.

Here is, for instance, a detailed analysis of discrimination in online ad delivery. Depending on the first name you type into Google, the displayed ad will differ, implying a link between race and crime.

Other examples are given in this report, which introduces the concept of “algorithmic accountability”.

What Can We Do?

Transparency

One way to mitigate the “black box” effect of a world driven by predictive models is to be transparent. There are different ways to achieve that: publish the machine learning algorithm and its parameters, explain decisions on a prediction-by-prediction basis, or open up the raw data.

Look, for example, at Lending Club, the peer-to-peer lending company. It releases its loan statistics for recent years, and not just aggregates, but the raw data. You can download data about all issued loans and, more interestingly, the list and details of all declined loan applications that did not meet Lending Club’s credit underwriting policy.

This data allows skilled people to analyze and challenge the company’s application rules, and to reverse engineer the underlying automated selection algorithm.
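As a rough illustration of what such an analysis could look like, one might fit a simple, readable model on accepted versus declined applications and inspect its rules. The file names and column names below are hypothetical placeholders, not Lending Club’s actual schema.

```python
# Sketch only: approximate the acceptance rule with an interpretable model.
# "accepted.csv", "declined.csv" and the column names are hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

accepted = pd.read_csv("accepted.csv")
declined = pd.read_csv("declined.csv")

features = ["amount_requested", "debt_to_income", "employment_length"]
accepted["approved"] = 1
declined["approved"] = 0

data = pd.concat([accepted, declined], ignore_index=True).dropna(subset=features)

# A shallow decision tree gives a crude but human-readable approximation of
# the underlying selection rule, which can then be inspected and challenged.
tree = DecisionTreeClassifier(max_depth=3).fit(data[features], data["approved"])
print(export_text(tree, feature_names=features))
```

A depth-three tree will of course miss nuance; the point is only that open raw data lets outsiders build, publish, and debate such approximations.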

However, giving raw data back to citizens is not enough. They are not on an equal footing with the services that use this data, because the latter know how to process it, and the citizens do not.

That’s why some suggest going further and introducing the concept of “open models”: “In the best case, this means we, the public, should have access to the model itself, manifested as a kind of app that we can play with.”

Three levels of predictive model transparency

Education

Transparency is the ideal scenario, but let’s not be naive: it is a utopia.

That’s why, in parallel, education should play a big role. Humanity needs to grow up, as suggested in this excellent article. People must raise their level of awareness and understanding of machine learning to be able to debate this real societal issue. Teaching children to code in primary school is a good way to help them become actors of technology rather than slaves to it.

“Schools must evolve from the industrial age learning system, to one more suitable to the machine age”, writes the author. “It is up to us to decide how we develop ourselves to be relevant in the new machine age”.

And he concludes: “Developments in AI can lead us to a human Utopia, but there is a fork in the road, and where we end up will be decided by which road we choose to follow. Are you going to make the right choice?”

Legislation

Piling rules on top of everything is not the ultimate solution, but it may help enforce best practices.

The French “Conseil d’Etat”, for example, made several recommendations regarding digital technology and fundamental rights. Some of them apply to predictive algorithms:

“The Conseil d’Etat study advocates three ways of managing the use of algorithms: ensuring the effectiveness of human intervention in decisions made using algorithms; establishing procedural and transparency guarantees when algorithms are used to make decisions about an individual; and increasing the monitoring of results produced by algorithms, in particular for detecting the existence of unlawful discrimination.”

“Impose on algorithm-based decisions a transparency requirement regarding the personal data used by the algorithm and the general reasoning it followed. Give the person subject to the decision the possibility of submitting their observations.”

The spirit of this recommendation is to let human beings understand machine-generated decisions and validate them with full knowledge of the facts.

Data For Good

In fact, it will probably be hard to avoid all excesses in the use of predictive algorithms. Hopefully, these very same algorithms will also be used for genuinely good causes, counterbalancing things, in a manner of speaking.

The “Data for Good” trend is growing. Organizations like DataKind or Bayes Impact are paving the way for the ethical use of best-of-breed algorithms, showing once again that AI, like any technology, can be used for better or for worse.
