The opportunity to apply responsible AI (Part I): Why is transparent AI a hot topic these days?

--

By Jesús Templado, Director at Bedrock

Intro

Dramatic increases in computing power have led to a surge of Artificial Intelligence applications with immense potential in industries as diverse as health, logistics, energy, travel and sports. As corporations continue to operationalise Artificial Intelligence (AI), new applications present risks, and stakeholders are increasingly concerned about the trust, transparency and fairness of algorithms. The ability to explain the behaviour of each analytical model and its decision-making pattern, while avoiding potential biases, is now a key aspect when it comes to assessing the effectiveness of AI-powered systems.

For reference, bias is understood here as the prejudice hidden in the datasets used to design, develop and train algorithms, which can eventually result in unfair predictions, inaccurate outcomes, discrimination and other similar consequences. Computer systems cannot validate data on their own, yet they are empowered to confirm decisions, and here lies the beginning of the problem. Traditional scientists understand the importance of context in the validation of curated datasets. However, despite our advances in AI, the one thing we cannot program a computer to do is to understand context, and we consistently fail to program all of the variables that come into play in the situations we aim to analyse or predict.

“A computer cannot understand context, and we consistently fail to program all of the variables that come into play in the situations we aim to analyse or predict.”

Historical episodes of failed algorithmia and black boxes

Since the effectiveness of AI is now measured by its creators' ability to explain an algorithm's output and decision-making pattern, “black boxes” that offer little discernible insight into how outcomes are reached are no longer acceptable. A few historical episodes that brought us to this point demonstrate how critical it is to look into the inner workings of AI.

· Sexist Headhunting: We need to go back to 2014 to understand where public awareness of Responsible AI began. Back then, a group of Amazon engineers in Scotland developed an AI algorithm to improve headhunting, but a year later the team realised that its creation was biased in favour of men. The root cause was that their Machine Learning models were trained to scout candidates by looking for terms that were common in the résumés of past successful applicants, and because of the industry's gender imbalance, the majority of historical hires tended to be male. In this particular case, the algorithm taught itself sexism, wrongly learning that male job seekers were better suited for newly opened positions.

· Racist facial recognition: Alphabet, widely known for its search engine company Google, is one of the most powerful companies on earth, but it too came under the spotlight in mid-2015.

Mr Alciné tweeted at Google about the fact that its app had misclassified his photo

The brand came under fire after its Photos app mislabelled a user's picture. Jacky Alciné, a black web developer, tweeted about the offensive and incorrect tag, attaching a picture of himself and a friend that had been labelled “gorillas”. The event quickly went viral.

· Unfair decision-making in Court: In July 2016, the Wisconsin Supreme Court ruled that AI-calculated risk scores can be considered by judges during sentencing. COMPAS, a system built for augmented decision-making, is based on a regression model that tries to predict whether or not an offender is likely to reoffend. The model produced roughly double the number of false positives for reoffending for African American defendants as for Caucasian defendants, most likely because of the historical data used to train it. Had the model been properly adjusted from the start, it could have helped reduce the unfair incarceration of African Americans rather than increase it. Also in 2016, an investigation by ProPublica found that other risk-scoring algorithms used in US courts tended to treat black defendants more harshly than white ones, based on ML-model predictions of how likely each person was to commit future felonies. The results of these risk assessments are handed to judges as predictive scores during the criminal sentencing phase and are used to decide who is set free at each stage of the justice system, what bail amounts to assign, and ultimately who is imprisoned and who goes free.

· Apple's Credit Card: Launched in August 2019, this product quickly ran into problems as users noticed that it seemed to offer lower credit limits to women. Even more astonishing was that no one from Apple was able to explain why the algorithm was producing this output. Investigations showed that the algorithm did not even use gender as an input, so how could it discriminate without knowing which users were women and which were men? It is entirely possible for algorithms to discriminate on gender even when they are programmed to be “blind” to that variable: a “gender-blind” algorithm may still be biased against women because it may draw on inputs that correlate with gender. Moreover, “forcing” blindness to a critical variable such as gender only makes it more difficult to identify and prevent bias on that variable. The short sketch after this list illustrates how this kind of proxy bias can arise and how a simple audit can expose it.

· Most recently, mainly around 2020, AI-enhanced video surveillance has raised some of the same issues described above: a lack of transparency, paired with the potential to worsen existing racial disparities. The technology enables society to monitor and “police” people in real time, making predictions about individuals based on their movements, emotions, skin colour, clothing, voice and other parameters. However, if it is not tuned to near perfection, false or inaccurate analytics can lead to people being misidentified, incorrectly perceived as a threat and therefore hassled, blacklisted or even sent to jail. This issue became particularly relevant during the turmoil surrounding the Black Lives Matter protests, and the largest tech firms quickly took action: IBM ended its facial recognition programmes to focus on racial equity in policing and law enforcement, and Amazon suspended police use of its facial recognition technology for a year to reassess its accuracy and to better govern the ethical use of such systems.
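
To make the proxy-bias point from the Apple Card and COMPAS examples more concrete, here is a minimal, illustrative sketch in Python. The data is synthetic and the variable names, coefficients and model choice are invented for illustration; it is not a reconstruction of any real system. A logistic regression is trained without ever seeing the protected attribute, yet a correlated proxy feature leaks that information and the outcome gap between groups persists; a simple per-group audit makes the disparity visible.

```python
# Illustrative sketch only: synthetic data, not any real credit or recidivism model.
# It shows how a "blind" model can still discriminate through a proxy feature,
# and how a simple per-group audit surfaces the disparity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute (e.g. gender), never shown to the model: 0 = group A, 1 = group B.
group = rng.integers(0, 2, n)

# A seemingly neutral proxy feature that historically correlates with the group
# (think occupation category or shopping patterns).
proxy = rng.normal(loc=group * 1.0, scale=1.0, size=n)

# A genuinely relevant feature, independent of the group.
income = rng.normal(loc=0.0, scale=1.0, size=n)

# Historical labels carry the old bias: group B was approved less often
# even at the same income level.
logits = 1.2 * income - 0.8 * group
approved = rng.random(n) < 1 / (1 + np.exp(-logits))

# Train a "group-blind" model: the protected attribute is deliberately excluded.
X = np.column_stack([income, proxy])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

# Audit: compare approval rates per group even though the model never saw "group".
for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"Group {g}: predicted approval rate = {rate:.2%}")
# The gap persists because the proxy feature lets the model reconstruct the
# protected attribute indirectly; blinding alone does not remove the bias.
```

The same per-group audit, applied to false positive rates rather than approval rates, is essentially the comparison ProPublica made when analysing recidivism scores: bias has to be measured against the protected attribute, not hidden from the model.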

All of these are examples of what should never happen. Humans can certainly benefit from AI, but we need to pay attention to the wider implications of these technological advances.

Transparency vs effective decision-making: The appropriate trade-off

For high-volume, relatively “benign” decision-making applications, such as recommending a TV series on an over-the-top streaming platform, a “black box” model may seem valid. For critical decision-making models that relate to mortgages, job applications or court rulings, black boxes are not an acceptable option.

Having read the five examples above, in which AI was used ineffectively to support decisions about who gets a job interview, who is granted parole and even matters of life or death, it is clear that there is a growing need to ensure that interpretability, explainability and transparency are addressed thoroughly. That said, “failed algorithmia” does not mean that humans should not strive to automate or augment their intelligence and decision-making, but rather that it must be done carefully, following well-designed and strict development guidelines.

AI was born to augment human intelligence, but we need to ensure that it does not evolve towards automating our biases too. To be deemed trustworthy, AI systems should support human empowerment, technical robustness, accountability, safety, privacy, governance, transparency, diversity, fairness, non-discrimination and societal and environmental well-being.

“AI was born to augment human intelligence, but we need to ensure that it does not evolve towards automating our biases too.”

This responsibility also applies to C-level leaders and top executives. Global organisations are not yet leading by example and still show no willingness, or perceived need, to expose their models' reasoning or to establish boundaries for algorithmic bias. All sorts of mathematical models are still being used by tech companies that are not transparent enough about how they operate, probably because even the data and AI specialists who know their algorithms are at risk of bias remain more focused on reaching their end goal than on rooting that bias out.

So, what can be done about all this?

There are some data science tools, best practices, and tech tips that we follow and use at Bedrock.

I will cover all of this in the second part of this article, along with the need for guidelines and legal boundaries in the Data Science & AI field.

--

Jesús Templado González
Bedrock — Human Intelligence

I advise companies on how to leverage DataTech solutions (Rompante.eu) and I write easy-to-digest articles on Data Science & AI and its business applications.