How do we leverage the new opportunities of AI for greater financial inclusion?

Using AI to assess risk has the potential to increase access to affordable credit and insurance, but it could also exacerbate existing societal inequalities

DataKind UK · Oct 21, 2019


By Michelle Seng Ah Lee, DataKind UK Ethics committee


The use of artificial intelligence (AI) in financial services has the potential to increase access to affordable credit and insurance, but it could also exacerbate existing societal inequalities. How do we leverage the new opportunities of AI for greater financial inclusion? This was the topic of the DataKind UK book club meeting in October, hosted at Deloitte. Here we summarise some of the thoughts and concerns raised by the attendees.

The opportunity

The adoption of AI in financial services is in its infancy. Over half of European and Middle Eastern financial services companies have not built AI solutions and 40% are still learning about AI, according to a Deloitte survey. In insurance, only 2% of insurers worldwide have seen full-scale AI implementation, and around half (47%) are in the ideation and use-case testing phase. However, nearly all respondents (94%) believe that AI will disrupt their business.

The opportunities are sizeable, not only for the business but also for the customers. In the “AI and Personal Insurance” snapshot paper, the UK Centre for Data Ethics and Innovation claims that AI could reduce prices for policyholders, lead to fairer outcomes by filtering out fraudulent claims, open up insurance to new groups, advise policyholders on how to reduce damage to people and property, and incentivise take-up of insurance. AI has been used to speed up the processes for providing quotes and managing claims. AI can find patterns between newly available individual data and specific risks (e.g. using sensors or wearable device data). AI can help “nudge” people and advise them on how to lead healthier and safer lives.

Compared with traditional, human-led decision-making, AI may also be fairer. In credit risk, a paper from UC Berkeley found that FinTech companies using algorithmic lending discriminated 40% less than face-to-face lenders in US mortgage lending decisions. What's the catch?


The risks

The book club touched on three types of potential risk: fairness/inequality, privacy/autonomy, and explanation/inference.

Fairness/inequality

The introduction of AI creates winners and losers, and what is concerning is that the "losers" are likely to come from previously marginalised and excluded groups. While algorithmic decision-making may reduce discrimination compared to human decisions, this isn't always the case. Insurance companies have been accused of quoting higher premiums to motorists named Mohammed. Another paper on US mortgage lending found that, compared to simple linear models, complex machine learning models disproportionately put Black and Hispanic borrowers at a disadvantage.

These are cases of obvious unfairness, but it is not always so easy to define what it means to be fair. Fairness is not an absolute concept: it is a value judgment that people disagree on, highly dependent on the context and affected by their philosophical, ethical, and cultural backgrounds. Moreover, we don't have the ground truth. We don't know the true risk of a car crash, or whether those who were denied a loan would have defaulted. This makes it even more difficult for us to evaluate what a fair decision is. (For a summary of our earlier book club on fairness, see our blog post.)
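To make this concrete, here is a minimal sketch, in Python with entirely made-up numbers, of how two widely used fairness metrics can reach opposite verdicts on the same lending decisions. The groups, repayment rates, and the assumption of a model that approves exactly the applicants who would repay are all hypothetical, not figures from the discussion.

```python
# Hypothetical groups with different underlying repayment rates.
# Group A: 100 applicants, 80 of whom would repay.
# Group B: 100 applicants, 50 of whom would repay.
# Assume a model that approves exactly the applicants who would repay.
group_a = {"total": 100, "would_repay": 80, "approved": 80}
group_b = {"total": 100, "would_repay": 50, "approved": 50}

# Demographic parity compares overall approval rates between groups.
parity_gap = (group_a["approved"] / group_a["total"]
              - group_b["approved"] / group_b["total"])

# Equal opportunity compares approval rates among those who would
# repay (the true positive rate).
tpr_a = group_a["approved"] / group_a["would_repay"]
tpr_b = group_b["approved"] / group_b["would_repay"]
opportunity_gap = tpr_a - tpr_b

print(f"Demographic parity gap: {parity_gap:.2f}")      # 0.30: "unfair"
print(f"Equal opportunity gap:  {opportunity_gap:.2f}")  # 0.00: "fair"
```

Results in the fairness literature show that when underlying rates differ between groups, metrics like these cannot in general be satisfied simultaneously, so choosing between them is precisely the kind of value judgment described above.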

Though issues of unfairness take on a new light under the use of AI, discrimination and inequality pre-date the introduction of AI to financial systems. And since the problem is not algorithmic, it is unlikely that an algorithm can fix it. One book clubber mused that, as data scientists, we need to move away from "solutionism" and a narrow focus on our models, and instead understand each model's connection to the "real world."

Privacy/autonomy

Another concern is privacy and autonomy. More companies are leveraging "alternative data" from non-traditional sources to predict risk, such as location data, social networks, and public posts online. This has been widely criticised, though one book clubber pointed out that norms vary across countries. In the US, students can reveal their academic grades to receive financial rewards from credit card companies, which shocked some of the UK-based book clubbers. How do we know what data sets are acceptable to use in each context?

“Nudging” also came up as a potential infringement of privacy and autonomy. Some people may not be comfortable with feeling that a company is guiding their behaviour.

Explanation/inference

One of the issues is that companies use these data to make inferences that may or may not be correct, and that are not disclosed to the customer. Based on a customer's supermarket loyalty card data, an insurer may infer that the individual exercises regularly. Academic accomplishments may suggest that a student has excellent time management skills. Should these assumptions be made accessible to customers, so that they can challenge them? Some academics think so.

Accessibility, though, can be a challenge in itself. Sometimes it isn't possible to understand what inferences are being made, because machine learning models are often difficult to interpret. Perhaps the answer is to build only interpretable algorithms, like the one released in the FICO credit scoring competition, and to require companies to show how the risks of unfairness, of exacerbating inequalities, and of exploiting customer vulnerability are being managed.
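To make that concrete, here is a minimal sketch of an intrinsically interpretable credit model of the broad kind the FICO challenge encouraged. Everything in it is hypothetical: the features, the synthetic data, and the choice of a plain logistic regression are illustrative assumptions, not the competition's actual model or data.

```python
# A toy interpretable credit model on synthetic data (illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical applicant features.
features = ["income_k", "debt_to_income", "missed_payments"]
X = np.column_stack([
    rng.normal(35, 10, n),      # income in £k
    rng.uniform(0.0, 0.6, n),   # debt-to-income ratio
    rng.poisson(0.5, n),        # missed payments in the past year
])

# Toy "repaid" labels driven by the same features, for illustration only.
logits = 0.05 * X[:, 0] - 4.0 * X[:, 1] - 1.0 * X[:, 2]
y = (logits + rng.normal(0, 1, n) > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Each coefficient is a human-readable statement about the decision:
# its sign and rough size say how the feature moves the approval odds,
# which is what makes an inference possible to disclose and challenge.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>16}: {coef:+.2f}")
```

The design choice here is to trade some potential predictive power for decisions that a customer, or a regulator, can actually inspect.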

These are all important concerns about the use of AI, but who is accountable for ensuring they are sufficiently considered and addressed? Data scientists often focus on the key performance metric provided by the business, so the initiative to identify and manage these risks needs to come from the top. Only when AI is appropriately governed will leaders have the confidence to innovate.

Shaping the future we want

Overall, the book clubbers expressed strongly mixed feelings about the use of AI and its impact on financial inclusion. The risks and concerns grow more significant as more companies adopt AI. However, AI also presents an opportunity for us to meaningfully debate what we want from a fair financial system.

The goal of an algorithm shouldn't be to reflect the status quo. It may very well be that there is a statistical correlation between people's given names and the quality of their driving. There may be discrimination in the job market that makes Black and Hispanic mortgage applicants' incomes less stable, increasing their default risk compared to others. Yet we are uncomfortable with acting on such correlations because we believe racial discrimination to be unethical. Algorithms should reflect the type of financial system we want, not the one we currently have in an unequal and biased society.


The DataKind UK data ethics book club

The DataKind UK data ethics book club aims to create a space for data scientists and others to discuss the ethical implications of data science in society. The next DataKind UK book club will be on race and AI, on 27th November in London and online.
