The Ethics of AI in Finance

Shubhi Upadhyay
Kigumi Group
Apr 3, 2023

In recent years, artificial intelligence (AI) has become increasingly prevalent in the financial industry, where it is used to expedite and optimize financial services.

For example, firms are increasingly using natural language processing and chatbots to provide customer service more quickly and to delegate repetitive tasks to technology. Chatbots can inform customers about account balances and payment due dates, and can even process transactions, with reports showing that chatbots can help banks save up to 30% on customer service costs (Forbes).
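As a toy illustration of the kind of routing such a chatbot performs, here is a minimal keyword-matching sketch. The intents, account data, and helper functions are all hypothetical; production systems rely on trained NLP models rather than keyword lookup.

```python
# A minimal sketch of a rule-based banking chatbot. The helper functions
# are placeholders; a real system would query the bank's core systems.

INTENTS = {
    "balance": ["balance", "how much money"],
    "due_date": ["due date", "payment due", "when is my payment"],
}

def get_balance(account_id: str) -> float:
    # Placeholder: a real implementation would call a backend API.
    return 1042.57

def get_due_date(account_id: str) -> str:
    # Placeholder: a real implementation would look up the billing cycle.
    return "2023-04-15"

def respond(message: str, account_id: str) -> str:
    """Match the customer's message against known intents."""
    text = message.lower()
    if any(kw in text for kw in INTENTS["balance"]):
        return f"Your current balance is ${get_balance(account_id):,.2f}."
    if any(kw in text for kw in INTENTS["due_date"]):
        return f"Your next payment is due on {get_due_date(account_id)}."
    return "I'm not sure I can help with that. Connecting you to an agent."

print(respond("What's my account balance?", "acct-001"))
```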

Another way AI is used in finance is fraud detection. An everyday example is an AI model that analyzes transaction data to flag potentially fraudulent transactions. One common approach uses anomaly-detection algorithms, which are trained on large amounts of historical data to learn patterns of normal and fraudulent activity and raise an alert when they detect a potentially fraudulent transaction in the present. This is extremely beneficial to both financial institutions and customers, because fraud that goes unnoticed can result in large financial losses, legal issues, and reputational damage.
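As a sketch of how such an anomaly detector might look, the example below trains scikit-learn's IsolationForest on simulated "normal" transactions and scores new ones. The features (amount and hour of day) and the contamination rate are illustrative assumptions, not a production fraud model.

```python
# A sketch of transaction anomaly detection with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated historical transactions: [amount, hour_of_day]
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.5, size=1000),  # typical amounts
    rng.integers(8, 22, size=1000),                 # daytime activity
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

# Score new transactions: -1 flags a potential anomaly, 1 looks normal
new_txns = np.array([
    [35.0, 14],    # small afternoon purchase
    [9500.0, 3],   # large purchase at 3 a.m.
])
print(model.predict(new_txns))  # the 3 a.m. purchase is likely flagged (-1)
```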

Finally, banks and other financial entities are applying predictive analytics to inform their market-making and credit-scoring decisions. Given these examples and many others like them, it is clear that AI has already had a profound impact on the finance industry — one that won't be going away anytime soon.
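Before turning to the risks, here is a toy sketch of the credit-scoring use just mentioned: a logistic regression fit on two fabricated features (income and debt-to-income ratio). Real credit models use far richer, heavily regulated feature sets; this only illustrates the mechanics.

```python
# A toy credit-scoring sketch; the features and labels are fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# [annual income, debt-to-income ratio]; label 1 = defaulted on a past loan
X = np.array([
    [85_000, 0.15], [42_000, 0.55], [60_000, 0.30],
    [30_000, 0.70], [95_000, 0.20], [38_000, 0.65],
])
y = np.array([0, 1, 0, 1, 0, 1])

# Scale features so income and the ratio are on comparable footing
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

applicant = np.array([[55_000, 0.40]])
default_prob = model.predict_proba(applicant)[0, 1]
print(f"Estimated default probability: {default_prob:.1%}")
```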

Potential Risks

Therefore, it is crucial to understand the potential problems that can arise from AI implementation in the financial industry. One significant issue is data bias. AI models are trained on real-world, historical data, and much of the data relevant to financial institutions contains bias. A primary example is zip code bias, in which a person's zip code informs decisions about loans and credit scores. Because zip codes can be highly indicative of race or ethnicity, using them as a factor in creditworthiness assessments or other loan-related decisions can lead to biased and unfair outcomes, potentially resulting in systemic discrimination against certain groups of people.

Another factor to be mindful of is the use of unrepresentative datasets, especially when data on women and people of color is lacking, because training algorithms on unrepresentative data allows bias to seep in. For example, an article in MIT News cited a technology company that claimed its facial recognition system was 97% accurate. However, the dataset used to assess the system's performance consisted overwhelmingly of white males, and the model's accuracy dropped steeply from the claimed 97% when it was tested on darker-skinned people and women.

Both factors are pertinent to finance, because incomplete and biased datasets can drastically skew a model's decisions and detrimentally affect customers. One practical safeguard, sketched below, is to evaluate a model's accuracy separately for each demographic subgroup rather than relying on a single headline number.
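Here is a minimal sketch of that disaggregated evaluation. The labels, predictions, and group assignments are fabricated; the point is only that one overall accuracy figure can hide large gaps between subgroups.

```python
# A sketch of a disaggregated evaluation: report accuracy per subgroup,
# not just overall. All data here is fabricated for illustration.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

overall = (y_true == y_pred).mean()
print(f"Overall accuracy: {overall:.0%}")

for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"Group {g} accuracy: {acc:.0%}")

# A large gap between groups is a red flag, like the facial recognition
# system whose headline accuracy hid far lower accuracy for darker-skinned
# people and women.
```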

Steps Being Taken to Mitigate Bias

Two ways that potential biases like the above are being addressed are:

(1) the prioritization of governance; and

(2) the application of human-in-the-loop (HITL) techniques (both of which fall under the broader practice of responsible AI).

AI Governance

AI governance involves a set of processes that oversee the end-to-end development of an AI model, ensuring that it is not only accurate and effective but also ethical and impartial. Some of these processes involve policy development and regulation. Various stakeholders, including policymakers, industry leaders, and researchers, work together to create guidelines for the ethical development of AI models. Regulation also plays a large role in governance: it involves establishing rules and designating regulators to oversee AI models and ensure they are developed in a manner consistent with established policies. Regulators are typically responsible for continuously monitoring AI models in real time for bias and accuracy and for taking action when something appears wrong or unethical; a simple automated check of this kind is sketched below.
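As an illustration, a governance process might run a check like the following over a window of live lending decisions and alert a reviewer when approval rates diverge too far across groups. The threshold, field names, and data are assumptions for illustration, not a regulatory standard.

```python
# A sketch of an automated monitoring check a governance process might run.
from dataclasses import dataclass

@dataclass
class Decision:
    group: str       # demographic segment, used only for auditing
    approved: bool

def approval_rate_gap(decisions: list[Decision]) -> float:
    """Largest difference in approval rates between any two groups."""
    rates = {}
    for g in {d.group for d in decisions}:
        subset = [d for d in decisions if d.group == g]
        rates[g] = sum(d.approved for d in subset) / len(subset)
    return max(rates.values()) - min(rates.values())

ALERT_THRESHOLD = 0.10  # assumed: flag gaps larger than 10 points

window = [Decision("A", True), Decision("A", True), Decision("A", False),
          Decision("B", False), Decision("B", False), Decision("B", True)]

gap = approval_rate_gap(window)
if gap > ALERT_THRESHOLD:
    print(f"ALERT: approval-rate gap of {gap:.0%} exceeds threshold; "
          "escalating for human review.")
```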

Human-In-The-Loop

Finally, a human-in-the-loop design also helps ensure that AI models are accurate and reliable. It incorporates human judgment at certain points in the AI decision-making process, so that results are reviewed and validated before the model's decision is acted upon. For instance, a human analyst could review an AI model's investment recommendation for a specific stock and provide additional context, such as news articles or regulatory updates, to verify the recommendation's accuracy and further evaluate its potential risks and benefits. By combining human and machine intelligence, the HITL approach can improve the accuracy of AI systems; a minimal sketch of such a review gate follows.
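In this sketch, the model's recommendation is executed automatically only above an assumed confidence threshold; anything less certain is queued for a human analyst. The threshold and message formats are hypothetical.

```python
# A sketch of a human-in-the-loop gate: act automatically only when the
# model is confident; otherwise route the case to a human analyst.

CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff for automatic execution

def hitl_decide(recommendation: str, confidence: float) -> str:
    """Return the action taken for a model recommendation."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"AUTO: executing '{recommendation}' (confidence {confidence:.0%})"
    return (f"REVIEW: '{recommendation}' (confidence {confidence:.0%}) "
            "queued for a human analyst with supporting context")

print(hitl_decide("buy AAPL", 0.97))
print(hitl_decide("sell XYZ", 0.62))
```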

In summary, while the increasing prevalence of AI in the finance industry has provided many benefits, it is also crucial to recognize its potential risks and identify ways to address them.
