Rise of the Machines: Ethics within Financial Services in an Automated World

There’s a school of thought that a machine is prejudice-free because it acts purely on logic, without thoughts or feelings. But in reality, can we trust a decision-making system equipped with Machine Learning to make fair and appropriate choices? Can we take it on faith that it won’t become some sort of automated prejudice machine, churning out its engineers’ biases at speed?

These were some of the concerns raised by attendees at an Automation Within Financial Services event I presented at in Edinburgh earlier this year.

Is this something we need to worry about? I think it is.

Pale, Male and Stale?

One of the challenges that plagues Financial Services, along with many other industries (unfortunately), is that only a narrow spectrum of gender, race, ethnicity and age is truly represented. Whilst progress is being made, with many organisations striving to encourage diversity in the clear understanding that a diverse workforce is a strong workforce, there is still some way to go.

Non-diversity does not in itself directly give rise to bias. However, the fundamental design of a solution can be, and often is, skewed during the design and testing phases by the individual views of those creating it.

Virtual Problems

Let’s look at the early days of Virtual Reality as an example. The Verge reported that in many cases much of the hardware that works well for the average man doesn’t fare well for the average woman, potentially reflecting the absence of women from the development cycle. Additionally, it is reported that female users of VR are much more susceptible to Virtual Reality sickness with current offerings. Whilst these shortcomings were almost certainly not intended by the design teams, it does beg the question as to whether greater gender diversity in the field would have yielded an experience fit for purpose for everyone, not just the XY amongst us.

So Why Is This a Problem?

Financial Services is going through a concentrated period of digital transformation where advanced technologies are being applied to real world challenges within a complex landscape. Of these technologies Machine Learning offers some of the greatest promise, but also some of the greatest challenges to manage.

Machine Learning is also the technology that we’re told will take our jobs or enslave us, Terminator style. Others say it’ll give us the four-day work week and a Guaranteed Minimum Income; it all depends on how apocalyptically pessimistic or optimistic we are being. One thing is clear though: in the coming years we’re going to see more and more decisions taken by machines.

What is Machine Learning Anyways?

In a nutshell, Machine Learning is where a system is programmed to learn from its own experience so that it can continually improve. In order to do this, at some level the concept of what amounts to a ‘good’ result, or success, needs to be set; this of course means the concept of a ‘bad’ result is also set, either expressly or by implication.
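To make that concrete, here is a minimal sketch of "learning from experience" in plain Python: a line fitted to toy data by gradient descent. The loss function is where the notion of a 'good' result gets encoded (here, small squared error); everything else, including the data and learning rate, is purely illustrative.

```python
# Toy training data: (input, expected output) pairs, roughly y = 2x + 1.
data = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1)]

w, b = 0.0, 0.0          # model parameters, initially "knowing" nothing
learning_rate = 0.02

def loss(w, b):
    """Mean squared error: our encoded definition of 'bad'. Lower is better."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

for step in range(2000):
    # Gradients of the loss with respect to each parameter.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # Nudge each parameter in the direction that reduces the loss.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(round(w, 1), round(b, 1))  # parameters end up close to the true 2 and 1
```

The key point for the ethics discussion is that nothing here is neutral: whoever writes the loss function decides what the machine treats as success.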

So What Does This All Mean?

It is worth noting that Machine Learning doesn’t always need to make decisions that affect our lives. Sometimes Machine Learning can be used ethically without an intricate understanding of the ‘how’ being widely socialised. Machine Learning is used to understand handwriting, to create human-sounding speech for virtual assistants and to make Thanos’s eyes really pop in the latest Avengers films. Equally, machine learning used to figure out how long your bus ride home will take, or the best time to book your online grocery shop, is unlikely to cause offence, regardless of the criteria used.

Situations that give rise to concern will vary, but broadly if the output is a decision that could seriously impact someone’s life then we know we are entering challenging territory.

One of the more extreme examples can be found within the US justice system. Some US states use computer systems to aid judges when considering sentencing, by looking at a final risk-assessment score which, in theory, gives a steer on an individual’s risk of reoffending.

This solution is essentially a black box: how it functions is not fully understood by those either using it or advocating its use. The creators of the system closely guard its algorithm (the essence of how its decisions are made) as commercially sensitive. This in turn means that the detailed factors behind how a decision has been arrived at are often unavailable.

This gives rise to arguably the biggest challenge in using the technology this way: the idea of correlation vs causation. For example, if members of a certain community are expected to act in a certain way, is it ethical to extrapolate that and assume (to whatever degree) that an entirely separate individual you are looking at will act the same way? Applying criteria such as an individual’s family history, their known associates and the area they reside in gives rise to similar ethical debates. Using a generalised statistical view of a community in isolation to judge someone’s likelihood of reoffending does nothing to address the root causes of reoffending, and is arguably instead, at its worst, an example of deep institutional prejudice, bias and, in some cases, racism.
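The correlation problem can be illustrated with a deliberately crude sketch. The hypothetical "model" below learns reoffending rates per area from historical records, then scores a new individual purely by where they live; all data and area names are invented, and real systems are far more complex, but the failure mode is the same in kind.

```python
from collections import defaultdict

# Invented historical records: (area, reoffended?). Suppose area "A" was
# policed more heavily, so more reoffences were *recorded* there. The data
# reflects that history, not any individual's behaviour.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

# "Training": count the observed reoffending rate per area.
counts = defaultdict(lambda: [0, 0])  # area -> [reoffences, total]
for area, reoffended in history:
    counts[area][0] += int(reoffended)
    counts[area][1] += 1

def risk_score(area):
    """Score an individual using only the area-level historical rate."""
    reoffences, total = counts[area]
    return reoffences / total

# Two individuals with identical personal circumstances receive very
# different scores based solely on their address.
print(risk_score("A"))  # 0.75
print(risk_score("B"))  # 0.25
```

The correlation between area and recorded reoffending is real in the data, but the model treats it as if it were a cause, and the bias baked into the historical records comes straight back out as a "risk score".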

So How Does this Relate to Financial Services?

Money makes the world go around: Financial Services underpins global trade, product creation, logistics and everything in between. Constraining finance can have a devastating impact on people, companies and communities. Decisions on whether to grant additional credit, loans, mortgages or refinancing, and the risk ratings applied, directly impact individuals and businesses in a real way. Great care must be taken when applying this technology to these areas.

So Should I Avoid Machine Learning?

Definitely not. Machine Learning is going to be central to the next age of technology and Financial Services. But let’s be real: just because it doesn’t breathe does not mean it lacks the ability to make bad decisions that destroy lives.

Elon Musk is amongst the more vocal and sensational individuals with concerns. Source: Recode

How Should We All Proceed?

Academic rigour and initiatives such as the AI Ethics Initiative and the Y Combinator Universal Basic Income study are essential if we are to start answering some of the more fundamental ethical questions around humankind’s relationship with machines.

Whilst it’s easy to dismiss these initiatives as merely stimulated by paranoia about an unlikely robot uprising, there is real ground to cover. What does the human workforce look like when many manual tasks no longer have to be completed by people? When should we use automation, and in what situations should we still use a human?

Things to Take Away

If you are involved in creating or overseeing key processes, it’s worth taking the following into account as a starting point:

1. Do I understand the basis on which the automation arrives at its decisions? Do I understand its assumptions and weightings?

2. Would I be happy to stand behind the decision-making formula if questioned?

3. How do my Customers/Clients feel about the use of this automation? Is there something lost by removing the human element?