Modern Day Bias

Pete Cherecwich · Face Value · Jun 19, 2019 · 4 min read

Monkey see, monkey do. Watch and learn. We've all heard these sayings. Whether it's an athlete on the field observing a particular skill to improve their game, or an infant mirroring the behaviors and words of their parents, we've all seen the theory proven true: observation and imitation lead to learning and advancement. And while this "observe and learn" concept is a deeply rooted human characteristic, it can be, and is being, applied beyond the human context.

Robot See, Robot Do…Better?

Across regions, industries, and job functions, we see more and more companies deploying AI technologies. From manufacturing, to healthcare, to financial services, organizations are realizing and capitalizing on the benefits that can come with AI advancements. Recently, one branch of AI seems to be making headlines — Machine Learning.

Take, for example, self-driving cars. With machine learning algorithms, self-driving cars learn driving behavior from experience and adapt to changing road conditions. With this "deep reinforcement learning" approach, they are learning to drive the way humans do. The same kind of reinforcement learning is being applied in robotics. Robots are first given simple tasks to perform, and because the algorithms allow them to be trained and re-trained, they pick up new and more complex behaviors, eventually completing tasks they were never originally assigned. These robots are able to "watch", mimic, recover from mistakes, and adapt, not unlike an athlete learning a new skill on the field.
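To make this trial-and-error learning concrete, here is a minimal sketch of tabular Q-learning, one of the simplest reinforcement learning algorithms. The toy corridor environment, reward values, and hyperparameters are invented purely for illustration; real self-driving and robotics systems replace the table with deep neural networks, which is where the "deep" in deep reinforcement learning comes from.

```python
import random
from collections import defaultdict

# Toy environment: an agent on a 5-cell corridor must reach the goal at cell 4.
# Actions: 0 = step left, 1 = step right. Reaching the goal pays +1.
N_STATES, GOAL = 5, 4

def step(state, action):
    next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Q-table maps (state, action) -> estimated long-run value of that action.
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    state, done = 0, False
    while not done:
        # Explore occasionally; otherwise exploit what has been learned so far.
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = max([0, 1], key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Core update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(next_state, 0)], Q[(next_state, 1)])
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print("Learned policy:", [max([0, 1], key=lambda a: Q[(s, a)]) for s in range(N_STATES)])
```

Early episodes are long and clumsy; once a few rewards propagate back through the table, the agent heads straight for the goal. That ability to improve from its own mistakes, with no explicit instructions, is exactly the "watch, mimic, recover, adapt" behavior described above.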

While this is all very fascinating, as a fan of Isaac Asimov I also find it a bit terrifying. I have to wonder: what if the very algorithms created by mankind eventually become smarter than anticipated, learning and advancing faster than expected, in areas never intended, and picking up trends and nuances we would rather they not? The cost and time savings, along with the increased productivity, that come with machine learning programs could be enticing enough that the individuals benefiting from them overlook the potential pitfalls. One pitfall in particular should be top-of-mind for every organization: algorithmic bias.

The Bias Plague

The question of ethics, and more specifically of bias, in AI is a complex one. The fact of the matter is that bias can occur in any program that has the ability to learn. For example, machine learning tools are being developed in the HR space to help streamline the screening and hiring process. But what happens when recruitment algorithms rely on biased data inputs to make their decisions? If those algorithms are trained on historical recruitment data, companies run the risk of hiring the same types of people over and over, potentially overlooking qualified candidates because of race, gender, ethnicity or background. The data inputs should instead reflect where a company wants to be and the talent it needs to get there. If recruitment algorithm inputs are not closely monitored and governed, they can become extremely detrimental to the future success of an organization.
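To see how this happens, consider a toy sketch: train a model on synthetic hiring data whose historical labels were biased against one group, then check whether the model reproduces the disparity even though the protected attribute itself is never given to it. Every feature, label, and number here is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic candidates: a genuine skill score, plus a protected group label.
skill = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)          # two demographic groups, 0 and 1

# A proxy feature correlated with group membership (say, a keyword on a CV).
proxy = group + rng.normal(0, 0.3, n)

# Historical hiring decisions were biased: group 1 faced a higher skill bar.
hired = (skill > np.where(group == 1, 0.5, -0.5)).astype(int)

# Train on skill + proxy only; the protected attribute itself is excluded.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The model still recommends group 0 far more often, because the proxy
# feature lets it reconstruct the bias baked into the historical labels.
preds = model.predict(X)
for g in (0, 1):
    print(f"group {g}: recommended at rate {preds[group == g].mean():.2f}")
```

The lesson of the sketch is that simply removing the protected attribute is not enough: correlated proxy features let the model reconstruct the historical bias, which is exactly why the inputs themselves must be monitored and governed.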

I recently read an article published by IBM predicting that, within the next five years, the number of biased AI systems and algorithms will drastically increase. If that is true, the question becomes: how do we deal with them as they arise? How do we fix algorithms already plagued by bias, and how do we keep clean ones from becoming infected?

Preventing a Bias Epidemic

There isn't one magical solution for avoiding bias. Machine learning won't automatically structure your data in a way that prevents it. There are, however, things organizations can do to actively keep it from taking hold.

The most obvious way is to start with a solid foundation. Algorithmic bias stems from biased inputs being fed to the machines, so generating reliable (unbiased) results requires solid underlying datasets. For example, at Northern Trust we have developed a sophisticated algorithmic securities lending pricing engine that uses machine learning and advanced statistical techniques to identify opportunities to help drive revenue growth for our clients. We rigorously monitor and govern any potential bias within the algorithms used by the engine but, more importantly, we built a strong foundation for the engine itself: we apply the same methodology equally across all securities; we use no borrower-specific information, ensuring the machine-generated rates are client-agnostic; and we verify that only objective, relevant market data reaches the machine learning applications.
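As a purely hypothetical sketch of what one such control might look like in code, here is a guard that blocks client-identifying fields before a record ever reaches a pricing model. The field names and error handling are my own illustrative assumptions, not Northern Trust's actual implementation.

```python
# Illustrative governance control: reject any client-identifying fields
# before data ever reaches the pricing model. Field names are hypothetical.
PROHIBITED_FIELDS = {"borrower_id", "borrower_name", "client_tier", "account_id"}

def validate_pricing_inputs(record: dict) -> dict:
    """Allow only objective market data through to the pricing engine."""
    leaked = PROHIBITED_FIELDS & record.keys()
    if leaked:
        raise ValueError(f"client-specific fields blocked: {sorted(leaked)}")
    return record

# A record of market-level features passes; one with a borrower ID is rejected.
validate_pricing_inputs({"ticker": "XYZ", "utilization": 0.42, "fee_bps": 35})
try:
    validate_pricing_inputs({"ticker": "XYZ", "borrower_id": "B-123"})
except ValueError as err:
    print(err)
```

The design choice worth noting is that the check runs on the inputs, not the outputs: it is far easier to keep client-specific information out of a model than to prove after the fact that a trained model never used it.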

While there isn't one solution that solves the bias problem, there are proactive steps we can take to make sure AI technologies are used responsibly and ethically. AI ethics boards, strong governance, and a diverse talent pool are just some of the ways we can work toward more responsible practices in the AI space. Let's continue the conversation about ethics and AI, and push our organizations to mitigate bias rather than turn a blind eye to it.

Pete Cherecwich
President of Asset Servicing at Northern Trust Corporation