The Bias Machine

Loren Davie
Published in Opus At Work
Jun 23, 2019 · 3 min read

How to Avoid Building Automated Discrimination Systems for Management

If you’re breathing, you’re biased. It’s a fact of being human, built into our makeup and at one time critical to our survival as hunter-gatherers. Throughout our evolution, we’ve had to make decisions with incomplete information. That’s always a suboptimal situation, so we developed a way to cope: decision-making through heuristics.

Red Sky at Night: Management by Proxy

You may not be familiar with the term heuristics in this context, but you definitely know what they are. Heuristics are observable signs that we take as indicators of deeper meaning. The groundhog who sees his shadow, “red sky at night, sailor’s delight,” and the various lion vs. lamb states of March all represent heuristics. We’ve taken an observable sign and interpreted it as a signal of a specific outcome.

We do this in management as well, because, again, we have to make decisions based on incomplete data. We look at things like what school someone graduated from, how late they stay at work, and how busy they appear as heuristics for how effective they are at their job. None of these things necessarily correlates with job effectiveness, but we’ve come to rely on them in the absence of better data.

As we’re starting to understand, there is a problem. Heuristics, especially when it comes to managing people, are often tied to identity, and thus to bias. In other words, if you don’t have a certain identity, you won’t produce the signs those heuristics look for, regardless of how effective you actually are at your job. As a result, you will be disadvantaged because of your identity, not your capabilities.

Building the Bias Machine: Automating Problematic Heuristics

With machine learning, this problem can become exacerbated. Training ML models on identity-entangled heuristics can effectively create bias machines: systems that give hiring and management advice about people based on problematic data points, such as whether they are white and male. Identity heuristics might feel like an effective shortcut to building such systems, but we need to make sure we don’t automate the broken parts of our society.
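To make this concrete, here is a minimal, hypothetical sketch (assuming numpy and scikit-learn; the feature names and numbers are synthetic, not drawn from any real system) of how a proxy like “attended an elite school” can smuggle identity back into a model even when the protected attribute itself is never used as an input:

```python
# Hypothetical sketch: a model trained on an identity-entangled proxy
# can reproduce historical bias even though the protected attribute is
# excluded from its inputs. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Protected attribute (never given to the model).
group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B

# "Elite school" is entangled with group membership in this toy world.
elite_school = rng.binomial(1, np.where(group == 1, 0.6, 0.1))

# Actual job performance is independent of group.
performance = rng.normal(0, 1, n)

# Historical "promoted" labels were biased toward elite-school graduates.
promoted = (performance + 1.5 * elite_school + rng.normal(0, 1, n)) > 1.0

X = np.column_stack([elite_school, performance])   # group is excluded
model = LogisticRegression().fit(X, promoted)

preds = model.predict(X)
print("Predicted promotion rate, group A:", preds[group == 0].mean())
print("Predicted promotion rate, group B:", preds[group == 1].mean())
# The gap persists: the proxy carries the identity signal into the model.
```

In this toy world, actual performance is identical across groups, yet the model’s predicted promotion rates diverge, because the proxy feature does the work that the excluded identity column would have done.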

We have already seen examples of ML systems that propagate bias, sometimes in deeply critical areas. For example, a study showed the racism exhibited by a recidivism advisory system that influences prisoners’ chances for parole. Is this application truly racist? Perhaps not quite in the sense that a human can be, but it nonetheless produces biased results, potentially with great harm.

We can expect to see many more of these advisory systems deployed in the near future, and it’s rapidly becoming apparent that what matters is not only the results they produce, but how they get to those results.

Outcomes Only: Management by Results

A new approach is starting to emerge. It has to do with replacing rule-of-thumb heuristics with non-identity-entangled signals that reflect actual workplace performance, short-circuiting the bias.

Computers, unlike humans, are great at corralling large amounts of data. Through the collection of performance data, KPIs, and AI-powered analysis, we can start to build rich profiles of people at work that represent their actual performance. These profiles can serve as proxies for people’s effectiveness.

While we do this, we need to selectively decommission signals that are entangled with identity. Systems that provide management assistance through AI must be both effective, providing accurate results, and built with integrity, ensuring there is a clean, unbiased path to those results.
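As a rough illustration of what “decommissioning” a signal might look like in practice, here is a hypothetical sketch that screens candidate signals against a protected attribute and drops the entangled ones before they reach the performance profile. The feature names, threshold, and data are invented for illustration; a real audit would use richer fairness metrics than a single correlation cutoff.

```python
# Hypothetical sketch of screening out identity-entangled signals before
# they feed a management model. Names, data, and the cutoff are illustrative.
import numpy as np
import pandas as pd

def drop_entangled_features(df: pd.DataFrame, protected: pd.Series,
                            threshold: float = 0.3) -> pd.DataFrame:
    """Keep only features whose correlation with the protected attribute
    stays below the threshold; print the ones being decommissioned."""
    kept = []
    for col in df.columns:
        corr = abs(np.corrcoef(df[col], protected)[0, 1])
        if corr < threshold:
            kept.append(col)
        else:
            print(f"decommissioning signal '{col}' (|corr| = {corr:.2f})")
    return df[kept]

# Toy example: two outcome-based signals and two identity-entangled proxies.
rng = np.random.default_rng(1)
group = pd.Series(rng.integers(0, 2, 1_000))          # protected attribute
features = pd.DataFrame({
    "tickets_closed": rng.poisson(20, 1_000),                       # outcome-based
    "peer_review_score": rng.normal(4, 0.5, 1_000),                 # outcome-based
    "elite_school": rng.binomial(1, np.where(group == 1, 0.6, 0.1)),
    "hours_at_desk": rng.normal(8, 1, 1_000) + group,               # entangled proxy
})

clean = drop_entangled_features(features, group)
print("signals kept for the performance profile:", list(clean.columns))
```

In this toy run, outcome-based signals like closed tickets and peer review scores survive, while identity-entangled proxies like “elite school” and “hours at desk” are flagged and dropped before any profile is built.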

By taking outcome-based, non-identity-entangled signals and presenting them to management with identity information removed, we can create a foundation for bias-free management, where employees are rewarded for their real contributions and accomplishments instead of their identity.
