Dear Amazon, your algorithm MUST be biased

A few days ago, Reuters reported that Amazon.com had killed its AI-powered recruiting engine. The claim was that the system was biased against women applicants: it penalized applications containing the word “women”. The report says: “Amazon’s system taught itself that male candidates were preferable”. In fact, however, the system did not teach itself, as claimed. The company itself is biased; otherwise, the system would not be. Amazon fed the system ten years of historical data from the company’s own recruiting experience, and the company itself is a male-dominated environment. According to the same report, 60% of Amazon’s employees are male. Another report, by Bloomberg in 2018, states that 73% of Amazon’s professional employees are men.

What exactly happened?

Machine learning algorithms learn to build predictive and descriptive models from the data passed to them. That data has a hidden distribution, and the algorithm cannot learn without representing it somehow. Either we define the features of the data by hand, or the algorithm learns the features itself. In the latter scenario, we usually cannot justify the model’s decisions; in the former, we can at least say why the algorithm made the decision it made. Yet in both cases the features are extracted or selected from the provided data, and the data is the product of our measurements, decisions, observations, readings, and so on.
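To make this concrete, here is a minimal sketch in Python with scikit-learn. The data is entirely synthetic, and the feature names and numbers are my assumptions, not Amazon’s; the point is only to show how a model trained on biased historical labels absorbs that bias:

```python
# A minimal sketch (synthetic data, hypothetical features) of how a model
# trained on biased historical labels reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Feature 0: candidate qualification score; feature 1: gender (1 = male).
qualification = rng.normal(0.0, 1.0, n)
gender = rng.integers(0, 2, n)

# Historical "hired" labels: driven by qualification, plus a bias toward men.
hired = (qualification + 1.0 * gender + rng.normal(0, 0.5, n)) > 1.0

model = LogisticRegression().fit(
    np.column_stack([qualification, gender]), hired
)

# The learned coefficient on the gender feature comes out large and positive:
# the model has faithfully absorbed the bias hidden in the training data.
print(dict(zip(["qualification", "gender"], model.coef_[0].round(2))))
```

The positive weight on the gender feature is not something the algorithm invented; it is simply the structure of the data it was given.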

We as humans tend to be biased toward differences and new things. Cognitive bias affects our decisions and judgments, and all of that is natural, a consequence of the limitations of the human brain. To make an optimal, i.e. perfect, decision, we would need unlimited time and resources. Limited time and resources lead to a simplified approach to processing the information we have. As a consequence, we tend to ignore many facts and small details and take shortcuts when making decisions. The human brain adopted a heuristic approach to thinking. That is why our decisions, judgments, choices, and even emotions are not optimal. It is also why we do not pass cognitive experience across generations.

In Artificial Intelligence, and machine learning in particular, the accuracy of a model is validated against a baseline that is mostly generated from human experience and observation. The best model is the one that can mimic our cognitive ability. But wait! We are biased. Therefore, the best model should be biased too.

To deal with this bias, we can go with one of the following options:

  • Kill AI systems to avoid bias. This seems easy, but the business will not always accept such a choice, given its impact on business growth and on disruptive innovation trends.
  • Aggressively preprocess the data. Since the data is biased by nature, we should study it and remove any feature that somehow contributes to the bias. In Amazon’s case, for example, we might remove anything that identifies the candidate’s gender (see the redaction sketch after this list). I believe data cleansing and preprocessing is not as simple as that; we need to spend a good portion of our time preparing the data.
  • Tune model parameters. We may give more weight to certain factors, such as diversity, in the learning model (see the reweighting sketch after this list). But which is more important, qualifications or diversity? Still, tuning parameters is not a very hard task, and it could even generate a publishable paper :)
  • Consider instance-based learning. Data distributions and underlying characteristics change rapidly, a.k.a. data shift, and we need to take this rapid change into consideration. One way is not to build a model at all, and instead take all the data points we have into consideration at prediction or classification time (see the k-NN sketch after this list). This mitigates the bias each time we add a new, presumably unbiased sample. However, we might end up biased toward the other side, against men in Amazon’s case.
  • Have unlimited time, data, and computational resources. While time and computational resources can be addressed with AWS HPC capabilities, unlimited multi-source data is very unlikely to be obtained and learned from. Most machine learning algorithms learn from very organized, homogeneous datasets; it is not nearly as easy to learn from unstructured and/or heterogeneous data.
  • Change the error metric. Current error metrics evaluate learning performance by comparing predicted labels against prior baseline labels. If we think the learned bias is not good, we need to penalize it: we should treat bias toward specific attribute(s) as an undesirable posterior and fold it into the metric (see the penalized-metric sketch after this list). But the question remains: who should decide that? A human, who is biased by nature, or another machine?
  • Just live with the bias. Amazon is dominated by male employees. Why kill a system that is merely trying to mimic their behaviour? Why not kill the biased behaviour itself? I believe Amazon’s HR will always be biased toward something, for the simple reason that they have selection criteria.
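A hedged sketch of the preprocessing option above, in Python. The token list here is illustrative and deliberately incomplete; real gender signals hide in far subtler places (college names, sports teams, verbs), which is exactly why cleansing is never as simple as a word filter:

```python
# An illustrative sketch of "aggressive preprocessing": redact tokens that
# identify gender before any model sees the text. The GENDERED set below is
# an assumption for demonstration, not an exhaustive list.
import re

GENDERED = {"women", "woman", "men", "man", "female", "male",
            "she", "he", "her", "his", "mr", "mrs", "ms"}

def redact_gender(text: str) -> str:
    """Replace gender-identifying tokens with a neutral placeholder."""
    return re.sub(
        r"\b\w+\b",
        lambda m: "[REDACTED]" if m.group().lower() in GENDERED else m.group(),
        text,
    )

print(redact_gender("Captain of the women's chess club; she led her team."))
# -> Captain of the [REDACTED]'s chess club; [REDACTED] led [REDACTED] team.
```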
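For the parameter-tuning option, one concrete realization (my assumption, not Amazon’s method) is reweighting training samples so the under-represented group contributes as much to the loss as the dominant one:

```python
# A sketch of reweighting: inverse-frequency sample weights make each group
# contribute equally to the training loss. The 60/40 split, features, and
# labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
group = rng.random(1000) < 0.6          # True = majority group (~60%)
y = rng.integers(0, 2, 1000)

# Each group's total weight is equal, regardless of its share of the data.
weights = np.where(group, 1.0 / group.mean(), 1.0 / (1 - group.mean()))

model = LogisticRegression().fit(X, y, sample_weight=weights)
```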
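For the instance-based option, a k-nearest-neighbours sketch: nothing is fit in advance, every stored example votes at prediction time, so freshly added, presumably unbiased samples immediately shift decisions. The data is synthetic:

```python
# Instance-based learning: "training" is just storing the instances, so
# appending new samples re-shapes every future prediction without
# retraining a parametric model.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
X_hist = rng.normal(size=(500, 4))       # historical, possibly biased data
y_hist = rng.integers(0, 2, 500)

X_new = rng.normal(size=(50, 4))         # fresh, presumably unbiased samples
y_new = rng.integers(0, 2, 50)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(np.vstack([X_hist, X_new]), np.concatenate([y_hist, y_new]))
print(knn.predict(rng.normal(size=(1, 4))))
```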
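And for the error-metric option, a sketch of a penalized score: accuracy minus a penalty on the demographic parity gap, i.e. the difference in positive-prediction rates between two groups. The penalty weight `lam` is an assumption someone still has to choose, which is exactly the open question above:

```python
# A fairness-penalized metric: reward accuracy, penalize the gap in
# positive-prediction rates between the two groups. All arrays below are
# toy examples for illustration.
import numpy as np

def fairness_penalized_score(y_true, y_pred, group, lam=1.0):
    accuracy = np.mean(y_true == y_pred)
    rate_a = y_pred[group == 0].mean()   # positive-prediction rate, group 0
    rate_b = y_pred[group == 1].mean()   # positive-prediction rate, group 1
    parity_gap = abs(rate_a - rate_b)
    return accuracy - lam * parity_gap

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(fairness_penalized_score(y_true, y_pred, group))  # 0.875 - 0.25 = 0.625
```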

Center for Artificial Intelligence, Director