Bias in Machine Learning: It’s not just about the data

Gonzalo Ruiz de Villa · Published in gft-engineering · 5 min read · Mar 9, 2021

Nowadays, the benefits that Artificial Intelligence, and Machine Learning in particular, bring to society are more than evident. Improvements are arriving quickly and continuously in areas as diverse as facial analysis, autonomous vehicles, interpretation of medical tests, process optimization, quality control, and determining how proteins fold.

However, one of the main problems with machine learning systems is that the models can be biased. Depending on the context, this bias can have very negative consequences, for example when an algorithm applies racist or misogynistic criteria in its predictions. There are already too many examples of these failures, so it is very important to understand how these biases appear and how to fix them.

There is a widespread belief that algorithmic bias is exclusively a data problem: if the training data is biased, the model will learn that bias, and therefore removing or correcting the bias in the data will correct the bias in the model. This is true, but it is not the whole truth. Other factors can amplify the bias problem.

Let’s look at some examples of how, given the same data set, other factors can increase bias.

Pruning and quantization of neural networks

Neural network pruning and quantization are increasingly popular techniques due to runtime limitations in terms of latency, memory, and power.

Quantization approximates a neural network that uses floating-point numbers with one that uses lower-precision numbers, which dramatically reduces both the memory requirement and the computational cost of running the network. These techniques achieve very significant compression while, at the same time, the impact on the most popular performance metrics is usually negligible. However, while it is true that overall accuracy is largely unaffected, this aggregate view masks a disproportionately high error on small subsets of the data. In the paper “Characterizing Bias in Compressed Models”, Sara Hooker of Google Brain and her co-authors explore this topic and propose techniques to detect this potential problem. They call the affected examples Compression Identified Exemplars (CIE) and show that, on these subsets, compression greatly amplifies existing algorithmic bias.

Source: https://arxiv.org/pdf/2010.03058.pdf
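As a rough illustration of how such examples can be surfaced, the sketch below quantizes a model with PyTorch’s dynamic quantization and flags the inputs on which the quantized and full-precision models disagree. This is a minimal sketch, not the authors’ exact CIE procedure (which compares a population of compressed models); `model` and `loader` are placeholder names.

```python
import torch

def find_disagreements(model, loader):
    """Minimal sketch of a CIE-style check: flag examples where a quantized
    copy of the model disagrees with the full-precision original.
    Assumes `model` is a trained float32 PyTorch model and `loader`
    yields (inputs, labels) batches."""
    # Post-training dynamic quantization of the linear layers to int8.
    quantized = torch.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8
    )
    model.eval()
    quantized.eval()

    flagged = []
    with torch.no_grad():
        for batch_idx, (x, y) in enumerate(loader):
            pred_fp32 = model(x).argmax(dim=1)
            pred_int8 = quantized(x).argmax(dim=1)
            # Examples where compression changed the prediction deserve a
            # closer look: the extra error tends to concentrate here.
            mask = pred_fp32 != pred_int8
            flagged.extend((batch_idx, j.item()) for j in mask.nonzero().flatten())
    return flagged
```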

Similarly, neural network pruning reduces the size of the network with little impact on aggregate metrics, but it too can disproportionately increase the error on small subsets of underrepresented data, in this case called PIEs (Pruning Identified Exemplars). In “What Do Compressed Deep Neural Networks Forget?”, the authors show how the impact of pruning over-indexes on the PIEs and propose a methodology for detecting these atypical examples in order to take appropriate measures.

Source: https://arxiv.org/pdf/1911.05248.pdf
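A similar check can be done for pruning. The sketch below uses PyTorch’s magnitude-pruning utilities and reports the per-class accuracy drop, to see whether the damage is concentrated in a few (often underrepresented) classes. It is a simplification of the paper’s PIE methodology, with placeholder model and data names.

```python
import copy
from collections import defaultdict

import torch
import torch.nn.utils.prune as prune

def per_class_accuracy_drop(model, loader, amount=0.9):
    """Minimal sketch: prune 90% of the weights by magnitude and report the
    accuracy drop per class. Assumes `model` is a trained PyTorch model and
    `loader` yields (inputs, labels) batches."""
    pruned = copy.deepcopy(model)
    for module in pruned.modules():
        if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
            # L1 unstructured pruning: zero out the smallest weights.
            prune.l1_unstructured(module, name="weight", amount=amount)

    correct = {"dense": defaultdict(int), "pruned": defaultdict(int)}
    total = defaultdict(int)
    model.eval()
    pruned.eval()
    with torch.no_grad():
        for x, y in loader:
            for name, net in (("dense", model), ("pruned", pruned)):
                preds = net(x).argmax(dim=1)
                for label, pred in zip(y.tolist(), preds.tolist()):
                    correct[name][label] += int(label == pred)
            for label in y.tolist():
                total[label] += 1

    # Classes with the largest drop are where pruning "forgets" the most.
    return {
        label: correct["dense"][label] / total[label]
               - correct["pruned"][label] / total[label]
        for label in total
    }
```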

Difficult examples are learned later

In the article “Estimating Example Difficulty Using Variance of Gradients”, Chirag Agarwal and Sara Hooker define a scalar score as a proxy for how difficult it is for a neural network to learn a specific example. Using this score, which they call Variance of Gradients (VOG), they can rank the training data by how difficult it is to learn.

Source: https://arxiv.org/pdf/2008.11600.pdf

In networks with an excess of parameters, it has been observed that zero training error can be reached by memorizing examples. This memorization happens late in training, and the techniques proposed in the article help discriminate which elements of the dataset are especially difficult and must therefore be memorized. Such memorization can present privacy risks when the data contains sensitive information. It is also interesting to note that measures such as VOG allow the detection of atypical or out-of-distribution (OOD) data.
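A minimal sketch of the idea follows, assuming a list of model checkpoints saved at different epochs; the exact normalizations used in the paper are omitted.

```python
import torch

def variance_of_gradients(checkpoints, x, y):
    """Simplified VOG sketch: for each saved checkpoint, take the gradient of
    the true-class logit with respect to the input, then measure the per-pixel
    variance of those gradients across checkpoints and average it into one
    scalar per example. `checkpoints` is assumed to be a list of models saved
    at different training epochs; `x` and `y` are a batch of inputs and labels."""
    grads = []
    for model in checkpoints:
        model.eval()
        inp = x.clone().requires_grad_(True)
        logits = model(inp)                              # (batch, num_classes)
        selected = logits.gather(1, y.unsqueeze(1)).sum()
        grad, = torch.autograd.grad(selected, inp)       # gradient w.r.t. input
        grads.append(grad)

    g = torch.stack(grads)          # (num_checkpoints, batch, ...)
    var = g.var(dim=0)              # variance across checkpoints, per pixel
    return var.flatten(1).mean(dim=1)   # one difficulty score per example
```

Sorting the training set by this score puts the easy, early-learned examples at one end and the difficult, late-memorized ones at the other.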

Differential privacy may have a disparate effect on model accuracy

Differential privacy (DP) in machine learning is a training mechanism that minimizes the leakage of sensitive information present in the training data. However, the cost of differential privacy is a drop in model accuracy that does not affect all examples equally: accuracy on examples belonging to underrepresented classes or groups drops much more.

In “Differential Privacy Has Disparate Impact on Model Accuracy”, the authors show how, for example, a gender classifier has much lower accuracy on black faces than on white faces, and that in a DP model this accuracy gap is much larger than in the equivalent non-DP model. In other words, if the original model is not fair, the unfairness is accentuated in the differentially private model.

Source: https://arxiv.org/pdf/1905.12101.pdf
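One way to make this gap visible is simply to compare per-group accuracy between the model trained normally and its differentially private counterpart (for instance, one trained with DP-SGD). A minimal sketch, assuming a test loader that also yields a group attribute such as skin tone or gender:

```python
from collections import defaultdict

import torch

def per_group_accuracy(model, loader):
    """Minimal sketch: accuracy broken down by subgroup. `loader` is assumed
    to yield (inputs, labels, group) batches, where `group` encodes the
    protected attribute (e.g. skin tone)."""
    correct, total = defaultdict(int), defaultdict(int)
    model.eval()
    with torch.no_grad():
        for x, y, group in loader:
            preds = model(x).argmax(dim=1)
            for g, ok in zip(group.tolist(), (preds == y).tolist()):
                correct[g] += int(ok)
                total[g] += 1
    return {g: correct[g] / total[g] for g in total}

# Usage sketch: compare the gap with and without differential privacy.
# acc_plain = per_group_accuracy(plain_model, test_loader)
# acc_dp = per_group_accuracy(dp_model, test_loader)  # e.g. trained with DP-SGD
```

If the accuracy spread across groups widens noticeably for the DP model, the privacy mechanism is amplifying the original unfairness.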

Conclusion

The previous examples show that fighting bias in models is not something that can be solved exclusively at the data level (and note that these examples are far from exhaustive). By applying various popular techniques to the models, their bias can be inadvertently accentuated. Understanding this negative effect first, and measuring it next, are the necessary steps to take the appropriate corrective actions.

Sources

“Characterizing Bias in Compressed Models”: https://arxiv.org/pdf/2010.03058.pdf
“What Do Compressed Deep Neural Networks Forget?”: https://arxiv.org/pdf/1911.05248.pdf
“Estimating Example Difficulty Using Variance of Gradients”: https://arxiv.org/pdf/2008.11600.pdf
“Differential Privacy Has Disparate Impact on Model Accuracy”: https://arxiv.org/pdf/1905.12101.pdf
