[ Archived Post ] Turning a Blind Eye: Explicit Removal of Biases and Variation from Deep Neural Network Embeddings

Please note that this post is for my own educational purposes.

A neural network → can encode bias from the training data → how can we remove this bias → and improve classification accuracy on a very biased dataset?

Currently, → CNNs are used for many classification tasks → however, they come with their own set of problems. (a large face dataset → composed of celebrities → can have biases such as the females being younger and white). (big data is very useful → but we need to take these biases into account in the model → an individual must know why a certain decision was made → how can we make sure this happens? → have to look more into being right for the right reasons). (a woman → is more likely to appear in a kitchen → but that should not be the only criterion). (faces labelled with ancestral origin → a new data set was developed).

Domain adaptation → improves the performance of a model → so it performs well across different tasks/domains. (again, a lot of related work → such as gradient-based approaches → has been done).

This data set → contains spurious variation → different queries such as "German" as well as "boy", "girl", "woman", and "man" were used. (duplicates were removed).

What the dataset looks like. (IMDB → the data set is very noisy → images might contain other actors as well). (the authors used the Microsoft Azure Face API to clean up the data set).

The data set itself contains bias in the form of imbalanced classes.

Learn a latent variable → that is good for classifying the primary task but not predictive of the bias variable → very similar to DANN → a very good approach.

Joint learning and unlearning → learn good features → that are not biased → so here → we are manipulating the loss function → the objective function itself. (classify the first task → but become invariant to another task). (so learning a robust set of features → a good approach).
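The "unlearning" part of the objective can be sketched as a confusion loss: the cross-entropy between the bias classifier's softmax output and a uniform distribution, which is minimized exactly when the embedding carries no information about the bias variable. A minimal numpy sketch (my own toy implementation, not the authors' code):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def confusion_loss(bias_logits):
    """Cross-entropy between the bias head's softmax output and a uniform
    distribution over the k bias classes. Lowest (== log k... wait, it is
    minimized, value log k) when the bias head is maximally confused."""
    p = softmax(bias_logits)
    k = bias_logits.shape[1]
    return -np.mean(np.sum((1.0 / k) * np.log(p + 1e-12), axis=1))

# Perfectly confused bias head (uniform output) -> loss == log(k);
# a confident bias head -> strictly larger loss.
uniform = confusion_loss(np.zeros((4, 3)))
confident = confusion_loss(np.array([[10.0, 0.0, 0.0]]))
```

Minimizing this term with respect to the feature extractor (while the bias head tries to stay accurate) is what pushes the bias information out of the embedding.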

The full algorithm can be seen above → an adversarial training method. (so it does look like → learning a latent variable).
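The algorithm alternates between two steps: (a) update the bias head to predict the bias variable from the current (frozen) features, and (b) update the feature extractor and primary head on the primary loss plus a weighted confusion loss. A hypothetical skeleton of that schedule (the function names are my own, not the authors'):

```python
def alternating_train(update_bias_head, update_features, n_steps):
    """Run the alternating joint learning-and-unlearning schedule.

    update_bias_head: fits the bias classifier on frozen features (step a).
    update_features:  updates extractor + primary head on
                      primary_loss + alpha * confusion_loss (step b).
    Each callable returns its current loss value.
    """
    log = []
    for _ in range(n_steps):
        log.append(("bias", update_bias_head()))  # step (a)
        log.append(("main", update_features()))   # step (b)
    return log
```

The key design point is that the bias head and the feature extractor optimize opposing objectives on the bias variable, which is what makes the training adversarial.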

Now we can actually see this in a real-world setting → from a dataset that contains only young women and old men → perform age prediction.

The above is a histogram of the artificially biased dataset. (age prediction here is treated not as a regression task → rather as a classification task over age bins).
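Constructing such an artificially biased subset, and binning ages into classes for classification, can be sketched as follows (the age cutoffs and bin width are my own illustrative assumptions, not the paper's exact values):

```python
import numpy as np

def biased_subset(ages, is_female, young_max=30, old_min=40):
    """Keep only young women and old men, confounding age with gender."""
    keep = (is_female & (ages <= young_max)) | (~is_female & (ages >= old_min))
    return ages[keep], is_female[keep]

def age_to_class(ages, bin_width=10):
    """Treat age prediction as classification: map ages to decade bins."""
    return ages // bin_width  # e.g. 0-9 -> 0, 10-19 -> 1, ...

ages = np.array([25, 55, 28, 45])
female = np.array([True, False, False, True])
sub_ages, sub_female = biased_subset(ages, female)  # keeps 25 (F), 55 (M)
```

In the resulting subset gender perfectly predicts the age bin, which is exactly the spurious shortcut the baseline network can exploit.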

We can see that with the blind network → the learned features → are not separable by gender alone → however → with the baseline network → we can simply look at the gender to make a prediction. (wow, such an interesting find).
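One simple way to check this kind of separability is to probe the embeddings with a trivial classifier: if even a nearest-centroid rule can recover gender from the features, the bias is still encoded. A toy numpy sketch (my own probe, not the visualization the authors used):

```python
import numpy as np

def gender_probe_accuracy(embeddings, gender):
    """Nearest-centroid probe: assign each embedding to the closer of the
    two gender centroids; high accuracy means the features still encode
    the bias variable, chance-level accuracy suggests it was removed."""
    c0 = embeddings[gender == 0].mean(axis=0)
    c1 = embeddings[gender == 1].mean(axis=0)
    d0 = np.linalg.norm(embeddings - c0, axis=1)
    d1 = np.linalg.norm(embeddings - c1, axis=1)
    pred = (d1 < d0).astype(int)
    return (pred == gender).mean()
```

On a baseline network's embeddings this probe should score well above chance; on the blind network's embeddings it should drop toward 50%.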

Women → tend to be overestimated → while men are underestimated → however, the blind network → normalizes this.

So this method → really does reduce bias → very interesting → quite a powerful model. (also, we can remove multiple bias variables → and the table above shows how removing certain biases affects performance).

This paper proposed a method of removing bias → inspired by domain adaptation. (and it gives really good performance → by removing bias).

https://jaedukseo.me I love to make my own notes my guy, let's get LIT with KNOWLEDGE in my GARAGE