A model must be right for the right reasons → we cannot rely on models that reach the right decision based on noise. (e.g., require the classifier to make its decision without using race or background information).
Data-driven AI is the new kid on the block → but people mostly value prediction over explanation. (this may not be the best direction going forward). How can we make sure the classifier is not biased for certain reasons? (the bias is in the data → we don’t want the model to pick it up). (a shift in thinking is needed → explanation is much more important → we need to consider fairness and transparency as well). (we should not let the model take advantage of race and similar attributes). (this is a far stronger requirement → but more rewarding).
There are many methods to remove bias → but those methods still have problems. (we want a model that cannot represent the bias from the ground up → this can lead to lower accuracy → but the drop may simply reflect not exploiting the bias → this is exactly what is done here → map the data into a new representation.).
A Domain-Adversarial Neural Network (DANN) can be applied to achieve this. (a shared feature extractor feeds both a prediction model and a protected-concept model). (during training, two different losses are computed).
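The two-loss setup above can be sketched as a single adversarial objective: minimize the task loss while maximizing the protected-concept loss. This is a minimal illustration, assuming logistic-style losses computed elsewhere; the function name and weighting parameter `lam` are illustrative, not from the paper's code.

```python
# Hedged sketch of the DANN-style training objective: the feature
# extractor is trained to do well on the task but badly on the
# protected concept (adversarial term), weighted by lam.
def dann_loss(pred_loss, protected_loss, lam=1.0):
    """Total objective = task loss minus lam * protected-concept loss."""
    return pred_loss - lam * protected_loss

# Example: task loss 0.4, protected-concept classifier loss 0.7, lam = 0.5
total = dann_loss(0.4, 0.7, lam=0.5)
print(total)  # ≈ 0.05: the adversarial term reduces the objective
```

In practice the minus sign is usually realized implicitly via a gradient reversal layer rather than by negating the loss directly.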
Quite an interesting loss → the gradient reversal layer is a good method. (interesting). (so the gradient itself is used to do all of this).
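The gradient reversal trick can be sketched in a few lines: the layer is the identity in the forward pass, but flips the sign of the gradient in the backward pass, so the feature extractor is pushed to hurt the protected-concept classifier. This is a toy manual-backprop sketch, not the paper's actual implementation; the class name and `lam` parameter are assumptions.

```python
import numpy as np

class GradientReversal:
    """Identity forward; multiplies the incoming gradient by -lam backward."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # flip (and scale) the gradient

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
g = np.array([0.1, 0.2, -0.3])
print(grl.forward(x))   # identical to x
print(grl.backward(g))  # sign-flipped, halved gradient
```

Everything upstream of this layer (the feature extractor) therefore ascends the protected-concept loss, while the protected-concept head itself still descends it.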
Two portions of the ImageNet data were combined.
Additionally, data augmentation was performed → and the network structure is a simplified version of VGG-16 → contextual information can be used to perform classification → e.g., for killer whales, the beach can give the class away.
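The notes do not say which augmentations were used; a minimal sketch with two common image augmentations (random horizontal flip and random crop) on numpy arrays follows. The function name and crop size are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, crop=24):
    """Randomly flip an HxWxC image horizontally, then take a random crop."""
    if rng.random() < 0.5:
        img = img[:, ::-1, :]  # horizontal flip along the width axis
    h, w = img.shape[:2]
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    return img[top:top + crop, left:left + crop, :]

img = np.zeros((32, 32, 3))
print(augment(img).shape)  # (24, 24, 3)
```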
The model trained on contextual information gives good results. (but with DANN → we can prevent this from happening too much).
As seen above → more focus is given to the animal. (does the model focus on the background or on the animal?). (the background should not be used for making decisions). (this idea of removing bias is a very good method).
We need to make models that make fair and unbiased decisions. (this is critical!). (this paper proposes a method that limits the model from learning unwanted information).