Aug 25, 2017 · 1 min read
I agree that bias can be removed from an AI, one costly bias at a time. What that costs, and whether it is worth the effort, is a whole different discussion.
But first you have to be aware of a bias in order to de-bias. The examples given are biases that were detected “after the fact”, in the field. That was my point: you cannot de-bias before you have even recognized a bias. And because the reasoning of a subsymbolic AI cannot be followed, you have no good means to analyze its decisions and detect a bias “while it happens”.
So the problem as I see it is: the AI says don’t give that person a job / loan / probation / donated organ / whatever. How can you make the AI “explain” its reasoning? How can you make sure the decision is not unfair?
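To make the question concrete: even without following a network’s internal reasoning, you can probe an opaque model from the outside. Below is a minimal sketch of one such probe, permutation importance, on a made-up loan scenario: shuffle one input feature at a time and measure how much the model’s accuracy drops. All data and names here are hypothetical, and a large importance for a protected attribute is only a red flag, not proof of unfairness.

```python
# Sketch: probe an opaque model for bias via permutation importance.
# All data and feature names are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic "loan" data: income, debt, and a protected attribute
# (encoded 0/1). The label deliberately leaks the protected
# attribute, to simulate a biased training set.
income = rng.normal(50, 15, n)
debt = rng.normal(20, 8, n)
protected = rng.integers(0, 2, n)
label = ((income - debt + 10 * protected + rng.normal(0, 5, n)) > 35).astype(int)

X = np.column_stack([income, debt, protected])
X_train, X_test, y_train, y_test = train_test_split(X, label, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time on held-out data and measure the
# drop in accuracy. A large drop for "protected" is a red flag
# that the model's decisions depend on it.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(["income", "debt", "protected"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

The limitation cuts the other way too: a near-zero importance for the protected attribute does not prove fairness, because the model may lean on proxies for it (say, a postal code that correlates with it). That is exactly why detecting a bias before it does damage in the field remains hard.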
