Jeremias Rößler
Aug 25, 2017

I agree that bias can be removed from AI, one costly bias at a time. What that costs, and whether it is worth the effort, is a whole different discussion.

But first you have to be aware of a bias in order to de-bias at all. The given examples are biases that were detected “after the fact”, in the field. That was my point: you cannot de-bias a system before you have even recognized a bias. And because the reasoning of subsymbolic AI cannot be followed, you have no good means to analyze it and detect the bias “while it happens”.

So the problem as I see it is this: the AI says don’t give that person a job / loan / probation / donated organ / whatever. How can you make the AI “explain” its reasoning? How can you make sure the decision is not unfair?
