Isaac Wolkerstorfer
Aug 9, 2017 · 1 min read

Maybe this is a little nitpicky, but in my mind “affirmative” is (as the definition at the beginning goes) “favoring those who tend to suffer from discrimination”, while de-biasing an algorithm is merely about making sure it doesn’t reproduce the biases society already has. As an example: an algorithm that hires men and women at the same rate is de-biased; an algorithm that hires more women than men, or applies a quota for women, or something of that type, could be said to be “affirmative”. Or at least, I think that’s how people would interpret it: affirmative action isn’t just about being even-handed going forward; it’s about actively redressing wrongs and accounting for systemic injustice by attempting to tip the scales in the other direction.
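
To make the contrast concrete, here’s a minimal sketch in TypeScript (not from the original post; the `Candidate` shape, the threshold, and the `boost` value are all hypothetical) of what “de-biased” versus “affirmative” selection might look like:

```typescript
// Toy contrast between "de-biased" and "affirmative" selection.
// Everything here is hypothetical and purely illustrative.

interface Candidate {
  score: number;    // model's estimate of the candidate
  group: "A" | "B"; // "B" stands in for a historically disadvantaged group
}

// De-biased: one group-blind threshold, plus a check that the
// resulting selection rates come out (roughly) equal across groups.
function selectDebiased(pool: Candidate[], threshold: number): Candidate[] {
  const hired = pool.filter((c) => c.score >= threshold);
  const rate = (g: Candidate["group"]) =>
    hired.filter((c) => c.group === g).length /
    pool.filter((c) => c.group === g).length;
  console.log(`selection rate A: ${rate("A").toFixed(2)}, B: ${rate("B").toFixed(2)}`);
  return hired;
}

// "Affirmative": deliberately tip the scales the other way, here via
// a hypothetical score boost for group B before applying the threshold.
function selectAffirmative(
  pool: Candidate[],
  threshold: number,
  boost = 0.1
): Candidate[] {
  return pool.filter(
    (c) => (c.group === "B" ? c.score + boost : c.score) >= threshold
  );
}
```

The first function is group-blind and only verifies parity after the fact; the second explicitly favors one group, which is the extra step I’m pointing at.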

De-biasing algorithms (like in the paper you linked) should be the absolute minimum requirement; encoding human prejudices into machines is a horrible sin. But I don’t want people to think that merely fixing algorithms to not be racist is the same as “affirmative action”. Affirmative action would take it one step further and ask “how do we create algorithms that favor people from historically disadvantaged groups?” Which, y’know, I’m not opposed to, but I think it’s a different matter.
