Did gender bias drive code review differences at Facebook?
Katherine Ullman

Maybe the problem here is that real research isn't what people are doing. In science, we seek to find the truth, not to prove our own preconceived notions about it. Because this is a hot-button topic, too many researchers set out to prove that bias does or does not exist instead of simply seeing what an objective approach would reveal.

Here is an idea I have yet to see tried. First, take a situation like code review and measure the rates of positive versus negative reviews for men and women in a sample (such as the coders at Facebook). Then have a large number of those same people solve unrelated coding challenges and submit their results to a central pool for review. By experimental design, no one knows who wrote which code when they review it. The question then becomes: do the male/female positive-versus-negative numbers stay the same when no one knows the sex of the person whose code they are reviewing, or do they come out differently?

If lack of knowledge does not affect the reviews, then for this sample it may simply be normal to get results that skew in favor of one gender or the other. There is some error in the design, since the code will differ and some reviewers may try to guess from style who submitted it. But if the numbers change when reviewers do not know whose work they are judging, then depending on how they change, you may have logical evidence for or against the bias being claimed. This might be a more practical approach to the problem.
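
The comparison described above can be made concrete with a simple statistical test. Below is a minimal sketch in Python, using entirely hypothetical counts (the real numbers would come from the experiment), that asks whether the positive-review rate for one group differs between the identified and blinded conditions more than chance would explain.

```python
# Minimal sketch of the proposed comparison, with hypothetical counts.
# Each row is [positive reviews, negative reviews] for code written by
# one group (e.g., women), under two reviewing conditions.
from scipy.stats import chi2_contingency

identified = [120, 80]   # reviewers knew the author's identity (hypothetical)
blinded    = [150, 50]   # reviews were fully blinded (hypothetical)

# A chi-square test on the 2x2 table checks whether the positive-review
# rate differs between the two conditions beyond what chance would explain.
chi2, p_value, dof, expected = chi2_contingency([identified, blinded])

rate_identified = identified[0] / sum(identified)
rate_blinded = blinded[0] / sum(blinded)
print(f"Positive rate (identified): {rate_identified:.2f}")
print(f"Positive rate (blinded):    {rate_blinded:.2f}")
print(f"p-value for the difference: {p_value:.4f}")
```

A small p-value here would suggest that knowing the author's identity changed review outcomes, which is exactly the kind of evidence the blinded design is meant to surface.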
