Is There Racial Bias in Artificial Intelligence?

Check the examples in the photo. Eyes, skin color, nose. Black people mislabeled as gorillas. Asian faces rejected because of “closed eyes” or flagged as “blinking”. Black noses reshaped into Caucasian nose types.

Photo: racial bias examples.

There is a fairly simple explanation for racial bias in face recognition AI algorithms. Most of these algorithms need examples to learn to recognize a face: an algorithm is “trained” on tens or hundreds of thousands of faces before it can recognize one “on its own”.

White people’s faces are (by far) the easiest for these systems to recognize and analyze, so they are used as the dominant (or only) training set to get the algorithms working. The problem comes when the algorithm later sees a Black or Asian face “in real life”: that face is then interpreted from a “white face point of view”.
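
This is also why bias audits report accuracy per demographic group instead of one overall number. Below is a minimal Python sketch of that idea; the data, group labels, and numbers are hypothetical, not results from any real system.

```python
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return the fraction of correct predictions for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical output of a model trained mostly on white faces:
preds  = ["match", "match", "no_match", "match", "no_match", "no_match"]
labels = ["match", "match", "match",    "match", "match",    "no_match"]
groups = ["white", "white", "black",    "white", "asian",    "white"]

print(accuracy_by_group(preds, labels, groups))
# {'white': 1.0, 'black': 0.0, 'asian': 0.0} -- the gap described above
```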

Hence: racial bias. 
Solution: train algorithms on a more racially diverse set of faces so they can learn the nuances (a small sketch of one way to balance the training mix follows below).
Commercial problem: many face recognition systems may not be able to handle faces at that level of detail. So they either launch biased, or they don’t launch at all.
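
As a rough illustration of the “solution” above, here is a sketch, assuming each training image carries a group label, of balancing the training mix by oversampling underrepresented groups. The file names and counts are made up; in practice the real fix is collecting genuinely more diverse data, and oversampling is only a stopgap.

```python
import random
from collections import defaultdict

def rebalance(samples, seed=0):
    """samples: list of (image_path, group) pairs.
    Oversample each group up to the size of the largest group,
    so no single group dominates the training mix."""
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for path, group in samples:
        by_group[group].append((path, group))
    target = max(len(items) for items in by_group.values())
    balanced = []
    for group, items in by_group.items():
        balanced.extend(items)
        # duplicate random examples from underrepresented groups
        balanced.extend(rng.choices(items, k=target - len(items)))
    rng.shuffle(balanced)
    return balanced

# Hypothetical, heavily skewed training set:
training_set = ([("face_001.jpg", "white")] * 900 +
                [("face_901.jpg", "black")] * 60 +
                [("face_961.jpg", "asian")] * 40)
print(len(rebalance(training_set)))  # 2700: 900 examples per group
```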