Pangiam Continues to Eliminate Bias in Face Recognition Algorithms

Cyrus Behroozi
Published in Trueface
Sep 7, 2022 · 3 min read

If you have followed our journey at Trueface, a Pangiam company, you’ll already know that eliminating bias in our face recognition algorithms is a top priority. We believe that machine learning algorithms should not discriminate against any ethnicity or minority group, and should have equitable performance across all demographic groups. To that end, we are pleased to report that our latest model, TFV5.1, achieved a large decrease in bias across all ethnicity and gender groups.

This post is a continuation of two previous blog posts outlining our drive to quantify and eliminate bias. If you are new to Trueface, you can catch up here and here. In keeping with our commitment to transparency, I will share the results of the same evaluation on our newest model, TFV5.1. This is the very same model used in our most recent NIST FRVT 1:N submission, where we ranked as the 5th most accurate algorithm in the West in the Visa Kiosk category.

For this evaluation, we once again use the Fairface dataset, which contains a balanced number of face images from seven major ethnic groups and no more than a single image per identity. In the evaluation, we generate a face recognition template for each image in the dataset, then compare every template against every other to generate similarity scores. Generally, when quantifying the performance of a face recognition model, we generate and plot a Detection Error Tradeoff (DET) curve. However, since every comparison in this evaluation is an impostor match (a comparison of two different identities), we instead plot the False Positive Rate (FPR) against the similarity threshold. A lower, flatter curve indicates better performance, because it means there are fewer false positives at any given threshold.
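The exact evaluation code is not shown here, but the idea can be sketched in a few lines of Python. The helper below is a hypothetical illustration, not our production tooling: given the similarity scores of all impostor comparisons for one demographic group, it computes the FPR at each threshold. The group names and randomly generated scores are stand-ins for real impostor comparisons.

```python
import numpy as np

def fpr_vs_threshold(impostor_scores, thresholds):
    """False Positive Rate at each similarity threshold.

    Every comparison here is an impostor match (two different identities),
    so any score at or above a threshold counts as a false positive.
    """
    scores = np.asarray(impostor_scores)
    return np.array([(scores >= t).mean() for t in thresholds])

# Hypothetical inputs: impostor similarity scores per demographic group.
# Real scores would come from comparing face templates across identities;
# random draws merely stand in for them here.
rng = np.random.default_rng(0)
thresholds = np.linspace(0.0, 1.0, 101)
group_scores = {
    "Group A": rng.beta(2, 8, size=50_000),
    "Group B": rng.beta(2, 9, size=50_000),
}

for group, scores in group_scores.items():
    fpr = fpr_vs_threshold(scores, thresholds)
    print(group, fpr[::25])  # lower, flatter values mean fewer false positives
```

Plotting one such curve per demographic group produces charts like the ones discussed below.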

Comparison of bias in face recognition models

The plots above compare the bias in three of our face recognition models: TFV4, TFV5, and TFV5.1. As can be seen in the charts, the green line (TFV5.1) is significantly lower than both the orange (TFV5) and blue (TFV4) lines on every subplot. This indicates that TFV5.1 exhibits less bias than TFV5 and TFV4 for all demographic and gender groups.

Overall FPR against similarity threshold.

When we combine the demographic and gender groups above into a single overall FPR plot, TFV5.1 performs significantly better than the two preceding models at every similarity threshold. Hence, the TFV5.1 algorithm delivers better accuracy while also ensuring minimal bias across different groups.
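As a simple illustration of that combination step, and under the same assumptions as the sketch above, the overall curve can be obtained by pooling the impostor scores from every group before computing the FPR:

```python
# Continuing the sketch above: pool every group's impostor scores,
# then compute a single FPR curve over the pooled set.
all_scores = np.concatenate(list(group_scores.values()))
overall_fpr = fpr_vs_threshold(all_scores, thresholds)
```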

As leaders in the computer vision industry, we have a responsibility to achieve parity of performance across all ethnicities and genders. The benefits of this powerful technology should be equitable. Let us state plainly that we will not stop until this goal has been achieved. We are excited to realize the advantages of this technology together.
