The Role of Facial Recognition & Our Commitment Moving Forward

Shaun Moore
Trueface
Jun 18, 2020
This is our community. These are our roots.

In the past few weeks, a global spotlight on injustice has brought police use of facial recognition technology under scrutiny. That scrutiny stems from the technology's reported bias. While some facial recognition providers have said they will no longer support police departments' use of the technology, others have stated they will continue to market and sell it to police.

What is missing from the statements of large providers like IBM, Microsoft, and Amazon is (a) an explanation of why bias exists in their facial recognition solutions and (b) what they are doing to address (and hopefully mitigate) those biases. Below, we address both.

Bias in facial recognition models exists not because the companies producing them are biased, but because of a lack of balanced, representative data. The performance of computer vision (of which facial recognition is a subset) depends first and foremost on the quantity and quality of data. Algorithms require millions of clean, annotated data points, and the more information an algorithm can learn from, the better it can identify its target outcome. Present it with 10,000 images of an apple and it will correctly identify an apple; ask it to identify an orange and it will be stumped. In other words, the subjects in the training data must match the real-life subjects the computer is asked to identify, whether that is a piece of fruit or a person.
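To make that point concrete, here is a minimal, illustrative sketch (in Python, and not Trueface code) of the kind of balance check one might run over a training set's annotations before training. The group labels and counts are hypothetical; the takeaway is simply that a model mostly learns what it mostly sees.

```python
from collections import Counter

# Hypothetical demographic annotations for a face-recognition training set.
# The labels and their proportions are illustrative only.
training_labels = ["White", "White", "White", "White", "East Asian", "Black"]

counts = Counter(training_labels)
total = sum(counts.values())

# If one group dominates the training data, accuracy on the
# under-represented groups will tend to lag once the model is deployed.
for group, n in counts.most_common():
    print(f"{group}: {n} images ({n / total:.0%} of the dataset)")
```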

While the industry has acknowledged the implicit bias in facial recognition for years, we, at Trueface, are one of the few companies that have developed a plan to mitigate bias in our facial recognition software and are executing against it. More than two months before companies like IBM, Amazon, and Microsoft released their statements about bias in facial recognition technology, we publicly released our results on the FairFace dataset, measuring the ethnicity and gender bias present in our production-grade face recognition solution. A minimal bias does remain across ethnicities, but we are leading the facial recognition industry in closing that accuracy gap.
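For readers curious what a per-group evaluation of this kind can look like, here is a generic sketch (not our actual benchmark code) that computes accuracy for each demographic group and the gap between the best- and worst-served groups. The column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical evaluation results: one row per recognition trial, with the
# group label of the probe image and whether the model's decision was correct.
results = pd.DataFrame({
    "ethnicity": ["Black", "Black", "White", "White", "East Asian", "East Asian"],
    "correct":   [True, False, True, True, True, False],
})

# Accuracy per group, and the spread between the best and worst groups.
per_group = results.groupby("ethnicity")["correct"].mean()
accuracy_gap = per_group.max() - per_group.min()

print(per_group)
print(f"Accuracy gap across groups: {accuracy_gap:.3f}")
```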

We support the call for federal regulation of facial recognition technology when it is used by police departments, and we recommend civilian oversight and regular reviews of the technology's role in judicial cases.

At Trueface, we have always built our solutions with all of humanity in mind. We believe that through the responsible use of computer vision (including facial recognition), we can all live in a safer and smarter world. This affirmation acts as a north star for our company, guiding us to ensure our technology is used to benefit society as a whole. Accountability is built into every node of our business, and we will continue to push forward with the same principles of humanity first, data security, and total transparency that we published in 2018.

If anyone, and we do mean anyone, would like to speak with a member of our team to learn more about our commitment to our partners and customers and our contribution to society at large, please reach out here, and we will be in touch.
