Promoting Safety and Fairness with Real-Time ID Check

Dan Svirsky · Uber Under the Hood · 4 min read · Oct 29, 2021

by Dan Svirsky, Senior Applied Scientist — Policy Research, and Uttara Sivaram, Global Head of Privacy and Security Policy

For Uber, safety is at the core of everything we build. This is because we’re not just connecting people online — we’re connecting people in person, in real time, in the real world. We’ve worked with emergency response agencies to make it easy for users to push a button and call for help, and we’ve developed technology to detect car crashes. We’ve partnered with safety and consumer advocates to standardize how safety incidents should be measured and categorized, making these efforts public so that other organizations can contribute to this ongoing work. We’ve developed screening processes so that users feel comfortable putting trust in the person driving a vehicle or delivering food.

However, these screenings can only be useful if we can verify that the person driving passengers or delivering food is who they say they are. One way we do this is by using a feature called Real-Time ID Check, which prompts drivers and delivery people to take a selfie to confirm that they’re the same person who went through all the necessary screenings to drive or deliver on our platform.

In most places where Uber operates, the selfie and the driver or delivery person’s profile photo are then reviewed by an automated facial verification tool developed by Microsoft Azure.¹ If the photos are similar enough, the driver or delivery person is able to log right in. Where Microsoft’s tool detects a difference, three human reviewers are asked to review the pictures.
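To make the routing logic concrete, here is a minimal sketch of the automated gate described above. The threshold value, helper names, and data shapes are our own illustrative assumptions, not Uber's production code or the Azure Face API; the one property the sketch preserves is that an uncertain score is never rejected outright, only escalated.

```python
# Illustrative sketch only: threshold and names are hypothetical,
# not Uber's or Azure's actual implementation.
from dataclasses import dataclass

AUTO_APPROVE_THRESHOLD = 0.9  # assumed confidence cutoff


@dataclass
class VerificationResult:
    """Similarity score returned by the facial verification service."""
    confidence: float


def route_check(result: VerificationResult) -> str:
    """Approve immediately on a high-confidence match; anything
    uncertain is escalated to human review, never auto-rejected."""
    if result.confidence >= AUTO_APPROVE_THRESHOLD:
        return "approved"
    return "human_review"
```

Under this sketch, a clear match logs the driver straight in, while a borderline score simply routes the photos to the reviewers described below.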

The use of facial verification technology is not a step we take lightly. We know from work by academics² and activists that such technology has historically worked worse for people with dark skin complexions. And the problem affects other groups, too, like transgender people³ and people with certain craniofacial syndromes⁴. People from these groups have raised concerns about the use of facial verification at any point in the identity check process. Below, we’ve summarized the main safeguards that we have put in place to use this technology responsibly.

First and most important: every case is decided by human review. No one can permanently lose access to the Uber platform based solely on the facial verification step. The technology first checks each photo for quality, and it can report an immediate positive match if the verification confidence is high enough. But if there is any doubt, three human reviewers examine the photos and make the ultimate determination. All reviewers take a face matching test developed by cognitive psychologists who specialize in facial recognition. Reviewers must score an accuracy rate that would put them above the median of matchers tested in the original research.⁵ Those who reach these accuracy rates go through additional training, weekly quality audits, and weekly coaching sessions.
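The escalation path above can be sketched as follows. Note the aggregation rule is our assumption: the post says only that three reviewers make the ultimate determination, so the simple majority vote here is illustrative, not a description of Uber's actual process.

```python
# Hypothetical sketch of the three-reviewer escalation step.
# The majority rule is an assumption for illustration only.
from collections import Counter
from typing import List


def human_review_decision(verdicts: List[str]) -> str:
    """Combine three independent reviewer verdicts
    ('match' / 'no_match') into a final determination."""
    assert len(verdicts) == 3, "the process uses three reviewers"
    tally = Counter(verdicts)
    decision, _ = tally.most_common(1)[0]
    return decision
```

With three reviewers and two possible verdicts, a majority always exists, so the sketch never needs a tie-breaking rule.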

Second, users are able to appeal when they feel that something has gone wrong. The team that reviews these cases can assess a selfie in the full context of a user’s recent account history and can review previously submitted selfies to check for appearance changes over time.

Third, we conducted internal fairness testing to make sure the identity verification technology worked well for users with dark skin complexions. Our testing found no evidence that the technology flags people with darker skin complexions more often, or that the additional human review creates longer wait times for them. This is consistent with our canvass of the academic literature comparing facial recognition tools, which found not only that Microsoft’s technology compares favorably to others for people with dark skin complexions and transgender people, but also that it continues to improve considerably.⁶ Still, we know we have more work to do. In the coming months, we plan to expand our internal testing to (1) assess the system’s effectiveness and accuracy along dimensions beyond complexion, (2) solicit feedback from external groups and experts, and (3) add to the academic literature and public discussion on this issue, helping public- and private-sector groups that are grappling with whether this technology can be used responsibly.

The combination of human review and automated facial verification helps us strike a balance between helping to keep rides and deliveries safe and ensuring that drivers and delivery people continue to enjoy open, equal access to work opportunities with Uber. We also feel that our ongoing vigilance to test for bias helps us honor our commitment to being an anti-racist company. This work is always guided by our internal Fairness Working Group, which includes members of our Employee Resource Groups such as Black at Uber, Los Ubers, and Pride at Uber. We feel that this is vital because these conversations need to be centered around the communities who face the most potential harm from such technology. These are complex issues to navigate, and as we seek the right solutions for our platform, we must be open and transparent about how we reach them. Our work will continue.
