Helping people verify their identity first time

Updating Onfido’s capture experience to help people submit high-quality photos of their identity documents

Tom Bloyce
Onfido Product and Tech
5 min read · Nov 14, 2022

Overview

Context

At Onfido, we help protect businesses against fraud by verifying their customers’ identities. This process requires users to take a photo of their identity document and a selfie. Once submitted, machine learning algorithms check that both are genuine and match the selfie to the photo ID to verify the user. Most users provide their photos through our Smart Capture SDK, a drop-in set of screens customers integrate into their app.

Problem

Users frequently submit low-quality photos of their IDs. Low-quality photos make it harder for us to verify their identity, which often means they can’t access our customer’s services. These failed verification attempts cost our customers time and money as their would-be users must try again or give up and drop off entirely.

Goals

Our goals for this project were to:

  1. Help people provide high-quality photos of their IDs using our iOS and Android SDKs.
  2. Increase the number of people who get verified on their first attempt.
  3. Decrease the number of verifications that fail due to ID-related image quality issues.

Team

I led the design process and was responsible for running usability testing. My team included:

  • 1 x Product Designer (myself)
  • 1 x Product Manager
  • 1 x Engineering Lead
  • 3 x Engineers (1 iOS, 1 Android, 1 Backend)
  • 1 x User Researcher
  • 1 x Applied Scientist
  • 1 x Data Analyst

Process

Discovery

Through our data and user research, we discovered that:

  • Despite the SDKs already having blur detection, blurry photos were still getting submitted.
  • The most common image quality issues responsible for failed verifications were blur, glare or essential parts of the ID getting cut off (such as the MRZ on a passport photo page).
  • When reviewing a photo, ‘submit photo’ was always the primary button. Because of this, people often submitted photos inattentively, even when their details weren’t clear. They expected the SDK to tell them to retake their photo if it wasn’t good enough.
  • Due to the personal nature of identity documents, most users wanted to review their photos before they submitted them.

Exploration

I explored a number of different ways to prevent users from submitting low-quality photos, from automating the review step to capturing a video and using machine learning to select and enhance the best frames.

After discussing the pros and cons of each with the team, we decided on a dual approach. We’d use machine learning algorithms to detect the most common image quality issues and provide instant feedback to users during a smarter review step. With this approach, we’d be able to let users know if there’s an issue with their photo and how to fix it in real-time before submission.

Technical approach

Introducing these new algorithms meant choosing whether to run them in the SDK or on Onfido’s backend. We decided the backend would be best, as we’d be able to:

  1. Use the same solution across platforms instead of needing to create different algorithms for iOS and Android.
  2. Improve existing algorithms and introduce new ones to detect other issues (e.g. expired documents) without having to release a new version of the SDK each time.
  3. Use more powerful algorithms and not be constrained by the SDK’s file size.
  4. Build on top of the Web SDK’s existing Image Quality Service (IQS).

To accommodate this approach, I updated the user flow. Photos are now uploaded to the backend before users review them so that the algorithms can flag any issues first.
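As an illustration of this updated flow, here’s a minimal sketch in Python. All of the names here (the `analyze_image` call, `CaptureResult`, the issue codes, the screen names) are assumptions for illustration, not the real IQS API, and the “analysis” is a toy stand-in for the actual machine learning checks:

```python
# Hypothetical sketch of the updated capture flow: the photo is uploaded
# and analysed by the backend image quality service (IQS) BEFORE the user
# reaches the review step, so any detected issues can shape that screen.
# All names and the toy heuristic below are illustrative only.

from dataclasses import dataclass, field


@dataclass
class CaptureResult:
    photo_id: str
    issues: list[str] = field(default_factory=list)  # e.g. ["blur", "glare", "cutoff"]


def analyze_image(photo_bytes: bytes) -> CaptureResult:
    """Stand-in for the backend IQS call; a real client would upload the
    photo and parse the service's response into detected issues."""
    issues = []
    if len(photo_bytes) < 1024:  # toy heuristic in place of the real ML checks
        issues.append("blur")
    return CaptureResult(photo_id="doc-123", issues=issues)


def capture_flow(photo_bytes: bytes) -> str:
    # Upload + analysis happen first, so feedback is ready at review time.
    result = analyze_image(photo_bytes)
    if result.issues:
        return "review_with_feedback"
    return "review_ok"
```

The key design point is the ordering: because analysis completes before the review screen is shown, the SDK can render issue-specific guidance rather than discovering problems only after submission.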

Validating our approach

I created prototypes and ran two rounds of usability testing on the review step, each with ten participants. After we changed the primary button to ‘retake photo’ when an issue is detected, participants found the review step “makes more sense as it advises you what to do”.

We also ran an A/B test to measure the effectiveness of the new algorithms. The results showed a 16% decrease in identity checks that failed due to image quality issues. We also saw a 2% increase in the number of people successfully verified.

Solution

With the new algorithms, the SDKs give users feedback during the review step if there’s a problem with their photo. If an issue such as blur is detected, ‘retake photo’ becomes the primary button; if the photo is fine, ‘submit photo’ is. This change steers users to retake their photo when needed whilst still letting them review it themselves, providing more clarity without compromising control. To prevent users from submitting photos we know aren’t good enough, we now hide the submit option entirely if an issue is detected on their first attempt.
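The button logic described above could be sketched as follows. This is a minimal illustration, not the SDK’s implementation; the function name, the button labels, and the exact first-attempt rule are assumptions drawn from the description:

```python
# Sketch of the review-step button logic: 'retake photo' becomes the
# primary action when an issue is detected, and the submit option is
# hidden entirely on a first attempt with a detected issue.
# All names are illustrative assumptions.

def review_buttons(issues, attempt):
    if issues:
        if attempt == 1:
            # First attempt with a known issue: no submit option at all.
            return {"primary": "retake photo", "secondary": None}
        # Later attempts: steer towards retaking, but keep user control.
        return {"primary": "retake photo", "secondary": "submit photo"}
    # No issues detected: submitting is the expected next step.
    return {"primary": "submit photo", "secondary": "retake photo"}
```

Swapping the primary action rather than blocking outright preserves the control users said they wanted during discovery, while the first-attempt rule stops the clearly bad submissions.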

Impact

This project was a success on multiple fronts:

  • We released a new version of the SDK with the changes on iOS and Android. They’re now used by millions of people each month to submit photos of their IDs.
  • We’ve seen a decrease in the number of verifications that fail due to ID-related image quality issues and an increase in the number of people verified on their first attempt since this change. Both have helped contribute to improved customer conversion rates.
  • We put the foundation in place for the team to improve existing algorithms and introduce new ones without having to release a new version of the SDK each time.

Since finishing this project in late 2021, the team have made several improvements, including releasing an improved glare detection algorithm. The foundation we put in place meant we could deliver these improvements to users and start seeing results without relying on customers to adopt the latest SDK version.
