Glare detection — Our journey to help users take higher quality photos

Charlotte Sferruzza
Design @Onfido
6 min read · Dec 11, 2017
Photo: Onfido

As product designers in Onfido’s SDK team, our role is to create the best user experiences for people who need their identity verified as part of our clients’ onboarding flows.

Our SDK is a plug-and-play solution for our clients. They use us to verify identities and make their communities safer. The role of our SDK is to help users upload the best possible images for verification. We are constantly looking for ways to improve the quality of the images sent via the SDK in order to raise the pass rate, reduce fraud, and cut down on the cases where users have to re-submit images.

Problem

A few weeks ago, we noticed that a large number of checks uploaded via the SDK were failing because of glare. So, how could we help users take better-quality pictures?

Iteration 1: Implementing glare detection in our SDK

We decided to create a first prototype and test it with users, as a starting point for working on the UX.

Prototyping

Our research team created a glare detection algorithm that we implemented in our iOS SDK. In this first version, we provided feedback to the user after they took a picture.

The user takes a picture. If glare is detected, a pop-up appears saying “Glare was detected in the image” and allows the user to take a new picture.

Iteration 1: glare detection pop-up
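
Our research team’s actual algorithm is more sophisticated, but as a rough illustration, a naive post-capture glare check might flag an image when too many pixels are near-saturated white. The Swift sketch below is illustrative only; the function name and threshold values are our assumptions, not our production code.

    import CoreGraphics
    import Foundation

    // Naive glare heuristic (illustrative, not Onfido's algorithm):
    // flag the image when the fraction of near-white pixels is too high.
    func hasGlare(_ image: CGImage,
                  brightnessThreshold: UInt8 = 245,   // assumed cutoff
                  maxBrightFraction: Double = 0.02) -> Bool {
        let width = image.width
        let height = image.height
        var pixels = [UInt8](repeating: 0, count: width * height * 4)

        let brightCount = pixels.withUnsafeMutableBytes { buffer -> Int in
            // Render the image into an RGBA8 buffer so raw values can be read.
            guard let context = CGContext(
                data: buffer.baseAddress,
                width: width,
                height: height,
                bitsPerComponent: 8,
                bytesPerRow: width * 4,
                space: CGColorSpaceCreateDeviceRGB(),
                bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
            ) else { return 0 }
            context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))

            // Count pixels where R, G and B are all close to pure white.
            var bright = 0
            for i in stride(from: 0, to: buffer.count, by: 4) {
                if buffer[i] >= brightnessThreshold,
                   buffer[i + 1] >= brightnessThreshold,
                   buffer[i + 2] >= brightnessThreshold {
                    bright += 1
                }
            }
            return bright
        }

        // Flag the image when the bright fraction exceeds the tolerance.
        return Double(brightCount) / Double(width * height) > maxBrightFraction
    }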

User testing

We set up our first batch of user testing sessions with 10 users. Five users are enough to discover more than 75% of the usability problems. Our users all had different backgrounds and used technology in different ways: it was important for us to test with people of various age groups and levels of digital literacy.

During user testing, we looked at two main things:

  • Does the user understand what glare means?
  • Does the user understand how to avoid glare?

We gathered all the feedback from the users, and analysed how they performed the given actions:

  • How much time does the user spend on capturing a document?
  • How many attempts does the user need to capture a document?
  • How does the user hold their document?
  • How does the user avoid glare?
  • How does the user feel about this capture experience?

Insights

  • Users like being notified about glare. Most of them are not aware that glare is a problem.
  • Users think glare detection is too sensitive. They only want to be warned when glare covers important details on the document.
  • Users expect to be blocked from confirming a picture the first time glare is detected. Otherwise, they don’t see the point of having glare detection.
  • Users want more tips on how to avoid glare.
  • Users don’t want the pop-up to hide their picture.
  • Users want visual feedback over text.

Outcomes

Users liked the idea of glare detection, but not the interaction we tested. User testing gave us clear feedback about users’ expectations and helped us scope our next goal: focusing on a better user experience and interface.

Iteration 2: Live glare location

For this second iteration, we wanted to give users more insight into glare by showing them where it appeared on their image.

Prototyping

We created a simple interface: all the areas of the image that have glare pulsate in red. We wanted to grab users’ attention and allow them to see the glare on the image.

Iteration 2: live glare detection
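
For readers curious how such an overlay might be wired up: assuming the detector returns the bounding boxes of glare regions, a pulsing red highlight can be built with a simple CALayer opacity animation. The sketch below is an illustrative assumption, not our production UI code.

    import UIKit

    // Overlay a pulsing red rectangle over each detected glare region.
    // `glareRects` is assumed to come from the detection algorithm;
    // colors and timings here are illustrative.
    func showGlareOverlays(on previewView: UIView, glareRects: [CGRect]) {
        for rect in glareRects {
            let overlay = CALayer()
            overlay.frame = rect
            overlay.backgroundColor = UIColor.red.withAlphaComponent(0.4).cgColor

            // Pulse by animating the layer's opacity up and down indefinitely.
            let pulse = CABasicAnimation(keyPath: "opacity")
            pulse.fromValue = 0.2
            pulse.toValue = 0.8
            pulse.duration = 0.6
            pulse.autoreverses = true
            pulse.repeatCount = .infinity
            overlay.add(pulse, forKey: "pulse")

            previewView.layer.addSublayer(overlay)
        }
    }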

User testing

We set up user testing sessions with 5 users. We looked at three main things:

  • Does the user understand what glare means?
  • Does the user understand how to avoid glare?
  • Does the user understand what the red areas stand for?

Insights

  • Users don’t know what to do to avoid glare. They expect tips on how to avoid glare.
  • Users are confused by the red areas. They don’t see how they relate to glare.

Outcomes

We had first listened to our gut feeling (and users’ feedback!), which made us think that providing more detailed information about glare would help users avoid it. We were wrong: users were confused by this overload of information. Our next goal was to decrease the level of detail provided with glare detection.

Iteration 3: Live glare detection with improved user interface

For this third iteration, we wanted to give users information about glare without overwhelming them.

Prototyping

We split the document frame into four areas. Each of these areas would light up in blue if glare was detected. We added a pop-up below the document frame to tell the user what to do to avoid glare.

Iteration 3: live glare detection zoning and pop-up
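
To make the zoning concrete: mapping detected glare regions onto a 2×2 grid over the document frame could look something like the Swift sketch below. The function name and the intersection test are our illustrative reading of the UI described above, not our actual implementation.

    import CoreGraphics

    // Map detected glare regions onto a 2x2 grid over the document frame,
    // returning the indices of quadrants that should light up.
    // Illustrative sketch; grid size and logic are assumptions.
    func quadrantsWithGlare(in documentFrame: CGRect,
                            glareRects: [CGRect]) -> Set<Int> {
        let halfWidth = documentFrame.width / 2
        let halfHeight = documentFrame.height / 2
        var lit = Set<Int>()
        for (index, origin) in [
            CGPoint(x: documentFrame.minX, y: documentFrame.minY),  // top-left
            CGPoint(x: documentFrame.midX, y: documentFrame.minY),  // top-right
            CGPoint(x: documentFrame.minX, y: documentFrame.midY),  // bottom-left
            CGPoint(x: documentFrame.midX, y: documentFrame.midY),  // bottom-right
        ].enumerated() {
            let quadrant = CGRect(origin: origin,
                                  size: CGSize(width: halfWidth, height: halfHeight))
            // A quadrant lights up if any glare region overlaps it.
            if glareRects.contains(where: { $0.intersects(quadrant) }) {
                lit.insert(index)
            }
        }
        return lit
    }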

User testing

We set up user testing sessions with 6 users. We looked at three main things:

  • Does the user understand what glare means?
  • Does the user understand how to avoid glare?
  • Do the glare detection grid and text help the user avoid glare?

Insights

  • Users don’t connect the blue boxes with the glare message. Some of them thought the boxes showed data extracted from the document; others thought they were an edge detection feature.
  • Users follow the instructions provided to avoid glare.
  • Users notice the pop-up more than the visual feedback.
  • Users like the interface, the copywriting and the transitions.

Outcomes

We understood that the level of detail on glare detection was still too high. Users enjoyed the new pop-up, which let them see their picture while avoiding glare, but they didn’t use the blue boxes. Our next step was to clean up this interface to make it more efficient.

Iteration 4: Live glare detection, final UI

After three prototypes and user testing sessions, we were a lot more confident about our product than at the beginning of the project. We created a final version in which we kept only the pop-up.

After a final user testing session, we were confident that:

  • Users understand what glare is and where it is on the image
  • Users understand what to do to avoid glare
  • Users spend a bit more time on the capture process, but the image quality is better

Iteration 4: live detection pop-up

Results 🤓

We tackled this project with the idea of giving users very specific details about image quality to help them take better pictures. It seems obvious now that our initial solution was too complex for users. Iterative prototyping and user testing really helped us refine and improve the concept. We learned that the most important thing is to provide the right information at the right time.

This feature is now in production and being used by many clients. The results speak for themselves: the share of user pictures submitted with glare has dropped from 2% to less than 0.8%, a fall of more than 1.2 percentage points (a relative reduction of over 60%). Glare is now less of a problem, and more users can onboard to our clients’ online services.
