Case Study: Project Aria @ FRL

David Fisher
10 min read · Feb 22, 2023


Context

Between November 2020 and November 2022, I freelanced with Meta’s flagship R&D division, Facebook Reality Labs (FRL), via the product design agency Handsome. The work supported one of Meta’s most strategic research efforts, Project Aria, and consisted of several separate engagements over that two-year span.

My role was Product Design Lead, though I also took on a variety of other duties, including recruitment, UX research, and client & project management.

Project Overview

Before discussing the nuances of the work, it’s important to frame what Project Aria is and what its long-term objectives are. Project Aria is an ongoing R&D program that aims to uncover the requirements needed to develop AR glasses and bring them to market.

Meta’s Project Aria announcement video

The analogy I have found most useful for describing Aria is that it is closer to a research satellite or a Google Maps car than anything else. Aria is, in essence, a specialized device with an array of sensors designed specifically to capture data from a human eye-level perspective.

Joining the program as a research participant involves being issued a device, which requires configuration and calibration before research can begin in earnest.

Once the device is set up, participants are encouraged to partake in various recording tasks (created by research scientists) that are published via the Aria companion app. Once a recording task is selected & carried out by a participant, the data is recorded on the Aria device, and then uploaded at a later time when the device is charging.

The data gathered by the Aria device is then anonymized and used by teams across FRL for various purposes, including indoor mapping, object and context detection, as well as many other purposes related to machine vision and perception.

Project Composition

Client: Meta (formerly known as Facebook)

Duration: November 2020 to November 2022 (intermittent engagements)

The team consisted of the following:

  • David Fisher (Product Design Lead) ← yours truly
  • Product Designer (Handsome)
  • Delivery Lead (Handsome)
  • Group Product Manager (Meta) ← main client/decision maker
  • Design Manager (Meta)
  • UX Researcher (Meta)
  • Creative Technologist (Meta)
  • Machine Vision Research Engineer (Meta)
  • Multiple other internal teams on a consultation basis, including subject matter experts, Aria Operations, UX Research, Engineering & Legal (Meta)

Creative Brief

Since the collaboration began in 2020, our team’s brief was to help FRL evolve core parts of the Project Aria experience through improved product design. As the program began to scale and devices were issued to a wider internal population at Meta, a need arose to improve key aspects of the Aria user experience, most notably usability, user comprehension, and participant engagement.

We collaborated with Meta on three main areas of work:

  1. Improving Eye Tracking Calibration
  2. Research Tasks & Incentives
  3. Training & Onboarding

Items 2 and 3 above are covered in separate case studies.

Improving Eye Tracking Calibration

One of the most impressive capabilities of the Aria device is eye tracking. Aria is equipped with cameras that detect where your eyes are looking within your field of vision. Combined with the device’s other optical sensors, this allows research scientists to understand how human gaze functions.

However, in order to achieve a high level of accuracy, the eye tracking cameras integrated into Aria needed to be calibrated on an individual basis, as everyone’s face and eyes are slightly different.

Our team was tasked with helping improve the design of the calibration sequence, as the prototype they had designed internally was not yielding adequate calibration results.

Research

We began by analyzing how the existing calibration system worked.

As can be seen from the GIF above, the original calibration tutorial hinted that participants should wear the Aria glasses and move their heads in specific directions, but offered little else in terms of context or instruction.

To understand where the calibration was falling short, we performed some guerrilla usability testing on the existing tutorial and calibration process with volunteers within the organization who had not worked with the device before.

Our research yielded some valuable insights. Participants were unfamiliar with eye tracking and unclear on what the calibration process expected of them, so they defaulted to performing what they interpreted as being required.

Participants emulated the actions they were familiar with (Apple Face ID)

The memoji head movements were understood, but participants interpreted them as being similar to those required by familiar systems like Apple’s Face ID.

We wanted to understand on a deeper level what was required for this calibration, so we interviewed the eye tracking specialists & engineers who originally built this version.

What we learned from our research & testing was that the calibration process aimed to capture images of a participant’s eyes from multiple angles while the participant engaged what is known as the vestibulo-ocular reflex (VOR).

VOR can be thought of as our eyes’ built-in image stabilization system.

VOR in action (with bonus owl)

The GIF above illustrates how human eyes exhibit VOR by maintaining their forward focus, even when the head is rotated in any given direction. (Incidentally, owls are also well known for their excellent gaze stabilization, which allows them to keep their eyes fixed on a point even when their body moves dramatically.)

How head-mounted eye tracking works, courtesy of Tobii

Following on from our extensive research, we established that the correct way to perform calibration was to keep your eyes focused on the center of the screen while moving only your head.
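
A rough way to see why this works: if your gaze stays locked on a fixed point while your head rotates, your eyes must counter-rotate within your head by roughly the same angle, which is exactly what gives the eye-tracking cameras views of each eye from many different angles. The sketch below is purely illustrative (a small-angle simplification, not the actual pipeline):

```python
# If the gaze stays fixed on the screen while the head turns, VOR drives the
# eyes to counter-rotate inside the head by (approximately) the same angle.
def eye_in_head_yaw(head_yaw_deg, gaze_yaw_deg=0.0):
    """Approximate horizontal eye angle relative to the head, assuming the
    gaze direction stays fixed in the world."""
    return gaze_yaw_deg - head_yaw_deg

# Each head pose the calibration asks for therefore yields a different
# viewing angle of the eye for the eye-tracking cameras.
for head_yaw in (-30, -15, 0, 15, 30):
    print(head_yaw, eye_in_head_yaw(head_yaw))  # e.g. head at -30° → eyes at +30°
```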

Armed with our newfound understanding of VOR and what the calibration required, we began to think about how we could improve the process.

Designing a New Calibration Flow

An early whiteboard sketch of the ideal calibration flow

The first thing we did was to bring more structure to the calibration process in the form of a new interactive tutorial.

The original format felt disjointed and ambiguous. We decided that we needed to set the scene and inform participants that they were about to perform an important action, and simplified the process into digestible ‘chunks’ so that participants were not overloaded with instructions.

Introductory screens for the calibration experience

The calibration begins by setting some context. Participants are informed about the calibration process and then shown a short tutorial that sets expectations for what they will be asked to do.

Screens showcasing the specific calibration instructions

In this new version of the calibration tutorial, we replaced the animated memoji with a more minimal visual resembling a target. This was done to limit distraction for individuals performing calibration; we had noticed during usability testing that people would unwittingly try to maintain eye contact with the memoji animation, which reduced the accuracy of the calibration.

We also reasoned that the best way for participants to learn how to perform calibration was to practice. So for this design, we built a prototype that added a small arrow indicating the direction in which participants needed to move their heads in order to progress through the calibration steps. Participants would cycle through a few practice rounds before beginning the actual calibration.

In addition to this, we added a number to the center of the target, which would begin to count down once the participant’s focus/head was in the correct position.

This countdown also helped provide real time feedback for participants as they progressed through the calibration process.

We achieved this critical feedback mechanism in our prototype by using sensor data from the device performing the calibration (the front-facing camera and the accelerometer/gyroscope).
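
The gating logic is conceptually simple; the sketch below is a simplified illustration of it (not the actual Unity implementation, and the pose source, function names, and thresholds are hypothetical):

```python
import time

ALIGNMENT_TOLERANCE_DEG = 5.0  # hypothetical tolerance for "head in position"
HOLD_SECONDS = 2.0             # hypothetical countdown duration

def run_calibration_step(target_yaw_deg, target_pitch_deg, read_head_pose):
    """Count down only while the head is held at the requested angle.

    `read_head_pose` stands in for whatever returns the participant's head
    orientation (yaw, pitch) in degrees, e.g. fused accelerometer/gyroscope
    data combined with the front-facing camera.
    """
    countdown_started_at = None
    while True:
        yaw, pitch = read_head_pose()
        aligned = (abs(yaw - target_yaw_deg) < ALIGNMENT_TOLERANCE_DEG and
                   abs(pitch - target_pitch_deg) < ALIGNMENT_TOLERANCE_DEG)
        if aligned:
            if countdown_started_at is None:
                countdown_started_at = time.monotonic()  # start the countdown
            elif time.monotonic() - countdown_started_at >= HOLD_SECONDS:
                return True                              # step captured
        else:
            countdown_started_at = None                  # reset if the head drifts
        time.sleep(0.05)                                 # poll at ~20 Hz
```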

Our tutorial concluded by showing the participant some additional visuals before they began the calibration. On the left above is a tracking code, which the Aria glasses use to track the spatial distance between the glasses and the device performing the calibration. This tracking code proved to be very distracting to users, but due to the nature of the calibration, it was not possible to remove or minimize it.

We also noticed that many participants were encountering issues when holding their phone up in front of them to perform calibration, so we added an additional interactive visual (above right) which helped participants find the optimum angle/level for calibration by lining up the crosshairs.
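
As an illustration of how the crosshair guidance could work, here is a minimal sketch that assumes the alignment is driven purely by how far the phone deviates from an upright, level hold, as measured by its accelerometer (an assumption; the real prototype may have combined other signals, and the axis convention depends on the platform):

```python
import math

ALIGN_TOLERANCE_DEG = 3.0  # hypothetical threshold for "crosshairs lined up"

# The gravity direction measured when the phone is held upright in portrait.
# Assumed +Y here; the actual axis depends on the platform's sensor convention.
UPRIGHT_REFERENCE = (0.0, 1.0, 0.0)

def tilt_from_upright_degrees(gravity):
    """Angle (in degrees) between the measured gravity vector and the
    reference 'phone held upright' direction."""
    gx, gy, gz = gravity
    rx, ry, rz = UPRIGHT_REFERENCE
    norm = math.sqrt(gx * gx + gy * gy + gz * gz)
    if norm == 0:
        return 180.0  # no usable reading yet
    cos_angle = (gx * rx + gy * ry + gz * rz) / norm
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

def crosshairs_aligned(gravity_sample):
    """True when the phone is close enough to upright that the moving
    crosshair would overlap the fixed one."""
    return tilt_from_upright_degrees(gravity_sample) < ALIGN_TOLERANCE_DEG
```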

Calibration

various states of calibration feedback

Once participants had progressed through the tutorial, they could begin the calibration. The screens above show the calibration experience. Each screen would display the focus target, an arrow indicating which way users had to angle their head, and a countdown (outer-left image above).

Once the countdown had completed (inner-left image above), the target would pulse a white dot before the participant was prompted with a new direction to move their head toward (inner-right image above).

In order to provide feedback if a participant was moving too fast or if the calibration had not been very accurate, the target would change color to amber (outer-right image above).

Users were asked to complete roughly 10 cycles of calibration, moving their heads in a variety of different directions.
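
Putting the pieces together, the target’s feedback behaves like a small state machine. The sketch below captures the states described above (a simplified illustration; the cycle count and speed threshold are hypothetical):

```python
from enum import Enum, auto

NUM_CYCLES = 10                  # roughly ten head directions per calibration
MAX_HEAD_SPEED_DEG_PER_S = 40.0  # hypothetical "moving too fast" threshold

class TargetState(Enum):
    COUNTING_DOWN = auto()  # white target with countdown number
    CAPTURED = auto()       # white pulse, then advance to the next direction
    WARNING = auto()        # amber target: moving too fast / inaccurate capture
    DONE = auto()           # green check mark, proceed to confirmation screen

def next_state(cycles_completed, head_speed_deg_per_s, capture_ok, countdown_finished):
    """One step of the calibration feedback loop described above."""
    if cycles_completed >= NUM_CYCLES:
        return TargetState.DONE
    if head_speed_deg_per_s > MAX_HEAD_SPEED_DEG_PER_S or not capture_ok:
        return TargetState.WARNING
    if countdown_finished:
        return TargetState.CAPTURED
    return TargetState.COUNTING_DOWN
```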

Calibration confirmation animation

Once participants had achieved the required number of accurate calibrations, the target would display a green check-mark animation, and the flow would proceed to the confirmation screen at the end of the process.

Process

The priorities for this engagement revolved around getting Aria devices into research participants’ hands and allowing them to begin using the devices as soon as possible.

Our redesign of the calibration system had similarly tight time constraints, which required our team to deliver a result quickly.

Over the course of 5 weeks, we conducted research, guerrilla usability tests, interviews with subject matter experts, several rounds of design iteration & prototyping, and validation testing.

Given the small size of our team, we split the work across two main focus areas.

  • The structure of the tutorial & calibration (e.g. what screens we needed to provide appropriate context & guidance for participants)
  • The micro-interactions of calibration (how do we get participants to fix their gaze on a point, and prompt them to move their head in the appropriate direction?)

All while bringing the experience together in a cohesive way that we could test and iterate on rapidly.

Our design work quickly took shape, with a guided tutorial flow focused on setting participants’ expectations for what they would see and do, and an initial pass at an improved calibration UI.

Our calibration UI was tricky to design and prototype due to its interactive nature, so we worked collaboratively with a creative technologist to build a version of our calibration flow in Unity. This gave us fine-grained control over the motion design of the calibration UI and also allowed us to pull sensor data from the device to provide real-time feedback.

We distributed our Unity prototype via TestFlight and performed validation testing with participants within the organization, tweaking the design until we had a functional system.

Reflections

What went well

  • 🏠 Despite this being a short project (~5 weeks), our team came together to tackle the work under difficult circumstances (the entire team was new to remote working due to Covid-19).
  • 💻 This was our team’s first time working with Reality Labs as a client. Our team and clients were spread across 3 time zones (PT, CT, ET) and largely worked asynchronously, which made effective communication challenging; we overcame this with new tools and workflows.
  • 📓 Despite the technical nature of the project, our team managed to onboard very quickly. There is a significant learning curve to internalizing the nuances of eye tracking and how its calibration works from a physiological and technological perspective.

What didn’t go so well

  • 🕶 Access to test devices: We had extremely limited access to Aria devices. Because the device was so new, few units were available for testing purposes. This meant we had only an intermittent feedback loop for validating the success of our designs.
  • Time constraints: While time constraints are usually a good thing for most projects, the very short duration of this project and the client’s desire for a quick solution meant that we only had time to partially explore the solution space. While the solution we delivered worked to a high degree of accuracy, I feel it could have been refined to the point where little, if any, guidance text would be required.
  • ✏️ Design systems: While the value of design systems is clear and demonstrable within certain contexts and within organizations of a certain scale, that value does not always carry over to other contexts. In our case, we were constrained to build our proposed experience from existing components of the Facebook Design System. While this did allow us to build quickly, it also came with tradeoffs. For this project we chose to focus the bulk of our efforts on the calibration UI and built the rest of the experience using existing components and patterns. These tradeoffs allowed us to focus on calibration accuracy while still creating a relatively seamless end-to-end experience.
  • ⚒️ Control over final design: Because we were working with a client-side creative technologist to prototype the more complex aspects of our calibration experience, some design changes were made on the fly for the sake of development velocity. While this was done with good intentions, the final deliverable included a number of these changes, which reduced the cohesion of the overall design.

If you made it this far, thank you for your time. I would be delighted to discuss this project further or answer any questions or comments you may have.

You can reach me by email here or follow me on Twitter 👋🏻


David Fisher

Independent Designer based in Brooklyn, NY | Specialized in Product & Experience Design for emerging technologies