FoQus

Boyan Li
21 min read · May 30, 2019

100% focus wherever you are.

Introduction

FoQus is a Mixed Reality (MR) headset and application design that lets users virtually adjust and modify their study environment so they can stay productive and distraction-free throughout a study session. It supports two main functions:

  1. Users can replace the background of their study environment with a completely different virtual environment, with the option of having other users of the system work around them.
  2. Users can remove distractions from their current environment in the MR visual and audio display.

Target Users

Our target users are college students with desk work (reading, coding assignments, and the like) that requires high levels of focus to complete. In addition to benefiting college students, our design will also work well for office workers who need to focus on completing their work.

Team

We are a team of four Computer Science students at the University of Washington. Boyan, Teran, Thomas, and William are UX researchers and designers; all four conducted user research interviews and usability testing sessions and participated in making the paper prototypes. Boyan served as the project manager, scheduling team meetings, and led the digital mockup for the MR headset. Teran led discussions on usability testing, self-heuristic evaluations, and simplifying the feature set. William and Thomas led the digital mockup for the MR UI.

Background

Problem

When a person becomes distracted from their work, hours of productivity can be lost. A study by Gloria Mark, an HCI professor at the University of California, Irvine, found that it takes an average of about 23 minutes to return to the original task after an interruption. All of this lost time can bleed into other aspects of a person’s life, which in turn can come back to negatively affect their studies, creating a vicious cycle of playing catch-up. Gloria Mark told the New York Times that her research showed distractions also lead to higher stress levels and a worse mood. Distractions are difficult to avoid because they can be present in all kinds of environments: noise, people walking around, an environment that is too familiar, and so on.

Motivations

As college students ourselves, we and our friends experience the negative impact of distractions on a daily basis. All four of our team members have actively sought ways to reduce distractions: going to quiet study rooms, using noise-canceling headphones, turning on “Do Not Disturb” mode on our phones, and so on. We believe there should be a more effective solution than these existing methods. We propose a solution that removes visual distractions and reduces audio distractions from a person’s environment, and that motivates them to work by placing them in a virtual environment with virtual users working around them.

User Research

Users

We conducted design research interviews with four individuals. All participants are young (19–23) and tech-savvy. Two are undergraduate students at the University of Washington, one is a first-year Ph.D. student at the University of Washington, and the last is a full-time software engineer. We chose students because they are a group that needs to focus consistently, and we interviewed both undergraduate and graduate students to understand the different problems each might face when doing work. We also interviewed a software engineer because we thought they might have similar problems, especially in an open office space. Our primary participants are the undergraduate students, our secondary participant is the graduate student, and our tertiary participant is the full-time software engineer.

Research Method

For user research, we focused on a single method, the interview, because it was efficient to run and let us gather more information in a limited amount of time than other methods would. Each interview followed the same general structure: introduction, kickoff, building rapport, grand tour, reflection, and wrap-up. We notified participants that their answers would be kept confidential, and we conducted the interviews in quiet spaces. Some of the questions we asked were “Where do you usually study?”, “Do you study in different environments for different tasks?”, and “In what study environment do you focus well?”. During each session, we focused on listening to the participants’ experiences, which gave us good insight into the themes and tasks we needed to focus on.

Research Results

Our design research results consist of themes we discovered while interviewing our participants. We found three:

  1. People are more focused when working or studying around other people who are also focused on their work.
  2. People’s biggest distractions are the applications, social media, and websites they can access on their phones and laptops.
  3. People prefer different environments for different tasks.

From these themes, we derived six initial tasks: feel motivated to work, understand how media impacts focus and productivity, understand how the physical environment impacts focus, get alerts, share productivity data, and remove distractions. We chose two tasks to design for: making users feel motivated to work and removing distractions. We chose the first because all of our participants stated in interviews that they work best when motivated by outside factors. We chose the second because it directly addresses the design problem we set out to solve: by removing distractions, we increase the user’s focus on the task at hand. For the two chosen tasks, we created the following scenarios.

Task 1: Feel Motivated to Work

Jim is an undergraduate student at the University of Washington with an exam the upcoming Friday night and other assignments due by the end of the weekend. Thursday at 6 PM, he decides to start studying for the exam, but he thinks that watching a short YouTube video wouldn’t hurt. He loses track of time and ends up watching videos for a whole hour, and he is ashamed of himself. Jim feels like he would never have gotten distracted in the library, because other people studying around him would have motivated him to study too. The next morning, Jim borrows his friend’s FoQus headset and puts on the MR device. He chooses to work in a virtual library with other users working around him. Looking up, he sees all these people doing work around him in the virtual study space, feels motivated by the social pressure, and gets started. Two hours later, Jim looks up and can’t believe how much studying he’s gotten done; if he had gotten distracted like he normally does, it would have taken him four hours! Jim is happy and ready to work on his homework assignments.

Task 2: Remove Distractions

Sam is an undergraduate student at the University of Washington trying to get some reading done. Since it’s a bright, sunny day, he decides to work outside. Soon after he gets there, several children pour out of a bus and start screaming and running around. Unluckily for him, it’s Engineering Day at UW. The loud noise and the motion in his peripheral vision make it difficult for him to focus. Fortunately, Sam has the latest version of the FoQus headset. He fires the device up and puts it on. Sam chooses to keep working in his current environment, then chooses to remove the children from his view and to remove the background noise. As he returns to his reading, he is able to give it his complete focus; the background noise and the scene of children playing are completely gone.

Prototypes and User Testing

Initial Paper Prototype

Initial Overview

Initial Hardware Prototype

Our hardware design is a Mixed Reality (MR) headset. The circles drawn on the paper prototype are low-latency cameras, the bars are low-latency microphones, and the two round pieces on the headband are noise-canceling earpieces. The front of the headset is a screen facing the user.

Cameras: The low-latency cameras capture the user’s physical environment in real time and display it on the screen inside the MR headset. Even when users replace their background entirely, the cameras still need to capture the user’s chosen workstation, hands, and arms to render them in real time in the virtual work environment. The cameras also function as sensors that track what is happening around the user: if they detect a physical threat, the MR device terminates the program and pulls the user back to the real environment immediately.

Microphones: The low-latency microphones capture the sound of the user’s physical environment in real time and play it through the earpieces. Like the cameras, they also function as sensors and pull the user back into reality in an emergency, such as a fire alarm.
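Our write-up doesn’t specify how the microphones would recognize an emergency like a fire alarm. As a rough illustration only, here is a minimal Python sketch of one way the audio feed might be screened; the frequency band, threshold, and function name are assumptions for this sketch, not part of the design.

```python
import numpy as np

# Assumed band: smoke alarms commonly beep near 3 kHz, but this is an
# illustrative guess, not a tuned or certified detector.
ALARM_BAND_HZ = (2800.0, 3400.0)

def sounds_like_fire_alarm(audio_chunk: np.ndarray, sample_rate: int) -> bool:
    """Return True if the chunk's dominant frequency falls in the alarm band."""
    spectrum = np.abs(np.fft.rfft(audio_chunk))
    freqs = np.fft.rfftfreq(len(audio_chunk), d=1.0 / sample_rate)
    dominant = freqs[np.argmax(spectrum)]
    return ALARM_BAND_HZ[0] <= dominant <= ALARM_BAND_HZ[1]
```

If a chunk of microphone audio trips a check like this, the headset would terminate the virtual scene and return the user to unfiltered pass-through, as described above.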

Initial Props & UI Components

The upper-left figure shows props we used in our paper prototype, such as background images, the user’s workstation, and people. The upper-right images show components of the user interface for our application on the MR device.

Initial Task Flows

Task 1: Feel Motivated to Work

Figure 1.1, Figure 1.2, Figure 1.3

Figure 1.1: The user is studying in their dorm room, and they would prefer to study in a different environment with other people working around them.

Figure 1.2: The user puts on our MR device and sees the current screen on their MR device. A prompt asks if the user would like to work in a virtual location or if they would like to work in their current location. The user selects “Virtual Location” by clicking on it.

Figure 1.3: The MR application automatically detects the user’s workstation boundaries using computer vision algorithms. The user can adjust the boundaries of their defined workstation. In this case, the auto detection was accurate, and the user clicks on “Confirm”.
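The post leaves the detection algorithm unspecified. As one plausible sketch (our assumption, using OpenCV; the function name is hypothetical), the application could propose the largest quadrilateral in view as the workstation boundary and let the user adjust it:

```python
import cv2
import numpy as np

def detect_workstation_boundary(frame: np.ndarray):
    """Propose a workstation boundary as the largest quadrilateral in view.

    Returns corner points (or None), which the UI would then let the
    user drag to adjust before confirming.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    # Approximate the contour by a polygon; a desk edge usually reduces
    # to roughly four corners.
    peri = cv2.arcLength(largest, closed=True)
    approx = cv2.approxPolyDP(largest, epsilon=0.02 * peri, closed=True)
    return approx.reshape(-1, 2)
```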

Figure 1.4, Figure 1.5, Figure 1.6

Figure 1.4: The user can now select the background for the virtual environment in which they would like to work. Each option has a preview image and a title. The user selects “Library”.

Figure 1.5: The background is now a Library. The user can then choose whether they would like to work around other people. The user selects “Yes”.

Figure 1.6: Other users who are also using our design to study or work now appear in the virtual environment the user has chosen. These are real users who are on the system at that moment; however, the system only displays pre-recorded stock videos to represent the kind of work they are doing (typing on a computer, reading, writing, etc.), and no user information is displayed. The user is now in a virtual environment with other people working around them, and they are motivated to work! Task 1 accomplished.

Task 2: Remove Distractions

Figure 2.1, Figure 2.2, Figure 2.3

Figure 2.1: The user is studying in a coffee shop where several patrons are having loud conversations and are continuously getting up and moving around. The user is having a difficult time focusing with these surrounding distractions.

Figure 2.2: The user puts on our MR device and sees the current screen on their MR device. A prompt asks if the user would like to work in a virtual location or if they would like to work in their current location. The user selects “Current Location” by clicking on it.

Figure 2.3: The user is prompted to circle any visual distractions they would like removed from their view. The user circles the people they would like removed with their finger, and then clicks “Confirm”.
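How the circled region would actually disappear is not specified in the design. One common technique is image inpainting, which fills a masked region from its surroundings; here is a minimal sketch using OpenCV (our assumption, with a hypothetical function name):

```python
import cv2
import numpy as np

def remove_circled_region(frame: np.ndarray, circle_points) -> np.ndarray:
    """Erase a user-circled region from the pass-through view.

    `circle_points` is the list of (x, y) points traced by the user's
    finger. The region is masked and filled in from the surrounding
    pixels, so the distraction disappears.
    """
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.array(circle_points, dtype=np.int32)], 255)
    # Telea inpainting fills the masked area from its border inward.
    return cv2.inpaint(frame, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
```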

Figure 2.4, Figure 2.5

Figure 2.4: The user looks to find that the people circled have been removed from their view. They want to reduce some of the background noise, so they click on the cog in the top right corner to open up the settings menu.

Figure 2.5: The user clicks on the speaker icon to open up the volume settings.

Figure 2.6, Figure 2.7

Figure 2.6: By default, the sound setting lets the user hear the environment at its current volume. The user reduces it by dragging the bar to the left.

Figure 2.7: Now, the background noise has been set to near nothing, and the user can concentrate on finishing their work.

Heuristic Evaluation Feedback, Discussions & Revisions

Feedback 1: We had the intent for the user to be able to upload their own custom images or soundtracks, but there was no clear way that they could do this.
Discussion 1: We designed a companion mobile application that lets users connect their phone to the MR headset via Bluetooth and upload custom background pictures and soundtracks. However, this companion app was eventually removed after our team performed another self-heuristic evaluation following the second usability test; we removed it to make the overall design more lightweight.

Feedback 2: Users cannot customize notification alerts when wearing the MR headset. For instance, the user should be able to set a threshold so that if someone calls a number of times in a row, the user receives a notification.
Discussion 2: The original problem we set out to tackle was giving college students a way to study in a distraction-free environment, and one of the distractions we wanted to remove was phone notifications. We decided that allowing the user to receive non-essential notifications through our device would be counterproductive to that goal, and that if the user is waiting for a specific email, call, or notification, they probably shouldn’t be using our design in the first place.

Feedback 3: Our existing design doesn’t take into account a situation where the user gets up and starts moving around while wearing our device. This is a potential safety concern, as we don’t want our users to be disoriented in case of an emergency.
Revision 3: To fix this, we have decided to use cameras and microphones on our hardware to detect when the user is in motion. When the user is in motion, their view will be transported back to their physical location and they will receive a notification about what has occurred. The user will also be snapped back to reality in case of a detected emergency, such as a fire.
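The revision doesn’t say how motion would be detected. A simple possibility (our assumption, with illustrative thresholds) is frame differencing on the pass-through cameras: when the headset itself moves, nearly every pixel changes between consecutive frames, while a seated user changes only small regions.

```python
import cv2
import numpy as np

def user_in_motion(prev_gray: np.ndarray, curr_gray: np.ndarray,
                   pixel_delta: int = 25, moving_fraction: float = 0.3) -> bool:
    """Guess whether the wearer is walking from whole-frame change.

    Both thresholds are illustrative guesses, not tuned values.
    """
    diff = cv2.absdiff(prev_gray, curr_gray)  # per-pixel intensity change
    changed = np.count_nonzero(diff > pixel_delta) / diff.size
    return changed > moving_fraction
```

When a check like this fires, the headset would drop back to the unfiltered pass-through view and show the notification described above.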

Heuristic Evaluation Revision 3

Feedback 4: Our design may be too cumbersome or heavy, and the user might have trouble carrying it around or get tired of wearing it for long periods of time.
Discussion 4: In real life, our design would be built with lightweight materials like carbon fiber to reduce the weight of the headset. We decided not to change the paper prototype based on this feedback because the weight of the prototype does not represent the weight of the actual device. We also considered contact lenses as an alternative form factor, but they are not friendly to users who prefer wearing glasses.

User Testing

We conducted user testing with three stakeholders: two undergraduate students and one graduate student. Although software engineers are also among our stakeholders, we could not find a time for a session with one due to schedule conflicts. Given the short amount of time we had to conduct three tests, we reached out to people we know who fit our stakeholder groups but did not know about our project, and we asked them to be our participants. All three tests took place in quiet spaces, either a reserved conference room or a library, so the sessions would not be interrupted by loud noises or moving people and there would be tables large enough to lay out all of our paper prototype components. For tests 1 and 3, two team members participated: one as the facilitator and the computer, the other as the observer. For test 2, one team member served all three roles. We also performed a self-heuristic evaluation after the second usability test.

Protocol: We started each testing session by introducing ourselves and inviting the participant to introduce themselves. We then told the participant that we would assign them some tasks to accomplish using our design, that they were encouraged to ask questions whenever they wanted, and that nothing they did in the session would be considered “wrong.” We encouraged the participant to think aloud, speaking their mind at each step of a task. We then went over the problem our design tries to solve and gave an overview of its three main components: the MR headset, the MR application, and the mobile companion application. Finally, we presented the tasks to accomplish:

  1. You are currently studying in your room and you feel like the environment is too familiar for you to focus. You would like to feel like you are studying in a library. How would you do that?
  2. You are currently studying in a coffee shop. People there are distracting because they are talking and moving around. You would like to feel like you are studying alone in the coffee shop. How would you do that?
  3. Let’s say you are in a virtual environment with a background of the library, and you are not completely satisfied with some of the settings. How would you change them?
  4. You would like to upload a picture you took at the park to be the virtual work environment background and put yourself in that virtual environment to study or work. How would you do that?

We made some changes to this protocol after the first two usability tests. In the revisions before test 3, we removed the companion mobile application, so Task 4 was dropped from that session. We also asked more questions in the final reflection (after the tasks were finished) about any confusion the participant had raised while thinking aloud through the tasks. This worked well: it prompted the participant to explain why they felt confused in the first place and gave us clearer direction on how to solve those issues.

Revisions After User Testing

Updated Paper Prototype Overview

Physical Sound and Brightness Adjustment Buttons, Back Lens of the 360 Camera
Front Lens of the 360 Camera, UI Component Overview

List of Revisions

Revision 1, Revision 2, Revision 3

Revision 1: We now use a string to represent the workstation boundary.

Revision 2: We added a home menu to the MR app.

Revision 3: We changed the “Turn off” button in the settings menu to a home button.

Revision 4, Revision 5, Revision 6

Revision 4: Similar to the work boundary issue, we will add strings to represent what distractions the user has chosen to circle.

Revision 5: In each of the settings pop-ups, we will add a back button.

Revision 6: We changed the sound settings menu to the following: a master sound bar that controls the overall output level, plus two separate bars, one for background sound and one for music (see the sketch after this list). We also added a prompt in the settings menu to connect the user’s phone if they want to play music.

Revision 7: We changed the design of the “Remove Visual Distractions” pop-up by adding an “Undo” button and a “Remove” button.

Revision 8: We added info pop-ups. If users do not understand the prompts, they can open the info pop-ups to see a more detailed explanation.

Revision 9: We added arrows to inform users how to expand and collapse the settings menu.

Revision 10, Revision 11

Revision 10: We added Bluetooth pairing pop-ups.

Revision 11: We added a Bluetooth button on the MR headset to start pairing mode.

Revision 12

Revision 12: We added the brightness adjustment buttons and sound adjustment buttons to the MR headset.

Revision 13

Revision 13: Instead of using small cameras, we now have two large lenses on the front and back that together act as a 360° camera capturing the surrounding environment. The larger cameras clarify how the device functions.
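As referenced under Revision 6, here is a minimal sketch of how the three sound bars might combine, assuming float audio samples in [-1, 1]; the function and parameter names are ours, not part of the design:

```python
import numpy as np

def mix_output(background: np.ndarray, music: np.ndarray,
               bg_level: float, music_level: float,
               master_level: float) -> np.ndarray:
    """Combine pass-through audio and music under a master level.

    Each level is a 0.0-1.0 slider value taken from the corresponding
    sound bar; the master bar scales the final mix. The result is
    clipped to [-1, 1] to avoid distortion.
    """
    mixed = bg_level * background + music_level * music
    return np.clip(master_level * mixed, -1.0, 1.0)
```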

Final Prototype

Final Overview

Hardware Digital Mockup, UI Digital Mockup

Final Task Flows

Task 1: Feel Motivated to Work

Figure 3.1, Figure 3.2

Figure 3.1: The user is studying in their dorm room, and they would prefer to study in a different environment with other people working around them.

Figure 3.2: The user puts on our MR device and sees the current screen on their MR device. A prompt asks if the user would like to work in a virtual location or if they would like to work in their current location. The user selects “Virtual Location” by clicking on it.

Figure 3.3, Figure 3.4, Figure 3.5

Figure 3.3: The MR application automatically detects the user’s workstation boundaries using computer vision algorithms. The user can adjust the boundaries of their defined workstation. In this case, the auto detection was accurate, and the user clicks on “Done”.

Figure 3.4: The user can now select the background for the virtual environment in which they would like to work. Each option has a preview image and a title. The user selects “Library”.

Figure 3.5: The background is now a library. The user is now prompted to adjust their volume settings. They feel comfortable with the sound around them as it is, and they don’t like to listen to music while studying. They click “Done” to move on to the next steps.

Figure 3.6, Figure 3.7

Figure 3.6: The user can then choose whether they would like to work around other people. The user selects “Yes”.

Figure 3.7: Other users who are also using our design to study or work now appear in the virtual environment the user has chosen. These are real users who are on the system at that moment; however, the system only displays pre-recorded stock videos to represent the kind of work they are doing (typing on a computer, reading, writing, etc.), and no user information is displayed. The user is now in a virtual library with other people working around them, and they are motivated to work! Task 1 accomplished.

Task 2: Remove Distractions

Figure 4.1, Figure 4.2, Figure 4.3

Figure 4.1: The user is studying in a coffee shop where several patrons are having loud conversations and are continuously getting up and moving around. The user is having a difficult time focusing with these surrounding distractions.

Figure 4.2: The user puts on our MR device and sees the current screen on their MR device. A prompt asks if the user would like to work in a virtual location or if they would like to work in their current location. The user selects “Start in your Current Location” by clicking on it.

Figure 4.3: The user is prompted to circle any visual distractions they would like removed from their view. The user circles the people they would like removed with their finger, and then clicks “Remove”.

Figure 4.4, Figure 4.5, Figure 4.6

Figure 4.4: Once the user is satisfied that their environment is distraction free, the user can press the “Done” button to move on.

Figure 4.5: The user taps the ‘Tap Here to Connect a Device for Music’ section to pair the headset with their phone via Bluetooth. This action puts the headset in pairing mode; the user could alternatively press the physical Bluetooth button on the device.

Figure 4.6: The user is prompted to use their phone to pair the device.

Figure 4.7, Figure 4.8, Figure 4.9

Figure 4.7: Once the device is paired, a success message is displayed, which the user can dismiss via the “Done” button.

Figure 4.8: The user can move to the next step once satisfied with the sound settings.

Figure 4.9: The user has removed the distractions around them, and is now ready to study! Task 2 is completed.

Project Retrospect

What We Learned

Through our user research, we discovered common themes among our primary users, undergraduate college students: people’s biggest distractions are the applications, social media, and websites they can access on their phones and laptops; people prefer different environments for different tasks; and people are more focused when working or studying around other people who are also focused. Through our user testing, we learned that clarity and simplicity matter to our users. Our most salient modifications consisted of adding features that made it clear how to perform certain tasks with the FoQus headset and removing a feature to simplify it: we added info buttons and pop-ups, back buttons, and boundaries to selections, and we removed the mobile companion app (added after the first heuristic evaluation session, removed in the self-heuristic evaluation after the second usability test).

Limitations and Constraints

We identified several limitations and constraints: the weight of the headset limits portability; the visual text prompts limit readability for people with dyslexia; the reliance on button clicks limits use of the FoQus headset for people with hand disabilities such as cerebral palsy; and the English-only text prompts exclude non-English speakers. To reduce the weight, we can build the headset with lightweight materials. To assist people who have dyslexia or hand disabilities, we can add voice assistance that allows the headset to be operated through audio commands. To serve non-English speakers, we can add support for various languages to accommodate different users.

Future Roadmap and Impact

Our project can evolve by becoming more integrated into people’s daily lives. We can broaden our user base to college students around the world, including those with reading or motor disabilities. We can make the FoQus headset part of people’s daily study habits so that they can have 100% focus wherever they are.

Credits

Coffee Shop Photo Credit:
https://www.maxpixel.net/Cafe-Coffee-Jebudo-The-Coffee-Shop-Indoor-2081857

Library Photo Credit:
Photo by DAVID ILIFF. License: CC-BY-SA 3.0
URL of the license: https://creativecommons.org/licenses/by-sa/3.0/us/

Settings Cog Icon, Information Icon & People Icon Credit:
Made by Freepik/Flaticon.
https://www.freepik.com/
Flaticon.com
Flaticon is licensed by CC 3.0 BY
URL of the license: http://creativecommons.org/licenses/by/3.0/

Trash Icon & Eye Icon Credit:
Made by Gregor Cresnar.
https://www.flaticon.com/authors/gregor-cresnar
Flaticon is licensed by CC 3.0 BY
URL of the license: http://creativecommons.org/licenses/by/3.0/

Undo Icon & Home Icon Credit:
Made by Dave Gandy.
https://www.flaticon.com/authors/dave-gandy
Flaticon is licensed by CC 3.0 BY
URL of the license: http://creativecommons.org/licenses/by/3.0/

Skip Button Icon & Play Button Icon Credit:
Made by Smashicons.
https://www.flaticon.com/authors/smashicons
Flaticon is licensed by CC 3.0 BY
URL of the license: http://creativecommons.org/licenses/by/3.0/

Background Image Icon Credit:
Made by Onlinewebfonts.
http://www.onlinewebfonts.com/icon
Onlinewebfonts is licensed by CC 3.0 BY
URL of the license: http://creativecommons.org/licenses/by/3.0/

All other images are CC0 Public Domain
