Don’t touch your face

A tool that reminds you when you are touching your face

Atharva Patil
The Startup
9 min read · Apr 23, 2020


Quick Summary

I built Don’t touch my face to alert myself when I am touching my face. As the project expanded, I opened it up for others. Below is a case study of the design process.

You can check the web app here.

Feedback on touching face/not touching face.
Demo video of the final app

Motivation

We touch our faces a lot. Most of the time we don’t even know we are doing it. One study found that, on average, people touch their faces 23 times an hour, which is a lot. We rest our faces on our hands while reading a book or a long article on a website, watching YouTube videos, or just talking to someone. This involuntary action has suddenly turned into a concern.

The spread of COVID-19 changed the severity of this action, as the virus can potentially spread by touching our faces. I tried to be cognizant of my involuntary behavior, but I still ended up touching my face a lot.

Changing such an involuntary habit overnight is not possible, as I don’t even consciously realize I am doing it.

To address this issue, I needed to become aware of when I was involuntarily touching my face. I could ask my roommate to tell me when I did it & return the favor. I could wear a helmet all day to make sure my face was not accessible to my hands. But there had to be a better way of going about it.

Ideation

Since I wanted to address the problem for myself, I had to start by observing my own behavior.

The most significant behavioral change came from entering a self-imposed quarantine on 12th March. With travel outside restricted, I spend most of my time indoors, more specifically in front of my laptop.

My laptop is now my control tower. I message friends, attend classes & even birthday parties using it. It is the object I spend most of my time interacting with.

Screenshot of daily laptop usage.

Using the Screen Time app, I could see how long I was using my laptop daily. I spend about 70% of my waking time in front of, or in the vicinity of, my laptop, which in addition to being a productivity tool is also a medium of entertainment & social interaction in these times of social distancing.

I started thinking about how my laptop could become an assistant/tool for me to track my involuntary behavior. It comes armed with multiple sensors that understand its environment (camera, microphone, etc.) & a set of output mediums (visual, audio, tactile, etc.).

I could put a sticky note on my screen to remind myself not to touch my face (I tried & it failed). Or I could have my laptop alert me whenever I touched my face: with machine learning, I could build a classifier that knows whether or not I am touching my face.

Using my webcam as the source of input, with good & bad positions as the output, the app could give me feedback whenever I am touching my face.

Interaction framework illustration. If good pose no feedback. If bad pose audio feedback.
Interaction framework
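
To make the framework concrete, here is a minimal sketch of that loop in JavaScript. It assumes a hypothetical classifyCurrentFrame() helper that returns the label predicted for the latest webcam frame; it is an illustration of the idea, not the project’s actual code.

```javascript
const alertSound = new Audio("alert.mp3"); // any short audio cue

async function checkPose() {
  // classifyCurrentFrame() is a hypothetical stand-in for the trained classifier
  const label = await classifyCurrentFrame(); // e.g. "neutral", "touching left", "touching right"

  // Good pose: stay silent. Bad pose: play the audio alert.
  if (label.startsWith("touching")) {
    alertSound.currentTime = 0;
    alertSound.play();
  }

  requestAnimationFrame(checkPose); // keep watching in the background
}

checkPose();
```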

Design Process

Teaching the machine

For the machine to know whether I am touching my face, it needed to learn to differentiate between the two states. To train the machine learning model, I used Teachable Machine’s image classifier to learn the good & bad states from images of myself.

I chose the platform as it lets me deploy my trained model to the cloud immediately after training. So even if I chose to work from different hardware later, I could still access my web app, which runs locally once the model is loaded.
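
Loading the exported model in the browser takes only a couple of calls. A rough sketch, assuming the tfjs & @teachablemachine/image scripts are included on the page & using a placeholder model URL rather than the project’s actual one:

```javascript
// The model URL below is a placeholder, not the project's actual model.
const MODEL_BASE = "https://teachablemachine.withgoogle.com/models/<model-id>/";

async function loadClassifier() {
  // Teachable Machine exports a model.json + metadata.json pair to the cloud
  const model = await tmImage.load(
    MODEL_BASE + "model.json",
    MODEL_BASE + "metadata.json"
  );
  return model; // model.predict(videoElement) gives per-class probabilities
}
```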

Teaching the machine what not touching the face looks like
Teaching the machine what touching the face looks like

While training & testing, I realized the model worked better if I gave it more explicit data than just showing it what touching or not touching my face looks like. I trained it with 3 classes: neutral, touching my face with my left hand, & touching my face with my right hand. Doing this improved the resulting accuracy.

& then it broke.

I moved from my table to my bed to do some light reading & thought I could keep the classifier running in the background to test it out. The results were inaccurate & I saw a lot of false positives. I had made one glaring mistake in my hypothesis.

“The context of use was the laptop, not sitting in front of the laptop on the desk”

To solve this issue, I moved to a different model on the Teachable Machine platform: PoseNet. It takes a camera feed/image & identifies 17 key points of the human body, such as the eyes, nose, & knees. I redid the exercise to teach the machine the difference between touching & not touching my face. The results turned out better across different locations.

This technical shift also came with an unexpected advantage. It would be independent of who is sitting in front of the webcam, as the algorithm only sees the relative positions of body parts, which lets the same functionality be used by anybody.
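
A rough sketch of what the PoseNet-backed flow looks like with the @teachablemachine/pose library (the model URL is a placeholder, & this is illustrative rather than the exact project code):

```javascript
// Assumes the tfjs & @teachablemachine/pose scripts are included on the page.
// The classifier only sees the 17 key points, not the raw pixels, which is
// why it generalizes across different people & rooms.
const POSE_MODEL = "https://teachablemachine.withgoogle.com/models/<model-id>/";
let model;

async function startWatching(webcamCanvas) {
  model = await tmPose.load(POSE_MODEL + "model.json", POSE_MODEL + "metadata.json");

  async function loop() {
    // Step 1: PoseNet extracts key points (eyes, nose, wrists, ...)
    const { posenetOutput } = await model.estimatePose(webcamCanvas);
    // Step 2: the trained classifier scores each class from those key points
    const predictions = await model.predict(posenetOutput);
    const top = predictions.reduce((a, b) => (a.probability > b.probability ? a : b));
    console.log(top.className, top.probability.toFixed(2));
    requestAnimationFrame(loop);
  }
  loop();
}
```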

I took my roommate’s help in improving the training data & tested out a few models before settling on one.

The next step was designing an interface.

Overall user flow map detailing potential user journeys & app knowledge level.
Overall system map.

Interface Design

I was initially building a simple web app to modify my own behavior. When using the app for personal use, as someone who had a peek behind the curtain, I only needed to see the current states to be able to debug.

Original minimalist interface design showing percentage probabilities of different states.
Original minimalist interface design

Due to the refined algorithm, I could extend the app to other people who are stuck at home & predominantly on their computers. The assumed user scenario was a passive use case:

  • Users open the app website.
  • They find out what the app does.
  • They activate the alerting feature.
  • They continue working on their daily activities.
  • They get an audio alert when they unconsciously touch their face.

As I was designing for people who didn’t know what was going on behind the scenes, I wanted to convey 2 things:

  • How the app works
  • Privacy

I wanted to unmask the technology in a friendly manner without using any jargon, making it easy to understand at a broad level, from kids to tech novices.

The app requires constant webcam access, which can raise potential privacy concerns. I wanted to assure users that the app doesn’t collect any data & performs all the computations locally in the browser.

As an ethical design choice, I wanted to inform the users about the data usage & privacy before even asking them for camera permissions.
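
In practice, that means the page never asks for the camera on load. A sketch of the consent-first flow, with illustrative element ids rather than the project’s actual markup:

```javascript
// Element ids are illustrative, not the project's actual markup.
document.querySelector("#start-button").addEventListener("click", async () => {
  try {
    // The browser shows the camera permission prompt only at this point,
    // after the user has read the privacy explanation & clicked the CTA.
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    const video = document.querySelector("#webcam");
    video.srcObject = stream; // frames stay in the browser; nothing is uploaded
    await video.play();
  } catch (err) {
    console.log("Camera permission was denied:", err);
  }
});
```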

With these design choices, I created a single page web app where users could access the functionality.

Top fold of the app

The top fold simply introduces the app functionality with a CTA to begin using it.

On activating the functionality, users get visual & audio feedback.

On scrolling, they can find out how the app works & find a link to learn more about it in detail.

The next section gives a peek into data & privacy as the app constantly requires camera access.

Finally, the motivation behind designing it & verified medical resources for users.

Coding

I used a combination of Vanilla JavaScript & p5.js (a JavaScript library) to put my ideas into practice.

As an engineering & design choice, I disabled the video feedback with key points drawn on the page, as it slowed down the app with unnecessary extra calculations.
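
A minimal p5.js sketch of that choice, assuming the capture is kept hidden & only simple status text is drawn; this is an illustration, not the project’s exact code.

```javascript
// The capture runs hidden and only lightweight status text is drawn,
// so there is no per-frame video or key-point rendering.
let capture;

function setup() {
  createCanvas(320, 120);
  capture = createCapture(VIDEO);
  capture.size(320, 240);
  capture.hide(); // keep the feed off-screen
}

function draw() {
  background(255);
  text("Monitoring for face touches...", 20, 60);
}
```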

I used GitHub Pages to deploy the app, as I could make changes very easily & I didn’t want to track users in any way. The project’s source code is publicly available for everyone to build their own clones & improve the app with feedback.

User (zoom) Testing

I did some usability tests over Zoom & by sending the app to friends to test out.

Insights from user testing:

  • Unexpected actions: I asked users to touch their faces in whatever ways they could think of. Some of these were unaccounted for in the model, yet were probable actions users could take.
  • Audio feedback: The original audio alert lasted about 2.5 seconds, which was distracting to the point of annoyance. The sound was changed to a human voice clip limited to 1 second (see the sketch after this list).
  • Knowledge: Users liked that the website clearly explained the motivation behind the app & what was happening to their data.
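
For the audio feedback fix, the change amounts to a short clip plus a small cooldown so back-to-back detections don’t stack into constant noise. A sketch, with an illustrative file name & cooldown value:

```javascript
// File name & cooldown value are illustrative, not the exact ones used.
const voiceAlert = new Audio("dont-touch.mp3"); // ~1 second human voice clip
let lastAlertAt = 0;

function maybeAlert() {
  const now = Date.now();
  if (now - lastAlertAt > 3000) { // at most one alert every few seconds
    voiceAlert.currentTime = 0;
    voiceAlert.play();
    lastAlertAt = now;
  }
}
```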

Insights & what next?

Designing for myself

I started making this as a response to the current situation with the coronavirus. It was my first attempt at designing an application for myself, & I found some advantages & disadvantages in the process.

  • Advantage: I could keep iterating on the idea quickly as it was well defined.
  • Advantage: I made something that suits my needs & works perfectly for me. But with the limited user testing I could do for the PoseNet-backed approach, the results weren’t staggering.
  • Disadvantage: As I narrowed myself down to something specific, I didn’t explore different ways people could use this technology.
  • Disadvantage: When I lost a train of thought, it took me time to circle back & push my ideas further, as I lacked outside feedback.

“Recognizing my own bias in the process, knowing & acknowledging it helped design a more inclusive app.”

Designing for others

Even though I started this project for myself, I saw it could be extended to others who might benefit from it. Some things I noticed as I transitioned:

  • Well-defined problem: I went through a bunch of iterations & testing with myself, so when I decided to expand the potential user base, I had a very clear understanding of the limits of the technology & a base prototype I could start testing with.
  • Bias: I designed for myself & my context of use. Even though I had a working prototype, I had to change my model more than I expected. But it was a very useful way to recognize my own bias in the process; knowing & acknowledging it helped me design a more inclusive app.
  • Explainability: I was designing a machine-learning-backed app. The algorithm has learned things but is essentially a black box in its final form. Occasionally it thinks users are touching their face when they aren’t. If users are told to anticipate this false-positive behavior, the occasional unintended alert doesn’t come as a surprise.
  • Privacy: By giving continuous access to their camera feed, users are essentially being asked to share private & intimate moments of their daily life with the app. They needed to know how their data was being used, who could access it, & how it was being managed. This brought a level of transparency to the app.

What Next?

I could potentially expand the scope of the project to cover different situations it is trained for. Once users opt in, they could choose their context of use: sitting at their desk, lying on their bed, or sitting with the laptop on their lap.

Splitting the algorithm into smaller tasks could help improve the accuracy of each individual use-case & help make the design process more modular.
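
One way to sketch that modularity is to keep a separate trained model per context & load whichever one the user opts into. The URLs below are placeholders, not real trained models:

```javascript
// Model URLs are placeholders, not real trained models.
const CONTEXT_MODELS = {
  desk: "https://teachablemachine.withgoogle.com/models/<desk-model>/",
  bed: "https://teachablemachine.withgoogle.com/models/<bed-model>/",
  lap: "https://teachablemachine.withgoogle.com/models/<lap-model>/",
};

async function loadContextModel(context) {
  const base = CONTEXT_MODELS[context];
  // Each context gets its own, smaller model trained for that setting
  return tmPose.load(base + "model.json", base + "metadata.json");
}
```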

Want to collaborate? Have feedback? Or just want to say hi? Email me or reach out on Twitter.
