The story behind Edge Sense

Zach Lin
Dec 11, 2018

Hundreds of thoughts to build up a new interaction
Edge Sense was the most important selling feature on the software side of HTC's 2017 flagship, the HTC U11, but it was also a project full of uncertainty and surprises. The discussion of this feature lasted about a year and a half because of mis-trigger concerns, the difficulty of locking down a killer use case, and hardware uncertainty. I am glad we eventually launched it successfully. It may not be revolutionary, but it works.

The most important thing for a new interaction is its reliability
The project was driven by a hardware capability: sensing the force applied to the bezel of the phone. Bezel interaction on a smartphone is not actually new or unique. NTT Docomo revealed its Grip UI interaction in 2012, and not long after we kicked off our Edge Sense project, the Nubia Z9 introduced FiT (Frame interactive Technology). Checking online reviews and testing a Nubia Z9 in hand, I found a critical issue: the interaction is not reliable. We know it takes hundreds of repetitions for a user to build a habit around an interaction, but only a few failures to break it.

I had a phone-shaped cuboid with me for Edge Sense use case inspiration. Reference picture from NoPhone.

A killer application is one that solves the user's pain point, not the fanciest one
Because we no longer had a Product Planning role after a company reorganization, I needed to start by envisioning the killer applications. To put myself in context and free myself from any visible interface, I borrowed a 3D-printed phone-shaped cuboid from our ID team and played with it all day. I didn't take a picture of myself playing with the cuboid, but it looked much like what people do with the famous crowdfunding project, NoPhone. This kind of quick prototyping is always very helpful, and I came up with many wild ideas. The dreamer in me was excited by many of those cool ideas, but the UX soul inside reminded me to focus on solving users' pain points. In the end we focused on three use cases: launching the camera and taking selfies, launching the voice assistant, and smart screen time-off with correct display orientation.

Reference photo from Johannes Eisele / AFP

Launch camera and selfie shot
Photo-taking is without doubt the most important feature on a smartphone nowadays. Many optimizations have been made, yet it still takes several steps to take a photo: take the phone out of the pocket -> light up the screen -> launch the Camera app -> take the photo. With Edge Sense, the user can squeeze to launch the Camera app while pulling the phone out of the pocket and take a photo instantly. It works even better for selfies. The shutter button is normally placed at the bottom center of the screen so the user can see the preview clearly. In practice, taking a selfie means re-gripping the phone in different ways to light up the screen, interact with the UI, and take the photo. I wouldn't be surprised if a fair number of phones are dropped while people try to take selfies. With Edge Sense, the user can squeeze to launch the Camera app and squeeze again to take photos without moving their hand at all.
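To make that flow concrete, here is a minimal sketch of the squeeze-to-camera mapping. The CameraController interface and SqueezeCameraHandler class are hypothetical names I'm using for illustration, not HTC's actual Edge Sense API.

```kotlin
// Illustrative sketch only; CameraController and SqueezeCameraHandler are hypothetical
// names, not HTC's actual Edge Sense API.
interface CameraController {
    fun isInForeground(): Boolean
    fun launch()      // bring up the Camera app, even from screen-off
    fun takePhoto()   // fire the shutter
}

class SqueezeCameraHandler(private val camera: CameraController) {
    // Called whenever a squeeze gesture is recognized.
    fun onSqueeze() {
        if (!camera.isInForeground()) {
            camera.launch()     // first squeeze: open the camera while taking the phone out
        } else {
            camera.takePhoto()  // squeeze again: take the photo without re-gripping
        }
    }
}
```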

Reference photo from Google

Launch voice assistant
Voice assistants were the super-hot topic at the time, potentially an almighty one-step solution for anything. There was no always-on solution for voice assistants yet. With Edge Sense, the user can launch the voice assistant at any time, even when the screen is off. Long-pressing the Home button to launch the voice assistant was the industry standard, so squeeze & hold to launch it is easy to learn.

Reference photo from Facebook Instant Article

Smart screen time-off and correct display orientation
Keeping things simple is a golden rule of design. We wanted to empower Edge Sense as much as we could, but we also didn't want to make it complicated to learn. So, instead of introducing one more intentional interaction, we built features that work even without intentional input. When people read a long article, the screen turns off if no touch event is detected for a certain period of time. Since we can sense the user's grip, we extend that screen-off period while the phone is held. The other pain point we can solve by sensing grip is display orientation when the user is lying on a bed or couch. The G-sensor alone is just not smart enough to be sure about the correct display orientation; by sensing the user's grip gesture we can significantly improve the detection. This feature didn't ship with the U11 because we chose a different sensor that cannot sense forces as small as holding the phone, but it is part of Edge Sense on the U12+.
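A rough sketch of how a grip signal could feed both behaviors, assuming a simple boolean "gripped" input and a grip-derived orientation hint; the class name, timeout values, and method names are my assumptions, not the shipped implementation.

```kotlin
// Hypothetical sketch; GripAwarePolicy and its timeout values are illustrative,
// not the shipped HTC implementation.
class GripAwarePolicy(
    private val baseTimeoutMs: Long = 30_000L,   // normal screen-off timeout
    private val heldTimeoutMs: Long = 120_000L   // extended timeout while gripped
) {
    // Keep the screen on longer during long reads when the phone is being held.
    fun screenTimeout(isGripped: Boolean): Long =
        if (isGripped) heldTimeoutMs else baseTimeoutMs

    // Prefer the orientation implied by the grip (which edges the fingers rest on)
    // over the G-sensor reading when the user is lying down and gravity is ambiguous.
    fun displayOrientation(gripOrientation: Int?, gSensorOrientation: Int): Int =
        gripOrientation ?: gSensorOrientation
}
```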

The devil is in the details
Everything above is what anyone can see, but I think the most interesting part is the design challenges behind the execution. The amazing part of Edge Sense is that it can be used intuitively at any time, even without a visible interface. The flip side of that advantage is the risk of mis-triggering. We found that the key is the force level of the trigger threshold.

The user is prompted to set up Edge Sense during the device setup process (aka OOBE), and the first step is setting the force level of the trigger threshold. The design challenge is how to help the user understand what the force level setup is, while pushing the threshold as high as possible to prevent mis-triggers. We started with several versions of text instructions, but none of them worked. I tried to bring in gamification thinking: why not build a mini game so users practice for real (instead of reading instructions) and perhaps enjoy the setup process more as well? I came up with a force meter that mimics the feedback of a punch-challenge machine, plus a mini bubble game. The result was overwhelmingly good. People understood the force level setup better, and the thresholds they set were over 20% higher than what we had previously tested. I would guess that's driven by the competitive mindset people get when they see a force meter. We also used a small trick: we set the threshold at the upper bound of the force we detected instead of the middle of the range, since human beings are so adaptive. No one noticed the trick, and it didn't increase failure-to-trigger cases at all.
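As a rough illustration of that trick, the calibration could place the threshold near the top of the force range measured during the mini game rather than at its midpoint. The function name, the 0.9 factor, and the fallback value below are assumptions for the sketch, not HTC's actual tuning.

```kotlin
// Hypothetical calibration sketch; the names and the 0.9 factor are illustrative,
// not HTC's actual tuning.
const val DEFAULT_TRIGGER_THRESHOLD = 200f   // arbitrary placeholder force unit

fun calibrateTriggerThreshold(sampledForces: List<Float>): Float {
    val peak = sampledForces.maxOrNull() ?: return DEFAULT_TRIGGER_THRESHOLD
    // Place the threshold near the upper bound of what the user actually squeezed
    // during the mini game, rather than the middle of the range; users adapt,
    // and a higher threshold keeps mis-triggers rare.
    return peak * 0.9f
}
```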

Maybe things just can't be that easy: we observed that a Squeeze action was sometimes mistakenly recognized as Squeeze & Hold. Was it because Squeeze and Squeeze & Hold are confusing? We ran a study and found what we had missed. The force data showed that users simply didn't release enough. It's human nature to think "when I stop squeezing, I have released" rather than to perform an explicit release action. So we adjusted the release threshold and everything worked: the success rates of both Squeeze and Squeeze & Hold are 95%+.
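A simplified sketch of that fix: the recognizer uses a release threshold well below the trigger threshold, so "not squeezing anymore" counts as a release even if the hand never fully relaxes. The thresholds, hold duration, and names below are assumptions for illustration, not the production values.

```kotlin
// Simplified recognizer sketch; thresholds, timing, and names are illustrative,
// not HTC's actual values.
class SqueezeRecognizer(
    private val triggerThreshold: Float,        // set during force-level calibration
    private val releaseFactor: Float = 0.5f,    // release threshold sits well below trigger
    private val holdDurationMs: Long = 800L     // how long a squeeze must last to become "hold"
) {
    sealed class Gesture { object Squeeze : Gesture(); object SqueezeAndHold : Gesture() }

    private var squeezeStartMs: Long = -1L

    // Feed periodic force samples; returns a gesture when one completes, else null.
    fun onForceSample(force: Float, nowMs: Long): Gesture? {
        val releaseThreshold = triggerThreshold * releaseFactor
        return when {
            // A squeeze begins once the force crosses the trigger threshold.
            squeezeStartMs < 0 && force >= triggerThreshold -> { squeezeStartMs = nowMs; null }
            // Held past the duration: report Squeeze & Hold.
            squeezeStartMs >= 0 && nowMs - squeezeStartMs >= holdDurationMs -> {
                squeezeStartMs = -1L; Gesture.SqueezeAndHold
            }
            // The fix: dropping below the *lower* release threshold counts as a release,
            // even if the user never fully relaxes their hand.
            squeezeStartMs >= 0 && force < releaseThreshold -> {
                squeezeStartMs = -1L; Gesture.Squeeze
            }
            else -> null
        }
    }
}
```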

It may not be revolutionary, but it works
Even though several rounds of user testing had been run, I still felt nervous when the product was about to launch. It's great that online reviews and real user feedback showed we did a decent job: it's reliable, and there was no serious issue we didn't handle well. There are things I wish I had done better: the Edge Sense settings page, the game could have been more fun, we could have built more useful features… But I am happy to keep moving on, because design is an iterative process, and it's a good thing that I've grown and can see what could be better.
