Canvas AOD Case Study

Zach Lin
Little by little …
9 min read · May 10, 2021

--

Overview

  • Canvas AOD is the hero feature of OxygenOS 11; it turns users’ photos into simple line figures shown on the ambient display, and creates a seamless, eye-catching animation from the Ambient display through the Lock screen to Home.
  • We had merely two months to build this feature from scratch: no existing algorithm, and no similar feature on the market to reference.
  • I built a quick prototype and pitched it to our executives with a compelling story to win their support, even though the development cost could be very high. We worked with our AI partner Meishe to train our own algorithm from scratch, created a new screen burn-in prevention algorithm to support any possible figure on AMOLED, and tested the feature with our private group users to gather feedback.
  • It turned out to be users’ most anticipated feature of OxygenOS 11. The launch event drew roughly triple the average user volume. Coverage by many tech media outlets built brand credibility for innovation. The offline exhibition attracted 8,000 visitors, and the event article reached 30 million views on Weibo; 56% of the online event audience were not OnePlus users. Our app currently rates 4.217 out of 5 on the Play Store, and we filed more than six patents.
  • In this project, I initiated the concept, pitched the idea to top managers, led product communication with our partner, innovated with developers to create the line-figure transition effect and the screen burn-in prevention mechanism, and led the iteration to drive the product forward.

The birth of the concept

Back in May 2020, when the main idea of the OnePlus 8T’s Live wallpaper was solid enough, I was thinking about how to measure the wallpaper’s success as the KPI (Key Performance Indicator) of the work. The innovative live wallpaper conveys a sense of time passing: its colors shift beautifully according to how long it has been since the phone was last used. Naturally, we wanted users to keep using it for as long as possible. But how? The question occupied my mind for days, until I suddenly realized it wasn’t achievable; at least, not in the way a wallpaper made from a user’s own photo could achieve it.

I am definitely one of the many users who prefer a photo of someone I love as my wallpaper. No matter how beautiful an artwork is or how well it is crafted, nothing connects with a user more than a photo that captures a treasured moment in life.

“Users have been able to set their own photos as smartphone wallpapers for a long time. It’s a solid user need; why has nothing more been built on it for users?”

“The best part (the memory) is created by users, so what can a smartphone company add?”

“A lively photo wallpaper, like the portraits in Harry Potter, would be fantastic!”

So many thoughts popped into my mind. And a photo popped up as well: a picture of my daughter looking into my eyes with her favorite dog doll in her arms. It is undoubtedly the photo I want as my wallpaper. My memory of the moment is so vivid that my daughter still seems to be looking into my eyes, yet my memory of the photo is also blurry: I can’t recall any detail of her clothes or the background. Then an idea struck me: this is how memory works, and it’s very much like how we present information abstractly on the AOD. I could recreate the process of recalling a memory, from abstract and blurry to clear, as an effect that delivers a Harry-Potter-like experience for a photo full of memories. I was so excited I couldn’t wait to build it.

Original photo and the photo in my memory

Prototype and Pitch

I shared the idea with my colleagues the next day with excitement.

“Hey, I’ve got a cool idea: we can transform users’ photos into simple lines to show on the AOD. Then the lines spread into the real image when the user wakes the device, and the image changes from blurry to clear as the user moves from the Lock screen to Home. The whole process will be just like Harry Potter’s lively portraits. Cool, right?”

And I could see question marks on their faces, not from just one colleague but from all three I shared it with. At that moment I realized this is not an idea you can analyze rationally; it’s an experience you only get once you see it. I needed to make a prototype.

Creating a simple line figure from a photo is not hard for a designer, but creating an animation that shows the lines spreading into an actual image was hard for me. Still, I felt this was the key to pitching the idea; I had to make it. Inspiration finally struck: I created a simple line-figure image and used Keynote’s Magic Move effect to morph it into the actual photo. When I showed colleagues the quick prototype, I knew from their faces that I had succeeded. Taking that lesson, I knew I needed to show the effect, and its impact, in context so people could understand what the feature could bring: a smile, I hoped. So I recorded my daughter’s smile when she saw the prototype and played it in the pitch to our top managers; it was a great success. We hoped to bring this feature to people all around the world, just as I was excited to take the first working prototype home to play with my daughter.

Screenshots from my Keynote presentation file

Execution

Coming up with a fantastic idea is easy; making it real and executing it well is much harder. The project had three critical breakthroughs: the algorithm that transforms any picture into a line figure, the transition effect that gives users a WOW, and a new AOD screen burn-in prevention mechanism.

Algorithm — I used Picsart’s Sketch effect as a reference for feasibility (see the reference pictures above), and Picsart was naturally the first company we approached. However, the business negotiation didn’t go well; they wanted to charge around a million dollars for the license. The price was far too high, and it wouldn’t have helped us with customization either. And even though the line figures Picsart creates are good enough for its own effect, Canvas AOD needed an even better algorithm, since there is no accompanying photo on the AOD. So we had to build our own.

Picsart’s Sketch effect

We found Meishe, which has both image-editing and AI-training capability, to build the algorithm with us. As time was extremely tight, we needed to narrow the scope. The feature’s goal is to connect people, and portrait segmentation (as used for the bokeh effect) is most mature for human shapes, so we supported human portraits first. We then defined the photo types to start the AI training: half-body selfies, group photos with fewer than three people, full-body shots, and so on. We fed in thousands of hand-drawn line figures created from actual photos and ran rounds of drawing iterations to improve the machine-learning results. Amazingly, our algorithm ended up outperforming Picsart’s in many cases. I celebrated with the team; we had literally accomplished a million-dollar mission.

Transition — Adding a WOW is undoubtedly key to the product’s success. For better performance, we didn’t rely on any packaged solution and implemented the effect in native code instead. Since any line figure is composed of white dots on a black background, the effect erases the AOD screen with many enlarging circles centered on the detected white dots. This approach saves the computing power of a segmentation pass and keeps the animation smooth even on low-end devices.
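The dot-and-circle reveal described above can be sketched roughly as follows. This is a hypothetical Python/NumPy illustration of the idea, not OnePlus’s native implementation; all function names and parameters are my own:

```python
import numpy as np

def reveal_mask(line_img, radius, offset=(0, 0)):
    """Build a boolean reveal mask by growing circles around every bright
    pixel of the line-figure image (white lines on a black background).

    line_img : 2-D uint8 array, 0 = background, 255 = line pixel
    radius   : circle radius for the current animation frame
    offset   : optional (dy, dx) shift of the circle centres, e.g. toward
               the subject's centre so the subject is revealed first
    """
    h, w = line_img.shape
    ys, xs = np.nonzero(line_img > 128)          # the detected white dots
    ys = np.clip(ys + offset[0], 0, h - 1)
    xs = np.clip(xs + offset[1], 0, w - 1)
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=bool)
    for cy, cx in zip(ys, xs):                   # one enlarging circle per dot
        mask |= (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return mask

def compose_frame(photo, line_img, radius):
    """One animation frame: where the mask is True, show the photo;
    elsewhere keep the black AOD background."""
    mask = reveal_mask(line_img, radius)
    frame = np.zeros_like(photo)
    frame[mask] = photo[mask]
    return frame
```

Animating `radius` from 0 upward erases the black screen outward from the line pixels; the `offset` parameter corresponds to the later refinement of nudging circles toward the subject’s center. A production version would of course render the circles on the GPU rather than loop per dot.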

To make the transition look excellent, we reviewed it frame by frame and improved it accordingly. When the spread starts only from the figure lines, dark spots remain on the face and the center of the body, so we added enlarging circles in the middle. That looked better, but the relationship between the figure lines and the actual photo could be emphasized more strongly. Moving the enlarging circles slightly toward the subject’s center makes the subject’s image appear first, and then the transition looks just great. (See the comparison below.)

Spreading from the line
Added enlarging circle in the middle
Erasing subject area faster than the background

Screen burn-in prevention — AMOLED’s characteristic that only lit pixels consume power is what makes an Always-on display possible. But it also introduces a key challenge for any AOD feature: a pixel may become permanently discolored if it stays lit for too long, and the contrast between normal and discolored pixels produces a visible afterimage. This is easier to handle for images that are somewhat controlled, but the line figures from users’ photos can be any shape or size.

I assumed that if the line figure’s density and line width were under control, we could let pixels rest in turns by moving the figure right, up, left, and down. Testing proved me wrong. We then tried a pixelated effect (see the screenshots below) in which no pixel lights up in consecutive frames. But the effect only looks acceptable when the update period is short, and a screen in AOD mode can update only once a minute to save battery; staring at a pixelated image for a full minute is just weird.

The pixelated effect we tried

We ended up with a solution that combines the key traits of both approaches: moving the figure around and pixelation. We divided the line’s pixels into grid boxes and lit only specific pixels while their neighbors rested. Combined with the moving-around treatment, the mechanism passed the test.
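A minimal sketch of such a grid duty-cycling scheme, again as my own Python/NumPy approximation of the idea rather than the shipped native code; the box size and the shift pattern are assumptions:

```python
import numpy as np

def aod_frame(line_img, minute, box=3):
    """One burn-in-safe AOD frame (hypothetical sketch): within every
    box x box grid cell, only the pixel at the current phase position
    lights up while its neighbours rest; the whole figure is also
    shifted slightly each minute."""
    h, w = line_img.shape
    phase = minute % (box * box)                  # which cell position is on
    py, px = divmod(phase, box)
    yy, xx = np.mgrid[0:h, 0:w]
    duty = (yy % box == py) & (xx % box == px)    # one lit pixel per grid cell

    # small cyclic shift of the figure: right, up, left, down
    shifts = [(0, 1), (-1, 0), (0, -1), (1, 0)]
    dy, dx = shifts[minute % 4]
    shifted = np.roll(np.roll(line_img, dy, axis=0), dx, axis=1)

    return np.where(duty, shifted, 0)
```

Because the phase changes every minute, the duty masks of consecutive frames are disjoint, so no physical pixel stays lit across two updates, which is the core property a burn-in test would check.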

We finally created a smooth transition connecting the AOD, Lock screen, and Home screen, letting users carry their best memories with them all day on the most essential device of modern life.

Demo of the Canvas AOD effect

Iteration

The achievement I am proudest of is that we actually released the product into users’ hands, even though I marked it as a Beta version. Coming up with a brilliant idea is not hard; making it real and shipping it to users to create true value is. After the product shipped, we started iterating.

Make it updatable — The first step was to isolate the AI engine and publish it to the app store so it could be updated independently. Even though the update cadence isn’t fast, we can now improve the algorithm based on what we learn from users.

Improve the readability — We were too conservative to show the AOD screen at a normal brightness level, which caused readability problems. I led the testing to make sure we could show it at the proper brightness so that colors can be displayed appropriately for our future plans.

Support screenshots on the AOD — We curated an offline Canvas AOD exhibition to present how users’ memories can be transformed and shown on the AOD. The event was so successful that we realized an emotional connection can be powerful for spreading both the feature and our brand’s recognition. Letting users take and share screenshots is key to that spread, so we had to add it.

Looking back…

  • What we shipped is a beta version, an MVP, with many known gaps. I’d like to add Canvas recommendations that surface photos of the user’s loved ones, memorable moments, and image styles our algorithm renders well; this would dramatically lower the cost of photo selection and increase satisfaction. I also want to make it playful by offering style options and letting users participate in the creation process. Ease of sharing is essential too; it could super-boost the spread of the feature and brand recognition. The good news is that its high potential makes it a hero feature we will keep working on for OxygenOS 12. Let’s see.
  • By talking with our users, I learned a lot too. They look forward to using the feature on photos of pets, landmarks, and more. They want better support for photos shot in landscape orientation. They want to use it on the ambient display only, without it necessarily being the wallpaper too. Some of this feedback had been considered but was deprioritized while we built the MVP. Still, it’s really nice to learn these true desires from real users.
