The problem with your Homescreen and why Google is predestined to solve it

Andreas Stegmann
Published in hyperlinked
6 min read · May 15, 2017


As I type these words, I have six Desktops open side by side on macOS. You see, I’m a heavy user of Apple’s fullscreen app mode. Next to my Desktop with files, I have one space where I try to gather my communications, another for my browser tabs, another for my notes, another for my music, and another for the virtual machine I need for work.

The problem

Let’s dive into my work desktop. There I have another browser and another communications hub, which is also home to my tasks and calendar. Inside my mail inbox I have folders, and inside the folders, mails grouped into conversations. Every mail can host other mails as attachments.

The result of all this is, of course, that every time I switch tasks I waste time figuring out where the relevant information lives. This adds up.

Matt Gemmell calls these the different “Frames of Interaction”. The hardware-calculator vs. calculator-inside-an-app comparison is spot-on.

The device’s screen is no longer task-dedicated, but instead is conceptually split into the “content” (in this case, our task-specific web app), and the “chrome” — the actual interface.

The more interface you have visible, the higher the cognitive load on the user. When parts of that interface belong to entirely different, unrelated frames (or levels) of interaction, the load is high.

We can cope with a surprisingly high degree of interaction frames, but we’re not optimised for it.

Two software calculators inside different interaction frames

One of the things that made iPhone OS (as it was then called) so revolutionary and successful was the massive reduction in interaction layers. It’s a key enabler for otherwise technology-shy people. Even experts adore the feeling of clean, minimalist simplicity.

But I argue this concept was so successful that it came with its own disadvantages. We’re living in an app-driven world. Even the inventors of the App Store didn’t anticipate this many apps, which is why the one-icon-per-app homescreen concept won’t scale. Although users have their homescreens packed to the tenth screen, they only use a tiny percentage of their apps on a regular basis.

Who doesn’t like hunting for an app?

Apps are “kept around” in case they’re needed for some very specific use case in the distant future. I’ve caught myself searching the App Store for a niche app I’d forgotten I had installed long ago.

That’s the background to why I was excited when Google first presented Chrome OS. The app (the browser) becomes the OS, eliminating almost all interaction frames below it.

Chrome OS concept as of 2009

This ease of use was certainly a major reason cheap Chromebooks ate the iPad’s (school) lunch.

But in the versions that followed, Chrome OS re-introduced a “Desktop” with icons and a taskbar.

I bet these elements were pushed hard by user feedback, which would make this the perfect example of why product designers can’t always listen to their customers: the change gives up simplicity purely for the sake of familiarity. Rather than getting used to a (slightly) new paradigm, users now get the historical cruft of 20 years of operating system design (by mimicking Windows). Maybe you can call a wallpaper of your dog a feature, but at what cost?

The recent introduction of Android apps in Chrome OS made me think hard about what that means. While in a feature-driven world it makes perfect sense to give Chrome OS users access to the app-driven smartphone ecosystem, you’re back to square one in terms of complexity. You still don’t have to manage everything, but it’s not that far away from Windows S.

Still, Google has a second chance of changing the paradigm on Chrome OS. This could eventually translate to Android, which would then transform the smartphone ecosystem.

The proposed solution

Let me recap the pieces already in place at Google.

My concept would do away with searching the App Store and installing apps entirely. Apps become services.

The central starting point is a search box. Google taught us to search for anything. This box is operated by the Google Assistant. When I search for information, Google can answer right away or give me the familiar list of search results.

When I search for a service like Instagram, I get to its site and can use the service instantly, thanks to Instant Apps technology. (While not many apps support Instant Apps at the moment, almost all services offer an Android app that could be adapted. And web apps have gotten ridiculously good; I use Twitter Lite in favor of the official app, for example.)

Example Instant App implementation

My homescreen learns from experience. It starts to show my frequently used services and my most recently used ones. Of course I can also pin favourites. The Assistant can help predict what content or service I want to see next.
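The ranking described above can be sketched in a few lines. This is purely illustrative; the `Service` class, the `rank_homescreen` function, and the weekly half-life are all my own assumptions, not anything Google has shipped.

```python
import time

class Service:
    """A homescreen entry with usage statistics (hypothetical model)."""
    def __init__(self, name, pinned=False):
        self.name = name
        self.pinned = pinned
        self.launch_count = 0
        self.last_used = 0.0

    def record_launch(self, now=None):
        self.launch_count += 1
        self.last_used = now if now is not None else time.time()

def rank_homescreen(services, now=None, half_life=7 * 24 * 3600):
    """Pinned favourites first; the rest ordered by a score that
    combines frequency with an exponential recency decay."""
    now = now if now is not None else time.time()

    def score(s):
        age = max(0.0, now - s.last_used)
        recency = 0.5 ** (age / half_life)  # 1.0 if just used
        return s.launch_count * recency

    pinned = [s for s in services if s.pinned]
    rest = sorted((s for s in services if not s.pinned),
                  key=score, reverse=True)
    return pinned + rest
```

A real implementation would feed richer signals (time of day, location, the Assistant’s predictions) into the score, but the frequency-times-recency shape is the core idea.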

These are also the categories that get downloaded for offline use. Nobody disputes that having content available locally covers more use cases and can be noticeably faster. But as with the Robin phone, you gain flexibility by throwing away content after a certain period of disuse.
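Robin-style offloading boils down to a time-to-live sweep over the local cache. A minimal sketch, assuming a simple name-to-timestamp map and a 30-day idle threshold (both are my assumptions, not the Robin’s actual behaviour):

```python
import time

def evict_stale(cache, max_idle=30 * 24 * 3600, now=None):
    """cache maps service name -> timestamp of last use.
    Removes entries idle longer than max_idle; returns their names."""
    now = now if now is not None else time.time()
    stale = [name for name, last_used in cache.items()
             if now - last_used > max_idle]
    for name in stale:
        # In a real OS this would delete local files
        # while keeping the cloud copy intact.
        del cache[name]
    return stale
```

The point is that eviction is cheap and reversible: the service itself still exists in the cloud, so "uninstalling" becomes an implementation detail rather than a user chore.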

As for the services themselves, I’m a fan of surfacing content from apps directly, the way Android widgets or the Amazon Fire interface do. Every task should live on the same level, meaning every instance of an app (like a web browser tab) is shown without hierarchy. Grouping could be done, but oriented around stories (jobs to be done), not apps or URL schemes.

In summary we get a new OS paradigm in which the user:

  • doesn’t need to differentiate between web apps or native apps,
  • doesn’t have to worry about local storage,
  • doesn’t have to worry about managing the UI (just google what you want),
  • has significantly decreased wait times before using a service,
  • gets their content automatically sorted for them,
  • gets the latest version automatically without needing to deal with updates,
  • and has to learn a very simple mental model (fewer interaction frames).

Now merging two very different operating systems would suddenly make sense.

Fusing apps and websites together would help Google the most: it has some of the best web apps, and only on the open web is it the gatekeeper. My browser input gets routed through Google anyway; imagine all the data that could be used to accelerate the Google Assistant. (And let’s put aside the privacy implications for a second.)

By technically redirecting websites to apps in the background, the OS could make an attempt to solve one of the last remaining big problems of the web ecosystem: Monetization.

Maybe Google is already working towards such a vision. As I laid out, the building blocks are already there. And it would make a ton of strategic sense.

But call me a skeptic about Google’s ability to blend projects together into a smooth customer experience. In fact Google, even more than your typical Fortune 500 enterprise, is the incarnation of corporate absent-mindedness.

I’ll take evidence first, before I buy into any more rumors.



👨‍💻 Product Owner ✍️ Writes mostly about the intersection of Tech, UX & Business strategy.