Success rate is the most troublesome usability metric. Especially on mobile
There are many usability metrics, yet some matter more than others (skill retention rate, anyone?). For example, usability as defined in ISO 9241 consists of effectiveness, efficiency, and satisfaction, all of which are quite broad. There are more specific metrics, such as task times or learning speed. And then there is the single most important "simple" metric: success rate.
Plainly speaking, success rate measures how many users can do the things the product is made for. In a calculator, for example, success rate defines how many users are able to perform their calculations. Success rate is important because if features are technically there, yet users are unable to use them, those features effectively do not exist (and the resources spent on them are basically wasted). Few product lines can boast high effectiveness, efficiency, and satisfaction (especially satisfaction) alongside a low success rate.
Success rate seems simple. Still, every usability metric becomes more complex the longer one looks at it. Specifically, there are actually two slightly different success rates:
- Success rate by users (SRU) measures how many users are able to use a specific product feature. In a map app, for example, it defines how many users can save a POI for later use. SRU grows the user base ("we have added this feature and now our product is also useful for these specific users").
- Success rate by features (SRF) measures how many product features are actually used (or at least can be used successfully). Most product owners strive to maximize the number of users who use the product to the full limit of its usefulness: such users are glued to the product and unlikely to leave. Theoretically, the very same can be said from the users' side ("the more I use this map app, the more I'm able to accomplish"), but only theoretically; from the product owner's view, full mastery is nothing but a beautiful dream, and the question is why users do not use the product as intended. As stated already, SRF matters for user retention, as loyal and competent users are unwilling to switch.
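To make the distinction concrete, here is a minimal sketch of both metrics computed from a hypothetical usage log. The users, features, and outcomes below are invented for illustration, and SRF is computed per user (the share of features that user successfully uses), which is one plausible reading of the definition above:

```python
# Hypothetical usage log: attempts[user][feature] is True when the user
# has successfully used that feature. All data below is invented.
attempts = {
    "ann":  {"save_poi": True,  "share_poi": False, "offline_maps": False},
    "bob":  {"save_poi": True,  "share_poi": False, "offline_maps": False},
    "carl": {"save_poi": False, "share_poi": False, "offline_maps": True},
}

def sru(feature: str) -> float:
    """Success rate by users: share of users able to use this feature."""
    return sum(log[feature] for log in attempts.values()) / len(attempts)

def srf(user: str) -> float:
    """Success rate by features: share of features this user actually uses."""
    log = attempts[user]
    return sum(log.values()) / len(log)

print(f"SRU(save_poi) = {sru('save_poi'):.2f}")  # 2 of 3 users
print(f"SRF(ann)      = {srf('ann'):.2f}")       # 1 of 3 features
```

Even this toy log shows the two metrics pulling in different directions: saving a POI has a decent SRU, yet every single user leaves most of the product untouched.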
The comparative importance of SRU and SRF differs from product to product:
- Products that are complex (because they have to be) struggle to add new features. Basically, by Hick's law, every new button makes the old UI worse (assuming there is any screen space left at all). There is a thin line in a product's lifetime after which a new feature no longer instantly makes the product better: the added feature merely offsets the worsening of the UI. Success rate by features shines here. Anyway, big, complex, mature products are unlikely to boast a large number of super-duper power users. How many individuals in the whole world have completely mastered Microsoft Office?
- In small products, it's much more important to make sure that all users are able to use each specific feature. SRU wins here.
- In products made for work, users are often extrinsically motivated (by not being fired, for example) to use the product as fully as possible. Such users will ask coworkers for advice, google best practices, and so on. SRU somewhat diminishes in importance here compared to SRF.
- The context of use also plays a role here, specifically the hardware (see below).
The real estate problem
When I came to UI design (before the millennium), the recurring dream was "some day, the screen will be of proper size". At that time a typical monitor displayed just 65K colors (many only 256), the resolution was 800×600, and the screen size was just 15" (17-inch screens only started to outsell 15" ones in 1999). Everybody hoped that by 2010 everyone (and our users in particular) would own a huge screen (25" or larger) with a cool resolution like 1600×1200. Everyone waited for the golden age when everything we want to see on a screen would actually fit on that screen.
It turns out these dreams were not destined to become reality. Sure, big splendid monitors exist, but:
- Big monitors are not really here, at least not in offices. Home computers are often equipped with really large displays, yet office monitors rarely exceed 21" for lack of desk space. And monitors for industrial or professional use (a cash register, for example) are still tiny. Of course, a 21" monitor is bigger than a 15" one, but not by much.
- Big screens are not that big. The display industry has learned the trick of using widescreen proportions to artificially inflate the numbers (a widescreen panel has a larger diagonal for the same usable area).
- Vertical screens have never really become mainstream. They exist, yet nearly nobody uses them. A pity, as vertical monitors can greatly increase display area without overtaxing cubicles.
- The desktop computer of old is nearly dead. Laptops are powerful enough for most tasks and win on convenience. Another factor is handiness: everyone is happy with a light, small laptop, ignoring the obvious productivity bonus of a large display (class consciousness is probably at work here, as a 17" laptop says "I do the work" while a 12" one says "I do the synergy"). With the switch to widescreen, the typical display is actually smaller now than it was 5–7 years ago.
- And the most important factor: we now have smartphones (uber-tiny screens) and tablets (tiny screens).
- Also, touch input imposes a tax on UI controls, as it requires large touch areas. Large touch areas reduce density, making interfaces just plain bigger.
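The touch tax is easy to quantify. Material Design recommends touch targets of at least 48×48 dp, while a mouse pointer can comfortably hit much smaller controls. The sketch below compares how many minimum-size targets fit on a phone viewport versus a laptop one; the viewport sizes and the 24 px pointer target are illustrative assumptions, not standards:

```python
# Rough illustration of the "touch tax": how many minimum-size targets
# fit on a screen. 48 dp is the Material Design minimum touch target;
# 24 px is an assumed comfortable size for a mouse-driven control.

def max_targets(width: int, height: int, target: int) -> int:
    """How many square targets of the given side fit in a simple grid."""
    return (width // target) * (height // target)

phone = max_targets(360, 640, 48)     # typical phone viewport, touch input
desktop = max_targets(1280, 800, 24)  # laptop viewport, mouse pointer

print(f"phone: {phone} targets, desktop: {desktop} targets")  # 91 vs 1749
```

Even with generous assumptions, the phone fits more than an order of magnitude fewer controls, which is exactly why every on-screen feature has to fight for its place.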
Bigger is better
Why is screen size so important? Because with success rate, especially success rate by features, a simple rule is at work:
In order to use a feature, users have to know that the feature exists in the first place
Let's use the map app example again. Given the user task "save a POI for later use", there is an important question: how many users understand, or even know, that saving a POI is possible?
Note that to work, this knowledge has to be active. As with vocabulary, which has a passive size (the words an individual recognizes) and an active size (the words an individual actually uses in speaking or writing), knowledge of a tool has to be active, because the user has to act in order to use most features. I know that a piano makes sounds when one presses the keys (passive knowledge), but I am completely unable to play anything meaningful (lack of active knowledge). In UIs, active knowledge is most readily formed (and used) when the product itself reminds users of its features by actively presenting them on screen. Without an apparent presence, a feature is destined to go unused by most users.
In practice, this leads to a direct heuristic:
A feature does not really exist for users if it's not presented on screen
Of course, this is not completely true. For example, most people have learned to scroll (vertically). Yet there are multiple sad cases of UIs which make users believe that the screen/page ends at the viewport → users don't even attempt to scroll down → they cannot see the important stuff → they cannot finish their task. Moreover, the heuristic is not fully applicable to expert products that require skilled users to operate (like Photoshop). We also have to account for users' tolerance for bad interfaces: some people manage to perform their tasks even in bad UIs, so technically the heuristic should read "A feature does not really exist for about a quarter of users if it's not presented on screen*". Still, these complications don't improve the heuristic. Even in its simple form it is able to predict success rates, and therefore product prospects.
* There remains a lurking question of the designer's tolerance for a low success rate. For me, even the thought of my own UI being inaccessible to a quarter of the user base is simply untenable, but my practice lies in complex and demanding products. Your opinion may differ.
What does "not presented on screen" really mean? To put it bluntly, a feature is not on the screen if it has screen presence, but not in a direct and unambiguous way.
Let's check this screen:
- Everyone can see that you can freely do a papildini here (Latvian for "top up"), even without knowing the word, although it takes some mental effort to understand that the black rectangle is not just a rectangle but actually a button (hurrah for flat design).
- Everyone can spot the tabs, yet it's impossible to predict what exactly lies beneath them. It's knowledge, but a rather passive one.
- Some users (the most skilled and active) understand that the data on the screen can be refreshed, as there is a perfectly standard icon for that.
- Nearly everyone understands that something will happen if they touch the "!" button or the menu (three dots) button. But what exactly will happen is impossible to predict ("a menu, but what items will it contain?"). So users are left without any reason to press those buttons.
I have ordered this list by decreasing active knowledge. You can't expect more than 50% of users to ever press the ! or menu buttons, yet about 3/4 of users will still use the tabs (case 3 lands somewhere in between).
It's not because users are stupid or lazy. They are neither (well, somewhat lazy, but not much). It's because they don't give a damn about your product until they love it; yet in order to love it they have to master it, which in turn requires them to actually see it. Users are perfectly aware that pressing the menu button will open the menu. They just have no intention of opening it in the first place, at least until they are engaged by the product itself or by external circumstances.
So, a small screen can accommodate only a little UI to show off features (read: to advertise them) and turn them into active user knowledge. There is a simple way to feel the difference here. The Google Maps app requires users to open a POI and then star it in order to save it for later use. Apple Maps (sucky as it is) has a direct command, Place pin. In this case, Apple Maps definitely wins.
Yet there is a factor that undermines this intuition: the often-cited yet often-neglected observation that designers are different from users. Check the well-known graphic of the technology adoption lifecycle. It's not about the technology (it can be any); it's about the users.
The sad truth is that designers are innovators and early adopters, while the bulk of our users are late majority or even laggards. They are different. Early adopters are willing to press a button just to learn what it does. The late majority isn't going to move a finger without a direct reason.
This chasm has consequences that are not truly apparent to most UI designers. For example:
- The majority (60% or so) of actual users do not use horizontal swipe (except on the home screen). Worse, it's impossible to tell when a horizontal swipe will find any demand from end users. Of course, users do know how to pan and zoom photos and maps, but that's all. There is an exception to this rule: Windows Phone users. It's basically impossible to do anything on Windows Phone without constant horizontal swiping, so its users are forced to adapt. Too bad such users are not numerous enough.
- Users, in general, are devoid of curiosity, at least when it comes to UIs. They don't experiment with interfaces and don't even read tips. For example, Windows 8 is actually rather good, but only for those who have heavily invested in learning the new UI (and in unlearning all the skills that went overboard). Why should users care? So Windows 8 has attracted quite considerable hate (it did not help that Microsoft, in a fit of hubris, initially provided no tutorial and still has not bothered to give users even a brief explanation of the improvements).
- There is no such thing as a non-standard icon: all such icons are actually very small illustrations. Yes, you heard me. Want to use a custom icon for a standard action? Sure, feel free. Just be prepared that quite a few of your users will never press or click it. Of course, you can add a text label, but that way your image becomes even more "illustrative".
What can we do?
Sadly, there is no silver bullet. Touch UIs are more or less crappy when it comes to ergonomics, and they shall remain in this state until someone invents an input method better than a fat greasy finger pointing at a screen sized for pocketability rather than for viewing. We are in love with smartphones and tablets not because of their interfaces, but because they allow us to do more, everywhere and any time we want.
Still, there is some hope of boosting success rates.
Method 1. Easy going all the way
In simple apps, there are no inherent problems with success rate. And simple products can be perfectly engaging and popular, like casual games or an app that sends the message "Yo" to its victims.
Method 2. Embiggen the buttons!
Take the sad hamburger menu. Android 4 has a solution for app menus: the half-hamburger. The concept is splendid: the menu button is logical, concise, and structurally solid. Press it and the menu appears, with the other half of the hamburger still present; the clearest incentive to open menus, as everyone knows that a whole burger pwns a puny half. There is just one slight problem: the half-hamburger doesn't work at all. Its success rate is not even funny. Well, even the whole hamburger leads to a low success rate (it cannot show what lies inside).
The Android 5 beta makes it clear that Google's designers are fully aware of this. They are busy not only providing the missing half but also making the hamburger really big (Hamburger Royal Deluxe!). Most likely the deluxe burger will suffice (though we may feel somewhat disappointed, as fewer pixels are left for content).
Method 3. Use actual words
A hamburger as a menu sucks. Yet the same hamburger with the word "menu" next to it actually works. The same applies to any kind of button. But:
- With words, nothing will fit on the screen.
- Looks boring.
- Localization will be a nightmare.
- Default UI components will not suffice; you will have to code your own.
Method 4. Break screens into many, use hierarchy
It's possible to avoid small buttons by splitting big, complex screens into many simple ones. For example, we can use a single menu for navigation and big, descriptive buttons for all the actions. The yellow-gray UI from above could be replaced by a single menu button and three big buttons for the three tabs. It will work (Windows Phone works in roughly this way), yet there are consequences:
- The number of user actions (taps and clicks) will grow. Currently the app lands on the middle tab; after the change, the user will have to press a button first. Nothing really bad, yet somewhat depressing.
- There will be cluster errors: every wrong user decision at the first level of navigation leads to wrong actions below, no matter what. And with more navigation levels, the chances of error grow.
- It's hard to display context ("where am I now in the app?"). Basically, it's a clear return to the practices of old-school web design, when there were no search engines and users had to navigate all the way from the homepage, every time.
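The cluster-error effect compounds geometrically: if a user picks the right branch with probability p at each navigation level, the chance of arriving at the right screen after n levels is p^n. A toy calculation, where the 0.9 per-level figure is an assumption for illustration rather than measured data:

```python
# Toy model of cluster errors: a user picks the correct branch with
# probability p at each navigation level, so end-to-end success after
# n levels is p ** n. The 0.9 figure is an illustrative assumption.

def end_to_end_success(p_per_level: float, levels: int) -> float:
    return p_per_level ** levels

for levels in (1, 2, 3, 4):
    print(levels, round(end_to_end_success(0.9, levels), 3))
# 1 0.9
# 2 0.81
# 3 0.729
# 4 0.656
```

Even a respectable 90% per-level success rate decays to about two thirds after four levels, which is why every added layer of hierarchy is a real cost, not a free reorganization.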
Method 5. Use loooong screens and animation to show their contents
We can stop trying to squeeze everything into the viewport. With really long screens we can avoid small (and puny) buttons, because the available height allows us to make them big. Yet there remains the problem of making users understand what lies below the fold, even before they start scrolling (there will be no scrolling until our users learn that there are goodies below). Animation comes to the rescue.
We can first show the user just an outline of the screen (without real content; the left state on the storyboard), indicating, for example, that the app menu lies below (yellow) and that there are three sections (greenish) on this particular screen. In a snap, the sections fill with content, pushing sections 2 and 3 below the fold. Of course, this greatly increases swiping, but with kinetic scrolling it's not that slow. With this method the whole UI can have just two navigational buttons: one to move to the home screen and one (optional) to collapse the current screen into the bird's-eye view.
As always, there are some drawbacks:
- Windows 8 has a cool feature: one can enter a bird's-eye view by simply zooming out. Still, it doesn't seem that end users really demand this feature.
- Scrolling is fast, yet it takes time. Also, for any given user the spring animation will be either too slow (and therefore boring) or too fast (and therefore not really educating).
- Hard to implement.
- Swarms of stakeholders will materialize in an instant, each and every one screaming "Could we use some standard UI instead?" and "Can you squeeze this feature of mine into the first viewport?". Not really encouraging.
Method 6, revolutionary. Use the eye tracking, Luke!
All of eye-tracking research tells us that the very first thing everybody looks at is eyes. Ergo, let's just put eyes on everything that users are busily ignoring!
This splendid method (a product of my huge brain) is not patented and is free for anyone to use. So use it every time you are in doubt.