What’s an autonomous UI?

An autonomous UI doesn’t rely on animations, sound, color, shortcuts, or help to be understood and used. That doesn’t mean it shouldn’t have them, but it shouldn’t depend on them.

Animations

Animations help to reinforce the chosen visual metaphor (e.g. sliding a card to the right to express its removal; a loader to show there’s a background job running) and to create visual harmony (e.g. softening transitions). They should be simple, subtle, and used consistently (e.g. fade in vs fade out to express opposite concepts). Use them in moderation and only if they bring value. Animations aren’t supposed to be constantly noticed by users, as that would distract from what matters. They shouldn’t be used just to “make the app cooler”.

Excessive or heavy animations can be really bad, consuming lots of CPU time or even being harmful to people with certain conditions (e.g. epilepsy), so it should be possible to turn them off.

A nice experiment is to turn off all the animations and confirm the UI is still usable (preferably with users). This helps you understand whether certain animations are there to camouflage bad design decisions or to reinforce good ones.
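On the web, the user’s preference for fewer animations is exposed through the `prefers-reduced-motion` media query (`window.matchMedia("(prefers-reduced-motion: reduce)").matches`). A minimal sketch of how a UI might honor it, with the preference passed in so the decision logic stays testable outside a browser (the function name is illustrative, not from any framework):

```typescript
// The two values the prefers-reduced-motion media feature can report.
type MotionPref = "no-preference" | "reduce";

// Returns the duration (in ms) an animation should run.
// A duration of 0 effectively disables the animation while keeping
// the rest of the UI flow identical — the UI must not depend on it.
function effectiveDuration(baseMs: number, pref: MotionPref): number {
  return pref === "reduce" ? 0 : baseMs;
}
```

In browser code, `pref` would come from the media query above (and from a listener on it, since users can change the setting while the app is open).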

Color

Color, shape, position, contrast, and texture are basic elements of graphic design. Colors are associated with moods and emotions that may vary across cultures; they can also highlight elements, which gives them high expressive power. However, you should never use color as the single differentiating element. If you have several items to distinguish, color can reinforce the distinction, but it shouldn’t carry it alone. The main reason is accessibility (e.g. people with color blindness would have trouble using such UIs). Besides, you usually don’t know the color limitations of your users’ displays.

In addition to color, shape also helps you recognize what each icon is (Google Drive)

The general rule is to use shape to differentiate items and color as a complement. Luminosity itself is also a differentiating element. To know if you pass this “color test”, try using your interface in shades of gray and see if anything needs fixing.
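The “shades of gray” check can be approximated numerically: convert each UI color to its gray value with the Rec. 709 luminance weights; two colors that collapse to nearly the same gray are likely indistinguishable for some users and shouldn’t be the only differentiator. A small sketch (the function name is illustrative):

```typescript
// Gray level (0–255) of an sRGB color using Rec. 709 luminance weights.
// Colors whose gray values are close together fail the "color test":
// they rely on hue alone to be told apart.
function toGray(r: number, g: number, b: number): number {
  return Math.round(0.2126 * r + 0.7152 * g + 0.0722 * b);
}

// Pure red (255, 0, 0) maps to a dark gray of 54 — far darker than it
// "feels" in color, which is exactly why the test is worth running.
```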

Contextual help

A good interface doesn’t require the user to depend on help to perform tasks. This is not to say that the application shouldn’t have a user manual or contextual help. On the contrary, help must exist and be as contextualized as possible (e.g. well-placed tooltips; contextual links to help); what you should aim for is that users shouldn’t have to consult it, especially on repeated usage. A good UI should “speak for itself”. A minimalist design, a natural flow of information, and a good information architecture, among other things, contribute a lot to this.

Avoid explicit instructions; you can convey them naturally, transparently, and implicitly. If your UI needs and relies on instructions, “you’re doing it wrong”. Well, not necessarily, but try to build the UI as if that were true.
If users rely too much on help, it may be a symptom that something is wrong with the design or even with the information architecture.

Sound

A sound effect at the right time can be helpful. It can draw attention (e.g. a new message arrived), reinforce a concept (e.g. a successful operation), or signal an error, among other uses.

Like animations, sound must always be used as a complement to the UI and never as a primary form of communication. You should never rely on sound since the user may lack a sound device or have it muted, be in a noisy place or have some kind of hearing disability.
If adding a certain sound really brings value, then it should be used with consideration. Inappropriate use can be annoying and tiring.

Mouse

Dependence on the mouse can also be negative. Typical examples are mouse-over and right-button operations, which become unusable on touch-only devices. You can use the right button, but only for the sake of flexibility and speed, and again as an alternative method of interaction. For navigation, consider keyboard shortcuts as well.
The same applies to the mouse wheel; there should be an alternative to its use. Finally, depending solely on the mouse also impacts accessibility.

Keyboard shortcuts

Keyboard shortcuts (e.g. Ctrl+C to copy, Tab to navigate) aren’t supposed to be the only means of interaction; most are for advanced users. Anything done with a shortcut should also be explicitly offered in the UI (whether hidden under menus or not).
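One way to guarantee this is to route both the visible control and the shortcut through the same action, so a shortcut can never be the only path. A minimal sketch, assuming a hypothetical command-registry pattern (the names `Command` and `CommandRegistry` are illustrative, not from any specific framework):

```typescript
// An action the UI exposes. The visible control (button, menu item)
// is the primary route; the shortcut is an optional second route.
interface Command {
  id: string;
  run: () => void;
  shortcut?: string; // for advanced users only, never the sole route
}

class CommandRegistry {
  private byId = new Map<string, Command>();
  private byShortcut = new Map<string, Command>();

  register(cmd: Command): void {
    this.byId.set(cmd.id, cmd);
    if (cmd.shortcut) this.byShortcut.set(cmd.shortcut, cmd);
  }

  // Called by a visible control's click handler.
  runById(id: string): boolean {
    const cmd = this.byId.get(id);
    if (cmd) cmd.run();
    return cmd !== undefined;
  }

  // Called by a global keydown handler; unbound keys are ignored.
  runByShortcut(keys: string): boolean {
    const cmd = this.byShortcut.get(keys);
    if (cmd) cmd.run();
    return cmd !== undefined;
  }
}
```

Because both routes resolve to the same `Command`, removing every shortcut leaves the UI fully functional, which is exactly the autonomy property described above.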

Navigation controls

You shouldn’t expect users to click the browser’s ‘refresh’ button to recover from errors or to force data updates. Also, try to avoid depending on the ‘back’ and ‘forward’ buttons for navigation. The software should be self-contained in terms of navigation, error recovery, and data updates (especially SPAs).
In native Android apps, offer an alternative to the hardware ‘menu’ and ‘back’ buttons in the app UI itself.

Permissions

In web and native apps, you can ask for hardware and software permissions (e.g. camera, microphone, location, showing notifications), but your app should still work without them (e.g. if the user declined a permission). If a feature truly depends on them, you should explain that to users.
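A sketch of that degradation logic. In a browser the state would come from the Permissions API (`navigator.permissions.query({ name: "camera" })`); here it is passed in so the decision stays testable on its own, and the names `PermState`, `FeaturePlan`, and `planCameraFeature` are illustrative:

```typescript
// The three states the Permissions API can report.
type PermState = "granted" | "denied" | "prompt";

interface FeaturePlan {
  enabled: boolean;
  message?: string; // present only when we must explain a limitation
}

// Decide how the camera-dependent feature behaves for a given state.
function planCameraFeature(state: PermState): FeaturePlan {
  if (state === "denied") {
    // Disable the feature, but explain why instead of failing silently.
    return {
      enabled: false,
      message: "Camera access was declined; photo capture is unavailable.",
    };
  }
  // "granted", or "prompt" — in the latter case, ask lazily, only at
  // the moment the user actually tries to use the feature.
  return { enabled: true };
}
```

The rest of the app never branches on the raw permission state, only on the plan, so adding a new permission-gated feature means adding one more small planner function.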

Error messages

Don’t assume users read error/warning messages in dialogs. They often dismiss them without reading the text. Don’t punish users for missing them: the app should have ways to repeat the message or display it inline (contextually) instead.
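One way to make messages repeatable is to keep them in a small log instead of showing each one exactly once: dismissing hides a notice from the inline view but never deletes it, so a notifications panel can replay it. A sketch with illustrative names (`Notice`, `MessageLog`):

```typescript
type Severity = "info" | "warning" | "error";

interface Notice {
  text: string;
  severity: Severity;
  dismissed: boolean;
}

class MessageLog {
  private notices: Notice[] = [];

  add(text: string, severity: Severity): void {
    this.notices.push({ text, severity, dismissed: false });
  }

  // Dismissing hides a notice from the inline view; it stays in history.
  dismiss(index: number): void {
    const n = this.notices[index];
    if (n) n.dismissed = true;
  }

  // Everything ever shown, so a "notifications" panel can repeat it.
  history(): readonly Notice[] {
    return this.notices;
  }

  // What should still be rendered inline, next to the relevant control.
  pending(): Notice[] {
    return this.notices.filter((n) => !n.dismissed);
  }
}
```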

Identifiers

Don’t create UI use cases that require ids (database identifiers) to be accomplished. For example, why would you show ids in a data table? Why would you make the user assign an object to another by typing in its id? Friendly titles are for humans; identifiers are for machines.

Why show ids when creating objects?
Why display ids in data tables?

Self-contained screens

As a general rule of thumb, try to design each “place in the app” in a way that users don’t need to remember what they did to get there. They could be really forgetful, or always stepping away for coffee. Each place should therefore carry the information needed to be understood by itself, allowing users to resume the task at hand.

When designing UIs, we need to decide what the means of interaction are and divide them into primary and auxiliary. Generally, animations, sounds, color, and help should be seen as auxiliary, reinforcing or complementing the main concepts and ideas. Of course, sometimes they can be of great value (e.g. inline icons giving contextual help, breadcrumbs as shortcuts), but you should at least try to design the UI without depending on them. The general guideline is to turn them all off and test how the UI “behaves”.

In web apps and sites, it’s obvious these days that you should avoid depending on the target audience’s display capabilities, like screen size and resolution, color scheme, etc. Embrace responsive and adaptive design. Also, stay away from non-standard dependencies like Flash, Java applets, and Silverlight.

Don’t assume users will see initially hidden information like tooltips, unselected tabs, dropdown options, etc. Bear in mind that certain interaction design patterns are inherently an alternative to something (auxiliary) and never the primary form of interaction. For example, breadcrumbs shouldn’t be the only form of navigation; they are only a shortcut/alternative to the main navigation.

There are items that you want users to see right away and others that they can discover later (depending on task importance). Either way, you should present items so that users don’t need to be instructed beforehand. Design the UI so that it is self-discoverable. Aim for self-explanatory, autonomous, and independent UIs.

Note that the rules above should serve only as a reference, as they may have exceptions (but to break the rules, you need to know them). Only by testing with users can you be sure they are met. It bears repeating that just five users are enough to detect most UI problems.