UI Heuristics — QA With The User In Mind (Part 2)

This is part 2 of a three-part series on UI heuristics. You can read the first part here.

4. Consistency and standards

Always take the time to check whether the system you’re testing is consistent in how it names things and actions, how it designs action buttons, where it places similarly functioning buttons, whether links appear in different places, and so on. Different parts of the system should be alike enough that users will not wonder whether they somehow got to the other side of the mirror. Sub-pages of one website should be consistent in layout and element naming, among other features.

When asking a customer to confirm or cancel, it is good to always keep these options in the same positions. If, for example, “OK” is on the right and “CANCEL” is on the left in one situation, but the two are switched in another, users are guaranteed to click the option they did not want.
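This kind of placement consistency can even be checked automatically. Below is a minimal sketch in Python: the dialog names and button orders are invented, hard-coded data (in practice they might be collected by a UI-automation tool), and the check simply flags any dialog whose button order differs from the majority.

```python
from collections import Counter

# Hypothetical inventory of confirmation dialogs and their button order,
# left to right. In a real project this data would come from the UI itself.
DIALOGS = {
    "delete-account": ["CANCEL", "OK"],
    "discard-draft":  ["CANCEL", "OK"],
    "submit-order":   ["OK", "CANCEL"],   # deliberately inconsistent
}

def inconsistent_dialogs(dialogs):
    """Return names of dialogs whose button order differs from the majority."""
    orders = Counter(tuple(order) for order in dialogs.values())
    majority = orders.most_common(1)[0][0]
    return sorted(name for name, order in dialogs.items()
                  if tuple(order) != majority)

print(inconsistent_dialogs(DIALOGS))  # → ['submit-order']
```

The same majority-vote idea works for other consistency checks — label wording, icon choice, or link styling — as long as the attribute can be collected per screen.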

On the other hand, different things should not be too alike. I once saw (as a user, not as a tester) an online shop where the basket offered two options — discard and pay — represented by an icon of paper being thrown into a trash can and an icon of the basket moving to a cash register, respectively. At least, that is what I think the designer meant because, believe me, the two icons looked almost the same!

Let’s say there is a system where people who need help with household chores can post offers and “helpers” can contact them to offer their services. Such offers can be visible to everyone or only to helpers you have already been in contact with — if everyone can see an offer, it is in ‘open’ mode. An offer also has a status: it is a ‘draft’ before publication, ‘open’ while it’s posted and ‘closed’ after a helper is chosen. Both the mode and the status can be ‘open’, so when someone refers to an open offer, do they mean the status or the mode? I sense some possible problems here…

Potential questions to ask:

  • Are the buttons representing the same actions designed consistently throughout the system?
  • Is the naming of functions/states consistent?
  • Do the conventions and already established standards help the user, or do they work against them?
  • Do buttons that were designed alike perform the same or similar actions?
  • Is underlined or bolded text always a link, or only sometimes?

This heuristic is strongly connected to the second heuristic factor — the match between the system and the real world — discussed in the first part of this series.

5. Error prevention

“Better safe than sorry” is not a new rule by a long shot, yet it continues to prove hard to implement. A good system should prevent errors from happening, but it should also have good error messages and keep users from being left dumbfounded.

Has every potential situation been considered? Are the corner cases taken care of? I admit… I like checking all of these little things and contemplating any conditions that were potentially missed during planning and coding.

If you have ever used a similar system, was there something in it that did not work properly? Check for it here as well. What if the user uses the search field? What if they click the same action button three times in quick succession? What if…

Having a happy stroll down the let’s-be-a-stupid-monkey lane may disclose some problems that need to be fixed before an inexperienced user with a slow internet connection finds them.

Also, trying to break the system is kind of fun and something we testers enjoy, don’t we?

Good error messages are also vital. A user should never be left with just “something is wrong”. If there are errors in form fields, the message should indicate what is wrong (input too long, incorrect format, etc.). If something went wrong, the system should tell the user how to get back to a previous state, how to do things differently, and so on. Test whether the user is well informed and whether the feedback they are given is understandable.
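As a sketch of what “tell the user what is wrong” looks like in code, here is a small validator that returns a specific, actionable message per field instead of a generic failure. The field names, length limit and email pattern are invented for illustration, not taken from any particular system.

```python
import re

MAX_NAME_LEN = 50  # assumed limit for the example

def validate_profile(form: dict) -> dict:
    """Return a mapping of field name -> human-readable error message."""
    errors = {}

    name = form.get("name", "")
    if not name:
        errors["name"] = "Name is required."
    elif len(name) > MAX_NAME_LEN:
        errors["name"] = (f"Name is too long ({len(name)} characters, "
                          f"maximum is {MAX_NAME_LEN}).")

    email = form.get("email", "")
    # Deliberately loose pattern: enough to explain the expected shape to a user.
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors["email"] = "Email must look like name@example.com."

    return errors  # empty dict means the form is valid

print(validate_profile({"name": "", "email": "not-an-email"}))
```

Each message names the field, states the problem, and where possible gives the user a way out — exactly what the heuristic asks a tester to look for.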

Potential questions to ask:

  • Are the core cases and corner cases taken care of?
  • What will happen if the user inputs the maximum or minimum they are allowed to?
  • Are the error messages in place?
  • Are the error messages understandable for users?
  • Is the positive and negative feedback handled well from a visual perspective?

6. Recognition rather than recall

A user often has the memory of the proverbial goldfish. Three seconds aaaand… it’s gone. To help them in this predicament, the system should let them choose from visible options rather than making them remember how to do things or where to find them.

Processes have to be tested to see whether the user is informed of the steps they have to take, as well as where they are, what they can do there and what they have to do next. If, at the end of the process, something is wrong, the user should be told where the error or mistake is and how to correct it.

If some information applies to more than just the current screen, it should either be visible in the other relevant places or easily retrievable from them.

This is, again, also connected with navigation and the user’s movement in the system. They should always be able to navigate using what they see, not what they remember.
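The “show, don’t make them remember” idea for multi-step processes can be illustrated with a toy progress banner. The step names are invented; the point is that every screen states where the user is and what comes next, so nothing has to be recalled.

```python
# Hypothetical checkout-style wizard steps, in order.
STEPS = ["Address", "Delivery", "Payment", "Confirmation"]

def progress_banner(current: str) -> str:
    """Banner telling the user which step they are on and what comes next."""
    i = STEPS.index(current)
    banner = f"Step {i + 1} of {len(STEPS)}: {current}"
    if i + 1 < len(STEPS):
        banner += f" (next: {STEPS[i + 1]})"
    return banner

print(progress_banner("Payment"))  # → Step 3 of 4: Payment (next: Confirmation)
```

When testing a real wizard, the equivalent check is simply: does every step display this kind of orientation, and does it stay accurate when the user goes back?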

Potential questions to ask:

  • If there is a process the user has to go through, are they informed at each step of where they are?
  • Can a user step back easily?
  • If there are instructions, are they presented everywhere they are needed, or only in one place the user would have to return to if they forgot a vital step or detail?
  • Are the instructions clearly visible?
  • Do the fields the user has to fill in have labels, or are they pre-filled with placeholder text that disappears when the user starts typing?

7. Flexibility and efficiency of use

Let’s be honest — when it comes to computers and mobile devices, every fraction of a second feels like an eternity. This is even more true when you are waiting for something to load on a website or in an application…

Most users get annoyed within a few seconds and will go somewhere else for what they need. This is why testing loading times is important; it may also be a good starting point for optimization by the development part of the team. Test how fast the system works on different devices and operating systems, if needed, as well as while changing the speed of your internet connection and moving around with mobile devices.
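A load-time check like this can be automated as a simple smoke test: time an operation a few times and fail when the slowest run exceeds a budget. The sketch below uses Python’s standard library only; the 3-second default budget is an assumed threshold, not a standard, and the timed operation is a stand-in (in a real test it could be an HTTP request or a browser page load).

```python
import time

def max_duration(operation, runs=3):
    """Call `operation` `runs` times and return the slowest duration in seconds."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        worst = max(worst, time.perf_counter() - start)
    return worst

def assert_within_budget(operation, budget_s=3.0, runs=3):
    """Fail with a specific message when the slowest run exceeds the budget."""
    worst = max_duration(operation, runs)
    assert worst <= budget_s, f"slowest run took {worst:.2f}s, budget is {budget_s}s"

# Stand-in operation for demonstration; replace with the real page load.
assert_within_budget(lambda: time.sleep(0.01), budget_s=1.0)
```

Measuring the worst of several runs (rather than a single run or the average) keeps the test from passing on one lucky, cached response.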

Flexibility means allowing the user some freedom in what they do — and how. It may mean letting them choose between standard and advanced options; in this case, test whether it is easy to switch these options on and off, and check that the advanced options do not give the user so much choice that the system becomes prone to errors. Maybe the user can show and hide some elements — test this on different resolutions and devices. You should also see what happens after quickly clicking something a few times…

In my experience, flexibility is also connected with accessibility. Dark grey text on a black background is hard to read for most users, for example, and nearly invisible to those with visual impairments. It might be a tester’s duty (and privilege) to help the team make a product accessible to all users — test the contrast, the heading hierarchy and even the experience of using a keyboard instead of a mouse. Tester insights may be what brings the team closer to making a perfect product!
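The contrast check, at least, does not have to be eyeballed: WCAG 2.x defines a contrast-ratio formula that is easy to implement. The sketch below follows the published relative-luminance and contrast-ratio definitions; the colour values are examples (including the dark-grey-on-black case mentioned above).

```python
def relative_luminance(rgb):
    """Relative luminance of an sRGB colour given as 0-255 integers (WCAG 2.x)."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours; ranges from 1:1 to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white: the maximum possible contrast, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # → 21.0
# Dark grey on black: well below the WCAG AA minimum of 4.5:1 for body text.
print(round(contrast_ratio((64, 64, 64), (0, 0, 0)), 2))
```

A check like this can run over a design system’s colour palette and flag every foreground/background pair that fails the AA (4.5:1) or AAA (7:1) thresholds before a single user squints at the screen.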

Potential questions to ask:

  • Is the system efficient when it comes to loading/response time?
  • Does the efficiency change much from device to device and/or system to system?
  • Is the system flexible?
  • Is the system’s flexibility in the users’ favor?
  • Can user-made changes cause errors?
  • Can the user easily both change settings and restore them?

That’s it for part 2. Join us soon for the final part!


Originally published at www.pgs-soft.com.