Evaluating UI — Part I — Evaluating without Users

GDM Nagarjuna
The New Product Manager
5 min read · Dec 13, 2022

Evaluating UI is one of the most important and critical functions of a product manager. Having an eye for detail, the ability to foresee what the customer might think, feel, see and do when presented with a user interface, and the ability to shape the interactions to match what we want them to experience is something excellent product managers carry with them.

If you are new to product management, it takes time to build this skillset, and over time it becomes natural. Even then, it is possible to miss a detail when you have ten things on your mind. So there are structured methodologies to help you evaluate a UI and ensure you give users what they want to interact with, in the way they want to interact with it.

We have broken down evaluation of UI into two parts —

I. Evaluating without users

II. Evaluating with users.

In this post, we will be talking about evaluating without users. Under this mode, we have further categorised the techniques into

  1. Quantitative Evaluation Techniques
  2. Qualitative Evaluation Techniques

We will discuss multiple methods under each category.

An important note to keep in mind before we move ahead: it is not always necessary to make everything easy for the user. Yu-kai Chou elaborates on interesting applications of this idea in his book. Evaluation is essentially measuring; whether the outcome is what you want depends on your goal for the UI.

Evaluating without users

UI evaluation with users is costly. It costs both time and users. Users who agree to be approached for inputs and discussion are very valuable. So, before presenting the interface directly to users, you can evaluate the UI with the following methods.

Quantitative Evaluation: Formal Action Analysis and Informal Action Analysis

Let us understand what we mean by action analysis.

Action — the steps a user has to carry out with an interface; Analysis — a methodical review or study

Action analysis helps us review and rethink the steps necessary to complete a task in an interface.

The idea behind formal action analysis is that it helps us accurately predict the time required to complete a task, and it can also provide insight into error rates for expert users of an interface.

Here is how we can perform action analysis

On the UI,

  • Break the task into tiny steps
  • In the keystroke-level model, those tiny steps are things like typing a character, moving your hand from the mouse to the keyboard or back, or focusing your gaze on a particular point on the screen.
  • Physical steps: keystrokes, mouse movements, refocusing the gaze. Fitts's Law states that the time required to move to a target increases with the distance to it and decreases with its size (see the sketch after this list).
  • Mental steps: retrieving an item from long-term memory, deciding among alternatives
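The Fitts's Law relationship mentioned above can be written down concretely. Below is a minimal sketch in Python using the common "Shannon" formulation MT = a + b · log2(D/W + 1); the constants a and b here are hypothetical placeholders, since in practice they are fitted to pointing-experiment data for a particular device.

```python
import math

def fitts_movement_time(distance_px, width_px, a=0.1, b=0.15):
    """Estimate the time (seconds) to move a pointer onto a target.

    Shannon formulation of Fitts's Law: MT = a + b * log2(D / W + 1).
    The constants a and b are illustrative placeholders; real values are
    fitted to data for a given device and user population.
    """
    index_of_difficulty = math.log2(distance_px / width_px + 1)
    return a + b * index_of_difficulty

# A large, nearby target is faster to hit than a small, distant one.
print(fitts_movement_time(distance_px=100, width_px=80))
print(fitts_movement_time(distance_px=800, width_px=20))
```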

Look up average step times in tables compiled from large experiments.

The end result is the total task time, which is the sum of the step times.
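As a sketch of what this looks like in practice, the snippet below sums per-step times for a short, hypothetical task. The operator times are commonly cited rough averages for the keystroke-level model; treat them as placeholders and substitute the figures from whichever tables you are working from.

```python
# Rough average times (seconds) for keystroke-level model steps.
# These are commonly cited approximations, not authoritative values.
STEP_TIMES = {
    "keystroke": 0.2,   # press one key (average typist)
    "point": 1.1,       # move the mouse pointer to a target
    "home": 0.4,        # move the hand between keyboard and mouse
    "mental": 1.35,     # mental preparation / decision step
}

def estimate_task_time(steps):
    """The predicted task time is simply the sum of the per-step times."""
    return sum(STEP_TIMES[step] for step in steps)

# Hypothetical task: open a menu and pick an item.
steps = ["mental", "home", "point", "keystroke", "point", "keystroke"]
print(f"Estimated completion time: {estimate_task_time(steps):.2f} s")
```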

It is not necessary for the total time to be as low as possible. There are situations in which you would want to target something higher than the lowest possible time; it depends on the goal of the UI. If you are trying to gamify the experience, you might want to increase the difficulty to give the user a sense of accomplishment.

Another method of evaluation is GOMS: Goals, Operators, Methods, and Selection rules

Goals are symbolic structures that define a state of affairs to be achieved and determine a set of possible methods by which it may be accomplished

Operators are elementary perceptual, motor or cognitive acts, whose execution is necessary to change any aspect of the user’s mental state or to affect the task environment

Methods describe a procedure for accomplishing a goal

Selection rules are needed because, when a goal is attempted, there may be more than one method available to the user to accomplish it

Primary application: Repetitive tasks

Benefit: Very accurate and helps identify bottlenecks

Cons: Difficult to decompose tasks accurately, and this becomes even harder for long processes. It is not useful with non-experts.
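To make these four pieces concrete, here is a minimal, hypothetical sketch of how a GOMS-style model could be represented in Python. The goal of deleting a file, its two methods and the selection rule are all invented for illustration; real GOMS modelling works at a much finer grain.

```python
from dataclasses import dataclass, field

@dataclass
class Method:
    """A procedure for accomplishing a goal: an ordered list of operators."""
    name: str
    operators: list  # elementary perceptual, motor or cognitive acts

@dataclass
class Goal:
    """A state of affairs to be achieved, with competing methods and a
    selection rule that picks between them based on context."""
    name: str
    methods: list = field(default_factory=list)

    def select_method(self, context):
        # Hypothetical selection rule: prefer the keyboard shortcut when
        # the user's hands are already on the keyboard.
        if context.get("hands_on_keyboard"):
            return next(m for m in self.methods if m.name == "shortcut")
        return next(m for m in self.methods if m.name == "menu")

delete_file = Goal(
    name="delete a file",
    methods=[
        Method("menu", ["point to file", "point to Edit menu", "click Delete"]),
        Method("shortcut", ["select file with arrow keys", "press Delete key"]),
    ],
)

chosen = delete_file.select_method({"hands_on_keyboard": True})
print(chosen.name, chosen.operators)
```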

Let's look at informal action analysis.

Informal Action Analysis

The first step is to list the basic set of actions, for example — select a menu item.

Once the list is prepared, you can ask questions about the interface such as

Can a simple task be done with a simple action sequence?

Can frequent tasks be done quickly?

How many facts and steps does the user have to learn?

Is everything in the documentation?

This helps us think about and decide whether adding a feature to an interface or a system will actually be helpful. We tend to keep adding features and controls for the user like a black hole engulfing everything in its sight. A simple, task-oriented action sequence can quickly become a symphony of menus, keystrokes and dialog boxes as the interface grows to offer everything people think the system should. More often than not, these options and features are backed by the good intention of being “time savers,” but the end user ends up spending more time just deciding which time-saver to use and which to ignore. One key thing to keep in mind is that it takes real time to choose between two ways of doing something.
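Here is a hedged sketch of that trade-off, reusing the same illustrative step times as earlier. Both action sequences and the extra "decide which way to go" step are hypothetical, but they show how a well-intentioned shortcut can end up no faster than the plain path once the cost of choosing is counted.

```python
# Illustrative per-step times (seconds); same rough values as the earlier sketch.
STEP_TIMES = {"keystroke": 0.2, "point": 1.1, "home": 0.4, "mental": 1.35}

def estimate_task_time(steps):
    return sum(STEP_TIMES[s] for s in steps)

# Two hypothetical ways to perform the same frequent task.
plain_path = ["mental", "home", "point", "keystroke"]
# The "time saver" shortcut, counting the extra mental step of choosing it.
with_shortcut = ["mental", "mental", "keystroke", "keystroke"]

print(f"Plain path:        {estimate_task_time(plain_path):.2f} s")
print(f"With the shortcut: {estimate_task_time(with_shortcut):.2f} s")
```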

A few swift calculations can empower you to convince other stakeholders in product, design and development about which features should or should not be added. Of course, the marketing argument may prevail, as it often seems that more features lead to more sales of a product, irrespective of whether or not they are useful. Yet it is also true that popular products sometimes become so complex that simpler, newer programs move in, land an upper cut and take over the tail end of the market. Eventually the newcomers may even displace the market leaders. Personal computers replacing the mainframe market is a grand-scale example of this.

References

Olson, Judith Reitman, and Olson, Gary M. “The growth of cognitive modeling in human-computer interaction since GOMS.” Human-Computer Interaction, 5 (1990), pp. 221–265.

Gray, W.D., John, B.E., Stuart, R., Lawrence, D., Atwood, M.E. “GOMS meets the phone company: Analytic modeling applied to real-world problems.” Proc. IFIP Interact’90: Human-Computer Interaction. 1990, pp. 29–34.

Kieras, D.E. “Towards a practical GOMS model methodology for user interface design.” In M. Helander (Ed.), “Handbook of Human-Computer Interaction” Amsterdam: Elsevier Science (North-Holland), 1988.

Lewis, Clayton, and Rieman, John. “Task-Centered User Interface Design: A Practical Introduction.”
