Evaluation Techniques for Interactive Systems

Ushani Anuradha
6 min read · Mar 25, 2023

--

Effective Products for Users…

We use a wide range of interactive products every day. Given their growing use, it is essential to design these products to be as usable and user-friendly as feasible. The primary focus of this article is on established techniques for design evaluation that guide the development of interactive products. Let’s first examine the nature of evaluation and its purpose.

What is Evaluation?

Even if we follow a sound design process to create usable interactive systems, we must still evaluate and test them to make sure they function as expected and satisfy user requirements. This is the role of evaluation.

Goals of Evaluation

Evaluation should take place at all stages of the design life cycle, with the findings feeding back into design changes. The benefit is being able to address issues before investing a lot of time and money in implementation.

Evaluation has three main goals:

  • Assess system functionality and usability.
  • Assess the effect of the interface on the user.
  • Identify problems related to both the functionality and usability of the design.

The system’s functionality is important because it must match the needs of the user. This includes not only providing appropriate functionality within the system, but also making it clearly accessible to the user in terms of the actions needed to accomplish the task. It also involves matching the use of the system to the user’s expectations of the task.

Evaluating the user’s experience of the interaction means considering factors such as how easy the system is to learn, its usability, user satisfaction, and the user’s enjoyment and emotional response to it. It is also important to identify areas of the design that overload the user, since these undermine an effective design.

The third goal is to identify specific problems with the design that may relate to both its functionality and its usability (depending on the cause of the problem). These are aspects of the design that, when used in their intended context, cause unexpected results or confusion among users.

Evaluation Techniques

We consider evaluation techniques under two broad headings: Expert analysis and User participation.

The evaluation techniques

Evaluation through Expert Analysis

Expert analysis is practical when the designer lacks the resources to involve users.

Evaluation through inspection by experts can be done through the following methods: Cognitive Walkthrough, Heuristic Evaluation, and Review-based evaluation.

Cognitive Walkthrough …
A walkthrough requires a detailed review of a sequence of actions. In a cognitive walkthrough, the sequence of actions refers to the steps a user must take on an interface to complete some known task.

The primary goal of the cognitive walkthrough is to assess how easy a system is to learn, and more particularly how well it supports learning through exploration. The questions asked at each step of the walkthrough address this exploratory learning.
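A walkthrough like this can be organised as a simple per-step checklist. The sketch below uses the four standard cognitive walkthrough questions; the example task, actions, and failure stories are hypothetical, and a real walkthrough would record them on paper or in a form.

```python
# Sketch of a cognitive walkthrough record: for each step in the action
# sequence, the analyst answers the four standard walkthrough questions.
QUESTIONS = [
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the action with the desired effect?",
    "If the correct action is performed, will the user see progress?",
]

def walkthrough_report(steps):
    """steps: list of (action, answers) pairs, where each answer is
    True (a success story) or a string describing the failure story."""
    problems = []
    for action, answers in steps:
        for question, answer in zip(QUESTIONS, answers):
            if answer is not True:  # a failure story was recorded
                problems.append((action, question, answer))
    return problems

# Hypothetical walkthrough of a "set an alarm" task in a clock app
steps = [
    ("Tap the '+' icon",
     [True, "Icon is unlabeled and easy to miss", True, True]),
    ("Scroll to choose the hour", [True, True, True, True]),
]
for action, question, failure in walkthrough_report(steps):
    print(f"{action}: {question} -> {failure}")
```

Each failure story becomes a concrete learnability problem for the designers to address.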

Heuristic Evaluation …

A heuristic is a guideline or general principle or rule of thumb that can guide a design decision or can be used to critique a decision that has already been made.

The general idea behind heuristic evaluation is that several evaluators independently critique a system to identify potential usability problems. It is important both to use several evaluators and to keep their evaluations independent, since different evaluators tend to find different problems.

To help evaluators find usability problems, a set of 10 heuristics is provided. These relate to general principles and guidelines, and can be supplemented with domain-specific heuristics where needed.

The set of 10 heuristics
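After the independent passes, the evaluators’ findings are usually merged and each problem rated for severity (Nielsen’s scale runs from 0 to 4). A minimal sketch of that aggregation step, with hypothetical evaluator data:

```python
from collections import defaultdict

# Sketch: merge problem lists from several independent evaluators and
# average their severity ratings (0-4 scale). The problems and ratings
# below are hypothetical.
def merge_findings(evaluations):
    """evaluations: list of dicts mapping problem description -> severity."""
    ratings = defaultdict(list)
    for evaluation in evaluations:
        for problem, severity in evaluation.items():
            ratings[problem].append(severity)
    # Sort worst-first by mean severity; also report how many found it
    return sorted(
        ((p, sum(s) / len(s), len(s)) for p, s in ratings.items()),
        key=lambda item: -item[1],
    )

evaluations = [
    {"No undo on delete": 4, "Jargon in error messages": 2},
    {"No undo on delete": 3},
    {"Jargon in error messages": 3, "Inconsistent button placement": 1},
]
for problem, mean_severity, found_by in merge_findings(evaluations):
    print(f"{problem}: mean severity {mean_severity:.1f} "
          f"(found by {found_by} of 3 evaluators)")
```

Note how no single evaluator found every problem, which is exactly why several independent evaluators are recommended.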

Model-based Evaluation…

Model-based evaluation uses a model of how a proposed system would be used to obtain predicted usability measures by calculation or simulation. These predictions can replace or supplement empirical measurements obtained from user testing.
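A well-known example from the GOMS family is the Keystroke-Level Model (KLM), which predicts expert task time by summing standard operator times. The operator times below are the commonly cited Card, Moran and Newell estimates; the task encoding is a hypothetical illustration.

```python
# Keystroke-Level Model (KLM) sketch: predict expert task time by
# summing standard operator times (seconds, commonly cited values).
OPERATORS = {
    "K": 0.2,   # press a key
    "B": 0.1,   # press or release a mouse button
    "P": 1.1,   # point with the mouse to a target
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(encoding):
    """encoding: string of operator codes, e.g. 'HMPBB'."""
    return sum(OPERATORS[op] for op in encoding)

# Hypothetical task: delete a file by clicking its icon, then pressing
# Delete -> home to mouse, think, point, click (press+release),
# home back to keyboard, keystroke.
print(f"Predicted time: {predict_time('HMPBBHK'):.2f} s")
```

Such a prediction can be computed for two competing designs before either is built, which is precisely the appeal of model-based evaluation.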

Evaluation through User Participation

The techniques we’ve considered so far are no substitute for actual usability testing with the people the system is intended for: the users. In this section we look at a number of different approaches to evaluation through user participation.

Several techniques can be used for evaluation with user participation. Below I outline those techniques and the styles of evaluation.

Styles of Evaluation…

There are two distinct evaluation styles:

Laboratory studies : These are conducted under controlled laboratory conditions; users are taken out of their normal environment into a dedicated testing space.

Field studies : These are conducted in the field; designers and evaluators go into the user’s own environment to perform the evaluation.

Empirical Methods…

One of the most powerful ways to evaluate a design or an aspect of a design is to use a controlled experiment. Any experiment has the same basic format. Within this basic model, there are a number of factors that are important to the overall reliability of the experiment, which must be carefully considered in experimental design. These include the participants selected, the variables tested and manipulated, and the hypothesis tested.

Participants: In evaluation experiments, participants should be selected to match the intended user population as closely as possible.

Variables: Experiments manipulate and measure variables under controlled conditions in order to test the hypothesis. There are two main types: those that are manipulated (independent variables) and those that are measured (dependent variables).

Hypotheses: A hypothesis is a prediction about the outcome of an experiment, framed in terms of the independent and dependent variables.
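These pieces fit together in the analysis of the results. As a minimal stdlib-only sketch, suppose the independent variable is the interface (A vs B), the dependent variable is task completion time, and the hypothesis is that B is faster; a Welch t statistic compares the two groups. The timing data is hypothetical.

```python
from math import sqrt
from statistics import mean, variance

# Sketch: independent variable = interface (A vs B); dependent
# variable = task completion time in seconds. Welch's t statistic
# measures how far apart the group means are relative to their spread.
def welch_t(a, b):
    standard_error = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / standard_error

times_a = [14.2, 15.1, 13.8, 16.0, 14.9]  # hypothetical, interface A
times_b = [12.4, 11.9, 13.0, 12.2, 12.8]  # hypothetical, interface B

t = welch_t(times_a, times_b)
print(f"t = {t:.2f}")  # a large |t| casts doubt on the null hypothesis
```

In a real study the t value would be compared against a critical value (or converted to a p-value) for the appropriate degrees of freedom before drawing any conclusion.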

Observational Techniques…

A popular way to gather information about the actual use of a system is to observe users interacting with it. Usually they are asked to complete a predetermined set of tasks, although if they are observed in their workplace they may simply be carrying out their normal duties. The evaluator watches and records the users’ actions.

Observational techniques include Think Aloud, Cooperative Evaluation, Protocol Analysis, Automated Analysis, and Post-task Walkthroughs. Let’s go through them one by one.

Think Aloud: A form of observation where users are asked to talk through what they are doing as they are observed; for example, describing what they believe is happening, why they take an action, and what they are trying to do. Think aloud provides insight into interface issues and can be used for evaluation throughout the design process.

Cooperative evaluation : A variation of think aloud that encourages users to see themselves as partners in the evaluation rather than simply as experimental subjects. When an issue arises, the evaluator can question the user to learn more about their actions.

Protocol analysis : Methods for recording user actions, including paper and pencil, audio and video recording, computer logging, and user notebooks.
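Of these, computer logging is the easiest to automate: the system itself records each user action with a timestamp, producing a protocol that can be replayed or analysed later. A minimal sketch (the class and event names are hypothetical):

```python
import time

# Minimal sketch of computer logging for protocol analysis: each user
# action is recorded with a monotonic timestamp so the session can be
# reconstructed later. Event names below are hypothetical.
class InteractionLog:
    def __init__(self):
        self.events = []

    def record(self, event, detail=""):
        self.events.append((time.monotonic(), event, detail))

    def dump(self):
        """Return events with timestamps relative to the first event."""
        if not self.events:
            return []
        start = self.events[0][0]
        return [(round(t - start, 3), event, detail)
                for t, event, detail in self.events]

log = InteractionLog()
log.record("open_dialog", "Save As")
log.record("keypress", "report.txt")
log.record("click", "Save button")
for offset, event, detail in log.dump():
    print(f"+{offset}s {event}: {detail}")
```

In practice such a log is synchronized with video or audio recordings, which is exactly what the automated analysis tools described next support.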

Automated analysis : Because manual protocol analysis is time-consuming and tedious, automated analysis tools have been developed. These offer a means to edit and annotate video, audio and system logs, and to synchronize them for detailed analysis. Examples include EVA (Experimental Video Annotator), the Workplace Project at Xerox PARC, and DRUM, which also provides video annotation and tagging facilities.

EVA: an automatic protocol analysis tool. Source: Wendy Mackay

Post-task walkthroughs: The participant reflects back on their actions after the event, either immediately or after an interval. The delay has the advantage of giving the evaluator time to formulate questions and focus on specific events.

Query Techniques…

Query techniques ask the user about the interface directly. Their advantage is that they capture the user’s point of view at first hand and may reveal issues the designer had not considered. There are two main types: interviews and questionnaires.

Evaluation through Monitoring Physiological Responses…

One issue with most evaluation techniques is that we depend on users reporting what they are doing and how they are feeling. What if we could measure these things directly? Objective usability testing aims to measure users’ reactions and activities directly as they use a computer system.

The two categories currently receiving the most attention are: physiological measurements and eye tracking.

Eye tracking works by measuring infrared light reflected from the eye, from which the participant’s point of gaze can be computed; patterns of fixations and saccades give insight into the participant’s cognitive load and state. Physiological measurement involves monitoring the body’s responses: pupil dilation, breathing rate, heart rate, skin color, perspiration, and blood sugar level can all be observed.
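An eye tracker outputs a stream of gaze coordinates, and a common first analysis step is grouping samples into fixations. Below is a simplified sketch of dispersion-threshold fixation detection (in the style of the I-DT algorithm); the thresholds and gaze samples are hypothetical, and real implementations handle the sliding window more carefully.

```python
# Simplified dispersion-threshold fixation detection: a run of gaze
# samples counts as a fixation if it lasts long enough and the points
# stay within a small spatial window. Thresholds/samples hypothetical.
def detect_fixations(samples, max_dispersion=25, min_samples=4):
    """samples: list of (x, y) gaze points at a fixed sampling rate."""
    fixations, window = [], []
    for point in samples:
        window.append(point)
        xs = [x for x, _ in window]
        ys = [y for _, y in window]
        # Dispersion = horizontal spread + vertical spread of the window
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            if len(window) - 1 >= min_samples:
                fixations.append(window[:-1])  # close the fixation
            window = [point]  # the new point starts a fresh window
    if len(window) >= min_samples:
        fixations.append(window)
    return fixations

# Six tight samples (a fixation) followed by a jump (a saccade)
gaze = [(100, 100), (102, 101), (101, 99), (103, 100), (100, 102),
        (101, 101), (300, 250), (301, 249)]
print(f"Fixations found: {len(detect_fixations(gaze))}")
```

Fixation counts and durations over interface regions are the raw material for the cognitive-load interpretations mentioned above.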

In this article, ‘Evaluation Techniques for Interactive Systems’, I have discussed what evaluation is, the goals of evaluation, and evaluation methods under two main categories: evaluation through expert analysis and evaluation through user participation. To learn more, you can refer to further learning materials alongside the reference below.

References…

Dix, A., Finlay, J., Abowd, G. D. and Beale, R., Human–Computer Interaction, Third Edition, Chapter 9.


Ushani Anuradha

BSc.(Hons) Software Engineering Undergraduate | University Of Kelaniya.