Human-Computer Interaction Course in Short, week 4 — Heuristic Evaluation

Igor Goncharov
4 min read · Mar 24, 2016

--

Key thoughts from Scott Klemmer’s HCI course on Coursera, October 2015.

Heuristic Evaluation

Ways to evaluate:

  • Empirical: assess with real users;
  • Formal: models and formulas to calculate measures;
  • Automated: software measures;
  • Critique: expertise and heuristic feedback.

When to get design critique:

  • Before user testing;
  • Before redesign;
  • When you know there are problems, but you need evidence;
  • Before release.

Begin review with a clear goal.

Heuristic evaluation (developed by Jakob Nielsen) helps find usability problems in a design. A small set (3-5) of evaluators examines the UI; the evaluation can be performed on a working UI or on sketches.

  • Independently check for compliance with usability principles (“heuristics”);
  • Different evaluators will find different problems;
  • Evaluators can only communicate afterwards.

Nielsen’s ten heuristics

1. Visibility of system status

  • Time: feedback depends on response time (see the sketch after this list)
    < 1 s: just show the outcome
    ~ 1 s: show that there is some activity going on (spinner)
    > 1 s: show fractional progress (progress bar) and time remaining
  • Space (e.g. allocated),
  • Change (e.g. save dialog),
  • Action,
  • Next steps,
  • Completion.
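
A minimal TypeScript sketch of the response-time guideline above (the function names and the exact millisecond cut-offs are illustrative assumptions, not part of the course material): choose the kind of feedback based on how long the operation is expected to take.

```typescript
// Stand-in feedback functions; in a real app these would drive the UI.
function showSpinner(): void {
  console.log("spinner: working…");
}

function showProgressBar(fractionDone: number): void {
  console.log(`progress: ${Math.round(fractionDone * 100)}%`);
}

// Choose feedback based on how long the operation is expected to take,
// following the rough thresholds from the list above.
async function runWithFeedback<T>(
  operation: (onProgress: (fractionDone: number) => void) => Promise<T>,
  expectedMs: number
): Promise<T> {
  if (expectedMs < 1000) {
    // Under ~1 s: no extra feedback needed, just show the outcome.
    return operation(() => {});
  }
  if (expectedMs < 3000) {
    // Around a second or two: show that something is happening.
    showSpinner();
    return operation(() => {});
  }
  // Longer operations: show fractional progress (and ideally time remaining).
  return operation((fraction) => showProgressBar(fraction));
}

// Example: a long operation that reports progress as it goes.
runWithFeedback(async (onProgress) => {
  for (let i = 1; i <= 4; i++) {
    await new Promise((resolve) => setTimeout(resolve, 1000));
    onProgress(i / 4);
  }
  return "done";
}, 4000).then((result) => console.log(result));
```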

2. Match between system and world

  • Familiar metaphors,
  • Language,
  • Categories,
  • Choices.

3. User control and freedom

  • Freedom to undo,
  • To explore.

4. Consistency and standards

  • Consistent layout,
  • Names,
  • When it’s unclear which category users should pick, ask them: provide an ‘Other’ category and learn from what they enter,
  • Choices,
  • Keep choices clear: rather than just ‘OK’ and ‘Cancel’, provide meaningful labels, e.g. ‘Keep .do’ and ‘Use .pdf’.

5. Error prevention

  • Prevent data loss,
  • Clutter,
  • Confusing flow,
  • Bad input,
  • Unnecessary constraints.

6. Recognition rather than recall

  • Avoid codes,
  • Avoid extra obstacles,
  • Recognition with preview.

7. Flexibility and efficiency of use

  • Flexible shortcuts,
  • Defaults with options,
  • Ambient information,
  • Proactivity,
  • Recommendations,
  • Keep it relevant.

8. Aesthetic and minimalist design

  • Core info above the fold,
  • Signal-to-noise ratio,
  • Minimalist login,
  • Redundancy,
  • Functionality.

9. Help users recognize, diagnose and recover from errors

  • Make problem clear,
  • Provide a solution,
  • Show a path forward,
  • Propose an alternative,
  • Recognize errors.

10. Help and documentation

  • Provide examples for learning and choices,
  • Guide the way,
  • Show the steps,
  • Help point things out,
  • Provide more information,
  • Help clearly,
  • Help people have fun.

Evaluators’ process:

  • Step through design several times
    examine details, flow and architecture
    consult list of usability principles
    etc
  • Principles
    Nielsen’s heuristics
    Category-specific heuristics (design goals, competitive analysis, existing designs)
  • Use violations to redesign/fix problems

A single evaluator finds about 33% of the problems; five evaluators find about 75%. Adding more is not cost-effective.

Severe problems are found more often.
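
To see why the returns diminish, here is a toy TypeScript calculation (a simplifying assumption of uniform detection rates, not Nielsen's actual data): if each evaluator independently finds any given problem with probability p ≈ 0.33, the expected share found by k evaluators is 1 - (1 - p)^k. Real problems differ in how easy they are to detect, so empirical curves (like the ~75% figure above) rise more slowly, but the shape is the same: quick gains at first, then a plateau.

```typescript
// Toy model: each evaluator independently finds any given problem
// with probability p, so k evaluators find a share of 1 - (1 - p)^k.
function expectedCoverage(p: number, evaluators: number): number {
  return 1 - Math.pow(1 - p, evaluators);
}

const p = 0.33; // "one evaluator finds roughly a third of the problems"
for (let k = 1; k <= 10; k++) {
  const pct = Math.round(expectedCoverage(p, k) * 100);
  console.log(`${k} evaluator(s): ~${pct}% of problems found`);
}
// Coverage climbs fast for the first few evaluators and then flattens,
// which is why ~5 evaluators is usually quoted as the sweet spot.
```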

Heuristics vs User-testing:

  • HE is faster (1-2 hours per evaluator)
  • HE results come pre-interpreted
  • UT is more accurate
    UT takes into account actual users and tasks
    HE may miss problems and find ‘false positives’
  • Alternating HE and user testing is valuable
    Finds different problems
    Doesn’t waste participants

Phases of heuristic evaluation:

  1. Pre-evaluation training
  2. Evaluation
  3. Severity rating
  4. Debriefing

How to conduct an HE:

Each evaluator should take at least two passes:

  • first, to get a feel for the flow and scope of the system,
  • second, to focus on specific elements.

If the system is walk-up-and-use or the evaluators are domain experts, no assistance is needed; otherwise you might supply the evaluators with scenarios.

Each evaluator produces a list of problems:

  • explain why, with reference to a heuristic or other information,
  • be specific and list each problem separately.

Why a separate listing for each violation:

  • the problematic aspect might be repeated elsewhere,
  • it may not be possible to fix all of the problems.

Where problems might be found:

  • single location in UI,
  • two or more locations that need to be compared,
  • problem with overall structure of UI,
  • something is missing
    this can be ambiguous in early prototypes, so clarify in advance
    sometimes features are implied by design docs but haven’t been implemented yet; be lenient about those

Severity rating:

  • Independently estimate after review,
  • Allocate resources to fix problems,
  • Estimate need for more usability efforts,
  • Severity combines frequency, impact, persistence.

Rating system:

  • 0 — don’t agree that this is a usability problem
  • 1 — cosmetic problem
  • 2 — minor usability problem
  • 3 — major usability problem; important to fix
  • 4 — usability catastrophe; imperative to fix

Example of HE:

Issue: Unable to edit one’s weight
Severity: 2
Heuristics violated: User control and freedom
Description: When you open the app for the first time, you have to enter your weight, but you cannot update it afterwards. Being able to edit it would be useful if you mistyped your weight, or if your weight has changed a year or two after you first started using the app.
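
A finding like this can be recorded in a consistent structure so that independent severity ratings are easy to aggregate at the debriefing. A minimal TypeScript sketch, with field names of my own choosing (not from the course):

```typescript
// Severity values from the 0-4 rating scale above.
type Severity = 0 | 1 | 2 | 3 | 4;

interface Finding {
  issue: string;
  heuristicsViolated: string[];
  description: string;
  // One independent rating per evaluator, collected after the review.
  severityRatings: Severity[];
}

// The example finding above, encoded as data.
const weightFinding: Finding = {
  issue: "Unable to edit one's weight",
  heuristicsViolated: ["User control and freedom"],
  description:
    "Weight is entered on first launch but cannot be updated later, " +
    "e.g. after a typo or after it changes over time.",
  severityRatings: [2],
};

// Mean severity across evaluators, useful for prioritizing fixes.
function meanSeverity(finding: Finding): number {
  const sum = finding.severityRatings.reduce((total, r) => total + r, 0);
  return sum / finding.severityRatings.length;
}

console.log(`${weightFinding.issue}: severity ${meanSeverity(weightFinding)}`);
```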

Debriefing:

  • Conduct with evaluators, observers, and development team members,
  • Discuss general characteristics of UI,
  • Suggest potential improvements to address major usability problems,
  • Brainstorm solutions,
  • Dev team rates the effort required to fix each problem.
