Using Heuristic Evaluations to Improve Property & Casualty Insurance

Ursula Wright
Published in Guidewire Design
Mar 22, 2021 · 7 min read

Guidewire UX created a standardized process for conducting heuristic evaluations on our insurance-related applications. The goal was to give evaluators a consistent way to document and track issues while minimizing the time and effort needed to identify and solve the unique concerns of users in Property & Casualty insurance.

Heuristic evaluations are a usability inspection method used to identify the friction points that end-users, like commercial agents, business analysts, and claims adjusters, experience when interacting with our solutions. A small, internal team of evaluators works on behalf of our end-users to examine and weigh a solution’s interface against a set of established design principles, called heuristics. The result is a list of issues that violate these heuristics. Each issue is assigned a severity based on its impact on task outcomes and the overall user experience. Severity is rated on a 4-point scale ranging from minor irritant (1) to catastrophic (4). While subjective in nature, the ratings provide a consistent way to measure and compare the progress of our applications over time.
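
To make this concrete, here is a minimal sketch of how an issue record and the severity scale might be represented in code. The structure and field names are ours for illustration, not Guidewire’s actual template, and only the endpoint labels, minor irritant (1) and catastrophic (4), come from the scale described above.

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    """The 4-point severity scale; the two middle labels are assumed."""
    MINOR_IRRITANT = 1  # cosmetic problem; fix if time allows
    MODERATE = 2        # assumed label: low-priority usability issue
    MAJOR = 3           # assumed label: impedes task completion
    CATASTROPHIC = 4    # blocks the task entirely; must be fixed

@dataclass
class Issue:
    """One heuristic violation logged by an evaluator."""
    task: str          # e.g., "Document first notice of loss (FNOL)"
    heuristic: str     # the design principle the issue violates
    description: str   # what the evaluator observed
    severity: Severity
```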

Teams use this approach to establish a shared understanding of our end-users’ experience in our solutions and to gather baseline measures for gauging how an application evolves. Standardizing this process enables our teams to document heuristic violations consistently, rank the severity of violations with more confidence, and prioritize potential issues to maintain focus.

Figure-1 outlines the general process for our heuristic evaluations. A brief description of each step is below.

Figure-1: General Heuristic Evaluation Process

General Process for Heuristic Evaluations

Planning Meeting

The process begins with an initial meeting of the designer(s), researcher, product manager, and other stakeholders to discuss the insurance solution. The conversation covers product areas to explore, previous research, personas, deliverables, and how the team will use the information once it’s available.

During these conversations, we also agree on the tasks to evaluate and the set of design principles, or heuristics, to apply. Our design principles derive from Jakob Nielsen’s usability heuristics for user interface design. However, we’ve modified some of them to address the needs of our end-users: professionals in the Property and Casualty insurance industry.

For example, heuristics associated with reducing visual clutter and cognitive load may not be helpful for some insurance professionals. Applying these heuristics to tasks for claims professionals may actually complicate their workflow, particularly when documenting first notice of loss (FNOL) or a policyholder’s account of an accident over the phone. Our research indicates some claims adjusters prefer densely populated screens because they are easier to scan, helping them locate input fields more quickly when entering data. The adjusters in our studies preferred entering data on one ‘tight’ screen rather than navigating input locations across multiple screens.
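
As a hypothetical illustration, a modified heuristic set could be captured in a simple registry like the one below. The wording and structure are ours for the sketch, not Guidewire’s actual heuristic definitions; the density note reflects the adjuster research described above.

```python
# Hypothetical registry pairing Nielsen-derived heuristics with
# P&C-specific notes. Wording is illustrative, not an official set.
HEURISTICS = {
    "visibility_of_system_status":
        "Keep users informed about what is going on.",
    "match_between_system_and_real_world":
        "Speak the user's language: policy, claim, FNOL.",
    "aesthetic_and_minimalist_design":
        "Modified for P&C: dense, 'tight' screens can aid scanning and "
        "data entry for claims adjusters, so density alone is not a "
        "violation.",
    # ...remaining heuristics omitted for brevity
}
```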

After the team aligns on goals, evaluation criteria, templates, and deliverables, the discussion transitions to identifying potential evaluators. The host designer and lead researcher select a diverse, cross-functional team of four to six evaluators based on heuristic knowledge and evaluation experience, product specialty, end-user knowledge, and time availability. This team includes designers, researchers, product managers, domain experts, and others from around the company. These evaluators function as surrogates for the end-users. Their role, individually and collectively, is to identify and document heuristic violations in the application.

Kick-off Meeting

Once the evaluators are on board, the researcher schedules the kick-off meeting. During this meeting, evaluators hear a presentation about the application, end-user characteristics and task flows, and previous research if available. Next, the team reviews how to identify issues that violate the heuristics and receives the set of heuristics to be used, the documentation templates and other tools, and contacts for guidance. This promotes consistency in process, technique, and documentation throughout the evaluation. Evaluators can reference information from the kick-off meeting on the project Miro board, as shown in Figure-2.

Figure-2: Example of a completed template used in planning for a heuristic evaluation kick-off meeting

Individual Evaluations

Next, the evaluators leave the kick-off meeting ready to individually walk through the application, performing critical tasks on behalf of the user. Evaluators record the issues they find on the documentation template (partially shown in Figure-3) and assign the appropriate heuristic and severity rating to each issue.

Figure-3: A partial view of a completed template used to document evaluations for individual tasks.

Team Debrief

At the appointed time, the evaluators reconvene to discuss their findings and agree on heuristic assignments and severity ratings. The ratings help prioritize issues and identify areas for potential remediation. Examples of completed templates used to document the final severity ratings are shown in Figure-4a and Figure-4b. During the team debrief, discussions also cover recommendations for how to eliminate issues or mitigate their severity.

Figure-4a: Example of a completed template documenting the severity of issues within a task.
Figure-4b: Example of a completed template documenting overall severity for an individual task.
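
To sketch how the consolidated ratings might feed prioritization, here is one assumed approach, reusing the hypothetical Issue and Severity types from the earlier sketch. The rounded-median starting point is our assumption; in practice, the team agrees on final ratings through discussion.

```python
from statistics import median

def suggested_consensus(ratings):
    """Suggest a starting severity from individual evaluator ratings.

    Assumption: a rounded median seeds the debrief discussion; the
    team's conversation, not this function, sets the final rating.
    """
    return Severity(round(median(ratings)))

def prioritize(issues):
    """Order agreed issues most-severe first to focus remediation."""
    return sorted(issues, key=lambda issue: issue.severity, reverse=True)
```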

Stakeholder Report Out

The lead designer and researcher host a stakeholder meeting to present the findings to the broader team using the report-out template. The discussion covers stakeholder comments and feedback, as well as the next steps for improving the application.

Improve Application

The designer, product manager, and others work to implement recommendations for prioritized issues. Then a new evaluation process starts to determine the effectiveness of the changes. Evaluators use the templates, heuristics, severity ratings, and other resources on the project Miro board to start the evaluation.

Using a consistent process for conducting heuristic evaluations as one of our testing methodologies has had the following benefits:

  • The ability to quickly gather data internally, helping us learn more about potential issues and establish recommendations to resolve them within shorter development cycles;
  • A scoring system based on issue severity, helping us prioritize where to make improvements and remain focused on the most critical issues for our end-users;
  • The ability to better plan and prepare for additional research based on issues found from the evaluations. For example, complex problems related to sequencing and workflow often require qualitative study to explore the root causes; and
  • The flexibility to evaluate our solutions more frequently to gauge our progress, comparing the most recent list of issues against earlier findings to track improvement over time (a minimal sketch of such a comparison follows this list).
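
Here is one minimal, assumed way that round-over-round comparison could work. Matching issues on a (task, heuristic, description) key is our simplification for the sketch, not the team’s actual matching rule.

```python
def compare_rounds(previous, current):
    """Compare Issue lists from two evaluation rounds.

    Assumption: an issue is "the same" across rounds when its
    (task, heuristic, description) key matches exactly.
    """
    prev_keys = {(i.task, i.heuristic, i.description) for i in previous}
    curr_keys = {(i.task, i.heuristic, i.description) for i in current}
    return {
        "resolved": prev_keys - curr_keys,   # fixed since last round
        "new": curr_keys - prev_keys,        # found this round
        "recurring": prev_keys & curr_keys,  # still present
    }
```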

The less obvious benefits are:

  • Opportunities for team members to work cross-functionally and learn about different product areas and end-users. Working with teams outside of their normal day-to-day gives evaluators a chance to expand their product knowledge and understand how various applications work together to contribute to the overall business strategy; and
  • The ability to identify and correct problems in the interface before talking with actual users. While some issues found during the evaluations require time and deeper exploration to understand and resolve, others are easier to address. Resolving the more straightforward issues helps us remove distractions from our applications before engaging with our end-users directly. Less distracting designs help us maximize the limited time we have available with harder-to-find study participants like pricing actuaries, commercial underwriters, or data scientists.

While most of our experiences with heuristic evaluations have been positive, we’ve had some challenges as well. The biggest challenge is the deep domain knowledge needed to ensure the issues we identify are actual problems for our end-users.

Guidewire Software produces a wide range of B2B insurance applications. Some of our solutions are complex and niche, built to support the needs of our global customer base of insurers and their employees: insurance professionals like solutions architects, developers, and business analysts. We realize some team members may not have the knowledge needed to walk in the shoes of these unique, specialized end-users during the evaluations. So, we involve our domain experts and internal industry analysts throughout the evaluation process. These experts and analysts bring decades of experience in various insurance-related roles. They help teams understand users more thoroughly, including their workflows, terminology, information needs, and decisions. In addition, their participation minimizes the risk of flagging issues that are not real concerns for our users.

Overall, creating a standardized process for conducting heuristic evaluations helps us quickly identify, prioritize, and consistently track issues. It allows us to communicate impactful information within shorter development cycles. The evaluations are a valuable tool in our UX toolbox and a crucial part of designing solutions. The knowledge we gain sheds light on how well Guidewire applications meet their intended objectives and how successful we are in creating solutions to address user needs.

Interested in working for a dynamic company that is revolutionizing the cloud space for P&C insurers? Check out our open positions and follow Guidewire UX on Medium.
