Usability Testing Complex Systems on Domain Experts

Robert Bronkebakken
The Aize Employee Blog
5 min read · Sep 27, 2022

--

One of the main reasons I joined Aize was the opportunity to work in a new industry, with different types of users and much greater system complexity.

I was assigned to work on visual construction planning, an Aize functionality that did not exist yet. There were no previous products or competitors to look to, really. It's just us (and maybe some competitors, developing in parallel) on the frontier.

The starting point

When making a specialised tool like this, you would like to think that you (A) have Domain Experts in charge of the project and (B) have a pool of other Domain Experts with whom you can test your hypotheses and the solution.

Luckily, we have both. If you have neither, the learning curve might be very steep!

Make sure you understand what you are trying to solve!

Understanding the product and the process is critical. Coming from a background where I needed to understand the whole process before I could contribute to a good solution, this really gave me a headache. It took months to begin to understand the contours of the actual process.

A lot of my contributions consist of simplifying the user journey. And if I don't understand what problem our tool is trying to solve for users, I might be simplifying the wrong things.

Our project team went to the yard to see what we were actually trying to optimise a solution for. Our functional architect showed us the ropes …

Four men in safety equipment at the Egersund yard in Norway, smiling at the camera.
Team members on a huge structure.

At the yard, we saw metal plates cut and welded together into bigger and bigger modules. Like the one we are pictured standing on: a five-story-tall module, itself part of something even bigger.

From our daily view of the solution on a computer screen, looking at a 3D model that seems relatively small, it's impossible to imagine how huge some of these structures are. You have to see it to believe it.

What are the basics?

Doing your standard research on the matter through the principles of UX is never wrong, but it usually covers only the basics. You also need to be specific: what is the industry like, how revolutionary is this new tool, and who are the users?

I found great insights from NNgroup on working with complex systems: when to stray from the beaten path, how to adapt to the industry, and why to test on more than one group of users.

Then I took a step back…

As a User Experience Professional, or any other professional working digitally, you conduct your daily work within a range of a few essential tools. It could be Figma or Sketch, Notion or Word, FigJam or Miro, etc.

What defines those tools? What are those tools to you?

They can be quite a big part of your daily work!

And they are just that: tools. They are hard skills, and they should be hard skills that enhance your soft skills and simplify your workflow.

Get feedback from a mix of fresh eyes and trained eyes.

Recruit users and domain experts from outside the project as well. And if you can, spread the users across your test plan: start with just enough, iterate, and bring in more users later. The newcomers see the tool with fresh eyes and help verify its usability. With the users already familiar with the system, you can talk about the next tasks. What are their expectations? One step at a time.

Set the Context

Let's say you are conducting a usability test of Figma or Sketch. You would need to test on Domain Experts: not necessarily expert users, but users with a deeper understanding of what a tool like Figma can do and why it exists.

You would also need to set a scene for doing a specific task, much the same as with standardised Usability Testing. In addition, you would need to make sure the user has the correct details and insight to complete the test. You would need to test a piece of the puzzle in a way that makes sense to everyone.

Introduction with a Presentation and a demo

At Aize, we conducted a crash-course test of our new digital tool on Method Engineers who had a profound understanding of how to solve the given task. They had done it many times over, but in a different, much more manual way than the one we wanted them to test.

It was therefore crucial that all the Method Engineers had the same understanding of the plan, the construction method, and the project they would contribute towards.

We made an introductory presentation to ensure they shared the same base information.

Close-up of a digital 3D model from Visual Construction Planning.
A snippet of the 3D model in Visual Construction Planning at Aize.io

And since we were testing a flat prototype in Figma for the first usability test, we also needed to show them how they would navigate the 3D model once the application was ready. All this before they had done any tasks.

The Prototype test

As mentioned above, this was a crash test to see whether they understood our hypothesis for a sound navigation system in the app: whether they got the flow we were trying to create.

Typically, they would receive onboarding and training on an app before starting; here, they had gotten none. They knew the project and their job, but not the tool.

We built a task-driven workflow to make the context as natural as possible. Each user received an email with a task from the Project Lead, containing basic information about the task and a link to get started.

From there, they were on their own. And most of them did great.

Getting valuable feedback

When the users had completed their tasks, several of them were disappointed that our flow didn't go further; they wanted to finish the job. They were very excited about what we were trying to build and the problems we were trying to solve for them, even though it was just a prototype in Figma.

They followed up with a range of questions about upcoming functionality and how the app would work with the 3D model. They had ideas for making it even better and discussed their current pain points in great detail.

That was massive encouragement for us to keep going. The fun part was seeing that most of their functionality input already existed as user stories in our backlog. But they also brought new perspectives that enriched our active discussions.

Transparency and involvement are key

Use transparency to your advantage, and let the real users help you beyond the actual usability test, too.

When you summarise your findings and verify your hypotheses, include your test users. Show them what you found. Mix them in with the project's domain experts. Let them know which hypotheses were verified and which problems you need to fix. Over time, they will see that their feedback helps shape the application.

This inclusion will create many enthusiasts, and some of them will most likely become superusers.
