Designing for Very Different Users — Justice Impact Network (Part II)

Ariadne Brazo · Published in Pro Bono Net · Sep 29, 2022 · 10 min read

Welcome to Part II of my two-part series on how we designed JusticeImpactNetwork.org, a project of the Justice Impact Alliance co-designed with Pro Bono Net. The Justice Impact Network brings together justice-impacted individuals and families, students, and advocates to help impacted individuals and families find and use the resources they need to navigate the system, access the full power of the law, and unlock justice.

In Part I, I discussed our design process, including how we developed user personas and came to understand the needs of our users. The three user groups of justice-impacted individuals and families, students, and advocates represented three very different sets of users. Now, in Part II, we dig into the most distinctive part of this study: having community leaders conduct the usability tests. I’ll share how we decided who to test, how we trained moderators who were totally new to web design, and what we learned from the experience.

A screenshot of the impacted people portal on JusticeImpactNetwork.org. It shows three trial stages for the user to choose from: Pre-trial, trial, and post-trial.
The landing page for justice-impacted users.

Deciding who to test

Usability testing, which I’ve written about in the past, is exactly what it sounds like: it is testing the usability of a product. The industry standard is to test ~5 people per user group, per device. Since we were testing two user groups and designing for both desktop and mobile, that added up to 20 tests!

  • Mobile — 5 justice-impacted, 5 students
  • Desktop — 5 justice-impacted, 5 students
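If it helps to see that arithmetic spelled out, here is a minimal sketch (illustrative Python, not part of our actual tooling) that enumerates the test matrix of roughly five testers per user group, per device:

```python
from itertools import product

# Illustrative only: enumerate the usability test matrix described above.
user_groups = ["justice-impacted", "students"]
devices = ["mobile", "desktop"]
testers_per_cell = 5  # rule of thumb: ~5 testers per user group, per device

test_plan = [
    {"group": group, "device": device, "tester": n + 1}
    for group, device in product(user_groups, devices)
    for n in range(testers_per_cell)
]

print(len(test_plan))  # 20 tests: 2 groups x 2 devices x 5 testers each
```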

Considering how long the probono.net platform has been serving advocates and attorneys, we decided to deprioritize testing for them. We had done recent usability testing with that user group on very similar designs, so we had reasonable confidence that we could move forward without investing in testing for them. That doesn’t mean we know the needs of advocates perfectly forever; it just means that, within the constraints of this project, this was the group least likely to trip us up in the here and now.

Students represented a newer group that encompassed many different kinds of students, so we felt they needed to be tested. Finally, we absolutely felt we needed to test the usability of the design for justice-impacted individuals and families. Based on our user research, this group was likely to include a very wide range of comfort with, and access to, digital technology.

Recruitment and digital technology comfort levels

The first step was recruitment. Once we identified where we would send out the call, we developed a sign-up form that was part form, part survey. It collected prospective testers’ contact information but also asked a few questions. The most important question was about comfort with digital technology. We found that justice-impacted testers reported a very wide range of comfort with digital technology: some had very low comfort and others very high. Students reported a much narrower, more consistent range.

The chart below shows how widely those 10 users in each group varied in their reported comfort with digital technology. To be clear, this isn’t the average level of comfort; it’s the range between the highest and lowest levels reported.

A bar chart that shows levels of comfort with different devices: desktop, phone, tablet. Justice-impacted users showed a wider range of comfort levels than students did.
The higher the bar, the higher the differential between comfort levels among users surveyed.

As you can see, students showed a more consistent level of comfort with digital technology, making it easier for us to anticipate how they would navigate the product. Justice-impacted individuals showed a much wider range, which meant we had to account for both high and low comfort levels in the design.
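To make the “range, not average” distinction concrete, here is a minimal sketch of how a chart like the one above could be computed. The comfort scores below are hypothetical (a made-up 1-to-5 scale), not our actual survey data:

```python
# Hypothetical survey responses: comfort with digital technology on a 1-5 scale.
# These numbers are illustrative, not the study's actual data.
responses = {
    ("justice-impacted", "desktop"): [1, 3, 2, 5, 4, 1, 5, 2, 3, 5],
    ("justice-impacted", "phone"):   [2, 5, 1, 4, 5, 3, 2, 5, 1, 4],
    ("justice-impacted", "tablet"):  [1, 4, 2, 5, 3, 1, 5, 2, 4, 3],
    ("students", "desktop"):         [4, 5, 4, 4, 5, 5, 4, 5, 4, 4],
    ("students", "phone"):           [5, 5, 4, 5, 5, 4, 5, 5, 4, 5],
    ("students", "tablet"):          [4, 4, 5, 4, 5, 4, 5, 4, 4, 5],
}

# The chart plots the range (max minus min), not the average, so a tall bar
# means a group's comfort levels were far apart and the design has to work
# for both ends of that spectrum.
for (group, device), scores in responses.items():
    comfort_range = max(scores) - min(scores)
    print(f"{group} / {device}: range = {comfort_range}")
```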

Community-led usability testing moderators

While developing this testing structure with our partner organization, the Justice Impact Alliance, they proposed an intriguing idea that aligned with PBN’s efforts to increase participatory design in our field. Because Justice Impact Alliance staff have developed strong relationships with the justice-impacted people they serve and the students who volunteer with them, they wondered whether they themselves should moderate the tests rather than Pro Bono Net. Their plan was to have several moderators each conduct a handful of tests. Normally, I’d advise against using numerous moderators, simply for reasons of consistency: with several moderators you risk inconsistency in what each one focuses on and finds compelling, what follow-up questions they ask, and so on.

However, it seemed like a good opportunity to try something new, something that could expand our imagination of user-centered design by having community leaders themselves moderate usability testing. Although it would reduce some consistency in the approach, it would bring in a variety of perspectives and interpretations that we definitely wouldn’t get if we kept it to a single moderator. Since this wasn’t intended to be a highly scientific study, meaning we didn’t need strong controls, we decided to go for it.

“Having our impacted members of our staff trained to moderate test sessions with impacted testers was a stroke of genius. This tactic gave us a unique edge in testing which increased the quantity and value of the insights we gained.” — Dieter Tejada, Co-Founder and Co-Director of Justice Impact Alliance (JIA)

Challenges of external moderators

This route did pose several significant challenges.

  • Context building: None of the moderators we trained had any prior experience in web design. Most had participated in our user persona workshops, but that was about it. Their understanding of web design principles was very fresh, so context building was key.
  • Limited time and availability: Our moderators are community leaders who work with a wide variety of people, which demands a lot of their time and focus. Although they were willing and eager to participate, they had many other responsibilities on their minds, so the training we did had to be sharply focused and highly efficient.
  • Difficulty of user research: I have written before about how conducting user research isn’t as inaccessible as people think. I stand by that sentiment, but it’s important to remember that many people do this as a full-time job and undergo a great deal of training to build the skill. It’s impossible to convey all of those nuances with a handful of materials and a one-hour workshop, so we had to pick the most important parts to train on.
  • Consistency of documentation: Although we knew we wouldn’t get the same level of note-taking and annotation we would produce ourselves, we still wanted some consistency across our moderators. We wanted them to use the same script, follow the same follow-up questioning patterns, and make similarly detailed notes on their impressions of the tests they conducted.

Training materials

With all of those challenges in mind, we went about identifying what solutions we could offer. We agreed that a combination of training materials and a training session would be best. We scheduled an hour-long workshop with our moderators, developed materials, and sent them out beforehand so they could review them and come with questions.

Our moderators were given the following training materials:

  • Moderator guide: This was a single document they could bookmark and keep as their compass for the entire process. It started with links to the script, the beta site, and the example tests, and explained what usability testing is and is not. It also covered how a test would go, what equipment they and the participants would need, troubleshooting screen shares, best practices, and more.
A partial image of the moderator guide. It shows links to important materials, how to get technical support, and the beginning of the design overview.
Part of the Moderator Guide we prepared for our external moderators.
  • Design intent: In the guide, we included a reminder of how we designed the site and what our intentions and hypotheses were. This is crucial! Moderators must know the design well and understand the intentions behind it.
  • Training deck: After the training session was over, we sent them both the recording of the session and the slide deck itself.
  • Example tests: We also linked them to recorded clips of previous usability tests, accompanied by notes on what to learn from each clip. We titled them with names like “Good introduction” and “Bad introduction,” and they included examples of my own mistakes. This was not only to teach our moderators but also to show that I mess up too, which hopefully alleviated some of the pressure they may have been feeling.

Training session

The actual training session started with a design review. Again, it’s so important that your moderators know the design through and through. Without that context, they will not glean many insights from the user’s experience. For example, if they don’t realize that there is an entire section of the site that users are missing, they won’t be able to report back that their users totally skipped over this crucial feature.

We then did an introduction to usability testing. This focused not just on the ins and outs of usability testing but also on dispelling the mystery and intimidation of this process. We focused on creating an accessible and inviting framework for how to conduct usability tests so that our moderators were able to build some confidence going into it.

After a detailed reminder of all of the equipment they and their participants would need (a computer, a webcam, a reliable internet connection, a quiet location, Zoom with recording, etc.), we dug into some high-level best practices. These included:

  • Ask the user to narrate their thought process as much as possible.
  • Resist the urge to give hints and allow the user to get lost (giving hints can be especially tempting when you know the user outside of this setting).
  • Listen as much as possible; speak only to get the user talking.
  • Study the design before testing.
  • Take notes, but keep your attention on the session. Expect to re-watch the recording and take thorough notes then, so you aren’t pulled out of the live session.
  • Write down your main impressions immediately after the session, while everything is freshest in your mind.
  • Never send participants a link to the pages you want them to reach; instead, have them navigate there using the website so you can see how easily they find things.
  • Know your script well; you will have to jump around depending on where the user goes.
  • Loosen up! Don’t take this too seriously, and build some comfort with your participant.

Finally, we went through the example tests I mentioned before. We watched clips as a group and discussed them. We then sent out these materials and discussed the logistics of how scheduling and documentation would work.

What we learned

In the end, we learned a lot from this process and gained a variety of insights we may not have gotten doing it on our own. Here is what we learned.

We had good moderators

Our moderators were fantastic. I can’t say enough about how grateful we are for all of their time and effort. They showed up to the trainings, took them seriously, asked good questions, and then conducted some really effective tests. For just a few hours of training, the outcomes were really impressive. This goes to show both how far some strategic training can go and how useful it can be to have community leaders involved in testing. What they may have lacked in user research experience, they made up for in an intuitive understanding of their participants’ experiences.

Candid feedback

We found that some participants seemed to offer candid feedback that they might not have given if they felt we tech professionals were too far removed from their real-life experiences. It’s impossible to know for sure, but I think it’s a fair assumption. In other studies, I do think we have gotten candid feedback by building rapport with the user and telling them explicitly, “You can’t hurt my feelings on this design. If you hate it, I love hearing that so I know how to make it better.” However, being a part of the community you are studying naturally carries some built-in rapport.

Training materials and sessions were key

We found that the training materials we provided were critical to the success of the study. We tried our best not to overdo it, and I do think we struck a good balance there. If we had been able to do two training sessions, I would have included some role-playing where we acted out a mock session live. Reviewing videos is helpful, but actually trying it out and getting over some of the apprehension is even more so.

Documentation issues

Documentation was tricky. We found out too late that the Zoom accounts being used had blocked permissions, making it difficult for us to access the recordings. It proved tricky to explain to our moderators how to download the recordings and then upload them to our Drive, which slowed things down a lot. In hindsight, I would have advocated for having everyone use a Zoom account under our own team.

The way people titled their notes documents was also difficult to handle. There were so many documents flying around through emails and separate drives that I spent a substantial amount of time chasing down documents, getting permissions, and then moving them over and standardizing the formatting. This is where a research assistant would have been pretty cool to have!

Note-taking is too time-consuming

Additionally, getting our moderators to write up notes was very challenging, and for good reason. Taking the time to write all of those notes into a document is not easy when you have a whole other job on your plate. It came down to me watching every recording (which you should do anyway) and taking my own notes too. We could have seen this coming, and in the end, I think it was unavoidable. Factor this into your timeline.

Skill building and the empowerment of the collective

I want to end on one of my favorite parts of this experiment, which is that we all got to grow our skill sets. On our end, we got to learn a lot about how to train moderators, how to make this skill more accessible, and how to step back and let go of control. On our partner’s end, they got to learn a new research skill. Beyond having new tools in our toolboxes, just the act of trying something new and learning from it is an empowering experience. I care about my work, but I also care about being more human and facilitating experiences that help us get in touch with ourselves and what we can achieve as a collective.


Ariadne Brazo is a product manager at Pro Bono Net using digital technology for social good.