Why I don’t give out report cards
Resist the urge to reduce your usability findings into a series of checkmarks, X’s, and exclamation points.
I was in my second semester of grad school at Michigan when I was first exposed to color coding usability findings. A student in our program had used the system in class the year prior. Our professor wanted to share the method as an example of visualizing usability findings in a manner that’s easy for stakeholders to understand. We all agreed with our professor that a table of color coded findings seemed like a novel way to present our data. We all even noted that it was similar to the terror alert system (this was 2005 when the comparison was seen in a positive light).
A year or two later I started using the same system in all of my usability reports at work. I even got positive feedback from my team for doing so. And I wasn’t the only one using the system. I’ve seen some form of a color coded table of usability findings at every tech company I’ve worked at.
But at some point about a decade into my career, I started seeing the system, which I’ve since nicknamed the report card, as far too reductive a visualization. And I started to wonder about its impact on various members of my team and my relationships with them.
Before I dive a bit deeper into why I don’t issue report cards anymore, I want to pause and recognize that this perspective is very controversial among UX researchers. Some people swear by the report card and are very annoyed by anyone even questioning its place in a usability report. If you’re an early career UXR whose team loves giving out report cards and uses them in their report templates, continue using your team’s conventions! If you read this article and agree with my perspective, consider bringing this perspective to your manager and your team.
But be prepared — you might encounter some resistance.
Why I dislike report cards so much
It alienates your team.
One of my biggest issues with the report card is how it might make members of your team feel … and eventually how they might feel about research and about you. In my experience, most designers, engineers, and product managers are genuinely trying to do what’s best for users given whatever constraints they’re up against (usually time, money, resources, technical limitations). It’s one thing to point out that something a whole bunch of people worked on needs work. It’s another thing to do so using very heavy-handed iconography and color coding. You can probably get your point across without shaming your team.
And if I look across various members of a team, researchers might be the only ones whose work isn’t directly being evaluated when we conduct usability research. Sure, you’ve probably been in meetings when the project was scoped, reviewed the designs with the design team, and your insights from prior research may have even influenced what’s being built. But when you as a researcher declare a design or feature to be failing or successful, it’s not your direct work that’s being criticized.
How would you feel if your stakeholders could rate one of your research reports or your study moderation skills and give it a red X or a green checkmark? I’d be horrified.
It can seem arbitrary.
Unless you’ve taken the time to establish success metrics with your team, giving a usability finding a checkmark or an X can seem arbitrary. If most people in your sample were successful in completing a task but a few encountered some issues, should you give the task a green checkmark? What about the few people who had problems? Should your team worry about the issues they encountered? Do you now mark the task with an orange exclamation point?
It doesn’t actually focus on what’s important.
This brings me to my next point — report cards don’t actually focus on what’s important: why an issue is occurring and what your team should do about it. If you’re conducting tactical research, you’re not doing it to present your team with a list of pass/fails. You’re doing the research to understand if the product or feature your team has designed and built meets people’s needs and whether people can understand and use it. By focusing on the number of tasks that people got “right” or “wrong,” you’re distracting your team from deeply understanding why something isn’t working (or why something is working).
It’s a surefire way to be seen as QA.
UXRs often complain about being seen as a quality assurance function, forced to “test” every small change to the product design and language. But I’d argue that there are times when UXRs perpetuate this mischaracterization themselves. If you don’t want to be seen as a usability inspector, then don’t issue report cards!
It makes your team lazy.
Reducing your research to a list of checkmarks, X’s, and exclamation points may seem convenient for your team. But I’d argue that it makes your team lazy. If all they’re looking at is the report card you’ve issued, then they’re not engaging deeply with your research and the findings you’ve uncovered. And they’re not spending time trying to understand the root of the problem and working with you to design a solution.
It’s not accessible.
Color blindness is a lot more common than you may think. Presenting a table of findings marked in red and green isn’t accessible or convenient for your color-blind colleagues.
What you should do instead
Instead of using a report card, I’ve been summarizing my data in an issues/recommendations table, where I include the following columns:
- Issue: I describe what happened and how often the issue occurred.
- Recommendation: I describe what we should do to fix the issue.
- Priority: I use the standard software/hardware engineering priority scale of P0 to P3, with P0 being something we absolutely must address.
- Status: I state whether something is in progress, done, or not started. And if I have more details, I’ll add those too (for instance, that we’ve redesigned something and will be evaluating it again).
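Put together, a row in the table might look something like this (the issue, recommendation, and status below are hypothetical, purely to illustrate the format):

| Issue | Recommendation | Priority | Status |
| --- | --- | --- | --- |
| Several participants missed the confirmation message and re-submitted the form | Move the confirmation message next to the submit button | P1 | In progress; redesigned flow will be evaluated in the next study |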
What I like about this system is that it doesn’t shame anyone for past decisions and it focuses on solutions going forward.
When you should issue report cards
There are times when report cards are valuable and should be used:
- Benchmarks: If you’re benchmarking your product with every release, it makes sense to have an easy way for you and your team to understand how your product is performing from a usability perspective.
- Comparative studies: If you’re trying to compare two designs, it can be helpful to have a way for your team to see how each design performed.
In both cases, I’d urge you to establish success metrics with your team before embarking on the research. You may see a 60% success rate as a problem while a PM might see it as promising. Agreeing as a team, up front, on what you’ll count as a success and what you’ll count as an issue will help you avoid disputes over how to interpret the data after you’ve collected it.
But even in both of these cases, you should still focus your team on understanding why behaviors are occurring and what solutions you should implement.