Platform research & development
GCshare early prototype usability testing — Part 2
What we discovered and changed
If you need more context, read an earlier article about GCshare and how this usability testing came about.
You can also read this article in French.
In this article
- Main usability issues and insights
- Quantitative findings
- Usability test limitations
- Closing thoughts
Main usability issues and insights
We observed 5 main issues during the usability testing sessions:
- The process after a resource is submitted was not clear to users
- Not everyone agreed on the best way to provide instructions, even though everyone read them
- Users did not recognize where to take important actions
- Process for sharing bilingual content was missing from the workflow
- Users missed resources below the fold (screen cutoff point)
These insights along with other findings were shared with our UX designer, who used them to improve the platform.
1. The process after a resource is submitted was not clear to users
Two specific things stood out to us:
- No one realized the resource would be reviewed before being published
- They wanted to know more about what happens next
What we heard
When the task was to share a resource, I assumed like on GCconnex or GCcollab, you can just upload and it’s there. I did think that it was going to be published right away. I did not read the note that says — Our team will review the resource and connect with you.
When you make the [Thank you] message so big, that’s the main message. I am not reading the small text below. If it’s important, make it important. My understanding was when I hit submit, everything is done. It means it is there and everyone can see it.
I am wondering about what are the next steps! It is a bit unclear to me. Does it mean someone will review it? Does it mean they will get back to me to make changes, get it translated or add more keywords? Is that going to make my life complicated?
- Users have strong associations with how interfaces work based on prior experience with similar interfaces in similar organizations
- Expectation around “content sharing” is that resources get uploaded immediately. Our platform is compared to past experiences with other government social platforms like GCwiki and GCtools, where content is uploaded immediately
- Users want to know more about the process. Uncertainty is not comforting. They want to know how much additional time something might take
- Repetition helps retention. We need to repeat the message in multiple places, not just in one!
What we did
- Added another step to highlight the review process
- Made it clear on the About page how this platform differs from other GCtools, highlighting its additional support for findability and curation
- Changed the final button text from “Submit a resource” to “Submit resource for review”, since this button text was actually read by the test participants
- Told users how long the review typically takes and what we will do with the content we review
- Explored generating an automated email to the person submitting the resource, confirming that we received it and outlining the next steps
2. Not everyone agreed on the best way to provide instructions, even though everyone read them
Not everyone agreed on whether instructions were needed and where they should be placed.
What we heard
Not sure I know the page continues.
I don’t want to have to read [instructions] first. I expected to see the upload first. I would have a small icon first that tells you “Instructions”, so I don’t need to see it if I don’t have to. I don’t want to have to scroll through it every time. If I had to see it every time I want to upload the file, it would be very irritating.
This little blurb is inviting and it is easy to tell that I am in the right place and steps are clearly outlined telling me what resources are accepted. I can see my checklist would be a good option.
Internal and external government websites are not super clearly laid out in terms of steps. So when I came here, my first inclination was to read, oh great, I have a #1, #2, #3 and it tells me exactly how to do it. I often find with government interfaces that, you start doing what you are trying to do first and then you end up hitting the help button — what do I do? So this worked well for me, to figure out what do I have to do before I even start my task, and interrupt what I am doing to figure out how. For me to have steps laid out at the beginning is very helpful and saves me time in the end.
- The amount of space the steps take up makes people think there is nothing below, which could lead them to miss essential information or abandon the task entirely
- Information needs to be scannable, as people skim it for “anything big that stands out”
What we did
We valued the different perspectives shared, yet the information in the steps was not trivial. Our initiative differs from what many public service employees are used to, in that:
- Content posted to the platform is open to anyone to view and has to be free of copyright restrictions
- Many types of content are welcome, including drafts and templates
- Our team reviews content and ensures it is well tagged, so it can be easily discovered
We wanted to save users time and communicate the constraints and expectations upfront, to avoid wasted efforts. So while returning users may not need this information, it is important for the many new contributors, at least in the interim.
So instead of removing, moving or hiding these steps, we tried to minimize their impact on returning users and everyone who ‘scans’ information:
- Created short headings for each step, so people can scan what each one is about and decide if they want to read on
- Added a “Skip to the form” button for returning users
- Reduced amount of text in each step, so the steps took up less space
3. Users did not recognize where to take important actions
Two very important actions were impacted by this:
- Users missed the download button
- No one noticed the additional actions box, and users were not sure what to expect there, so they missed the “share” option
What we heard
The download button, I did not see right away.
The only thing I am missing, I don’t know what I would do next — that is not immediately clear to me — where I would go from this page — I guess download — Download did not draw my attention. My first reaction was to go down to the bottom and hit Download. I was not expecting it up here.
If I would have seen the icons — Preview, Share link and Add to favourites icons — then I would have been like “oh wow, nice”. So my reaction would have been instantaneous in knowing oh cool, I can share it with my colleagues.
Did not notice the 3 dot square; I would not put them inside this … box, as I have no idea what this box is and I don’t necessarily want to go there. I would make them all visible.
- People scan along the left side of the page in a familiar F-shaped pattern; the top right corner is the last place they look for important actions
- The hidden additional options may never be clicked on, as they will likely be missed
What we did
- Moved the “Download” and “View online” buttons to the left side, to align with the F-shaped scanning pattern, and placed them above the fold, since users may not scroll down
- Removed the box with 3 dots
- Made the “Share” option visible, as sharing a resource was very important to us from an advocacy perspective, to spread the message about open educational resources
- Placed the “Share” option close to other resource actions
4. Process for sharing bilingual content was missing from the workflow
How bilingual resources would be shared via the platform had been preoccupying us for a while, but we did not have a clear solution, so the prototype was missing it altogether. It so happens that the questions participants asked helped us figure out where to integrate it effectively.
What we heard
I wonder if there would be an option to add bilingual text. On GCcollab there are 2 tabs and you can write your English and French text. Or is this just in the language of choice?
Should the description be in English and French, and same for the title, or just one language?
- When no clear instructions are provided, users will find workarounds, and we will end up with inconsistent resource descriptions, undermining the discoverability of resources
What we did
- Asked users at the very start of the form whether they will be sharing content in both languages; if yes, the form is automatically modified to include the necessary fields for both languages
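As a rough illustration of this design decision, the conditional form could work as follows. This is a hypothetical sketch, not the actual GCshare implementation; the field names (`title`, `description`, `keywords`) and the `_en`/`_fr` suffix convention are assumptions:

```python
def build_form_fields(bilingual):
    """Return the list of descriptive fields for the submission form.

    If the contributor indicates at the start of the form that the
    resource is available in both official languages, the descriptive
    fields are duplicated for English (en) and French (fr).
    """
    base_fields = ["title", "description", "keywords"]  # hypothetical field names
    if not bilingual:
        return base_fields
    fields = []
    for name in base_fields:
        fields.append(name + "_en")
        fields.append(name + "_fr")
    return fields
```

Asking the bilingual question first means users never see (or have to mentally dismiss) fields for a language they are not submitting in.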
5. Users missed resources below the fold (screen cutoff point) on the search results page
What we heard
I did not scroll, so I did not see this either, so I guess it is a different format?
I usually just skim the titles first and check a few results. If description is long, I would just read the title; if the title is interesting, I would go to the description.
The Title is talking to me — if the title is not well done, I will miss a chance to go through it.
- Careful use of white space is very important. The amount of white space between elements helps group similar things together and separate things that are different; when there is a large gap, it looks like there is nothing else below
- Users are in a rush and scan what is in their immediate view; if the first results, and specifically their titles, do not attract attention, they may not scroll down to see more
- Titles are the primary way to assess content for relevance
What we did
- Removed sections separating content into different types of resources
- Created even spacing between different types of resources to make it easier to scan
This particular issue was an important reminder that what you discover during usability tests is more than just usability issues; you uncover patterns of behavior and mental models that you will have to support, inform and work with.
In the case of titles being the primary entry point into content, this will be a key change management and coaching opportunity, to support the community in creating meaningful and concise titles for their content.
Quantitative findings
Time on task and task success
All 6 participants completed all of the tasks successfully, although defining the criteria for success can be challenging given the types of tasks the platform is meant to support. More on that in the test limitations section.
Each task generally took a while because the interface was unfamiliar to users, and depending on their level of interest in the interface, some took extra time to comment or make sense of what they saw. Task 1 would likely have taken less time than task 2, but one outlying result made the average higher.
System Usability Scale (SUS)
The System Usability Scale is used to determine the perceived ease of use and learnability of an interface.
Based on the ratings of 6 participants, our score is 58.3.
Generally, values from 50–70 could indicate that a product is acceptable but has some usability issues. While the score may seem low, some limitations may have contributed to this rating. These are discussed below.
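For context, a SUS score is derived from 10 Likert-scale items (rated 1–5) using a standard formula: odd-numbered, positively worded items contribute their rating minus 1; even-numbered, negatively worded items contribute 5 minus their rating; the sum is then multiplied by 2.5 to give a 0–100 score. A minimal Python sketch (the participant ratings below are hypothetical, not our study data):

```python
def sus_score(responses):
    """Compute one participant's SUS score (0-100).

    `responses` is a list of 10 ratings on a 1-5 Likert scale, in
    questionnaire order: odd-numbered items are positively worded,
    even-numbered items are negatively worded.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, rating in enumerate(responses):
        if i % 2 == 0:            # items 1, 3, 5, 7, 9 (positive)
            total += rating - 1
        else:                     # items 2, 4, 6, 8, 10 (negative)
            total += 5 - rating
    return total * 2.5

# A study's overall SUS score is the mean of the per-participant scores.
participants = [
    [4, 2, 4, 2, 4, 2, 4, 2, 4, 2],   # hypothetical ratings
    [3, 3, 4, 2, 3, 3, 4, 2, 3, 3],   # hypothetical ratings
]
average = sum(sus_score(p) for p in participants) / len(participants)
```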
Usability test limitations
Time on task and task success metrics
There were a few design decisions that may have added extra constraints to our ability to evaluate task success and gather valuable time on task metrics:
- Asking people to think out loud as they complete a task has certainly affected task completion timeframes and at times resulted in significant tangents; next time, a better approach may be retrospective probing instead
- Our prototype was not optimized to support all the exploration users expected, creating extra friction and confusion; with some features inactive (search and filtering), users could not find resources they felt might be relevant
I did not feel that any of them were matching the specific topic of digital communications and I was not sure if they were active or not. It wasn’t very clear that these were documents I could consult.
- When a task is conceptually complex, it is hard to evaluate time on task and what success looks like
The success criteria for task 2 were very broad because the task was cognitively complex. The task could be considered successful if a user did any of the following:
- chose to click on the “Download” or “View content” button, on a resource page
- stated that they are not sure if any of the resources presented are useful to their task, having reviewed the search results or the resource pages
- returned to the search results to check out other resources, after reviewing a resource page and feeling that the resource was not relevant to their task
Given this range of interpretation, success could take varying amounts of time and ultimately ‘look like failure’. This is an important consideration for future testing. It is essential to make sure everyone has the same understanding and expectations about what success looks like and what the resulting metrics will be communicating.
SUS score
While this score may be useful as a baseline/benchmark for future testing, it can be an inaccurate representation of interface quality given that the prototype was not fully functional (as mentioned earlier). When something does not work as expected, the perception of usability will be lower.
Literature on the topic also suggests that the SUS score can be affected by the degree of familiarity with the interface and by task complexity. SUS scores could be 8–10% lower if usability test participants are:
- Unfamiliar with the interface
- Completing difficult tasks
This was certainly true in our case and is an important consideration to keep in mind.
Finally, a decision to simplify the prototype design may have contributed to a lower SUS score as well. Generally, mockups that are stripped down to the very basics can be great. They allow us to focus on tasks rather than aesthetics. But there is also a danger in that mockups make a product look finalized. So making the prototype entirely black and white, without clearly indicating at the start of the testing session that the live version is going to have graphics and colour, can create unnecessary focus on the ‘bleakness’. Every user commented on how ‘grey’ the prototype looked, thinking this was the finished product: “Visually, an injection of colour would be nice. I found it looked very grey.”
Closing thoughts
We will focus on making improvements to the platform based on what we’ve learned, while keeping it aligned with the positive impressions we heard:
- Inviting and exciting feeling
- Clean, simple, easy
At the same time, one comment left in the SUS questionnaire stands out:
I feel if enough people put enough good quality resources on this platform, the interface isn’t too bad and people will use it, but if there aren’t a lot of useful resources there, most people probably won’t. […] what will drive use is the resources.
No matter how good a platform is, it can only take us so far. What will ultimately make or break its vision are the learning resources that the community will contribute to it!
We will be sharing all of our user research templates and support resources on the GCshare platform, once it’s live. Will you join us?
If you have something you want to share with other learning professionals, reach out to the Educational Technology Research and Innovation team!