Report Card: Editing Wikipedia on mobile with VisualEditor

iamjessklein
Down the Rabbit Hole
5 min read · Apr 25, 2019

This is a follow-up to a post I wrote about how the Wikimedia Foundation’s Editing team has been infusing Wikimedia values into product design and evaluation. In the first post, I described how we crafted a rubric for a heuristic analysis so that we could test and evaluate the mobile visual editor experience to see what’s currently working and what could be improved. The heuristic analysis is one-third of the full approach to evaluating the tool, which also incorporated usability testing and a metrics study.

For the heuristic evaluation, we had a fairly large cohort follow a three-task script and answer a series of thematic questions. The cohort included internal and external experts in design, engineering, product, accessibility, and right-to-left languages, with representatives from the English, Hebrew, and Indian Wikipedias.

The tl;dr is that this research led us to a hypothesis that will inform future prototyping: improving users’ ability to focus on completing an edit will increase editor contributions and bring the product experience into closer alignment with Wikimedia’s values.

Editing Wikipedia on a mobile device in VisualEditor mode. Image by Jess Klein (CC0)

Participants were asked to empathize with the persona of Joseph, the Reactive Corrector, when completing the following tasks:

  1. Access the site, make an edit, and publish it
  2. Link to another Wikipedia page from that article
  3. Add a citation to the article

For this experiment, we tested the tasks against four values in the Wikimedia mission:

  • We are inspired
  • We are in this together
  • We welcome and cherish our differences
  • We engage in civil discourse

After the participants submitted their feedback forms, I consolidated their responses and synthesized them by value, and then, within each value, by dimension. These are the results.
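
For the curious, here’s a minimal sketch of how that kind of synthesis can be structured. The dimension names mirror the rubric below, but the feedback entries and the consensus rule are illustrative assumptions, not the actual study data or tooling:

```python
from collections import defaultdict

# Each feedback entry records the value and dimension it speaks to, plus the
# grade a tester assigned ("pass", "satisfactory", or "fail").
# These entries are illustrative, not real study data.
feedback = [
    {"value": "We welcome and cherish our differences",
     "dimension": "Learnability", "grade": "satisfactory"},
    {"value": "We welcome and cherish our differences",
     "dimension": "Visibility of System Status", "grade": "fail"},
    {"value": "We are in this together",
     "dimension": "Assistance", "grade": "fail"},
]

# Group grades by value, then by dimension within each value.
synthesis = defaultdict(lambda: defaultdict(list))
for entry in feedback:
    synthesis[entry["value"]][entry["dimension"]].append(entry["grade"])

# Report the most common grade per dimension as the report card grade.
for value, dimensions in synthesis.items():
    print(value)
    for dimension, grades in dimensions.items():
        consensus = max(set(grades), key=grades.count)
        print(f"  {dimension}: {consensus}")
```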

We welcome and cherish our differences

While there were some wins here relating to nomenclature, casing, and conciseness, there’s an opportunity to improve the experience by addressing user expectations around learnability, visibility of system status, and accessibility.

Learnability — The structure is simple enough that it could be easily learned
Grade: 🤨 satisfactory

Language Comprehension — The in-tool terms, references, and instructions are obvious and written using simple, jargon-free language
Grade: 🤨 satisfactory

Proper Casing — Case is appropriately used throughout labels, titles, and copy
Grade: pass

Visibility of System Status — The system always keeps me informed about what is going on, through appropriate feedback within a reasonable time
Grade: fail

We are in this together

Testers did not feel that the tool upheld this core value. From basic assistance to error prevention and recovery, editors felt that they were not supported by this tool.

Assistance — There was an obvious way to ask for help
Grade: fail

Documentation — Documentation for using VisualEditor (aka “help”) is clearly written
Grade: fail

Explanatory — There are tooltips and information along the way that serve as guideposts for what to do.
Grade: fail

Instructional — I’m guided through a process to complete my goal (the edit)
Grade: 🤨 satisfactory

Recognition rather than recall — I know what to do in order to accomplish my goal without needing help.
Grade: 🤨 satisfactory

Error Prevention — I didn’t run into any system errors
Grade: 🤨 satisfactory

Error Recovery — Error messages are expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
Grade: fail

We are inspired

Testers were left uninspired by editing on mobile, citing frustration at key points in the editing journey. I’m going to need to play the role of Marie Kondo on the team and work to remove any elements of this experience that do not bring the user joy; unfortunately, that is quite a lot of them.

Frustration — The experience was satisfying and didn’t cause me frustration
Grade: fail

Freshness — Visual components feel crisp and contemporary
Grade: 🤨 satisfactory

User Control and Freedom — When I had to exit a task, I was easily able to do so
Grade: 🤨 satisfactory

Content first design — It’s obvious while consuming content that there is a way to edit it.
Grade: 🤨 satisfactory

Joy — I had a positive feeling when I finished.
Grade: fail

We engage in civil discourse

There was no obvious way to ask for help, and basic functions were a challenge to figure out within the editor. Testers felt that this was a one-sided conversation.

Assistance, Documentation — see descriptions above
Grade: fail

Explanatory, Instructional — see descriptions above
Grade: 🤨 satisfactory

Organization — You can easily find the functions that you need, when you need them.
Grade: 🤨 satisfactory

What does this mean?

While this looks pretty bleak, the truth is a bit more nuanced than these overarching report card grades suggest. There was a general sentiment among test editors that they were grateful the tool existed — they just thought it could be better.

I mapped out the general experience in a service blueprint so that we could look at the feedback from a higher vantage point and identify any patterns that might exist.

Here’s a portion of the service blueprint that I created to map the user’s tasks against their reactions and emotions at each step. Image by Jess Klein (CC0)

Some Specifics

Looking at the blueprint, there are some key pain points that we can tackle through a design refresh. These include:

  • Discoverability of the visual editing mode
  • Usability bugs around retention of scroll position
  • Saving process
  • Feedback and error messaging
  • Toolbar state changes

A Focus on our Hypotheses

At a higher thematic level, we can see that it is challenging for editors to maintain their focus long enough to complete their edits, let alone do so with ease or joy. This pattern can be addressed through an extensive design refresh centered on wayfinding and perceived performance. Additionally, this study confirmed that editors want more help and support, validating the work that the Wikimedia Foundation’s Growth team is doing.

So back to that hypothesis I mentioned earlier. The team believes that by improving users’ ability to focus on completing an edit, editor contributions will increase and the product experience will come into closer alignment with Wikimedia’s values. We will know this is happening when the edit rate increases and editors report that they feel less frustrated when making mobile edits on Wikipedia.
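
As a rough sketch of how the first of those signals could be checked, here is some illustrative Python. The session counts and the definition of “edit rate” are assumptions made for the example, not the team’s actual instrumentation:

```python
# Hypothetical check of the hypothesis’ first signal: did the edit rate rise?
# The numbers below are placeholders, not real metrics.

def edit_rate(published_edits: int, editing_sessions: int) -> float:
    """Share of editing sessions that end in a published edit."""
    return published_edits / editing_sessions if editing_sessions else 0.0

before = edit_rate(published_edits=410, editing_sessions=1000)
after = edit_rate(published_edits=520, editing_sessions=1000)

relative_lift = (after - before) / before
print(f"Edit rate: {before:.1%} -> {after:.1%} ({relative_lift:+.1%} relative)")
```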

Let’s make some prototypes

Now that we have concluded the study and identified opportunities for improving the product, we are going to shift from the discovery phase into the delivery phase by prototyping and then validating solutions.

Keep up with the project and let us know what you think on the Visual Editor project page.

Thanks to Ed Erhart, Nirzar Pangarkar, Kosta Harlan and Ed Sanders for proofreading and feedback. Thanks again to all those who have participated as testers.
