Standardizing rubrics to better evaluate digital services

Namita Sharma
Ontario Digital Service
Jun 3, 2020 · 4 min read

Editor’s Note: Namita Sharma is a Senior Product Manager at the Ontario Digital Service and serves on the Digital First Assessment (DFA) team. In this post, she highlights the need to understand human dynamics and address them in a collaborative way through her team’s work iterating on the digital first rubric system.

As a member of the Digital First Assessment team, I meet with product teams from across the public service to assess how they are taking a user-centred approach to building products and services. We treat Digital First Assessments like any other service, iterating in response to user feedback. In this case, our users are public servants.

A peer-to-peer review process

Assessments are a peer-to-peer review of work being done to develop products and services. They happen at various checkpoints in the service design lifecycle to determine how well an evolving product aligns with Ontario’s Digital Service Standard, ultimately helping teams deliver simpler, faster and better services for their users.

How a digital assessment works

In an assessment, there are two key groups:

Delivery teams — working to build a product or service that is coming in for an assessment.

Assessors — a group of peers evaluating the work that the delivery team brings forward against the Digital Service Standard.

Delivery teams are graded against one of five rubrics, depending on their stage in the service design lifecycle:

● Pre-discovery rubric

● After discovery rubric

● After alpha rubric

● Mid-beta rubric

● Before live rubric

A model of the service design lifecycle, with more information, can be found here.

At every stage of the Digital First Assessment, one of three results is possible:

● Approved

● Course-correct

● Halt

While “approved” and “halt” are fairly straightforward, a “course-correct” flags gaps in alignment with the Standard. We see it as an opportunity to help guide the delivery team in addressing those gaps.
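For readers who like to see the structure spelled out, the checkpoints and results described above form a small, fixed vocabulary. The snippet below is a minimal sketch in Python; the values come from this post, but the data layout itself is ours and purely illustrative, not part of any actual assessment tooling.

```python
# Checkpoints in the service design lifecycle and the rubric used at each one,
# plus the three possible assessment results. Illustrative only: the names
# come from the post, but this code structure is hypothetical.

RUBRIC_BY_CHECKPOINT = {
    "pre-discovery": "Pre-discovery rubric",
    "after discovery": "After discovery rubric",
    "after alpha": "After alpha rubric",
    "mid-beta": "Mid-beta rubric",
    "before live": "Before live rubric",
}

POSSIBLE_RESULTS = ("approved", "course-correct", "halt")

print(RUBRIC_BY_CHECKPOINT["after alpha"])  # -> After alpha rubric
```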

Evaluating the work, not the team

We need to acknowledge that participating in an assessment (as an assessor or an assessee) can be intimidating.

Delivery teams may come in with assumptions about the underlying goals: is this about evaluating our competence or abilities? An assessment can spur feelings of vulnerability and discomfort. One particular experience stands out in my memory.

At the end of one assessment meeting, the team being assessed requested extra time to talk about their recent experience receiving a “course-correct” result.

The team shared openly that, as a group of committed public servants, they had read the Digital Service Standard and applied it to the best of their ability, and the result still left them uneasy. It was an emotional but respectful exchange for everyone. The discussion flagged a critical need for us to empathize with our users and better articulate our expectations of them. We needed to go one step further and iterate on our existing rubric.

How we did it

We engaged practitioners from various disciplines (product management, user research, experience design, technology, lean and policy), and used our existing guides and lessons learned from assessments to develop criteria for each of the 14 principles in the Digital Service Standard.

Each principle now has clearly identified topics that we evaluate, and these topics are consistent across the phases of service design. The rubrics help teams see how our expectations evolve as they move from one phase to the next.

Caption: Photo from our rubric workshops capturing input from practitioners on two principles across phases of service design.

For example, “Establish the right team” is a core principle. We evaluate teams on this principle based on:

● how teams are resourced

● the variety of skills on the team

● the roles of the team members

● how the team will be sustained

For each of these topics, we included colour-coded statements for assessors at every phase, so the expectations at the end of the Alpha phase are different from those at the end of the Discovery phase.

As an assessor, if you see a project align with a green statement, that translates to an “approved” result, whereas if it aligns more closely with an orange statement, that leads to a “course-correct” result. This has helped bring more objectivity and standardization to the results.
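To make the colour-coding concrete, here is a minimal sketch of how one principle of such a rubric could be represented and evaluated in code. Everything in it, including the statement wording, the phase keys and the function name, is a hypothetical illustration of the structure described above, not the actual Digital First Assessment rubric or tooling.

```python
# Illustrative only: a possible data shape for one rubric principle.
# The statement wording, phase keys and function below are invented for
# this sketch; they are not the real Digital First Assessment rubric.

RUBRIC = {
    "Establish the right team": {
        "variety of skills on the team": {
            "after discovery": {
                "green": "Core disciplines are identified, with a plan to fill gaps.",
                "orange": "Key disciplines are missing and there is no plan to fill the gaps.",
            },
            "after alpha": {
                "green": "A multidisciplinary team is in place and working together.",
                "orange": "The team still lacks disciplines needed to build the service.",
            },
        },
    },
}

# Green statements translate to "approved", orange to "course-correct";
# a "halt" result would extend this mapping.
RESULT_BY_COLOUR = {"green": "approved", "orange": "course-correct"}


def assess_topic(principle: str, topic: str, phase: str, colour: str) -> str:
    """Translate the colour of the best-matching statement into a result."""
    statements = RUBRIC[principle][topic][phase]
    if colour not in statements:
        raise ValueError(f"No {colour} statement for '{topic}' at {phase}")
    return RESULT_BY_COLOUR[colour]


print(assess_topic("Establish the right team",
                   "variety of skills on the team",
                   "after alpha", "orange"))  # -> course-correct
```

In practice, an assessor reads the colour-coded statements and judges which one best describes the project; the point of the sketch is simply that the same topics carry different expectations at each phase, which is what makes the results more consistent from one assessment to the next.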

Testing and validating the rubrics

As we test the new rubrics in live assessments, we’re getting a lot of great insights, including how people are applying the Standard and where they’re struggling. This feedback is critical to iterating and making the process work for everyone who uses it. Stay tuned as we share our reflections from running 50 Digital First Assessments across the Ontario public service.

Connect with us

Our rubrics are in beta while we continue testing with users to ensure they meet the diverse needs of the teams we collaborate with across the public service. If you have any questions or feedback, connect with us at DigitalAssessments@ontario.ca.
