6 Principles for Truly Effective OKRs (Part 2)

Well-crafted OKRs require a team and organisation context that fosters collaboration and learning

João Craveiro
Onfido Product and Tech
9 min read · May 13, 2019


(Part 1 is available here.)


In Part 1, we saw 3 principles for effective OKRs, focused on the OKRs themselves:

  1. At least one of your Objectives should be cross-functional and stable.
  2. Its Key Results should express the outcomes that show your progress towards your Objectives (not the work you’re willing to put into it).
  3. Key Results should be unequivocally measurable.

Let us now look at 3 more principles, focused on the best team and organisation context for OKRs to thrive as an alignment and learning tool.

We closed Part 1 by saying that your Key Results should be measurable. But let’s remind ourselves of an old riddle.

— Five frogs are sitting on a log. Four decide to jump off. How many are left?

— Five. There’s a big difference between deciding and doing.

#4) Check In Regularly

It’s not enough to be able to measure your OKRs: you have to actually do it. At least once a month, you should check in on your OKR scores. This means sharing your current OKR scores (and a high-level summary of what was done to influence them):

  • within the cross-functional team, and
  • outside the cross-functional team.

You don’t need to send an email announcement to everyone (in fact, you shouldn’t), but you must store the scores somewhere easily accessible to everyone. At Onfido we have all the teams’ OKRs in one place, and I highly recommend that.

Inside the team that owns the OKR, you should naturally check in more regularly than this, in the good spirit of Inspect and Adapt. This shouldn’t be a struggle or too much additional work; if it is, there’s probably something off related to either:

  • Rule #2 (outcome-oriented key results); and/or
  • Rule #3 (measure once, cut twice).

Your OKR scores should map directly from a single source of truth for a KPI that, in theory, is already your bread and butter. For instance, at some point the Service side of our team were checking in on the reopen Key Results weekly: they came up with a roster, so that every Friday one member of the team would be responsible for getting the scores and sharing them with the whole team, along with what was done to influence that Key Result and what’s next.
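
To make that “maps directly from a single source of truth” idea concrete, here is a minimal sketch (my own illustration, not our actual tooling) of deriving a Key Result score mechanically from the underlying KPI, using the common 0.0–1.0 grading convention; the baseline and target numbers are invented:

```python
def kr_score(baseline: float, target: float, current: float) -> float:
    """Grade a Key Result on the common 0.0-1.0 scale:
    0.0 at (or worse than) the baseline, 1.0 at (or beyond) the target."""
    if target == baseline:
        raise ValueError("target must differ from baseline")
    progress = (current - baseline) / (target - baseline)
    return max(0.0, min(1.0, progress))

# Hypothetical example: "reduce weekly ticket reopens from 40 to 10".
# The KPI (reopens this week) comes straight from the ticketing system,
# so the Friday check-in means reading a number, not debating one.
print(kr_score(baseline=40, target=10, current=25))  # 0.5
```

Because the score is a pure function of the KPI, checking in costs minutes, not a meeting.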

As much as meetings get a bad reputation, we also found value in having a regular, formal alignment meeting. Cross-functional collaboration should not (and in our case didn’t) happen only at regular checkpoint meetings, but these certainly help ensure alignment and focus. “But why do you have to meet regularly if you collaborate frequently?”, you may ask. Because…

#5) Extreme Cross-Functionality: Involve Stakeholders

…it’s a great opportunity to open up to people with whom you don’t collaborate so frequently. Let your stakeholders in!


We iterated on both the frequency and the format of this meeting, but the one that ultimately stuck was a weekly 30-minute session with representatives from:

  • Technology (product / engineering);
  • Service (service management and applicant support);
  • Client Services (account management).

Each of these functions adds one slide to a shared deck beforehand, to be used as a backdrop. That slide contains the previous week’s highlights (or lowlights…) for their function:

  • Technology updates on what was released or is in the works, and the observed or expected outcomes;
  • Service reports on applicant support and quality control trends and issues;
  • Client Services brings any ups or downs regarding key accounts.

After reviewing action items from the previous alignment meeting, each slide gets a 5-minute timebox; if some issue deserves more discussion, the key people involved get together afterwards to discuss and act. At the end, we record action items to check on the following week. The combination of both constraints (1 slide, 5 minutes) helps everyone focus on what matters (outcomes rather than outputs, as we’ve seen in Part 1), so that the meeting doesn’t derail into a mere status update or wish-list exchange ceremony.

With this format, we create a healthy relationship with our stakeholders, namely Client Services — one based on collaboration. This makes it even easier to take it to the next level when a certain problem requires collaboration beyond:

  • the 30 minutes window; and
  • a circle of 1–2 representatives per function.

We went full-on in this direction on one specific occasion, by running a user journey workshop mostly based on Harry Brignull’s article (in our case we skipped the empathy mapping, because we already had our persona defined from an earlier initiative). We wanted to absorb all the collective knowledge, so for this one we got together:

  • all members of this product development team (engineers, QA, designer);
  • all the people in the Service function (service management and applicant support) who focus on this line of business;
  • all the people in the Client Services team who focus on clients in this line of business.

By going through the steps applicants need to take, but also how they feel throughout the process (both about Onfido and about the needs and struggles that bring them to doing a check with Onfido in the first place), we identified opportunities based on their outcome for the applicant.

Final board of our user journey workshop (© 2018 Onfido)

These opportunities then become the basis for product discovery, where we engage other techniques (such as the opportunity solution tree) to both prioritise opportunities and generate ideas for product experiments to achieve the intended outcomes.
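
As an aside, the gist of an opportunity solution tree fits in a small data structure. Here is a minimal sketch in Python of its outcome → opportunity → solution → experiment shape; the technique prescribes no schema, and every name and example below is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Solution:
    idea: str
    experiments: list[str] = field(default_factory=list)

@dataclass
class Opportunity:
    need: str  # an applicant need or pain surfaced in the journey workshop
    solutions: list[Solution] = field(default_factory=list)

@dataclass
class OpportunitySolutionTree:
    outcome: str  # the outcome (e.g. a Key Result) you want to move
    opportunities: list[Opportunity] = field(default_factory=list)

# Hypothetical example:
tree = OpportunitySolutionTree(
    outcome="More applicants complete their check at the first attempt",
    opportunities=[
        Opportunity(
            need="Applicants don't understand why their document was rejected",
            solutions=[
                Solution(
                    idea="Explain the rejection reason inline",
                    experiments=["A/B test reworded rejection messages"],
                ),
            ],
        ),
    ],
)
```

Framed this way, prioritisation becomes a comparison of sibling opportunities under the same outcome, rather than a debate between unrelated ideas.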

#6) Make Learning Safe

As I’ve mentioned in Part 1, we work in cross-functional, mission-driven and long-lived teams at Onfido.

That’s an essential building block for making learning safe: the team needs all the skills necessary to both generate assumptions and prove them right or wrong. Without this, any experiment to learn more requires cross-team coordination, making experimentation (and learning) not only riskier but also more expensive.

Once skills-wise autonomous teams are in place, an organisation needs to make it okay to get to the end of a quarter with a 0.0 score on an OKR. Of course you don’t want to keep doing it, but when it happens it should lead to a conversation along the lines of “what happened?” or “what have we learned?”, not “what the hell have you been doing here all quarter, anyway?”.

There are two common pitfalls regarding OKRs that deserve a place as “subrules” of this principle.

Don’t Use OKRs for Performance Review

There’s a whole array of reasons why good OKRs simply won’t work as performance gauges.

If you use OKRs for performance review, you’ll just be inviting people to game the system. We’re human: if you, as a team, are responsible for defining your own OKRs and you know they put you on the line (performance-wise), guess what? Your OKRs will soon become oriented to outputs (which you can control) or to “low-hanging fruit” outcomes; lo and behold, 1.0 across the board, yay! If OKRs are not used for performance review, there’s really no incentive to try and game the system: you’d only be fooling yourself. If you’re worried that an OKR shared across functions won’t show whether one function is not “pulling their weight” and is just coasting on the work the other functions are doing, then you have a deeper problem than OKRs can solve.

When OKRs serve their purpose as a means to keep everyone focused on the team’s north star, they are susceptible to external factors. As we’ve seen in Part 1, product development starts with assumptions, and is about experimenting to (dis)prove those assumptions. This means you won’t always get it right from the outset, so it’s not like the number will steadily go up: another indicator that OKRs are wrong not only for performance assessment, but also as a measure of progress towards any kind of output.

Don’t Use OKRs for Project Management / Progress Tracking

There will be the odd occasion when your team is taken over by work whose success is almost entirely dominated by one measure: getting it done. This was the case for the first two quarters I was in the Hire team (the two quarters prior to the ones I highlighted in Part 1). We weren’t trying to move any needle; we just needed to transpose a regulatory change into our product by a certain deadline. The outcome was the output!

The temptation to use an output-driven OKR to track progress is strong. Back then, neither we as an organisation nor I individually were as mature in using OKRs as we are now, so guess what: we did have an output-driven OKR for that.

There’s a name for this kind of work: it’s a project, so you should manage it as such, and use appropriate means to track its progress, risks, etc. OKRs don’t capture these nuances, and in turn they can absorb factors not related to the project, so don’t shoehorn a project into OKRs just because it sounds cooler than project management and Gantt charts. If that output-driven project is taking up your whole team for an entire quarter, you should feel safe in your organisation to say “we don’t have an OKR this quarter, we need to get this project done”.

But beware: don’t let that become your default mode. As a product team, working according to a predetermined plan towards a predetermined output should be the exception, not the rule, since most of the time you don’t know what you don’t know. When you’re done with such a project, you can and should come back to outcome-driven OKRs, getting there by quickly and safely making assumptions and learning by testing them.

Key Takeaways

Over this two-part story, we’ve gone through 6 principles for a truly effective use of OKRs with cross-functional, mission-driven teams (the first 3 were laid out in Part 1).

  1. At least one of your Objectives should be cross-functional and stable.
  2. Its Key Results should express the outcomes that show your progress towards your Objectives (not the work you’re willing to put into it).
  3. Key Results should be unequivocally measurable. If there’s an underlying KPI, its value must look the same at any given time for anyone who cares about it.
  4. Check in regularly on how you’re doing, making your OKR scores visible to anyone in the company. Within your team, check in even more regularly and thoroughly — besides checking in on the OKR scores, share across functions what you’re doing and what you’re struggling with.
  5. Even better: pull your stakeholders into the mix!
  6. Make learning safe, by giving teams all the skills they need to be autonomous, and by not using OKRs as a tool to track progress or measure performance.

ProductCamp London (13 April 2019). Photo by Matt Hobbs.

Between publishing Part 1 and Part 2, I had the awesome opportunity to deliver these principles as an impromptu talk at ProductCamp London (if you’re curious, here’s a great writeup of the event).

A big thank you to all the product people who attended, who sparked lots of discussion during the Q&A, and who expressed further interest in this topic during the rest of the event (and afterwards). You have surely given me even more motivation to get Part 2 out there!

If this way of thinking and working sounds good to you: we’re hiring Product Managers in Lisbon and in London!
