Objectives and Key Results (OKRs) is a framework that helps teams and individuals focus on achieving a goal (the Objective) and measure their progress toward that goal based on impact (the Key Results).
OKRs were invented at Intel and made popular by John Doerr, who began his career at Intel in the 1970s. In 1980 Doerr joined the Silicon Valley venture capital firm Kleiner Perkins Caufield & Byers, where he funded many of the world's most successful tech companies, including Compaq, Netscape, Symantec, Sun Microsystems, Amazon, Intuit, Macromedia, LinkedIn, and Google.
OKRs were embraced by Google and are core to the company’s evidence-based and merit-oriented culture, resulting in the widespread popularity of the discipline among tech companies of all sizes.
Implementation of OKRs tends to vary from company to company. Some organizations emphasize the importance of individual objective setting, while others use OKRs strictly as a company and team-level framework.
The use of OKRs by tech start-ups generally begins within the product and engineering teams. Once other functional groups recognize their value, OKRs may be adopted throughout the company and used to set quarterly objectives at both the company and functional level, along with the metrics used to measure outcomes.
There are myriad books about OKRs, and a quick Google search turns up at least twenty software tools designed to aid their implementation.
My first exposure to OKRs was at Google; however, the approach outlined here differs from Google's. It's worth noting that the use of OKRs is inconsistent even within Google itself. Like many things at the company, each product area has a slightly different way of doing things, and of course, each of these groups vehemently believes theirs is the correct approach.
The Glue that Binds
OKRs are more than just a means of defining goals and measuring results. They are a framework for critical thinking and, most importantly, for fostering a culture of cross-functional collaboration that values measurable contributions, accountability, and clarity of purpose.
OKRs are the glue that binds otherwise disparate teams and are most impactful when adopted company-wide, helping to align and focus efforts across distinct groups. The cross-functional collaboration this creates is best exemplified by the notion of a "shared OKR": a single OKR that can be realized only by the coordinated efforts of more than one functional group. For instance, a Product team may have an OKR related to user retention that requires the work of Marketing to be successfully realized.
OKRs may also be "rolled up," from small teams all the way up to the executive or company level. This may sound practical only for small companies, but the practice is used by some of the world's largest tech companies, including Google, whose "company OKRs" are composed of a handful of critical product-area OKRs that, in turn, have been adopted from small, sub-team OKRs.
The practice of rolling up OKRs creates a direct and meaningful connection between the work of an individual on a small team and the mission of the company. This tangible link provides teams and contributors with a sense of purpose in their day-to-day work.
Aligning user problems with team objectives
OKRs are perfectly suited for product teams, as they help reinforce the practice of focusing on the problem at hand rather than on the potential solutions one might employ to address it.
One should endeavor to grow attached to problems, not solutions. Adopting this principle provides myriad benefits.
Objectivity and Non-attachment
Product management is a science of experimentation and discovery. Early in our careers, we learn that opinions, and in particular predictions about user behavior, are often wrong. It is a natural human instinct to assume that our rational assessment of a problem, based on our past experiences, will yield sound conclusions and represent the likely sentiment or behavior of others. But this is rarely true.
In reality, reliance on heuristics (the practice of using past experience to assess current circumstances) is no different than reliance on our biases. Cognitive bias is a systematic pattern of deviation from rationality in judgment. We create our own "subjective reality" from our perception of various inputs. Our subjective cognition of reality, not the objective input, dictates our opinions and the conclusions we draw. Thus, cognitive bias results in perceptual distortion, inaccurate judgment, illogical interpretation, or what can simply be called subjective irrationality.
A common misconception is that good product managers are gifted with "product instincts," an innate ability to know what users want. This mystical ability doesn't exist.
Like any other scientist, product managers must formulate a well-reasoned and well-researched hypothesis to solve a problem. And like a theoretical physicist, our ideas can only be proven through sound experiments and the objective analysis of empirical data.
This approach, known as the scientific method, is neither innovative nor novel. It has been in use since the 17th century by practitioners across a range of disciplines, from the social sciences to physics and mathematics. It is the standard process for investigating phenomena, acquiring new factual knowledge, and correcting previous assumptions. For a body of knowledge to be deemed scientific, to represent objective truth rather than subjective opinion, it must withstand scrutiny based on empirical, quantifiable evidence subjected to well-established principles of scientific reasoning. An experiment is a procedure designed to apply that scrutiny to a hypothesis, moving its conclusions from opinion toward established fact.
The method is a continuous process that begins with observations (e.g. user studies, user experience research, live experiments, etc.). Based on these inputs, product managers develop ideas about how to address a user need. A strong hypothesis can be thought of as a well-reasoned prediction that can be tested and validated.
But not every solution hypothesis survives empirical validation. Carefully controlled experiments gather empirical data, and depending on how well the results match the predictions, the original hypothesis may require refinement, alteration, expansion, or even rejection.
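To make "empirical validation" concrete: one common form of product experiment is an A/B test, where a variant is judged against a control using a simple statistical test. The sketch below, with entirely made-up numbers, uses a two-proportion z-test to decide whether a variant's conversion rate differs significantly from the control's:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from control A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 10,000 users per arm
z = two_proportion_z(conv_a=520, n_a=10_000, conv_b=585, n_b=10_000)
# |z| > 1.96 means significant at the 95% level (two-tailed)
print(f"z = {z:.2f}, significant: {abs(z) > 1.96}")  # → z = 2.01, significant: True
```

A result like this supports (but never incontrovertibly "proves") the hypothesis; a non-significant result sends the team back to refine or reject it.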
The following outline illustrates the process of validating a product hypothesis:
- BASIC ASSUMPTIONS: Validation of basic assumptions is presupposed (the outcome of ideation exercises)
- SOLUTION HYPOTHESIS: A proposed solution to a user need or problem based on information inputs
- INFORMATION INPUTS: Includes existing data and observable user behavior, user feedback, intuitive hunches, or the results of related studies or competitive products
- DATA INPUTS: Quantifiable measurements and observations subjected to systematized statistical scrutiny
- DISPROVEN HYPOTHESIS: A disproven solution hypothesis is often the result of incorrect assumptions about user motivation and behavior
- ITERATIONS AND IMPROVEMENT: Technology and user needs are constantly evolving; a solution that serves those needs must also evolve
The Confidence Quotient
There is also a cost to experimentation. Variant testing, user experience research and other methods of product experiments are time-consuming and costly. PMs are not academics and do not have the luxury of spending months or even years proving a hypothesis. Product Managers serve the commercial interests of the company and therefore need to bring products to market quickly while mitigating the risk of failure. PMs must balance these conflicting agendas, ultimately assessing the cost/benefit ratio and degree of acceptable risk.
A useful concept and tool in assessing risk is the "Confidence Quotient," a variable that reflects one's confidence in a potential solution hypothesis. It is not an exact formula but a scale from 0 to 5, with 0 representing no evidence and 5 representing highly compelling, verified evidence. The Confidence Quotient is not a proxy for whether you think a solution will work, but rather an estimate of your confidence in the data informing the decision.
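A lightweight way to make the scale usable day to day is to write the rubric down. The labels below are one illustrative interpretation of the 0-to-5 scale, not a canonical definition:

```python
# Hypothetical rubric mapping evidence quality to a Confidence
# Quotient on the 0-5 scale described above; the labels are
# illustrative and should be tuned to your team's vernacular.
CONFIDENCE_RUBRIC = {
    0: "no evidence; pure opinion",
    1: "gut feel backed by anecdotes",
    2: "indirect signals (competitor behavior, support tickets)",
    3: "qualitative research (user interviews, usability studies)",
    4: "quantitative data from related experiments",
    5: "highly compelling, verified evidence from direct experiments",
}

def confidence_label(score: int) -> str:
    """Translate a Confidence Quotient into its evidence description."""
    if score not in CONFIDENCE_RUBRIC:
        raise ValueError("Confidence Quotient must be an integer from 0 to 5")
    return CONFIDENCE_RUBRIC[score]
```

Writing the rubric down is what turns "how confident are you?" from a vibe check into a shared vocabulary.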
Taking risks and making bets is an integral part of making products, but having a common vernacular to honestly and transparently evaluate and communicate risk is vital to establishing a culture of trust and learning, and provides practical day-to-day value in decision making and prioritization.
RICE scoring is one case where a confidence quotient is particularly useful. RICE stands for Reach, Impact, Confidence, and Effort; it is a lightweight method for quickly prioritizing initiatives.
Let’s say there are five projects the team wants to get done, but engineering capacity can only accommodate four of them. RICE scoring makes it possible to stack-rank and prioritize the projects. Most prioritization methods use value and cost as the key variables, with the goal of doing the projects with the biggest impact and lowest effort. Confidence, however, provides a valuable additional input when assessing relative cost and impact among multiple initiatives. Here is how it works:
- Reach = How many users are affected by this feature
- Impact = How will each user be impacted by this feature (on a scale of 1–3)
- Confidence = Our confidence in the validity of the first two numbers (on a scale of 1–5). If no research has been done and we’re working from pure gut, confidence should be a 1. If we have run multiple studies and have empirical data indicating a high probability of success, confidence would be a 4 or 5.
- Effort = How many engineering weeks are required to build the feature
Using these variables as inputs, the following simple formula gives you a RICE score:
(Reach * Impact * Confidence) / Effort
Projects are then stack ranked based on their relative score. While far from perfect, RICE scoring provides a best guess of which projects have the highest probability of success.
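The formula above can be sketched in a few lines of Python; the backlog, project names, and numbers below are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    reach: int        # users affected per cycle
    impact: float     # 1-3 scale
    confidence: int   # 1-5 scale (the Confidence Quotient)
    effort: float     # engineering weeks

    @property
    def rice(self) -> float:
        # (Reach * Impact * Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort

# Hypothetical backlog for a hotel booking product
backlog = [
    Project("Faster search results", reach=40_000, impact=2, confidence=4, effort=6),
    Project("Saved-hotels list",     reach=12_000, impact=3, confidence=2, effort=4),
    Project("One-tap rebooking",     reach=8_000,  impact=3, confidence=3, effort=8),
]

# Stack-rank by RICE score, highest first
for p in sorted(backlog, key=lambda p: p.rice, reverse=True):
    print(f"{p.name}: {p.rice:,.0f}")
```

Note how a well-researched project ("Faster search results", confidence 4) outranks a higher-impact but speculative one: this is exactly the effect the Confidence term is meant to have.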
Objectives = User Problems
OKRs are particularly well suited to Product planning, as Objectives correlate directly to the problem area a team is focused on, while the Key Results represent the progress that can be made against that problem over a given time period.
As an example, let’s consider a team focused on users’ first-time experience with a hotel booking website. Following is their Objective:
Improve the user experience by addressing core product performance issues.
The Objective is intentionally broad and speaks to the team’s long-term mission (prioritizing the problems that have the biggest impact on conversion and retention). The Objective will likely carry over until the core strategy changes. However, the results (KRs) the team seeks to achieve, and the projects they define to impact those results, will change with each planning cycle.
During a particular planning cycle, the results they plan to achieve could be something along the lines of the following:
- Decrease average 1st page load time to interactive of hotel pages for United States users from 4.62 seconds in May 2017 to 500ms in September 2017
- Decrease average 1st page load time to interactive of the home page for United States users from 3.59 seconds in May 2017 to 500ms in September 2017
- Decrease average 1st page load time to interactive of search result pages for United States users from 3.94 seconds in May 2017 to 500ms in September 2017
Each KR is specific and measurable. If defining a specific metric is impractical, one can instead define a result in binary terms: “achieve code completion for X by Y.”
The KRs are also stack-ranked by importance (in this example, the pages with the highest traffic and bounce rates come first).
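A KR of this shape (metric, benchmark, target, time frame) is easy to represent and track programmatically. The sketch below models the hotel-pages KR from the example; the 2.56-second "current" reading is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class KeyResult:
    metric: str
    baseline: float   # the benchmark at the start of the cycle
    target: float     # the value to reach by the deadline
    current: float    # the latest measurement

    @property
    def progress(self) -> float:
        """Fraction of the baseline-to-target gap closed so far,
        clamped to [0, 1]. Works whether the metric should go up or down."""
        gap = self.target - self.baseline
        return max(0.0, min(1.0, (self.current - self.baseline) / gap))

# The hotel-pages KR from the example above: 4.62 s baseline, 0.5 s target.
kr = KeyResult("hotel page load time to interactive (s)",
               baseline=4.62, target=0.5, current=2.56)
print(f"{kr.progress:.0%}")  # → 50%
```

Keeping KRs in a structure like this is what makes the monitoring dashboards discussed below straightforward to build.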
The problem the team is focused on and the impact of their work become the main focus. During the OKR planning process, potential solutions are not discussed; solutions are simply potential ways to drive impact. In this way, PMs grow attached to the problem rather than to specific solutions. They remain unattached to solutions but laser-focused on the problem, solving it in the quickest and least costly way possible.
Executives should only care about the product team’s OKRs. What they plan to build should be of little interest, as the features themselves are simply a means to an end, and unless the executives have themselves done extensive research and prioritization, their opinion about what features should be built is irrelevant. We aren’t building the product for them, we are building it to appeal to a specific set of users or customers in order to achieve our business goals.
Similarly, a PM’s performance should be assessed on their ability to achieve their stated goals, not on how much software they have shipped or someone’s subjective opinion about the features they created.
While it is common practice for a team to have 3 to 4 objectives, with 3 to 5 key results for each objective, Squads (sub-teams) should focus on one simple objective that reflects their core mission as a team.
OKRs are formulated by each squad by drawing consensus within the squad and across stakeholders. The PM operating as a squad leader is responsible for building consensus while using his or her best judgment to decide when consensus cannot be reached and making or escalating a decision.
Best Practices for OKRs
Objectives:
- Explicitly address a target user
- Align with the squad’s and group’s long-term mission, KPIs, product vision, and business goals
- Are evaluated for efficacy during every planning cycle
- Are evaluated by all individual participants of a squad and by all interested or affected stakeholders
- Are not necessarily time-bound, though one should avoid changing them in the middle of a development cycle
- Are proposed one per squad per planning cycle, no more and no fewer
Key Results:
- Include a quantitative evaluation that best represents the true intent of the Objective they serve
- When paired with an Objective likely to extend beyond the time bounds of a cycle, can focus on measuring the project completion required to eventually achieve the Objective
- Are explicitly time-bound, usually to the cycle in which they are active
- Explicitly define the metric by which they will be monitored
- Explicitly define a benchmark and a target to be achieved within that time frame
- When a metric is not yet instrumented, can state that establishing a benchmark is itself the Key Result
- Are accompanied by a monitoring dashboard
- There must be at least one and no more than five Key Results per Objective
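These Key Result rules can double as an automated checklist during planning. The sketch below assumes a simple dict shape for an Objective and its KRs; the schema is illustrative, not prescribed:

```python
def check_objective(objective: dict) -> list[str]:
    """Flag violations of the Key Result best practices above.
    The dict shape used here is a hypothetical, minimal schema."""
    problems = []
    krs = objective.get("key_results", [])
    # At least one and no more than five KRs per Objective
    if not 1 <= len(krs) <= 5:
        problems.append("must have between 1 and 5 Key Results")
    for kr in krs:
        # Each KR must name its metric, benchmark, target, and time frame
        for field in ("metric", "benchmark", "target", "deadline"):
            if not kr.get(field):
                problems.append(f"KR '{kr.get('name', '?')}' is missing {field}")
    return problems

# Example: a well-formed Objective produces no complaints
objective = {
    "name": "Improve core product performance",
    "key_results": [
        {"name": "hotel pages", "metric": "load time to interactive (s)",
         "benchmark": 4.62, "target": 0.5, "deadline": "end of cycle"},
    ],
}
print(check_objective(objective))  # → []
```

Running a check like this before locking a planning cycle catches vague KRs while they are still cheap to fix.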
At the end of a goal period, the tech team’s results should be evaluated against each stated KR. Expecting a high-performance team to hit 100% of their KRs is unrealistic for several reasons. Software development planning is a process of prediction and iteration. While we do what is reasonable to make accurate predictions, teams must be comfortable with the reality that their predictions will frequently be wrong.
One of the most commonly overlooked realities affecting a team’s ability to achieve its results is unplanned work and underestimated effort. As a team works together longer and becomes more familiar with the nuances of its endeavor, estimates become more accurate. However, teams should not obsess over achieving perfect predictive accuracy; they will only fail, and end up spending more time trying to get their estimates right than building software. Instead, teams need to get comfortable with the intrinsic unpredictability of software development. Simply put, you don’t know what you don’t know. An engineer building something she has never built before may get it done surprisingly quickly or hit a particularly challenging problem that takes weeks to solve. Knowing this in advance just isn’t possible.
Moreover, unplanned work is a simple reality of day-to-day life at a tech company. As we build and learn, things we hadn’t thought of come up. This is OK, and it is vital for leadership to understand and acknowledge it in order to best support high-achieving technical teams.
The solution here is simply to “plan for the unplanned.” In practice, this means building headroom into capacity planning. In my experience, a team that spends 70% of their time doing planned work during a cycle is operating well. As such, hitting 70% of their KRs represents success. If they are hitting 100% of their KRs time and again, they aren’t setting ambitious enough goals.
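The arithmetic of "planning for the unplanned" is simple. With illustrative numbers (team size and cycle length are assumptions, the 70% ratio is from the text above):

```python
# "Plan for the unplanned": reserve headroom in capacity planning.
team_size = 6          # engineers (hypothetical)
cycle_weeks = 12       # one quarterly planning cycle
planned_ratio = 0.70   # share of time spent on planned work

total_capacity = team_size * cycle_weeks       # 72 engineering weeks
plannable = total_capacity * planned_ratio     # 50.4 weeks to commit to KRs
print(f"Commit ~{plannable:.0f} of {total_capacity} engineering weeks")
```

The remaining ~30% absorbs unplanned work and underestimates without blowing up the cycle's commitments.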
In summary, OKRs support agile decision making by encouraging teams and individuals to seek the best information available, thereby reducing the risk of decisions founded in false assumptions (e.g. sunk costs) or anecdotal information.
OKR and roadmap planning focus on a specific period of time, or “development cycle.” Most companies plan OKRs on a quarterly basis, while others adopt two-month cycles, plan twice a year, or plan only once a year. Personally, I have found quarterly product planning and OKR setting to work best. A quarter is long enough to complete substantial projects but also sets a tone of urgency to achieve something of significance every three months. A “Minimum Viable Product” (the first, barebones iteration of a product) shouldn’t take longer than a quarter to ship. If it does, the scope is likely too large.