The “What” and “Why” Behind Performance Metrics

Fabien Loupy
Published in SSENSE-TECH · 7 min read · Dec 4, 2020

Part III: Setting Tech Team Objectives and Deliverables from a Vice President of Technology

This is the third and final instalment of the SSENSE-TECH series on the what and why behind performance metrics. For more, read Part I on squad metrics and Part II’s dive into technical metrics.

Many people know the famous quote from the management guru Peter Drucker: “What gets measured gets managed.” Numerous organizations have adopted this mantra, measuring as many indicators as they can.

This quote was actually misattributed to Drucker, according to the Drucker Institute itself, and confirmed here as well. The full quote dates back to 1956, from an academic named V.F. Ridgway: “What gets measured gets managed — even when it’s pointless to measure and manage it, and even if it harms the purpose of the organization to do so.”

As we set our goals with the Tech leadership team for the department, we intentionally avoided measurement for the sake of measurement. Metrics need to be clearly linked to our company objectives. We should measure only what matters, recognizing that much of what knowledge workers such as developers and product managers do is not simple to measure but is nonetheless essential (e.g. the impact of a developer investing in coaching a more junior team member).

At a department level, Technology tracks five main objectives:

  1. Ensure the reliability of our services and the quality of our code
  2. Deliver the technical foundation to support the internationalization of our operations
  3. Increase conversion on the website by X%
  4. Grow the Mobile app share of revenue to X%
  5. Retain, develop, and attract an engaged and high performing team

These objectives are directly derived from our company objectives, and you will notice that, aside from the first one, their outcomes are not necessarily technology-focused. These objectives speak to what SSENSE wants to achieve by investing in technology. With that as the foundation of the objectives, let’s now examine the more technology-focused goal while briefly touching upon the others.

Ensure the reliability of our services and the quality of our code

To best serve our customers, the reliability of our services is key.

I initially thought about setting a target on the maximum number of incidents we can have in a year. The fewer incidents we have, the more reliable our systems are. It is a measure that would be easy to understand and communicate to an executive audience. But when you think about it, by setting such a target, you run the risk of people trying to hide incidents as we come close or go over the target. We could end up in long and worthless debates about whether an issue is an incident or not. To illustrate this, if we created a bug that caused our packing station to run slower for 15 minutes before we fixed it, and given no customer would notice, would that qualify as an incident? How about a product not appearing on the website because an optional field was not properly completed when we created the product? Should technology have anticipated such a case?

We ended up dropping the number of incidents as a metric and requested that each incident come with an incident report, to learn from them instead of hiding them.

Next, we considered setting a maximum for allowed rollbacks. However, this would create a similar risk. Teams could be tempted to fix forward, which takes longer and prolongs a negative customer experience. In prioritizing team efficiency and the customer journey, we ultimately found setting a maximum to be counterproductive.

Measurability as a tool to drive organizational alignment towards quality

So what do we measure instead? If you have read Part II of this series, you will know that we put significant effort into measuring the yield of each service to evaluate reliability. The SSENSE Tech leadership team agreed to set a yield objective for each service and for the critical aspects of the purchase funnel. I find a yield objective useful for many reasons:

  • It is simple to communicate the relevancy of this metric to my executive peers.
  • It drives higher accountability. I kept my eye on the yield of the critical services of our purchase flow and regularly sent questions to the team when it visibly deteriorated.
  • It is actionable and unforgiving. It created visibility on defects that existed without us knowing. It drove investigation and code changes such as tweaks to our retry policy, or keepalive settings.
  • It can be shared between Engineering and Product. While the yield can be seen as a technical metric initially, it is also in fact a product-level metric that both the developers and product managers should be incentivized on. While teams can deliver new customer-facing features, if the service powering these experiences is down frequently, the impact will not fully materialize. To ensure quality is a shared incentive, some organizations do not allow feature deployment until the yield is above its objective for a certain number of days. If developers can’t deploy their code, it will impact the team’s ability to deliver their roadmap. It ensures that technical sustainability is seriously taken into account and debated when establishing roadmaps.
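
The yield metric and the deployment gate described above can be sketched roughly as follows. This is an illustrative sketch only: the 99.9% objective and the seven-day window are hypothetical values, not SSENSE’s actual thresholds.

```python
# Illustrative sketch: a service's yield and a deploy gate based on it.
# The 99.9% objective and 7-day window below are hypothetical values.

def service_yield(successful_requests: int, total_requests: int) -> float:
    """Yield = fraction of requests served successfully."""
    if total_requests == 0:
        return 1.0  # no traffic means nothing failed
    return successful_requests / total_requests

def can_deploy_features(daily_yields: list, objective: float = 0.999,
                        required_days: int = 7) -> bool:
    """Allow feature deploys only if yield met its objective for the
    last `required_days` consecutive days."""
    recent = daily_yields[-required_days:]
    return len(recent) == required_days and all(y >= objective for y in recent)
```

Under this kind of policy, a team whose service dipped below its objective yesterday spends its next cycle on reliability work rather than new features, which is exactly the incentive described above.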

When setting the exact yield target, it is important to guide your executive team. The target needs to be high yet achievable; otherwise it will be counterproductive and not taken seriously.

For example, targets such as 99.999% yield could be set aspirationally without understanding the complexity it represents to reach such high availability, and the costs associated with such a decision — such as major architecture redesign of key components, multi-AZ strategy, drastic slow down of team velocity, etc.
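
To make the cost of each extra nine concrete, it helps to translate a target into an error budget. Treating yield like uptime for the sake of arithmetic (yield is actually per-request, so the budget is really a fraction of failed requests), a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope error budgets for common availability targets.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for target in (0.99, 0.999, 0.9999, 0.99999):
    budget = 1 - target                       # allowed failure fraction
    downtime_min = budget * MINUTES_PER_YEAR  # read as downtime per year
    print(f"{target:.3%} -> ~{downtime_min:,.0f} min of downtime per year")
```

At 99.999%, the budget is roughly five minutes per year: a single botched deploy can consume it entirely, which is why such a target implies the architectural investments listed above.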

While this objective is kept simple yet powerful for executive communication, there are many detailed sub-objectives that Engineering Directors are held accountable to. For example, implementing planned and unplanned degraded experiences, implementing circuit breakers, ensuring proper classification of errors, increasing availability zones within AWS, etc.
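
Of those sub-objectives, the circuit breaker is perhaps the easiest to illustrate. A minimal sketch, with arbitrary thresholds; a production implementation would more likely use an established library or a service-mesh policy rather than hand-rolled code:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker sketch: open the circuit after
    `max_failures` consecutive failures, then allow one trial call
    after `reset_timeout` seconds (the "half-open" state)."""

    def __init__(self, max_failures: int = 5, reset_timeout: float = 30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # timestamp when the circuit opened

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success closes the circuit
        return result
```

The point of failing fast is to protect overall yield: instead of every request to a struggling dependency timing out slowly, callers can degrade gracefully and keep the rest of the funnel healthy.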

Deliver technical foundation to support the internationalization of our operations

The challenge with this objective is that no software is live yet, so we can’t measure the business impact of the Tech team’s work prior to its delivery. Most of our tech efforts to enable our international growth fall under this objective. To adequately measure our performance, we decided to set objectives based on meeting specific deadlines for product launches, while meeting the business requirements and staying within budget. This approach is more waterfall than I would like, but we did not find a better way to measure the success of these kinds of projects. ‘Launching a new ERP system by August 31’ is an example of such a project we had to achieve. Once these technical components are live, we can then transition to metrics like yield and business outcomes.

Increase conversion by X%

For website features, we originally had a double-digit conversion increase objective. However, we rapidly moved to a revenue-per-session objective, as features can decrease purchase propensity while still having an outsized positive impact on the financial productivity of a user’s session (e.g. personalization, search result relevance, or app performance optimization).

We rigorously track our target of percentage increase of revenue per session by A/B testing all the features we launch. Business stakeholders contribute to experiment design, to ensure we have consensus on the assumptions made about the customer journey. Once experiments conclude, we then validate the impact with the Finance team.
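
A sketch of how such an A/B comparison might look, using Welch’s t-test on mean revenue per session. This is an illustrative choice of method, not SSENSE’s actual experimentation methodology; revenue per session is heavy-tailed (mostly zeros, occasional large orders), so in practice large samples or bootstrap methods are commonly used.

```python
# Illustrative sketch: comparing revenue per session between an A/B
# test's control and variant groups. Welch's t-test is one common
# choice; not necessarily the methodology used in the article.
import statistics as st

def relative_lift(control, variant) -> float:
    """Relative change in mean revenue per session (variant vs control)."""
    return st.fmean(variant) / st.fmean(control) - 1

def welch_t(control, variant) -> float:
    """Welch's t statistic for the difference in means, which does not
    assume equal variances between the two groups."""
    m1, m2 = st.fmean(control), st.fmean(variant)
    v1, v2 = st.variance(control), st.variance(variant)
    n1, n2 = len(control), len(variant)
    return (m2 - m1) / ((v1 / n1 + v2 / n2) ** 0.5)
```

With large samples, |t| above roughly 1.96 corresponds to p < 0.05; the validation step with Finance described above is what turns a statistically significant lift into a number the business trusts.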

Grow Mobile app share of revenue to X%

Having launched the SSENSE iOS app in October of 2019, we set a specific objective for Mobile to reach a certain share of total company revenue. We first contemplated whether we had enough confidence in the app to actively shift traffic volume to the channel. Ultimately, this objective drives interesting discussions and ensures alignment with our Marketing and Advertising teams.

Retain, develop, and attract an engaged and high performing team

None of the above would be delivered without a dedicated and high performing Tech team. So we naturally have an objective around our talent. We track our engagement score, our employee Net Promoter Score, and our attrition.

For the first two, we use software that regularly sends anonymous survey questions to everyone in the company. With the help of our HR Business Partner (HRBP), I regularly review these results for the department, along with aggregate findings from exit interviews. This allows us to detect issues and react quickly, and it provides invaluable insights that influence our decisions. Our HRBP also uses these metrics and surveys to prepare monthly HR reviews with all managers in the department, addressing the most important topics brought forward by our employees.

CONCLUSION

These five objectives are converted into a scorecard that I am accountable for with the CEO and the other members of the executive team. It is high-level enough to be well understood, communicated, and remembered.

To make these objectives meaningful, it is important to keep them fresh in our team’s minds. Every six weeks we hold a Tech all-hands where I report transparently on each metric to the entire department. Every quarter we hold a Quarterly Tech Review with the executive team, where I give a detailed update on each of them.

Finally, within the Tech department, these five objectives are broken down into 100+ smaller objectives that are distributed across all directors and trickled down to each team. This ensures clear alignment with the Tech leadership team.

Editorial reviews by Deanna Chow, Liela Touré, & Gregory Belhumeur.
