Driving Behaviour with Data
I’ve been working with our team who develop SQL Clone to see if we can use data to encourage better decision-making in our day-to-day work.
Why are we trying this?
The team want to be doing the most valuable work they can. It’s really hard to judge that on gut feel, and Redgate has a long-term desire to be more data-driven.
We use Objectives and Key Results to set direction for our work, but doing it all with quarterly OKR reviews is a painful process. We want to focus on ambitious opportunities when discussing objectives, not fretting over team effectiveness and maintaining our current world.
We decided to split that “business as usual” work out from OKRs and handle it separately. This is encouraged in Christine Wodtke’s book Radical Focus.
What are we trying?
We’re trialling a set of “health metrics” that the team can monitor.
These should let the team make more informed, tactical decisions on how to spend their time.
If everything is going swimmingly, we can focus relentlessly on trying to influence our objective.
If not, we will need to divert effort to addressing whatever problem we’ve identified.
How are these different from Key Results?
Unlike KRs, we aren’t trying to drive change here. We want warning signs that we aren’t taking good care of ourselves, our processes, or our products.
These signs should help prevent fires before they start, allowing more deliberate focus on other valuable work.
We expect health metrics to be long-lived, while OKRs will still be reviewed quarterly.
What metrics will we use?
This is new, so we’ll be iterating and tweaking the metrics quite a bit. To begin with we’ll look at six metrics, based on the team’s recent experience and research underpinning Dr Nicole Forsgren’s book Accelerate.
Recent version adoption
Redgate releases user-installed product updates frequently, but irregularly. That means users must upgrade our products often to get feature improvements and bug fixes.
We’ll monitor how many users are running a version shipped in the previous 30 days. If this cohort is too small, we’ll need to investigate why people aren’t picking up our new changes and act.
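As a rough sketch of what this check could look like (the data shapes here are hypothetical stand-ins, not our actual telemetry schema), assuming we can map each active user to the version they run, and each version to its ship date:

```python
from datetime import date, timedelta

def recent_version_share(usage, ship_dates, today, window_days=30):
    """Fraction of active users running a version shipped in the
    last `window_days` days.

    usage: dict of user_id -> version that user is running
    ship_dates: dict of version -> date that version shipped
    """
    cutoff = today - timedelta(days=window_days)
    recent = sum(1 for v in usage.values() if ship_dates[v] >= cutoff)
    return recent / len(usage) if usage else 0.0

# Made-up data: two of three users are on the freshly shipped 2.1
usage = {"alice": "2.1", "bob": "2.1", "carol": "1.9"}
ship_dates = {"2.1": date(2019, 6, 20), "1.9": date(2019, 1, 5)}
print(round(recent_version_share(usage, ship_dates, today=date(2019, 7, 1)), 2))  # 0.67
```

A number like this wouldn’t trigger anything automatically; falling below an agreed threshold would simply prompt the investigation described above.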
Age of unshipped work
Since we release irregularly, and upgrading can interrupt users, it can be hard to know when it’s worth pushing a new release out.
We can monitor how much work we’ve finished but not given to customers through the age of unshipped code. If we have too much work sat “on the shelf”, we can prioritise getting that work to users.
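One way to quantify this is the age of the oldest change that’s merged but not yet released. This is a sketch under assumed inputs; `merge_dates` here stands in for however finished work is actually tracked:

```python
from datetime import date

def oldest_unshipped_age(merge_dates, last_release, today):
    """Age in days of the oldest merged-but-unreleased change.

    merge_dates: dates on which finished work was merged
    last_release: date of the most recent customer release
    Returns 0 when everything merged has already shipped.
    """
    unshipped = [d for d in merge_dates if d > last_release]
    return (today - min(unshipped)).days if unshipped else 0

# Made-up history: one change shipped in June, two sat on the shelf since
merges = [date(2019, 5, 2), date(2019, 6, 10), date(2019, 6, 25)]
print(oldest_unshipped_age(merges, last_release=date(2019, 6, 1), today=date(2019, 7, 1)))  # 21
```

When that number creeps too high, it’s a nudge to cut a release rather than keep accumulating finished work.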
Amount of unfinished work
Worse than leaving work on the shelf is leaving it in pieces on our workbench.
Specifically, we’ll look at the number of open Pull Requests the team have. If this gets too high, we need to focus on getting work finished and ready to ship.
Active user engagement
Shipping product may be great, but we want to know people are able to successfully use what we’re giving them.
The ultimate value of SQL Clone is allowing people to create database clones. We’ll monitor how often this happens and expect it to gradually grow over time.
If it trends or spikes downwards, this warrants investigation.
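A simple way to flag a downward spike is to compare the latest day against a trailing average. Real monitoring would be less naive than this sketch (the counts and threshold are invented for illustration), but the shape of the check is:

```python
def dipped_below_trend(daily_counts, window=7, tolerance=0.8):
    """Flag a downward spike: is the most recent day's count below
    `tolerance` times the average of the previous `window` days?"""
    if len(daily_counts) <= window:
        return False  # not enough history to judge
    latest = daily_counts[-1]
    baseline = sum(daily_counts[-window - 1:-1]) / window
    return latest < tolerance * baseline

# Made-up data: a steady ~100 clones/day, then a sudden drop to 40
print(dipped_below_trend([100, 98, 103, 99, 101, 97, 102, 40]))  # True
```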
Errors encountered
We want to see how often users are encountering errors while using our product.
We don’t want this to be high, or spike upwards. That may be a sign we have misjudged the quality of a change, or perhaps need to slow down and take more care in our work.
Support tickets
Similarly, we want to know how often customers get in touch with Redgate support. That can be a sign of various product problems, including disruptive errors.
If the number of support tickets spikes, we’ll need to investigate the cause and do something about it.
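As an illustrative sketch (the `daily_tickets` series and threshold are made up), a spike check could compare the latest day’s ticket count to a trailing average, mirroring the downward check for usage:

```python
def spiked_above_trend(daily_tickets, window=7, tolerance=1.5):
    """Flag an upward spike: is the most recent day's count above
    `tolerance` times the average of the previous `window` days?"""
    if len(daily_tickets) <= window:
        return False  # not enough history to judge
    latest = daily_tickets[-1]
    baseline = sum(daily_tickets[-window - 1:-1]) / window
    return latest > tolerance * baseline

# Made-up data: roughly 5 tickets/day, then a jump to 12
print(spiked_above_trend([5, 4, 6, 5, 5, 6, 4, 12]))  # True
```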
What’s next?
Our internal Business Intelligence team are working hard to get us access to these metrics.
Once we have access to the data, we need to build it into how the team works — making it a valuable part of our process, not a bunch of numbers we rarely look at.
We’ll do a follow-up blog post in the not-too-distant future, showing how we’re getting on.