Benchmarking business processes
The Big Questions of Benchmarking
Benchmarking can benefit any organization, yet it is fraught with challenges. In this interview, John Tesmer, director of Open Standards Benchmarking, offers insights and advice on how to overcome some of the common challenges of benchmarking.
What are the biggest challenges in benchmarking?
The biggest challenges for benchmarking used to be data usability and validation. There are several readily available sources of benchmarking data (e.g., associations, governments, and universities), but not all data is created equal. When you pull data from multiple sources, the scales may not match or variables may have been assessed differently, making an apples-to-apples comparison difficult. There can also be differences in how, if at all, the data was validated, which calls its validity into question. Frameworks that classify processes have helped create uniformity around high-level process standards and their measures, ultimately making benchmarking easier and the resulting data more reliable.
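The scale-mismatch problem can be made concrete with a small sketch. Here, two hypothetical sources report the same metric (cost per invoice processed) in different currencies and per different batch sizes; the figures, rates, and function are illustrative assumptions, not real benchmarking data:

```python
# Hypothetical illustration: two benchmark sources report the same metric
# on different scales, so values must be converted to a common basis
# before an apples-to-apples comparison is possible.

def normalize_to_usd_per_unit(value, currency_rate=1.0, unit_factor=1.0):
    """Convert a reported metric to a common basis: USD per single unit.

    currency_rate: USD per one unit of the source currency (assumed known).
    unit_factor:   how many base units one reported figure covers,
                   e.g. 1000 if the source reports cost per 1,000 invoices.
    """
    return value * currency_rate / unit_factor

# Source A reports $2.10 per invoice.
# Source B reports EUR 1,900 per 1,000 invoices (assumed rate: 1.08 USD/EUR).
a = normalize_to_usd_per_unit(2.10)
b = normalize_to_usd_per_unit(1900, currency_rate=1.08, unit_factor=1000)

print(round(a, 3))  # 2.1
print(round(b, 3))  # 2.052  -> now directly comparable to source A
```

Only after this kind of normalization (and a check that both sources validated their inputs the same way) do the two figures support a fair comparison.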
More recently, the major challenge is getting the most current, precise, and accurate data relevant to the decision you're trying to make. Many organizations are lean in their operational processes and continue to benchmark those processes to ensure they are as efficient as possible, but we're seeing organizations ask for more information about more granular functional processes within specific industries. There are fewer consumers of this information because of its nature, and in some cases it is so sensitive that organizations are reluctant to share it.
What are organizations using benchmarking for, predominantly? Baselines, target setting, pinpointing process improvements?
Benchmarking is being used for all of these. However, when you take a step back, benchmarking is always conducted to create context for decision making. People generally don’t go to the trouble of understanding, collecting, processing, and reporting their data to a benchmarking partner just because they’re curious. They are trying to make a decision, and in many cases that decision is a tough one: Do we outsource a particular process? Do we bring an outsourced process back in house? Decision support is at the core of all of the things people tell us they’re doing with the data, be it setting a baseline or trying to identify processes ripe for automation or outsourcing.
What areas are most commonly benchmarked within organizations?
APQC traditionally supports the core "supporting" processes: those in the back-office functions of HR, IT, finance, and supply chain. There is also demand for data in product development and sales and marketing. However, the growing requests are for more specialized benchmarks, with metrics specific to an industry or sub-process. This could be related to the ability of frameworks like the PCF to solve the problem of measures by function, so people are starting to request more granular information.
What advice do you have for organizations struggling to pick the right measures? How can organizations balance standardized measures for enterprise-wide comparison with customized measures for in-depth understanding?
Well, at a functional level, in back-office supporting processes, APQC's set of measures is a good start. If you're struggling to reduce a set of measures down to a smaller set, I'd suggest that you reconsider. The effort you spend reducing 10 measures to five may be more than what you'd spend simply collecting those 10, and no one ever got fired for making a decision with more than enough data.
When it comes to supporting decision making with benchmarking, it's better to err on the side of too much data. This type of decision making typically has far-reaching impacts on people's jobs, revenue, or market share, so you should make sure you have all the information necessary to make your decision, including a good mix of measures that link back to your objectives. It's also a good idea to include a couple of measures that make you uncomfortable; this ensures that you aren't creating bias in the benchmarks or a self-fulfilling prophecy about what the data will tell you, and that you are thinking broadly about the topic.
Do you have any advice for how an organization can ensure it gets value from the get-go (ensuring that the data is available, of high quality, and relevant to the purpose)?
The best way to get value out of the data: Use it to support valuable decisions. The data alone is not very valuable. It requires context and action to make it valuable. Consider a situation where you’re investigating the acquisition of a new division with the expectation that it will reduce your supply chain costs. You can model the scenario in a closed environment and make a decision that the acquisition will work and will have a full return on the investment within three years. Alternatively, if you benchmarked your entire supply chain costs, you could discover that your supply chain is already operating in top-performer territory and that the investment won’t make a significant change in your costs. Having that benchmarking data helps you to focus your investments in areas where there may be better potential for improvement.
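The supply chain scenario above reduces to a simple positioning check: compare your current metric against benchmark reference points before committing to an investment. The sketch below uses entirely made-up figures and a hypothetical classification function to show the shape of that reasoning:

```python
# Hypothetical sketch: decide whether a cost-reduction investment is worth
# pursuing by checking where current performance sits against benchmark
# reference points. All figures are illustrative, not real APQC data.

def position_against_benchmark(current_cost, top_quartile, median):
    """Classify cost performance; lower is better for a cost metric."""
    if current_cost <= top_quartile:
        return "top performer: little headroom, investment unlikely to pay off"
    elif current_cost <= median:
        return "above median: moderate room for improvement"
    else:
        return "below median: strong candidate for improvement investment"

# Supply chain cost as a percent of revenue (illustrative benchmark values).
verdict = position_against_benchmark(
    current_cost=6.2, top_quartile=6.5, median=8.0
)
print(verdict)  # top performer: little headroom, investment unlikely to pay off
```

In the interview's example, this is exactly the signal the benchmark provides: if you are already in top-performer territory, the acquisition's promised cost savings have little room to materialize, and the investment is better directed elsewhere.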