I want standardised performance indicators!

Jacky Tweedie
Jul 27, 2017 · 5 min read

Standardised indicators (standard KPIs). Fun stuff. Big business — very big, very expensive business.

They aren’t new — neither to the public nor the private sector. They’re just underused. Not well understood. Hidden in some hard-to-read quarterly report and badly mangled dashboard. When used, they are typically for back office functions.

Consider that in 2011, the UK Public Audit Forum (comprising the five UK audit agencies) released sets of standardised indicators for each of seven corporate service activities — estates management (what we call Real Property over here), finance, human resources, ICT (what we call IM/IT, as if information use was somehow separable from its enabling tool), procurement, communications, and legal.

In the Canadian public service, we usually refer to those types of categories as Internal Services (elsewhere, people call them ‘back office functions’, or administrative services, etc.) — the internally facing but critical services and programs that support the delivery of programs and services for Canadians.

Without Internal Services having your back (literally in the case of your office chair), you couldn’t be that public servant Program person working to deliver a grant to a small community to support historical events, or that public servant Program person rebooting her computer to ensure uninterrupted Canada Pension Plan (CPP) payment processing for seniors.

These categories of indicators would go on to be adopted in whole or in part by many other OECD countries, including Australia, New Zealand, and the US, in various performance measurement initiatives [that last one is a bit tetchy these days].

I just want to leave a few points here (gotta maintain a standard performance metric of ~7 minutes a read):

1. Selecting standardised indicators requires careful thought given the weight likely to be attached to them. Getting people to agree on a set of indicators is hard (a human factors issue), so the tendency is toward anodyne proxy measures that seem to say something useful. Trap: unwarranted inferencing/erroneous assumptions about performance.

The challenge is to select indicators that are robust (from an information perspective) and simple, and that, taken together, tell a compelling performance story over time.

Get a data scientist to help you. Or a statistician. Just call him Bob, ‘cos they are usually the same person.

2. Selecting standardised indicators is hard even if we have defined something as ‘a common business process’ across sectors (e.g. filing applications; issuing licences). Until we specify every assumption and practice embedded in the business process (which we want to measure with our standard indicator), we could be in for a heap o’ misery cos we thought we were measuring the same thing, but we weren’t.

Let’s take service standards as an example — the time it takes to process a client’s request. Sounds easy enough. But — trap: we need to make certain we are counting the same things. Jane in sector A starts her service standard clock as soon as she receives a client request. Francois in sector B starts his service clock as soon as he has determined the client provided an accurate, complete form. Amira in sector C starts her service clock as soon as she receives a client request, but stops it when she sees the form is incomplete, and then re-starts it when the client re-submits. Nobody is counting the same thing, even if they think they are and what they are counting has the same name (there’s a small sketch at the end of this point showing how that plays out in numbers).

The challenge is to specify all the assumptions and process points before even thinking of indicator methodology.

Get a business process mapping dude/dudette (Danielle is available) to help you; they are very good at this. They typically work very well alongside people like Bob cos they like metrics for business processes.
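To make the Jane/Francois/Amira point concrete, here’s a minimal sketch — hypothetical event names and timestamps, invented for illustration, not anyone’s real process — of how the same request file produces three different ‘processing times’ depending on whose clock rule you apply:

```python
from datetime import datetime

# Hypothetical event log for one client request (names and timestamps invented).
events = {
    "received": datetime(2017, 7, 3, 9, 0),             # client submits the form
    "flagged_incomplete": datetime(2017, 7, 4, 15, 0),   # form found to be incomplete
    "resubmitted": datetime(2017, 7, 10, 10, 0),         # client sends a corrected form
    "validated": datetime(2017, 7, 10, 16, 0),           # form confirmed accurate and complete
    "completed": datetime(2017, 7, 12, 11, 0),           # request fulfilled
}

def days(delta):
    """Convert a timedelta to days, rounded to one decimal place."""
    return round(delta.total_seconds() / 86400, 1)

# Jane (sector A): clock runs from first receipt straight through to completion.
jane = days(events["completed"] - events["received"])

# Francois (sector B): clock starts only once the form is accurate and complete.
francois = days(events["completed"] - events["validated"])

# Amira (sector C): clock starts at receipt, pauses while waiting on the client,
# and restarts when the corrected form comes back in.
amira = days(
    (events["flagged_incomplete"] - events["received"])
    + (events["completed"] - events["resubmitted"])
)

print(f"Jane: {jane} days, Francois: {francois} days, Amira: {amira} days")
# Jane: 9.1 days, Francois: 1.8 days, Amira: 3.3 days
```

Same file, same indicator name, three different numbers. That gap is exactly the misery this point is warning about.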

3. We can extend our knowledge and understanding about standardised indicators in Internal Services to build outward toward ‘External Services’ — programs and services for Canadians, from a federal public service perspective. A service request is a service request is a service request. Doesn’t matter if it’s HR getting a contract done, Real Property getting you a chair to sit in, or a Grant & Contribution program processing a client claim. There are standard business processes (first do this, then do that, then the form goes here, then this gets done to that form, etc.) common to both Internal and External Services. Trap: you need to map the processes and specify the terms. Carefully. Check and re-check assumptions. Then begins the very difficult work of getting people to adopt the standardised indicators. That is very very very very hard (see pt. 1 above — human factors issue).

The challenge is to make sure that whatever gets selected for measurement is actually suitable to be measured alongside, and compared with, its counterpart elsewhere.

Program/service people have deep subject matter expertise about their work. Make sure that they work alongside Bob and Danielle — go get Kareem, he’s ready to work with the team.
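One way to picture the ‘specify the terms, carefully’ part: write the definition down as structured data, and refuse to compare two indicators until every definitional field matches, not just the name. A rough sketch — the schema and field names here are my own invention, not a departmental standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IndicatorSpec:
    """A hypothetical, minimal specification for a standardised indicator.

    The comparability lives in the definition fields, not in the indicator's name.
    """
    name: str          # e.g. "Time to process a service request"
    unit: str          # e.g. "calendar days"
    clock_starts: str  # the exact process point where counting begins
    clock_stops: str   # the exact process point where counting ends
    exclusions: str    # what, if anything, is excluded (e.g. client wait time)

def comparable(a: IndicatorSpec, b: IndicatorSpec) -> bool:
    # Comparable only if every definitional field matches, not just the name.
    return (
        (a.unit, a.clock_starts, a.clock_stops, a.exclusions)
        == (b.unit, b.clock_starts, b.clock_stops, b.exclusions)
    )

internal = IndicatorSpec(
    name="Time to process a service request",
    unit="calendar days",
    clock_starts="request received",
    clock_stops="request fulfilled",
    exclusions="none",
)
external = IndicatorSpec(
    name="Time to process a service request",
    unit="calendar days",
    clock_starts="complete application validated",
    clock_stops="request fulfilled",
    exclusions="client wait time",
)

print(comparable(internal, external))  # False: same name, different measurements
```

The point of the exercise isn’t the code; it’s that Danielle’s process map and Bob’s methodology both end up recorded somewhere explicit, where the next person can check them.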

4. We can start this work now using InfoBase and other similar data sources of program and service delivery to begin to identify sets of common indicators used in performance measurement. People who work in performance measurement…huh…this is tricky. Uh, we don’t have many that are…not also evaluators. That can be an issue. Look, if you have Bob, Danielle, Kareem together, your challenge is going to be finding a performance measurement specialist with a research-y mind who understands there really is only a limited set of performance indicators (see also this link) you would want to use to measure performance in a public service context. The hard stuff — the true challenge — is about unpacking the issues in points 1–3. Otherwise you’re using fairly standard calculations while swapping out names of things and values.
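What ‘start this work now’ could look like in practice: pull a flat extract of program indicators from InfoBase or another open-data source and simply count which indicator names keep recurring across programs. A rough sketch, assuming a hypothetical CSV with an indicator_name column (the file and column names are assumptions; a real extract will differ):

```python
import csv
from collections import Counter

# Hypothetical flat extract: one row per program-indicator pair.
counts = Counter()
with open("program_indicators.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Normalise lightly so "% of requests processed on time" and
        # "Percentage of requests processed on time" have a chance of matching.
        name = row["indicator_name"].strip().lower().replace("% ", "percentage ")
        counts[name] += 1

# Indicators that show up across many programs are candidates for standardisation.
for name, n in counts.most_common(10):
    print(f"{n:4d}  {name}")
```

Nothing sophisticated about the tallying, which is rather the point: the calculations are the easy part.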

BUT — and this is a big BUT — one key reason why telling a results performance story is so hard is that it is so easy to get lost in the shuffle while crafting it.

Enter your performance measurement specialist — the key wrangler. Don’t forget them. Seriously. They provide the overarching functional expertise to make sure that the objective — getting timely, useful, credible performance information — is achieved. They are worth every little penny spent on their EC salary. Go get Danita.

Now. Throw Bob, Danielle, Kareem and Danita in a room with information/data/technology and watch magic happen. If you think someone else should come to the party, let me know.

[Full disclosure: JT thinks standardised indicators are a sexy/worthwhile pursuit and is tearing through open data to pull some out. Fellow travellers welcome.]

Jacky Tweedie is_a cognitive scientist in public service. Files: strategic planning; performance; information; data. Opinions own. Addicted to music.
