KT Tool Review: Audit & Feedback

How to advance a growing — yet somewhat stagnant — field of research

CHI KT Platform
KnowledgeNudge
Nov 28, 2018


By Patrick Faucher

Audit and feedback research is a “massive field, and it’s growing fast.”

That’s Dr. Noah Ivers of the Women’s College Research Institute speaking at a June 2018 Choosing Wisely Canada talk, which you can watch at https://choosingwiselycanada.org/event/2018junetalk/.

What is Audit and Feedback?

In short, Audit and Feedback (A&F) is the process of measuring an individual’s professional performance and reporting their results back to them within context (e.g. in comparison to their peer group, or desired standards).

The goal of this feedback loop is to encourage those receiving feedback to focus on and improve their performance according to metrics deemed important by their organization.
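For readers who like to see a process as code, here is a minimal sketch of that feedback loop in Python. Everything in it (the function name, the metric, the numbers) is hypothetical and not taken from any actual A&F tool; it simply shows one individual’s performance being reported back alongside a peer-group comparison and a target standard.

```python
from statistics import mean

def build_feedback_report(clinician_id, own_rate, peer_rates, target_rate):
    """Assemble a simple A&F summary for one clinician.

    own_rate    -- the clinician's measured performance (e.g. % of guideline-concordant prescriptions)
    peer_rates  -- performance values for the comparison peer group
    target_rate -- the desired standard set by the organization
    """
    peer_average = mean(peer_rates)
    return {
        "clinician": clinician_id,
        "your_performance": own_rate,
        "peer_average": round(peer_average, 1),
        "target": target_rate,
        "gap_to_target": round(target_rate - own_rate, 1),
    }

# A clinician at 72% compared against a small peer group and a 90% target
print(build_feedback_report("MD-042", 72.0, [78.0, 85.0, 76.0, 81.0], 90.0))
```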

If you want to learn more about audit & feedback, you can read the Cochrane review by Dr. Ivers, or this section about A&F on the Canadian Institutes of Health Research website.

A Languishing Literature

Back to Ivers and his A&F talk — he adds “although it’s a growing literature, it’s a stagnant literature.” If you’re like me, your ears just perked up.

As it turns out, Dr. Ivers and his team — which includes notable knowledge translation (KT) names like Dr. Jeremy Grimshaw, Dr. Anne Sales and Susan Michie — show up in quite a few recent influential publications on the topic of A&F.

Ivers’ comment on the ‘stagnation’ of A&F literature is in reference to his own paper (which I’ve referred my clients to more than once): a 2012 Cochrane review of 140 studies, which concluded that “the effect of audit and feedback on professional behaviour and on patient outcomes ranges from little or no effect to a substantial effect.” In other words, in some cases A&F doesn’t work at all, and in others it can change behaviour by as much as 16 per cent.

This unexplained variability in A&F effectiveness points to the lack of forward momentum in the study of A&F. As Ivers puts it: “If we had done the exact same Cochrane review many years prior, we basically would have had the same conclusions in terms of how well it works.” We simply do not know how to optimize audit and feedback interventions, and we’re doing very little to learn anything new.

The Audit & Feedback Echo Chamber

Here’s a common situation in health research and/or quality improvement in healthcare:

Person 1: “We have a bunch of data that’s not being put to use.”

Person 2: “We should share it with healthcare professionals so they can use it to inform their practice.”

Person 3 (the implementation-minded person in the room): “Hey, this sounds like a study! Let’s get some baseline data and measure the effectiveness of the A&F intervention!”


Sound familiar? I’ve certainly been in that room. But here’s the rub: very few studies actually build on what we already know (i.e. theory) and examine the underlying mechanisms causing behaviour change. As a systematic review by Colquhoun and colleagues noted,

“The explicit use of theory in studies of audit and feedback [is] rare.”

Of the 140 studies reviewed, only 14 per cent reported use of a theory, and only 9 per cent used a theory to inform development of the intervention. [1]

As Dr. Ivers notes in his talk,

“This is a problem. This is people doing stuff and evaluating it in trials, but not necessarily taking advantage of the opportunity to help everybody else learn how to do things better.”


This challenge isn’t new. We’ve touched on it before in our post How KT Can Help Reduce Research Waste.

Learning to Do Things Better

In 2014, Dr. Ivers and his team published a debate article providing guidance on creative ways to improve A&F implementation research, including:

  • Best practices for intervention design (such as going beyond just text and incorporating graphics, or having conversations with those receiving feedback);
  • Applying “relevant theory to improve design and increase contribution to the literature” (such as the Theoretical Domains Framework); and
  • Manipulating intervention components within real-world constraints (e.g. using multiple interventions to address specific barriers and facilitators where feedback alone is unlikely to change behaviour). [2]

Two years later, Dr. Jamie Brehaut (another member of that powerhouse A&F research team) published “Practice Feedback Interventions: 15 Suggestions for Optimizing Effectiveness”. The paper identifies four key categories to focus on when designing A&F:

  1. Nature of desired action (e.g. specific actions that “can improve and are under the recipient’s control”)
  2. Nature of the data available for feedback (e.g. provided ASAP and in multiple instances)
  3. Feedback display (e.g. “minimize extraneous cognitive load for feedback recipients” — reduce the mental effort required to process and respond to feedback)
  4. Delivering the feedback intervention (e.g. “prevent defensive reactions to feedback”) [3]
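As a toy illustration only, and assuming a made-up metric, recipient, and message format (none of this comes from Brehaut’s paper), here is how those four considerations might shape an automatically generated feedback message: name a specific action under the recipient’s control, use recent data, keep the display simple, and frame the comparison in a non-judgemental way.

```python
from datetime import date

def compose_feedback_message(recipient, metric_name, own_rate, peer_average,
                             suggested_action, data_period_end):
    """Compose a short practice-feedback message reflecting the four focus areas:
    a specific action under the recipient's control, timely data, a display that
    keeps cognitive load low, and framing intended to avoid defensive reactions."""
    days_old = (date.today() - data_period_end).days
    lines = [
        f"Hi {recipient},",
        f"{metric_name}: you {own_rate}% | peer average {peer_average}% (data {days_old} days old)",
        f"One step within your control: {suggested_action}.",
        "This comparison is shared to support reflection, not to evaluate you.",
    ]
    return "\n".join(lines)

print(compose_feedback_message(
    recipient="Dr. Example",
    metric_name="Guideline-concordant antibiotic prescribing",
    own_rate=72,
    peer_average=80,
    suggested_action="review the delayed-prescription protocol for uncomplicated cases",
    data_period_end=date(2018, 10, 31),
))
```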

These areas of focus provide clear parameters for designing a better A&F tool. But we can’t simply be satisfied with that, challenges Dr. Ivers, concluding that “future studies need to evaluate comparative effectiveness of different methods of designing and delivering A&F.”


Toward that end, Heather Colquhoun (from that same omnipresent team) interviewed theory experts from relevant fields (such as psychology, education, medical decision-making and economics) to identify “testable, theory-informed hypotheses about how to design more effective A&F interventions”.

Their work yielded 313 unique hypotheses. For example, “A&F will be more effective… if [it involves] engaging recipients in social discussion about the A&F.” These hypotheses were organized into 30 themes (for the example above, the theme is “social engagement”) across five categories: A&F recipient; content of the A&F; the process of A&F delivery; the behaviour focus of A&F; and ‘other.’ [4]

These themes, and more specifically the hypotheses associated with them, provide a remarkably clear set of directions for building comparative effectiveness studies of A&F.

Colquhoun et al. note that “the number of potential hypotheses identified and the range of theories and theoretical concepts discussed underscores the complexity and number of potential mechanisms underlying effective A&F.”

In short, there’s plenty to study to advance the science of A&F. For example:

  • How can we word our call to action to improve compliance?
  • How can we best engage end-users in the intervention design process to avoid pushback?
  • Does making practitioners sign a commitment to the A&F process improve the sustainability of positive intervention effects?

Audit & Feedback and the Art of the Nudge

For those who’ve read my other posts, you know I’m going to try to find an angle to work the art of nudging into the conversation (if you’re not familiar with the term, you can read about it in my post, Automatically Smarter).

In their report Behavioural insights in health care: Nudging to reduce inefficiency and waste, Chris Perry and his team at the Ipsos MORI Social Research Institute mapped some of the practices identified by Dr. Ivers’ team onto overlapping concepts in the nudging field (see figure below). [5]

Figure: overlap between A&F practices and nudging techniques. Source: https://www.health.org.uk/publications/behavioural-insights-in-health-care, p. 42

Looking beyond this one application, there appear to be significant opportunities for alignment between the 30 themes identified in Colquhoun’s paper and the UK Institute for Government’s MINDSPACE framework.

Some aspects of the MINDSPACE framework, like the importance of the messenger (the source of information) and norms (the relevance of who or what you’re being compared against), are certainly already on the radar of A&F researchers. But others, such as modifying default options to change behaviours, and priming (the notion that our actions are often influenced by subconscious cues), could perhaps be incorporated into future studies and add value to the literature.

Before you Implement Audit & Feedback

The next time you or somebody you know is about to embark on an audit & feedback project, take a look at the testable hypotheses embedded in the work above. Try to incorporate some or all of the best practices mentioned into the design and development of the intervention. Opt for a research design that lets you compare the effectiveness of two or more different solutions head to head, rather than only against a standard control.

Challenge yourself and your team not just to add to the A&F echo chamber, but to listen to what’s there and contribute something new that others can build upon. I’ve certainly been guilty of the former: I’ve done audit & feedback work that isn’t even published, so believe me when I say I understand the challenges of doing it right. But having familiarized myself with the current state of the field, I’m sold on the need for more purposeful research, and I look forward to pushing that agenda and seeking out opportunities to advance the science.

About the Author

Patrick Faucher is the Creative & Strategic Services Lead at CHI. A communications strategist with over a dozen years of experience, he specializes in creating content engineered to build awareness, understanding, engagement, and adoption through an approach rooted in design thinking (rapid prototyping) and behavioural insights (nudging).
