By Anthea Somas
“This is about listening to stakeholders and the public so we know which messages are landing, and how we can learn from that to make our communication more effective.”
Alex Aiken, Executive Director of Government Communication (UK)
In the previous contentgroup blog post, Preparing for effective measurement and evaluation, we looked at the value of measuring communication campaigns at a deeper level. That measurement feeds the evaluation we need in order to genuinely learn from, and improve, our future communication strategies.
However, measuring and evaluating communication strategies, in government and business alike, can seem a daunting prospect. A lack of best-practice examples, and a range of confusing, overly complicated models and methods, can lead to uncertainty about what to measure and how to determine impact.
It therefore comes as no surprise that the outcomes of many agency communication campaigns either go unchecked or are judged only on vanity metrics. These metrics are superficial, such as the number of ‘likes’ in a social media campaign; they don’t tell us whether a campaign has achieved more substantial outcomes, such as altering perceptions or behaviour.
However, in this era of greater accountability, it is vital we understand that identifying the value of communication involves more than simply analysing content. As communication professionals, we must show the effect our efforts have had on audiences in relation to our organisation’s objectives. We should develop evaluation as a mindset: assessing the communication strategy, its implementation and its potential outcomes from the outset.
Strong concepts are emerging. The UK-based International Association for the Measurement and Evaluation of Communication (AMEC) is at the cutting edge of communication evaluation. By applying principles of program evaluation theory and theory of change (that is, understanding the causality behind why things change), AMEC has developed the Integrated Evaluation Framework. In a nutshell, it is an interactive tool for assessing communication endeavours. It is straightforward, includes a step-by-step tutorial and is free to register and use. The categories within the framework are simple: objectives, inputs, activities, outputs, out-takes, outcomes and organisational impact.
University of Technology Sydney (UTS) Distinguished Professor of Public Communication, Jim Macnamara, was heavily involved in developing the AMEC framework. He advocates its use, highlighting that any evaluation of public communication should be a three-stage process, with assessment made before the campaign (formative evaluation, e.g. to establish baselines), during it (process evaluation) and after it (summative evaluation). As a prime example of evaluation in action, Professor Macnamara has produced an award-winning evaluation of the effects of health communication on CALD (culturally and linguistically diverse) communities.
Evaluation models not only serve to overview and guide the steps in the evaluation of strategic public communication; they also identify the intent and underlying logic of strategic communication.
Professor Jim Macnamara, International Journal of Strategic Communication, Routledge, 2018
The Government Communication Service (GCS) in the UK recently released Evaluation Framework 2.0. The framework references the work being done at AMEC and is a valuable tool for any government department looking to evaluate their communication efforts. The NSW Government has also embraced the concept by introducing their own evaluation framework reporting tool.
But deciding how to measure in order to extract relevant, useful information presents a challenge. If we are to know the full extent of our communication efforts, we need staff who understand the methods commonly used to gather information throughout a campaign. We can look to the European Commission, which has developed a Toolkit for the evaluation of communication activities. It sets out the best methods for evaluating activities such as conferences, newsletters, smartphone applications and websites, to name a few, although Professor Macnamara says the EC toolkit is too long and complex (128 pages) and recommends simplified tools and guides.
Government agencies would do well to ensure their communication teams are aware of the various evaluation methods available. Professor Macnamara developed a taxonomy for AMEC: a look-up table showing the methods and metrics that can be applied to different strategies at different stages.
Here at contentgroup we practise what we preach: evaluation makes up a large part of our overall methodology. By controlling implementation and preparing effectiveness reports along the way, we prioritise impact measurement. We understand that knowing the number of people who see a campaign is only half the job. The end game is to know the outcomes, both short and longer term. That is, knowing not just that key messages got across, but whether those messages resulted in a change in attitudes or behaviour.
In his book The Effective Scientist, Corey Bradshaw highlights how today’s scientists need to develop high-level communication skills in order to ‘have influence beyond academia’. In this world of cross-pollination, we recognise that modern communication professionals effectively need to be mini-scientists, researching how things happen and why in order to inform future practice. It should come naturally: if you think about it, wanting to know what makes others tick is a big part of communication.
Originally published at contentgroup.