Imagine you just made a trip to your local pharmacy. You got some advice on a sore eye, and bought some eye drops. You discover the pharmacy is interested in measuring its impact, and you’re asked to respond to a survey over the phone.
When you receive the call, the first question is:
Has your quality of life changed because of the pharmacy’s [product/service]?
Slightly taken aback, you suggest your quality of life has changed a bit, but the eye drops haven’t really worked yet. Next, you are asked:
What in your life has changed?
This time you are allowed an open response. You explain you were hoping for relief from your sore eye, so that you could get on with your work and read your computer screen properly. The next question is:
How important are each of these changes to you?
You need to separate out the changes you’ve experienced — a bit tricky to do, but OK — and rate each of them on a scale from ‘very important’ to ‘not important at all’. The questions continue:
Did anything bad happen because of the pharmacy that is important? If so what?
Did anything else in your life improve that you think is important?
There are another ten or so questions, all of them referring to the changes you experienced as a result of buying eye drops at your local pharmacy.
What would you think if you were asked to answer these questions? Would you be supportive of your local pharmacy wanting to understand more about your experience? Would you willingly provide the answers? Would you think the questions were odd? Would it bemuse you? Worry you? Would you cut off the conversation and get on with something else?
An impact data-rich future
The survey questions are all taken from a document published by some of the leading voices in the impact investing movement. They emerged from a project exploring how the new ideas around impact measurement can be put into practice. They were trialled by several impact investors with their portfolio companies, including a community pharmacy group, as a way of using a consistent approach to measuring impact across a diverse set of activities.
It is worth drawing attention to this project, because it is part of an increasingly energetic and well-resourced movement to develop principles for impact measurement and management (IMM). In turn, the growth of IMM is central to both impact investing and the so-called ‘impact economy’. This is a vision of an economy where impact data flows from the communities in which impact is created to the inboxes and Slack channels of investors, informing their decision-making and enabling them to maximise impact. This vision is elaborated for us by the ‘impact-weighted accounts’ initiative at Harvard University, by articles written by McKinsey consultants, and by the plethora of tech start-ups promising to tell you what impact your investments are having.
This vision draws power and influence from its neighbour: financial data. We have built the systems for financial data to flow around the world, enabling investors to rapidly deploy capital wherever it will attract the most return. Just imagine if we had equivalent infrastructure for impact data, and if that data was combined with the power of the financial markets to create social and environmental, as well as financial, value. This vision is gaining traction, but it needs critical attention.
For data to flow, and for investors to be able to compare different kinds of impact, we need common language, concepts and frameworks. This is where the Impact Management Project comes in. After extensive consultation, they have “reached consensus that impact can be measured across five dimensions”: who is experiencing change, what change are they experiencing, how much change, the contribution of the enterprise’s product/service to the change, and the risk the change does not occur. The next challenge is translating this abstract framework into guidance and tools for collecting and analysing data.
This is how we arrive at the questions quoted at the beginning of this article. They come from Using self-reported data for impact measurement: How to use stakeholder surveys to improve impact performance, published May 2019. “There are a number of challenges to overcome to ensure all enterprises and investors measure and manage impact consistently and effectively”, the foreword states; “We believe that using standardised stakeholder surveys designed to collect rapid data on impact performance — across each dimension of impact identified through the IMP — is part of the solution to these challenges.”
Documents like this, though they may not top everyone’s bedtime reading list, are crucial for understanding the full implications of the data-driven impact economy. It is very easy to focus on all the new frameworks and data visualisations, without thinking about where this data will come from. This report helps us imagine what it is like to be the source of impact data, to be the person asked to turn their subjective experience of the world into the pieces of information that flow back to investors.
Being the source of impact data
Personally, I think the questions are conceptually difficult to answer. I think it is safe to generalise and say most people do not experience their lives as a series of discrete effects that can be rated separately in terms of importance, and assigned a duration. Some people may find it reasonably straightforward to think in this way; many will not.
Of course, the survey has not been designed to be the best way of understanding the experience of the individual, which will be full of nuance and caveats. It is, instead, designed to produce data that is consistent and comparable across diverse interventions. The goal is not depth of understanding, but an “easy-to-use survey tool”. So the response from the IMP may be: this is not a method for gaining the most detailed or accurate understanding of an individual’s experience, but it produces data that businesses can use to understand and improve their impact.
There is some merit to this argument, and as these practices develop there will need to be a focused debate around robustness, and the meaningfulness of the information derived from this kind of exercise. The IMP report contains thoughtful commentary on these issues. As the movement develops, advocates of the impact economy need to keep asking questions about method: which questions yield meaningful results, and in which contexts? Are some questions too difficult to answer? Do actors in the impact economy need different skills for analysing impact data? And what can be done to mitigate the risk that investors go through the motions of collecting impact data, without sufficient concern for quality of insight?
There is, however, another set of concerns. In the debate as it stands, I see no recognition of a basic assumption informing the vision of the impact economy: that investors can give themselves the right to ask these questions, and expect honest answers. As well as being struck by how difficult it was to answer the survey questions in the IMP report, I was struck by how intensely personal they could be. Eye drops are one thing, but what if the product I had bought was for an embarrassing illness? Or if the service I’d bought was bereavement counselling? An ‘easy-to-use survey’ begins to feel inappropriate.
It is important to recognise the logic driving us to this situation. The people designing this survey are driven by the need for comparability and consistency. They are making the starting assumption that it is possible to design a survey that can be used in any setting. In any other context where the goal was measuring social change, this would be a bizarre place to start. But in the world of IMM it makes sense, because it follows from their positioning of the five dimensions of impact as a neutral categorisation of reality, rather than the product of an agenda focused on creating global consistency and comparability of impact data.
The point here is not to attack this agenda as such, but to draw attention to what it would mean for those communities and individuals lucky enough to be impacted by the efforts of impact investors. If global flows of comparable impact data are going to be created, then questions like these will have to be asked repeatedly.
This opens up a different set of questions for impact economy advocates to add to their list of relevant concerns. When is a question too personal to ask? When is a question too difficult to answer? Where are the limits to the kind of insight that can be gained from an ‘easy-to-use’ survey? Above all, what are the compromises required in creating global flows of comparable impact data, and who is making those compromises?