Evidence-based advocacy, or advocacy-based evidence-making?

A plea for better use of statistics by the NSPCC and other leading charities

The NSPCC report ‘How Safe Are Our Children? 2015’ was widely reported and all over the news at the time of its publication. It was the third in a series of annual reports that seek to monitor and interpret the state of child abuse and neglect in the UK. The NSPCC is a major UK charity with an illustrious history, working on one of the UK’s most important and troubling social problems. All my instincts are to be supportive, especially at a time when trust in charities is being challenged: last year, for example, by the suicide of an elderly charity supporter in Bristol, overwhelmed by unsolicited requests for money; by continued concerns about chugging; by the alleged mis-selling of electricity deals to the elderly by Age UK; by the RSPB’s proposal to build on land specifically bequeathed by a kindly donor on condition that it not be built on; and by the collapse of Kids Company.

Some good, caring and expert friends of mine work, or have worked, for the NSPCC. I have used its excellent guidance on practice and policy in other charities. I have been one of its donors. However, I cannot shake off some unease about how the society uses statistics. The problem of child abuse is bad enough and important enough without the NSPCC opening itself up to unnecessary challenges through a cavalier approach to the use of data.

Last year their message was that their study showed a large rise in sexual abuse reported for under-16s: up by ‘a third’ (or a sometimes-quoted ‘38%’) in a year. This was indeed a large increase; but, as the NSPCC pointed out, it may have reflected better record-keeping and increased reporting by children. My suspicions about unhelpful spin, and that what we have here is advocacy-based evidence-making, were aroused by a small but significant detail in the analysis.

The first few ‘indicators’ in the report (e.g. for homicides) used a moving average, presumably because the NSPCC recognised this to be, prima facie, an appropriate form of indicator for a trend, better than simple ‘year-on-year’ data. It smooths out variations that may be random or not part of any trend. In fact, this particular indicator encouraged some optimism about the decline in deaths through child abuse. The NSPCC itself commented: “it is heartening that key outcome indicators of child deaths continue to point in the right direction, as the number of children dying as a result of homicide or assault remain in long term decline.”
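To make the smoothing concrete, here is a minimal sketch of a trailing moving average. The figures are invented for illustration only and are not NSPCC data; the point is simply that averaging over a window damps year-to-year noise that a raw annual series would show as spurious ‘trends’.

```python
def moving_average(values, window=3):
    """Return the trailing moving average of `values` over `window` points."""
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

# Hypothetical annual counts: noisy, but with no obvious underlying trend.
yearly_counts = [60, 52, 58, 49, 55, 47]

print(moving_average(yearly_counts))  # a shorter, smoother series
```

A single year-on-year comparison within the raw series above can swing by more than 10% in either direction; the smoothed series varies far less, which is exactly why it is the more defensible basis for claims about a trend.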

It is surprising, therefore, that the next section, on abuse reported to the police, did not use the moving average, for consistency and for the same reasons. Instead, the report’s writers used absolute figures and rates for each year. (The note about ‘trend’ on the chart misleadingly describes a one-year change as a trend.)

Had this section used the moving average, the ‘increase’ in the rate per 1,000 would have been 10%, not the much larger, much-publicised rise of 38% (in the absolute number) between last year and the year before. Of course ten percent is bad enough, but it is a lot less than 38%, and suggests much less dramatic headlines. More cases are coming to the notice of the police; this may or may not indicate an underlying increase in child abuse itself.
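The gap between the two headline figures comes down to which measure you compute. The sketch below uses invented counts and population figures (they are not the NSPCC’s data) to show how a year-on-year change in an absolute count and a change in the smoothed rate per head can diverge sharply on the same underlying series.

```python
# Invented figures, chosen only to mirror the shape of the argument.
counts = [1000, 1050, 1100, 1518]        # recorded cases per year (hypothetical)
population = [980, 990, 1000, 1010]      # children, in thousands (hypothetical)

rates = [c / p for c, p in zip(counts, population)]  # cases per 1,000 children

# Measure 1: year-on-year change in the absolute count (the headline figure).
yoy_pct = (counts[-1] - counts[-2]) / counts[-2] * 100

# Measure 2: change in the 3-year moving average of the rate (the smoothed trend).
mean = lambda xs: sum(xs) / len(xs)
ma_pct = (mean(rates[-3:]) - mean(rates[:3])) / mean(rates[:3]) * 100

print(f"year-on-year count: +{yoy_pct:.0f}%   smoothed rate: +{ma_pct:.0f}%")
```

With these invented numbers the raw count rises 38% in a single year, while the smoothed rate rises by well under half of that. Both are computed from the same data; the choice of measure is what drives the headline.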

In making the NSPCC’s case for increased provision at the conference launching the report, the NSPCC Director Lisa Harker emphasised that “compiling this data is part of (NSPCC’s) commitment to evidence”. There is, however, a step between compiling data and making evidence: interpretation. To provide sound ‘evidence’ and derive well-grounded conclusions, the NSPCC should set an example and make a parallel commitment to the appropriate interpretation, and the accurate, consistent presentation, of statistics.
