Steven Pinker recently had an op-ed in the WSJ in which he summarised his new book celebrating the progress of mankind in recent decades. Pinker relied extensively on social statistics measuring (among other things) poverty, income, violence, the environment and health to make his point that, contrary to the naysayers, things are generally getting better.
Pinker is not alone. At the beginning of last year Nicholas Kristof wrote an article called Why 2017 May Be the Best Year Ever, focusing largely on the continued decline in extreme poverty experienced by humanity. Two years before that Zach Beauchamp, senior reporter at Vox, published an article entitled The world is getting better all the time, in 11 maps and charts, an article format which is not uncommon at Vox. Sam Bowman, formerly of the Adam Smith Institute, also made heavy use of statistical indicators in his defence of existing neoliberalism to illustrate that things are improving. Max Roser has an entire website, Our World in Data, which is essentially dedicated to showing the same thing.
This is a perspective I’ve been sceptical of for a long time. The idea is that if ‘good’ outcomes such as income and health are increasing (or if ‘bad’ outcomes such as violence and poverty are falling) then we can confidently say that things are getting better. A general faith in social statistics to measure these outcomes is a crucial part of this perspective — starkly illustrated by Jonathan Portes’ article entitled Forget anecdotes. If you want to know what’s going on in the real world, look at a spreadsheet. Bowman’s neoliberal manifesto also stressed that neoliberals prefer “rigorous quantitative evidence”. I am therefore going to associate this perspective with neoliberalism, which I think is a reasonable judgement to make since it celebrates the current set of political and economic arrangements.
I want to make a few things clear. Firstly, I am not making a partisan point: I am not intending this as a direct refutation of neoliberalism as a philosophy, or of the general arguments made by Pinker or anyone else. My point is narrower, concerning an assumption which many of these arguments lean on: that statistical social indicators, often viewed from above, are a good way to measure ‘progress’. Secondly, as you will see, much of what I am saying is not new — I will be drawing on countless articles by people who have more in-depth knowledge than I do. Finally, I am not claiming that no ‘progress’ has taken place or that things are ‘actually getting worse’. It is clear that these social indicators mean something, and also clear from casual observation that in many ways and for many people, things have improved.
Nevertheless, I think that the triumphalism I have observed based on these social indicators is usually unwarranted. Favourable discussion of certain outcomes only tells us a partial story and is sometimes actively misleading. Ultimately to know if things are going well we need a fuller understanding of the context from which social statistics emerge, else we risk missing important details about the state of the world. My argument will be split into several posts, and the series will start with the most obvious reason to doubt statistics: the data are bad.
Blinded by the Data
Social data do not fall from the sky; they must be gathered both by and from people, which is costly and creates practical problems. Gathering data typically entails arbitrary methodological judgements which, in some cases, create inconsistencies in the raw data. Anyone who has worked with datasets knows the difficulties this creates: names of variables, survey questions and how the variables are recorded change year to year, and so some judgement must be used in place of anything better. For example, if we have a reported income band instead of income, do we use the mid-point, interval regression or something else to fill in the missing values? These kinds of questions are inescapable — as the political scientist Adam Przeworski put it:
“Economists are, by and large, careless about the data they use, especially the political data….I believe that results have to be reproducible from observations and rules…if you have “votes by party” and then “total number of votes,” you can do a little check to see if the votes by party add up to the total number of votes. You’d be surprised, because these things often don’t add up.”
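Two of the judgement calls mentioned above — mid-point imputation for income bands, and Przeworski’s ‘do they add up’ check on vote totals — are simple to sketch in code. All figures below are invented for illustration:

```python
def band_midpoint(lower, upper):
    """Impute a point income from a reported band by taking the mid-point.
    This is one arbitrary choice; interval regression is another."""
    return (lower + upper) / 2

# A respondent reporting the 20,000-30,000 band is assigned 25,000.
assert band_midpoint(20_000, 30_000) == 25_000

# Przeworski's check: do the votes by party add up to the reported total?
returns = {
    "District A": {"by_party": [12_340, 9_870, 1_200], "total": 23_410},
    "District B": {"by_party": [8_000, 8_500, 700], "total": 17_450},
}

inconsistent = [d for d, r in returns.items()
                if sum(r["by_party"]) != r["total"]]

print(inconsistent)  # District B's parties sum to 17,200, not the reported 17,450
```

Trivial as the check looks, Przeworski’s point is that it is rarely run — and that real datasets often fail it.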
If you take the ‘science’ part of ‘social science’ seriously — and it is not unfair to suggest that a commitment to rationalism and science is a mainstay of these types of arguments — you should worry about almost any reported data, and should not really be happy unless you are clear exactly where they came from, how they were measured, and whether they ‘add up’. I rarely see this type of detailed sleuthing when presented with pretty-looking graphs showing increasing trends.
In the context of datasets on relatively easy-to-measure quantities (like votes) in developed countries with well-established statistical institutions, some might be tempted to hand-wave this issue away — especially when presenting a simplified argument intended for a general audience. I’m not so sure: James Kwak’s intervention in a debate about whether Trump voters are driven by racism or economic anxiety (spoiler: it can be both, but the available data can’t really tell you this) shows that sloppy ‘data-driven journalism’ can easily produce erroneous conclusions.
However, the general claims made by people like Pinker, Beauchamp and Kristof are far more problematic than this because they generally refer to sweeping changes across time and place and use indicators like income, poverty and violence, which are inherently difficult to measure. There is no reason that societies across the world and throughout history would gather data which reflect our modern conceptions of something like GDP. Efforts therefore have to be made to reconstruct such data, and these efforts can be hugely flawed and costly — which is why in some cases they are not made at all.
The reduction in absolute poverty over the past few decades, often trumpeted by data-driven neoliberals, is an illustrative case. As Morten Jerven has forcefully pointed out, our data on poverty and GDP are woefully inadequate. About one fourth of countries in the IMF’s database have no data on GDP; almost half of countries in the World Bank’s database either have no data on poverty or have it for only one year. It’s difficult to speak about a reduction in poverty (or an increase in GDP) when you don’t have a comparison point, and even more difficult when you don’t have any data at all.
So why do institutions such as the World Bank and IMF, as well as writers who rely on them, report as if these statistics are available and sound? The implicit or explicit assumption is that such data are ‘missing at random’ and are therefore unlikely to lead to bias. Imputation techniques, sometimes based on Monte Carlo simulation, can then be used to generate plausible values to fill in the missing points. Alternatively, statistics can be taken from purportedly similar countries (statistics which are themselves often a combination of proxies and guesswork, a point to which I will return shortly).
But there is good reason to believe that the data are not missing at random: generally, statistics are hardest to gather precisely where development is lowest, owing to geographical remoteness, low state capacity, low literacy and so forth. In Jerven’s words, “we know little about poor countries and even less about the poor people who live in these countries”, raising the possibility that our estimates are understating poverty. It gets even worse: one recent paper highlights that although the household is typically assumed to be the unit of consumption by statistical authorities, this may systematically underestimate the consumption/income of non-household heads (read: women and children). It is difficult to trust measures of poverty which systematically exclude those who are the most impoverished.
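A toy simulation (all numbers invented) makes the point concrete: if the probability of a household being surveyed rises with its income, the survey-based poverty rate will sit below the true one — and any imputation that assumes the missing households look like the observed ones simply inherits the bias.

```python
import random

random.seed(0)

# Invented population of 100,000 daily incomes; poverty line at $1.90.
population = [random.lognormvariate(0.5, 0.8) for _ in range(100_000)]
line = 1.90

true_rate = sum(x < line for x in population) / len(population)

# Non-random missingness: the probability of being observed rises with
# income, because remote, poor households are harder to reach.
observed = [x for x in population
            if random.random() < min(1.0, 0.2 + 0.15 * x)]

observed_rate = sum(x < line for x in observed) / len(observed)

print(f"true poverty rate:     {true_rate:.1%}")
print(f"survey-based estimate: {observed_rate:.1%}")  # lower than the truth
```

The gap between the two printed rates is pure measurement artefact — nothing about anyone’s actual income differs between the ‘true’ and ‘surveyed’ worlds.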
Even countries which have data deemed adequate by the statistical authorities encounter a version of this ‘drunk looking for his keys under the streetlight’ problem. Data on prices, for example, are essential for properly deflating income and consumption levels but are often out-of-date, and are easier to gather from the urban sector than the rural sector (even though prices between the two may differ vastly). The cumulative effect of these complications is far from trivial: Angus Deaton calculated that the 2005 adjustments in purchasing power parity increased the recorded global poor in 1993 by a whopping 450 million*. To be sure, he believes this shift overstates poverty, but the sheer degree of uncertainty also leads him to argue that “global poverty and inequality measures are arguably of limited interest”.
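The sensitivity Deaton describes is mechanical: the headcount depends directly on the conversion factor used to express local consumption in international dollars. A minimal sketch with invented figures:

```python
# Invented daily consumption figures in local currency units.
consumption_lcu = [35, 50, 80, 120, 200, 320, 500]
line_usd = 1.25  # hypothetical international poverty line

def headcount(ppp_factor):
    """Share of people below the line after converting LCU to PPP dollars."""
    below = sum(c / ppp_factor < line_usd for c in consumption_lcu)
    return below / len(consumption_lcu)

# The same seven people, two plausible PPP conversion factors:
print(headcount(40))  # 1 of 7 counted poor
print(headcount(70))  # revised factor: 3 of 7 counted poor
```

Nobody in this toy population got any richer or poorer between the two lines of output; only the deflator changed.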
When the measures are shifting this much it is quite difficult to discern what is happening, and this applies more generally than prices. Commonly used household surveys have well-known issues with reliability: how questions are phrased, how often surveys are carried out and how respondents record the necessary information (like food expenditure) all affect the answers people give. Survey length alone can cause reported poverty rates to vary by 3 to 7 percentage points. Even in developed countries, GDP figures and therefore reported growth are hugely unreliable in the short-term; in some developing countries, a simple change in methodology can cause GDP to virtually double.
The act of measuring something should also not be equated with the existence of what is being measured. An article by Garry Leech pointed out that “As long as people have access to land they can often meet their basic needs by utilizing readily-accessible natural resources for food, shelter and clothing.” Yet this informal subsistence sector is generally not recorded by statistical authorities, since statistics are much easier to gather in ‘modern’ sectors such as Foreign Direct Investment (FDI) and exports. When people move from the former to the latter, they move from unmeasured to measured, so an increase in income or decrease in poverty is recorded — as if those in the informal sector were necessarily below the poverty line. As David Pilling reported, those familiar with their countries know this cannot be the case: Terry Ryan, chairman of Kenya’s National Bureau of Statistics, told him that if — as the official data suggested — some 72 per cent of Kenyans lived on a dollar or two a day, then “72 per cent of my people are dead”.
There are numerous reasons, then, that we should doubt the narrative of declining poverty. The lot of the worst off is often unmeasured, and what is measured is highly uncertain and likely to be biased. This difficulty is compounded when we talk about ‘progress’, since changes can simply reflect fluctuations in both what is measured and how it is measured. This is true of any statistic as hard to gather and as fraught with complications as poverty, such as violence or health. It is even more true of the now omnipresent indices which purport to measure inherently troublesome concepts such as democracy, business friendliness, corruption and freedom. These latter measures have the added problem of being more obviously politically and ideologically loaded: only recently it emerged that the World Bank had fiddled its ‘competitiveness’ rankings to make Chile look bad. I’d go so far as to say that reporting which takes these indices at face value can usually be ignored altogether (if you don’t agree with me, click the links).
Returning to the question of global income and poverty, it is clear that total production has increased, both globally and within most countries. It can plausibly be argued that this will be reflected in the material living standards of many, and/or that in the long run it will be a good thing. However, the fact is that we simply do not know enough at this stage to unequivocally celebrate the triumph of progress, something anyone committed to the scientific study of the social world should acknowledge. In the next post of this series I will ask whether, even if we take these indicators at face value, their improvements are necessarily cause for celebration, since they may hide crucial context and counter-trends which put their interpretation in a new light.