Data is not a silver bullet
I’ve recently been recording podcasts on risks, rights and the role of the state, which is the topic for our Leaders’ Forum. I’ve been lucky enough to be able to listen to and learn from academics who are leaders in their field discussing key issues that affect social work.
I’m happy to defer to the experts on the majority of what was discussed. When the topic of algorithms came up, though, I felt this was an area where I could usefully contribute to the debate.
I worked with the Wales Audit Office to look at how we could improve our use of data and technology. Without doubt, better data can help us gain better insights. Public services are not making the most of the data that’s available to us.
However, I’ve been a bit troubled by how some organisations see data and technology as the answer to all their problems without changing their underlying thinking. It’s the latest silver bullet, and I can’t recommend enough that you read Chris Bolton (a.k.a. whatsthepont)’s post on silver bullet syndrome and management fads to understand why. Data can only effect change when we have the right mindset underpinning its use. We also have to ensure that we effectively interrogate that data. Because…
Data is not neutral
As humans, we are susceptible to 175 different biases. It’s tempting to think that by using technology we can remove bias from the decisions we make and rely on cold, hard evidence.
But to build an algorithm we need data from the past, and that data can itself be biased. I’m fortunate to be working alongside Oli Preston, our evaluation expert. He’s been explaining bias in data gathering to me as we look to improve the evaluation of our events. Bias can be inherent in the questions we ask, which in turn manifests itself in the data we gather and the decisions we make. Something as simple as the phrasing of a question can influence the data we collect.
Algorithms are not neutral
Algorithms are designed by people, and our innate flaws lie within them. Some algorithms have even produced racist outcomes: facial recognition software has been shown to struggle to identify black faces. Relying on these algorithms without questioning their assumptions is clearly problematic, and flies in the face of human rights and social justice as core values of social work. As Cathy O’Neil says,
“Algorithms are opinions embedded in code… They (people) think that algorithms are objective and true and scientific. That’s a marketing trick. It’s also a marketing trick to intimidate you with algorithms. To make you trust and fear algorithms because you trust and fear mathematics. A lot can go wrong when you put complete faith in big data.”
We’ve got data. Now what?
It’s also really important to think about how we apply the data we gather. With big data in particular, its use can be problematic. Bringing population-wide data together can be incredibly powerful. However, if we’re bringing that data together to inform one particular action that affects a huge number of people, then we’ve missed the point.
We work in complex environments, and making judgments about what individual people need based on the needs of the whole population can mean that the overall picture looks very different from the reality of an individual’s day-to-day life.
Where does the power lie?
Having worked in public engagement, it’s been obvious to me that behaviour change methodology applies as much to organisations as to individuals. The Behavioural Insights Team’s EAST Framework is a really useful tool for understanding bias and behaviour change, and it’s the first and last letters that are important here — we do what is Easy and Timely.
It’s initially much easier and quicker to design services without involving people. As I mentioned in this post for the Good Practice Exchange, consultation has become the default mode of engagement for public services because it’s so much easier to get people to rubber-stamp our ideas than to co-design approaches and solutions.
I’ve seen data misused in public services: it’s not being used to draw conclusions, it is the conclusion. Data and algorithms get used to make decisions so that we can avoid the messy process of really talking to people and understanding what good looks like to them. Because public services haven’t traditionally focused on outcomes, we’ve tended to measure what’s easy to measure. We tend to favour quantitative data over qualitative data because it’s so much easier to analyse. That’s why I found the Wales Audit Office’s use of SenseMaker so fascinating when we used it to examine the risk appetite of public service bodies.
It’s also why data visualisation is so important — the democratising of data to make it easy to understand and get to grips with. Yet, like other aspects of data use, we have to be careful about how we share our data — there are plenty of examples in the wider world of charts and graphs being misused to push a particular conclusion.
To be clear, I’m in no way suggesting that data use is a bad thing. What I am saying is that data shouldn’t be used to sideline the people we serve. Data should make us people-centred. Access to data does not make a bad organisation good. It’s all about how we use it.
To me, it all comes down to control. Are we using data to make decisions about people? Or are we gathering evidence on how we can work with people to improve their lives?
Our ethics as public services should be central to the way we use data. Does our use of data give people agency? Or does it cast people as subjects and our organisations as benevolent saviours? Data is not an end in and of itself. We need to look at and understand what good looks like for the people we work with.
Because data is not a purpose.