Three ways your data is biased

tommy pearce
Published in Up to Data
2 min read · Aug 9, 2024

Originally published May 2024

My first semester of social work school felt like a relentless onslaught of nonprofit clichés: meet them where they’re at, know your own biases, evidence-based best practices, lessons learned, make time for self-care, etc. It turns out, more than a decade later, these aren’t just platitudes — they’re fundamental in our work.

As we leverage data to inform smarter strategies, knowing how to limit or at least acknowledge the bias that sneaks in can help us better understand and serve our communities. Here are three common biases:

Confirmation bias

If you live in 2024, then you’ve surely had a conversation with someone who used all the talking points that support their case and conveniently left out what might make their case…less compelling. Confirmation bias is seeking out data that confirms your own beliefs. I know nonprofits would never do this intentionally, but we should be careful to understand issues as they are, not as we want them to be.

Often, this shows up when we highlight all the deficits of a community to get funding. But this does a disservice to the community and all its assets. We need to be willing to live in complexity, and to authentically understand and represent those we serve.

⏱️ Recency bias

I love a new study and have a tendency to let it change how I think for the next couple of weeks. But just because new data or analysis is released doesn’t mean we need to immediately change our thoughts and actions. First, we need to make sure we understand it, its implications, and how it shapes our understanding of an issue. Thoughtfully updating our thinking will usually help us be more strategic and less reactionary.

If you’re a case manager or talk to lots of community members, it’s common to pick up on patterns. But we should be careful not to call something a trend too early, especially if it conflicts with data trends. Instead, we can think of it as a signal and try to learn more before making large programmatic decisions.

👋 Sampling bias

There are a lot of instances where we draw generalized conclusions when we probably shouldn’t. Maybe we’re using our program participants’ data to represent the broader community, even though our participants look different from the people we want to serve. Or, conversely, we may only have access to national or state data, and we try to extrapolate it to a local level.

Even when we have reliable data (like the Census or other small-area public data sets), we have to be careful about what conclusions we draw. For instance, if you’re serving hard-to-count populations, you need to be aware of the data’s limitations. Acknowledging these limitations explicitly when sharing your analysis is also a great way to build trust when making claims or informing strategies.
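If you like to see this in numbers, here’s a minimal sketch of how generalizing from program participants can mislead. The incomes and the $40,000 eligibility cutoff are entirely made up for illustration; the point is just that a non-random subset won’t look like the whole community.

```python
import random

random.seed(0)

# Hypothetical community: 10,000 residents with incomes around $50k.
community = [random.gauss(50_000, 15_000) for _ in range(10_000)]

# Hypothetical program: mostly reaches lower-income residents
# (say, those under a $40k eligibility cutoff).
eligible = [income for income in community if income < 40_000]
participants = random.sample(eligible, 500)

community_avg = sum(community) / len(community)
participant_avg = sum(participants) / len(participants)

print(f"Community average income:   ${community_avg:,.0f}")
print(f"Participant average income: ${participant_avg:,.0f}")

# The participant average sits well below the community average,
# so describing "the community" using only participant data
# would be sampling bias.
```

Swap in your own program’s real eligibility rules and the gap will look different, but the lesson holds: who shows up in your data shapes what your data can say.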

Come to our next Data Breakfast Club on May 7th and tell us about biases you’ve identified or overcome in your work! And find a more complete list of research and data biases here.
