Metric Fixation and Data Ethnography

Marc Hebert
Designing Human Services
Nov 8, 2020

“The problem is not measurement, but excessive measurement and inappropriate measurement — not metrics, but metric fixation […] is a cultural pattern […] It affects the way in which people talk about the world, and thus how they think about the world and how they act in it.” ~Jerry Z. Muller, Historian (2018, pp. 4 & 17)

[Image: Flickr, musicisentropy]

“Staff asked us why their performance was being measured by how quickly they answer calls, how long people are on the call and how many calls they answer. They’re unclear about what matters most to customer service since we aren’t measuring whether callers were helped, how many times they have called about the same issue, or whether they were transferred to someone else. We aren’t asking callers for feedback about their experience.”

A coworker said something like that to me as part of my effort leading a service and systems design team in San Francisco government. Metric fixation involves performance metrics, benchmarks, audits and evaluations. Collectively, these artifacts of assessment can focus people’s behavior on making the indicators look good in order to be promoted, praised or otherwise rewarded. This dynamic has been called “Campbell’s Law,” after Donald T. Campbell, the psychologist who began examining it in the 1970s.

Metric fixation can involve indicators that are explicit (shown in data dashboards and reports) and implicit (what may get us recognition from others).

Let’s take the earlier example about confusion over what matters at a government call center a step further. One of the teams working in the call center admirably made keeping callers from hanging up a primary goal. Reducing wait times and dropped calls is an important part of helping the public. The main reason most people call a government agency, however, is to get help: they want questions answered or something else done.

[Image: Five red, British-looking telephone booths on a walkway, adjacent to a wall. Flickr, M Cheung]

The phone system at the time didn’t ask callers if they felt helped or measure their experience in any way. Without those metrics, call center employees aren’t supported in achieving their primary purpose. Managers and their frontline teams resort to the data they have; in this case, wait times and dropped calls. Incomplete dashboards and explicit metric fixation can leave us with poor substitutes for understanding our impact.
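
To make the gap concrete, here is a minimal sketch in Python of the kind of measure missing from that dashboard, such as whether people had to call back about the same issue. The records and field names (caller_id, issue, resolved) are invented for illustration; they are not drawn from any real call-center system.

```python
# Hypothetical call records; "resolved" stands in for "did the caller feel helped?"
from collections import Counter

calls = [
    {"caller_id": "A", "issue": "benefits renewal", "resolved": False},
    {"caller_id": "A", "issue": "benefits renewal", "resolved": True},
    {"caller_id": "B", "issue": "address change", "resolved": True},
]

# How often do people have to call back about the same issue?
calls_per_issue = Counter((c["caller_id"], c["issue"]) for c in calls)
repeat_rate = sum(1 for n in calls_per_issue.values() if n > 1) / len(calls_per_issue)

# What share of calls ended with the caller actually helped?
resolution_rate = sum(c["resolved"] for c in calls) / len(calls)

print(f"Issues requiring a call back: {repeat_rate:.0%}")           # 50% in this toy data
print(f"Calls where the caller was helped: {resolution_rate:.0%}")  # 67% in this toy data
```

Even a rough measure like this shifts the conversation from how quickly calls end to whether callers were actually helped.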

What about implicit metric fixation? For me, it involves how our colleagues, professional network, and larger workplace culture respond to our efforts. It’s about focusing on the things that get us invited to conferences, asked to coach others, or brought onto new teams and projects. Implicit metric fixation shapes how we understand what really matters for advancing at work and in our careers. We learn it, in part, by observing why others are rewarded, punished or overlooked for what they do. It’s based on merit, favoritism and discrimination, among other things. In the absence of genuine alignment within and across teams on explicit success metrics, individual employees are left to assume what really matters, taking cues from implicit ones.

I think of metric fixation as part of a broader practice of data fixation: the ability to quantify nearly everything and the influence that ability has on us as humans. Scholars and practitioners have called it “datafication,” “dataism” and other names, often based on their ethnographic insights. See Crystal Biruk, danah boyd, Dave Snowden, Genevieve Bell, Hannah Knox, Marilyn Strathern, Sarah Pink and others.

For example, governments use computer systems to manage people’s work. I know of one that allots time for bathroom breaks. Basically, people get to pee when the system says so. The time between pee breaks is measured, and these data are available to supervisors, who use them to assess employee performance. Data fixation enables this system to be built. Metric fixation encourages people to pee quickly in order to do more work.

Metric and data fixation are not:

-new ideas

-a rehash of the quantitative versus qualitative debate over which kind of data is better

-against using numbers to manage the performance of people, services, policies and organizations

-opposed to transparency, accountability, being data-driven or evidence-based

Metric and data fixation are:

-about focusing on the indicators and data themselves rather than their intended purpose (better information, decisions, services, outcomes, lives, and planet)

-an invitation to understand what may be keeping a team, program or department from being more effective, and to partner with them to encourage what is working, change what isn’t, and develop healthier practices

-describing a pattern in the workplace of how we experience data and information

-known to occur, in my experience, but not often prioritized for change

-within our individual power to improve to some degree

Here are four ways to identify and change metric and data fixation practices.

[Image: Chalk drawing of Albert Einstein with pointy gray hair and a bushy mustache. Flickr, Alan Levine]

1. Reflect

Imagine you and your team are working on a gnarly problem that involves lots of stakeholders. You have no authority over them. At your next meeting, you ask if metric fixation is part of the problem. The group identifies some areas but it seems the constraints are beyond anyone’s control because some of the metrics are mandated or structural. Then someone asks, “What decisions are we making that keep these structures in place? How do our actions strengthen parts of the system that hold us back?”

These questions are part of a facilitation technique called panarchy. It’s used to connect our individual actions to the larger forces that constrain us. Performance metrics, data dashboards, benchmarks, audits and evaluations could be questioned by those of us who create or use them, or whose labor produces the numbers that fill them. The same reflective approach for explicit metric fixation applies to its implicit counterpart, especially when considering the role of equity.

Adapting the language of Public Design for Equity, we could ask:

  • What might frontline employees and the public like to say about our data dashboards and reports that hasn’t been said?
  • Do our data practices move us toward relationships and understanding with others?
  • People experience their government through a historical context of power, privilege, discrimination and trauma. What would acknowledging this history look like on our data dashboards and performance metrics or through the ways people grow in our organization?
  • What do our approaches to gathering, storing, sharing, accessing and analyzing data say about our equity and inclusion practices?
  • What assumptions need teasing out to better use success metrics for our policies, programs and individual job performance?

Reflecting on any of these questions the next time we engage with data dashboards and reports may help us consider the presence of metric and data fixation, and start to make bite-sized or wholesale changes.

2. Test

“The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.” ~Donald T. Campbell, Psychologist (1976, p.34)

“The federal government is preparing to crack down on hospitals for not reporting COVID-19 data on a daily basis. That crackdown comes in the form of cutting off Medicare funding to hospitals that don’t comply […one is] concerned that the threat of losing funding might cause hospitals to start making up data to report.” ~Pien Huang, National Public Radio (24 September 2020)

Do the data points on your dashboards and reports lead your team to do what is best for the data rather than for people and the planet? Is the pressure of performance metrics corrupting the intended outcomes of your work? If the honest answer to either question is yes, you’re not alone.

Ask community organizations or local governments that receive grant money to show you the paperwork they must provide to funders. It can be a burdensome dance in which grantor and grantee both know the reporting is excessive or doesn’t reflect the value being provided, but the performance metrics were contractually stipulated in advance and now, everyone assumes, must be met. The exercise can end up producing success indicators rather than real success.

What to do? Test out some changes.

For example, I periodically gather public sector employees who are interested in how people experience the design of policy and performance metrics. We explore metric fixation at different levels of government. One of our presenters from the U.S. Department of Health and Human Services shared how she and her team partnered with their grantees to reimagine success metrics. They facilitated a session to discuss what success means to whom, the role of power and equity, and then developed additional measures besides those that were mandated.

On one project, I helped a team of social workers who had long struggled with being asked to serve a certain number of people a month. The manager at the time explicitly prioritized this number as the greatest indicator of their job performance. Most of the help these social workers provided their clients wasn’t counted: getting them access to housing, food and health care, or out of domestic violence, for example.

We discovered the monthly success metric was created nearly 15 years prior by the budget office and the former manager. Back then, everyone may have been acting in good faith to justify spending money on creating this public program, but the success metric needed to be revised. Knowing this allowed the social workers and manager to work towards productive changes. The mysterious power of the metric had been replaced with something within people’s ability to change. A connection between themselves and the system had been made. They were open to testing improvements.

[Image: Flickr, Kristian Bjornard]

3. Partner

I’ve found people are more willing to change their work practices if they feel what is being asked is integral to what they do rather than additional to it. Using this integral-versus-additional lens, who in your government is working to develop a healthy data culture or alternatives to metric fixation? They may be keen to collaborate.

Chief Data Offices are a good place to start. Ideally, they strive to get different departments to practice a similar, ethical process for how data is gathered, stored, accessed and used. It’s a massive undertaking. How can we support their efforts?

One way in San Francisco is through a Data Academy, where municipal employees volunteer to teach courses to their colleagues. Our team leads one on service design. We try to offer alternative ways to think about data by researching the needs of those we serve. We ask workshop participants whether their clients would recognize, in a project’s success metrics, what matters most to those clients. The purpose is to find opportunities for co-producing success metrics with the public and internal stakeholders.

Besides partnering with your chief data office, are you lucky enough to work alongside a behavioral insights team? What an opening to include design research about metric fixation on one of their choice architecture or nudging projects. Together, you could find out whether Key Performance Indicators are negatively influencing people’s behavior and whether that affects the intended results of policies, procedures and services.

If you work to deliver modern government tech, design researchers are well equipped to include questions that tease out what people say, feel, think and ultimately do in response to performance metrics. These questions could be added while researching the government employee needs that lead to a technological solution. Understanding how data incentivizes public sector employees, alongside their other needs, could be revelatory.

For example, my team is part of a multi-year project to integrate several of our agency’s largest programs. We’re trying to make it easier for clients to apply for and receive as many social care services as they need for themselves and their families. As we prepare to work with others to sketch out the future state and explore technological solutions, we plan to include a data layer in our analysis. It will help us to understand how people experience performance metrics and what they would like to change.

I’ve been inspired by the Centre for Public Impact, which gathered a handful of local governments in the UK earlier this year to reimagine their use of success metrics. They discuss transitioning from “measurement for control” to “measurement for learning.” They also offer a values-based framework that may encourage other communities of practice in this space or serve as a point of comparison with your own measurement approach.

4. Imagine

“There is one stakeholder group, larger than all the other ones combined, that is almost always ignored: future generations […] We should not make decisions that reduce the range of choices available in the future, but we do so continuously. In many of our decisions we do not even take into account our own future interests.” ~Russell Ackoff (1989, p.7)

I asked the CEO of a global manufacturing company about his organization’s data culture. He responded that for years executives and managers decided what KPIs their salespeople should meet. One day they asked their frontline teams if these were the right success metrics. Some were; others weren’t. He was shocked, however, at how the metrics were incentivizing the sale of certain things that weren’t in the best interest of customers or the company. They have since created feedback loops and greater transparency within teams and across the organization to root out metric fixation early.

In a separate interview with a client-facing employee of a consulting firm, I also asked them to describe their data culture. I heard how their company talks about valuing customer experience and needs, but the database used to manage employee performance and billable hours had no such indicators. My interviewee shared a sense of cynicism about the gap between the company’s explicit values and how performance was actually compensated.

In both cases, good people were managing, or being managed, in part by inadequate success metrics. The last mile (or kilometer) in using data to make better decisions is how people experience information: what they think, feel, say and ultimately do with it. All the effort that goes into collecting, storing, managing and accessing data falls short of its potential if people focus on the wrong thing, invent numbers to make a metric look like a success, or fool themselves about whether they’re helping others be better off.

Preventing metric fixation everywhere, every time, is unlikely. To start, we can name it more readily and devote attention to it by weaving the above suggestions into our current practices. We also need more data ethnographers across the public sector. Besides the work described up to now, these systems designers could also help with:

1. Mapping information silos within and between government departments during critical projects. Many of us know these silos exist in our workplaces. Try as we do to overcome them by copying everyone on emails or sending periodic newsletters, information silos persist. Documenting information flows among the stakeholders of a high-priority project in real time, then co-developing simple changes with them, may prove incredibly valuable. Learning from our ongoing response to COVID-19 is one example.

2. Understanding the effects of disinformation, which increasingly harms people worldwide. Those with a high distrust of mainstream news, including government employees and members of the public, cannot be expected to keep this sentiment from spilling into the workplace or into their interactions with their government. I shared this concern with a colleague, who responded that they had experienced such pushback when sharing COVID-19 data with a group of their employees who didn’t believe them. Whether it was related to a broader national phenomenon is unknown. What is clear is that disinformation and mistrust of information have become so widespread that governments have developed national responses. What is helping your teams and the public trust the data and information shared through public websites or internal dashboards?

3. Supporting broader algorithmic auditing measures. U.S. state and city leaders, as well as the government of Aotearoa New Zealand, are in the early phases of this work. Popular examples where this analysis is needed include different sentencing guidelines for white and non-white people and determining who should receive social care. Which departments in your government are using algorithmic products, based on what datasets, and to what ends? These sorts of questions need to be asked by government officials and of the vendors from whom we procure such products. A small sketch of one narrow audit check follows below.
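
As a purely illustrative example, here is a minimal Python sketch of one narrow slice of such an audit: comparing how often an automated decision favors different groups. The records, group labels, and the screening threshold are assumptions made for illustration; a real audit would also examine the underlying datasets, the model’s inputs, and how staff act on its outputs.

```python
# Hypothetical decisions from an automated system; the data and labels are invented.
from collections import defaultdict

decisions = [
    {"group": "group_a", "approved": True},
    {"group": "group_a", "approved": True},
    {"group": "group_a", "approved": False},
    {"group": "group_b", "approved": True},
    {"group": "group_b", "approved": False},
    {"group": "group_b", "approved": False},
]

totals, approvals = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    approvals[d["group"]] += d["approved"]

# Approval rate per group: group_a ≈ 0.67, group_b ≈ 0.33 in this toy data.
rates = {group: approvals[group] / totals[group] for group in totals}
print(rates)

# One common screening heuristic borrowed from U.S. employment law (the
# "four-fifths rule"): flag for closer review when one group's rate falls
# below 80% of another's.
ratio = min(rates.values()) / max(rates.values())
print(f"Selection-rate ratio: {ratio:.2f}")  # 0.50 here, which warrants a closer look
```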

If your organization is succeeding in how people use data and information, or is struggling and wants to innovate in this area, let’s chat. I’m keen to learn about your current practices.

Marc Hebert is an anthropologist and the Director of the Innovation Office at the San Francisco Human Services Agency.