Power and Over-Simplification: Results from A Thought Experiment

Sarah Stachowiak
7 min read · Oct 28, 2019


This entry is part of a series of evaluator responses to critiques of evaluation and our visions for a better future. Read more about the reason for this blog and the other responses here.

This past spring I had the opportunity to attend the Center for Culturally Responsive Evaluation and Assessment (CREA) Annual Conference in Chicago. I had colleagues who raved after attending in 2017, and I left with a real jolt of renewed energy about thinking differently about my own evaluation practices. As someone who cares deeply about my work contributing to social good, I find the idea of propping up oppressive systems and maintaining inequity through my work painful.

Much of what I took away about culturally responsive work had to do with sharing power and engaging more directly with those who are impacted by the effort at hand: giving them more say in what questions get asked, how findings are interpreted, and how evaluation is used to keep working toward the most good possible. One fun example my colleagues and I walked away with was hearing about an evaluator who had taken findings from the evaluation of a local public health effort and hired a graffiti artist to create a community mural about what they had learned. Super cool, right?

Imagining working with a graffiti artist, transforming meaningful findings into art and content for community

But most of my work is many arms' lengths away from the people and places my clients seek to support. I do a lot of work on systems change, policy, and advocacy efforts, trying to see the degree to which initiatives and strategies are having their intended cumulative effect. In most of this work, there is no physical wall onto which a mural could be painted, much less a clear set of specific stakeholders who would benefit from the systems-level lessons.

The vantage point I more often have, trying to take in many systems and elements at once. This view can provide new insights, but it can also diminish people to small dots in the frame. To push the metaphor farther (too far?), a mural done at this scale wouldn't have much meaning to the people who are supposed to be affected by it.

But it didn't feel good enough to just say culturally responsive practices weren't applicable. It couldn't be true that there was nothing I could do differently to drive these projects toward equity more explicitly. So, to be concrete rather than just conceptual, I decided to revisit a past project as a thought experiment. I chose a point-in-time strategic evaluation of federal advocacy for K-12 education policy. For this evaluation, we sought to learn how effective a set of grantees was in this space, and we conducted the evaluation over just four months. With the critique that evaluation is culturally irrelevant in mind, if I were to do this project today, what could (or should) I do differently? While there are many possible angles of critique, I tackled two that I could have handled differently with the benefit of hindsight: power and over-simplification.

At its root, I see much of the critique of evaluation as boiling down to who holds power: who has the power to decide things about an evaluation, and who benefits.

For this project, it was a pretty typical case of work being done on a timeline for a philanthropic decision, and I worked most closely with the program officer and program director. There was some engagement of the grantees, but it was fairly pro forma: they knew the evaluation was happening, and they received results at the end.

How could it have been different? How might I have shared power in different ways?

Our task was to understand the effectiveness of a set of federal advocates before the latest federal education policy bill, the Every Student Succeeds Act (ESSA), was passed. Now, I have a third grader and a fifth grader. While they and their friends have worthwhile things to say about how school could better meet their needs, and they are among the ultimate beneficiaries of this work, they couldn't provide information about whether these federal advocates were succeeding. Nor could their parents, and probably not their teachers either.

My daughter and her after-school friends. They are adorable and brilliant about many things. Probably less so about what federal advocates are doing in Washington, DC.

But what if I'd considered the mission of the foundation's program to be my "client"? What if, instead of orienting to the funder, I'd considered my priority to be advancing information that could best help them achieve better educational opportunities for all children in America? I would have asked some different questions and had different findings to share with the people who signed my contract.

I might not have talked to kids, parents, or teachers, but I could have expanded my boundary around who counts as a beneficiary of the policy by looking to, say, state education agencies, which have a more direct relationship to how federal policy flows down to their worlds and to the ways different proposals could advance or impede equity in states and districts.

If I didn’t even go that far, could I have given more power to the grantees? What if they had more say in what kind of information would help advance their work? What if they had been involved in more sensemaking of the results that came from my efforts?

Lesson: be more critical of boundary choices around who is in and out, who gets to define the parameters of the inquiry, and how success is determined. Push harder to engage those affected by the work, and think more creatively about who that could be for systems change efforts.

Another critique of evaluation as not relevant or useful is that it doesn't attend enough to context or multiple ways of knowing, flattening complexity into a neat, consult-ified answer.

In my project example, a moment of success came from the real-time creation of a classic 2x2 matrix, built from evaluation data, that mapped grantees along two dimensions: alignment with the foundation's vision and access to federal decisionmakers.

Mapping organizations using access to decisionmakers and alignment to foundation strategy

It felt actionable to foundation staff, who could wrestle with grantmaking decisions in light of these dimensions. But while the matrix was elegant and productive for decision-making, there are ways to rethink it, too.

Alignment and access are only two of the criteria that likely bear on the effectiveness of advocates. What about their commitment to equity and their influence among other advocates? Or the degree to which they had an ear to what their end constituents need? Adding such criteria might have led to different findings and different conversations about fit and future steps.

Another approach could have been to abandon the 2x2 altogether. What if we’d taken a field approach and considered the grantees as an ecosystem of actors, not organizations to pit against one another as though one advocate should rule supreme?

Either alternative approach could have, in a small way, helped to avoid over-simplification while still providing actionable information.

Lesson: what if we sought to build more wisdom rather than provide answers? (I heard this idea from Dr. Nicole R. Bowman-Farrell.) Can we think about our work as illuminating part of a complex web of reality instead of delivering an answer? Broaden the conversation: complexify things if they seem too simple and neat.

Using My Power for Good

Now, you may be sitting here thinking: what a terrible evaluation! Why did you make any of the choices you did? Truth is, this evaluation was viewed as a great success (see page 22). The program officer used the information. The ROI was deemed high. In a purely utilization-focused way, we knocked it out of the park.

But this just goes to illustrate how much power there is in the key evaluative tasks: setting the questions, the boundaries for inquiry, and the criteria for defining value.

Is the finding that there's a happy man who has summited this rock? Or can we broaden the lens to better see the context: what preceded this moment, and what barriers or advantages may lie ahead? Where we focus our evaluative lens matters.

In reality, there seem to be some simple ways to do things differently so that our work is more responsive to issues of power, complexity, and equity, even when we work with systems rather than directly with people.

Evaluators: we need to get more comfortable, more quickly, with a role that asks hard questions and wields our power to achieve more equitable outcomes in the world, rather than pretending we are neutral purveyors of information. We make so many decisions about boundaries, methods, and questions that can leave a lot of people and context out of the picture.

Funders and commissioners of evaluation: we need you to start demanding this of us, to be open to evaluation being different (maybe taking more time, maybe requiring additional resources for activities that are more culturally responsive and ultimately of higher quality), and to be willing to share power differently yourselves.

Because I believe in the power of systems-level change, I also believe that if we start asking different questions of systems, we will start to see them change to work better for the people they are supposed to serve.

More of the same isn’t good enough.

And the time to do more, better, different is now.

