Oh for the Love of Sticky Notes! The Changing Role of Evaluators Who Work with Foundations


Julia Coffman, Center for Evaluation Innovation, April 2016

There I was, stuffing sticky notes and Sharpies into a Ziploc bag. And staring back at me, in sharp neon relief, was the realization that my job as an evaluator had changed. Big time.

But before I get to that, let me offer some background. While I haven’t done any scientific research on this, I’ve done a good amount of asking around. And I’d posit that if you mapped the personality types of evaluators, a pretty sizable chunk would fall into what the Myers-Briggs Type Indicator® personality inventory calls the INTJ type.

The letters stand for Introversion, Intuition, Thinking, and Judging. I’d guess the percentage of evaluators who are INTJs (one of 16 possible personality types) is way higher than the nationwide average of 2–4 percent of the population.[1]

Percentage of INTJs in the General Population

I am an INTJ.

INTJs, according to the folks who study this, are analytic and original thinkers who prefer to work alone. We are at our best when we can quietly develop our ideas, theories, and principles. We like things to be pragmatic and logical. We quickly see patterns and develop long-range explanatory perspectives. We are skeptical and independent, and have high standards of competence and performance.

All seemingly decent traits for evaluators to have.

My Actual Myers-Briggs Profile

Now let me put this into some work-related terms. As an INTJ, my personal list of likes and dislikes about meetings and conferences looks like this.

You get the idea. So let’s go back to where we started, which was me sitting in front of the office supply cabinet, weighing the advantages and disadvantages of 3x3 versus 3x5 sticky notes, and vacillating on how many colors to bring. Should I bring the markers that smell good too? People really like those, especially the red one. Cherry…mmm.

How did this happen? How did I come to the point where I was creating what my colleague calls a facilitation “go-bag”? Did I mention that I hate sticky note exercises? (I did; see above.)

I’ve been an evaluator for 25 years now, and I love it. I love the opportunity to apply research skills to important questions. I love helping people to think about their hypotheses and assumptions and then testing them. I love informing strategic decisions at critical times. But the expectations about the role of the evaluator and the ways in which we work have fundamentally changed in the last 25 years. At least they have in the sector in which I work — philanthropy.

To explain what I mean, I need to offer a little history on the role of evaluation in philanthropy and how it has evolved, alongside what that evolution has meant for the considerable skills that evaluators who work with foundations now need to bring to their craft.

We are: Applied Social Science Researchers

Foundations first started engaging with evaluation only about 40 years ago, in the late 1970s, when the discipline of evaluation was still fairly new and just coming into its own. At that time, pioneering foundations like the Robert Wood Johnson Foundation and W.K. Kellogg Foundation began using evaluation to answer mostly impact questions about their programmatic investments. Did the program work? What was its impact on people’s lives?[2]

To answer these questions, evaluators primarily needed to apply social science research skills. They needed strong training in research design. They needed to understand validity and the various threats to it. They needed to know methods, statistics, and data analysis techniques. And when they had applied those techniques and answered foundations’ questions, they needed to write up their methodology and findings in a paper or report. Being an evaluator was a lot like being an academician.

We are also: Theorists

And then philanthropy evolved. In the late 1980s and 1990s, foundations began doing things beyond giving out individual grants on the issues they cared about, or in response to requests that came in. They began clustering grants into broader initiatives designed to tackle a common problem, and investing longer-term in a set of grantees that together were expected to achieve particular kinds of outcomes. They began to invest in communities or places in comprehensive ways. They were still asking impact questions about what their investments were achieving, but they also began asking other kinds of questions. To what extent are our grantees collaborating? How can we make sure that the whole of these grants adds up to more than the sum of its parts?

In response to these changes, evaluators had to evolve. We needed new techniques for evaluating more complex initiatives.

The hugely influential “theory of change” approach came onto the scene at that point, in the mid-1990s. It was originally developed by evaluators to support thinking about how to evaluate comprehensive community initiatives.[3]

The Original Sources on Theory of Change

Theories of change gave us an approach for articulating how foundations think change will happen and why. Identifying the causal assumptions — evidence-based or hypothesized — embedded in foundation thinking proved enormously helpful for evaluators, especially when the initiatives they were being asked to evaluate were becoming increasingly complex.

But the theory of change approach, which spread like wildfire across the philanthropic sector, required us to acquire new skills. We had to learn how to elicit theory and its underlying assumptions from program staff (often retrospectively, after grantmaking already had started), and then use it as an evaluative framework, pressure-testing the theory and its assumptions against both existing research and our own evaluation work.

We are also: Strategists

And then philanthropy evolved yet again in the 2000s, and in a big way. The embrace of strategic philanthropy principles across the sector substantially changed how foundations approach their work and grantmaking.

Strategic philanthropy holds that funders and grantees should have clear goals connected to strategies that are based on sound theories of change and clear short- and longer-term outcomes. At the heart of strategic philanthropy is a focus on results and accountability to those results. Foundations wanted to know: How are we performing against our desired outcomes and ultimate goal?[4]

Champion of Strategic Philanthropy

In response to this sector-wide shift, evaluators again had to evolve. Now we needed to become strategists and strategic planners and to engage with foundation staff in different ways. We needed to understand what strategy was and how to develop it. We had to dive into what big thinkers like Henry Mintzberg and Michael Porter were saying to the business world about how to think about and develop strategy. We had to figure out what it meant to shift the evaluand to strategy itself.[5] We had to gain skills in performance measurement. We needed to create feedback loops.

We were no longer standing off to the side of foundation work; we were firmly in the mix.

We are also: Strategic Communicators

With the ante raised on the role of evaluation as one of informing strategy and strategic learning, evaluators had to gain yet more skills to be effective. To prevent our information and findings from gathering dust on that proverbial shelf, we had to become strategic communicators.

Strategic communications is “the managed process by which information is produced and conveyed to achieve specific objectives.”[6] Evaluators had to become more deliberate, savvy, and less passive in our communications.

Overall, the mandate was to avoid long, technical reports that no one would read, without losing the meaning and rigor of the work. We had to learn how to:

  • Communicate in different formats — written, verbal, electronic, visual
  • Stop writing like academics
  • Write shorter pieces (or at least an executive summary)
  • Frame our findings in ways that helped people to understand the “so what”
  • Stop putting people to sleep with text-heavy PowerPoints
  • Be more charismatic and interactive presenters.

We are also: Systems Thinkers

And then in the 2010s, philanthropy shifted yet again. Many foundations began tackling even more complex problems rooted in systems that were deeply dysfunctional. They began to embrace complexity principles and systems thinking and to recognize that many actors and factors interacted in unpredictable, and often invisible, ways to create the problems they sought to address.[7] They began treating strategy as more dynamic and emergent and profoundly affected by context, rather than as a series of well-considered and predictable steps that can be forecasted in a long-term plan. Their questions focused less on “Are we doing things right?” and more on “Are we doing the right things in the first place?”

In response, evaluators needed to add even more skills. With theories of change arguably less useful in this context, we had to learn about systems thinking and the techniques that supported it like network and system mapping. Because constant adaptation must be an essential component of strategy in complex systems, we needed to learn how to use adaptive evaluation approaches such as developmental evaluation to help us respond to questions like: What is the network of relationships in the system? What other scenarios are possible? What should the foundation do next?[8][9]

And now we are also: Facilitators, Coaches, and Trainers

Philanthropy now appears to be in the process of making yet another shift that is affecting our profession. Foundations that fund strategies and initiatives increasingly are recognizing that learning is “real work” and part of a strategy rather than an optional add-on or something that you do just at the end of an effort. They increasingly are seeing constant learning and adaptation as an essential part of a strategy’s implementation. And they are calling out the importance of learning as something distinct from evaluation. For example, the David and Lucile Packard Foundation defines the terms as:

Evaluation: The systematic collection, analysis, and interpretation of data for the purpose of determining the value of and decision-making about a strategy, program or policy. Evaluation looks at what we have set out to do, what we have accomplished, and how we accomplished it.

Learning: The use of data and insights from a variety of information-gathering approaches — including monitoring and evaluation — to inform strategy and decision-making.

The emphasis on evaluation and learning once again is raising the bar on evaluators. Learning was of course always an implicit or intended outcome of evaluation. But now we’re being tasked more than ever with making sure it happens. Foundations are demanding evaluative information that leads to decisions and concrete actions. They want to learn and apply that learning. Regularly.

This is a really good thing, this demand for learning. But the problem is that people have a harder time than you might think connecting data to decisions and actions. It isn’t enough to strategically communicate information and then say, “Apply it.” Program staff say they don’t have time to think about how to apply evaluation findings, especially if they need to make decisions in teams. The practice of deliberating and reflecting on evaluative information often is not built into their day-to-day work or throughout their strategy lifecycles. They may not see clear incentives for prioritizing learning and adapting.

So evaluators have had to evolve again.

And this is where I come back to the sticky notes. We have had to acquire skills in supporting adult learning for individuals and teams. We have had to develop agendas for learning-focused meetings and to learn how to facilitate them effectively with group processes so that there is a clear list of “to dos” at the end. We have had to develop and get training in specific techniques to do this, like Emergent Learning.[10] We have had to train and coach foundation staff on how to manage and integrate evaluative thinking into their regular work.

The Point

So what’s the point? Am I complaining about my job and the diverse set of skills it requires in order to do it effectively? No. Well, maybe a little bit. The Introvert in my INTJ is pretty ticked off — she doesn’t like to facilitate and isn’t very good at it. But that doesn’t mean that I haven’t had to try to learn this skill, as well as the many others outlined here, as the bar continually has been raised on the responsibilities and expectations that come with being an evaluator in philanthropy.

Here’s the point. To be effective evaluators in philanthropy, we are expected to have all of these skills. None of the skills listed above have become less important over time; we’ve only had to pick new ones up.

For the record, this doesn’t just apply to external evaluators who work with foundations. It also applies to those who lead evaluation work inside foundations. The 2015 survey by the Center for Evaluation Innovation and the Center for Effective Philanthropy that benchmarked evaluation practice in 127 foundations found that, on average, evaluation staff reported eight distinct areas of responsibility, from supporting the development of grantmaking strategy, to designing and facilitating learning processes or events within the foundation, to disseminating findings externally.

It is rare to find a single evaluator or even an evaluation team that does all of these things well. The ones who do are superstars and in high demand. But what does this mean for the rest of us?

Here is what I think this means.

Social science research training should still be the core.

Evaluators historically have come to the profession from different disciplines and professional backgrounds. Because an advanced degree in evaluation is offered at only a small number of colleges and universities, evaluators often are trained in a particular social science discipline in which evaluation training is embedded (e.g., education, psychology, public health, political science).

As the diversity of skills needed to do our work with foundations has expanded, we’ve seen more consultants who are not formally trained in social science research doing evaluation at foundations. Given the range of important skills listed above, this is not surprising. Consultants who can offer deep expertise in strategy or systems, for example, make sense.

But if the job to be done is evaluation, I’m a strong believer in making sure that the evaluation team either has solid social science research training and experience, or has clear access to that expertise. When I say expertise, I’m not just talking about the methodological chops needed to run a randomized controlled trial. I’m also talking about a mastery of the basics — what it takes to develop a good survey, or to conduct quality interviews and analyze qualitative data.

Evaluators answer questions by applying the methodology best equipped to answer them given available resources. Evaluation practice that lacks the expertise needed to make and execute sound methodological choices risks producing invalid, misleading, or even harmful findings.

This should look familiar…to someone on the evaluation team.

A possible analogy here is medical doctors, who increasingly are being asked to also be counselors. While counseling skills clearly add value to the patient experience, it’s the medical training and expertise we care most about when a doctor offers a diagnosis.

We will need training.

There are very few, if any, evaluators in the world with formal training in all of the skills I’ve mentioned. While it may be possible to assemble teams or draw on larger organizations with individuals who are experts in diverse skills, many evaluators work independently or in small teams and have to perform multiple skills at once. This means we need to pursue the training we need to do our jobs effectively. These are skills, and we need to practice them. If we aren’t good at them, it usually requires more than reading a book to learn them.

We increasingly need to work in teams or to collaborate with other evaluators.

This may sound obvious given everything I’ve said, but especially for prolonged evaluation engagements with foundations, it’s important to work in teams that can bring to bear all of the skills needed (understanding that not every skill mentioned above will be needed in every engagement). We often think about making sure that we have the methodological and substantive expertise we need, but as I’ve pointed out in some detail, other skills are important too, especially the soft skills associated with effectively running meetings to elicit theories of change or to promote learning and the use of evaluation findings.

In my own evaluation practice, I have the great fortune of partnering with an ENTP (Extroversion, Intuition, Thinking, Perceiving), an ideal personality match for my INTJ. While we share many of the same strengths, she brings to the table many of the skills in which I am not strong. It has made my practice way more impactful.

Me and My Amazing ENTP Colleague Tanya Beer

There are times when we should push back.

For some of the skills mentioned above, you may be thinking that they are really the program staff’s responsibility more than the evaluator’s, and that taking them on is a step too far toward integration and a step too far away from objective and independent judgment.

A challenge many program staff experience, particularly when a great deal of money needs to go out the door, is that some foundations are understaffed for the many roles that program staff have to play. Their responsibilities include:

  • Being immersed enough in the field to stay sufficiently on top of the players and the politics
  • Developing program strategy
  • Making and monitoring grants
  • Convening and network building
  • Communicating with grantees and their broader fields
  • Doing organizational development work and handling internal politics
  • Courting other funders.

When program staff lack the bandwidth to add anything more to their workload, they often ask evaluators to take on responsibilities that program staff would and should normally lead (e.g., developing a theory of change and using data for ongoing decision making). Evaluation consultants become program staff extenders.

This is tricky, because everyone is time challenged. But we should not take on more than our role allows. We should support, but not lead, the strategy development and implementation process. We should add to, but not drive, a theory of change. We should support learning, but we should not be the only ones who learn. (I have been asked to produce unilaterally what program staff have learned when they are too busy to stop and reflect).

Our relationship with program staff and our level of integration with program teams will depend on the work that we are doing. If it’s impact evaluation, we will be separate and independent. If it’s developmental evaluation, we will be integrated. Either way, if we are making programmatic decisions ourselves or leading the work, then we are doing a different type of consulting than evaluation. What we call evaluation, and what we do not, matters for maintaining the integrity of our discipline.

Reconciliation and Resolve

I wrote this after waking up in the middle of the night in a hotel room after a meeting I facilitated went particularly poorly. My facilitation go-bag had been stuffed into the closet so the sticky notes couldn’t stare back at me with all of their impudent cheery brightness.

Facilitation Go-Bag

It’s hard, and sometimes not possible, to do everything well.

But in the end, where I land on all of this is a larger point about evaluative practice in foundations. While it still very much involves the critically important impact evaluation work that launched the field 40 years ago, evaluative work and thinking in some form is now expected at every level of foundation work (grant, initiative or strategy, overall foundation), and in every phase of a program strategy’s lifecycle (development, implementation, and exit). This is good. This is what we wanted.

Foundations in the U.S. give billions of dollars annually. They tackle problems that affect hundreds of millions of people in profound ways. What evaluators do to support that work at every level and in every phase is critically important. And it is our responsibility to continuously improve in order to do that as best we can.

So okay, sticky notes, let’s make up. We live to facilitate another day.

I really don’t. But I’m trying.

Julia Coffman is founder and director of the Center for Evaluation Innovation and co-director of the Evaluation Roundtable.

References

[1] The Myers & Briggs Foundation. Downloaded on March 4, 2016 from: http://www.myersbriggs.org/my-mbti-personality-type/my-mbti-results/how-frequent-is-my-type.htm

[2] Hall, P. D. (2004). A historical perspective on evaluation in foundations. In M. T. Braverman, N. A. Constantine, & J. K. Slater (Eds.), Foundations and evaluation (pp. 27–50). San Francisco: Jossey-Bass.

[3] Connell, J., Kubisch, A., Schorr, L., & Weiss, C. (Eds.). (1995). New approaches to evaluating community initiatives: Concepts, methods, and contexts. Washington, DC: Aspen Institute.

[4] Brest, P., & Harvey, H. (2008). Money well spent: A strategic plan for smart philanthropy. New York: Bloomberg Press.

[5] Patton, M. Q., & Patrizi, P. A. (2010). Strategy as the focus for evaluation. In P. A. Patrizi & M. Q. Patton (Eds.), Evaluating strategy: New directions for evaluation, No. 128, (pp. 5–28). San Francisco: Jossey-Bass.

[6] Karel, F. (2000). Getting the word out: A foundation memoir and personal journey. In S. L. Isaacs & J. R. Knickman (Eds.), To improve health and health care 2001: The Robert Wood Johnson Foundation anthology. Princeton, NJ: The Robert Wood Johnson Foundation.

[7] Kania, J., Kramer, M., & Russell, P. (2014). Strategic philanthropy for a complex world. Stanford Social Innovation Review (Summer), 26–37.

[8] Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York: Guilford Press.

[9] Patrizi, P., Thompson, E., Coffman, J., & Beer, T. (2013). Eyes wide open: Learning as strategy under conditions of complexity and uncertainty. Foundation Review, 5(3), 50–65.

[10] Emergent Learning is a technique developed by Signet Research & Consulting LLC and Fourth Quadrant Partners LLC. See www.emergentlearning.com.
