Notes on Principles-Focused / Developmental Evaluation

What is it like to bring rigor to evaluation when working in complexity and uncertainty, with emergent approaches?

Marc Rettig
Published in Rettig’s Notes
13 min read · Oct 24, 2017

--

This is a place for me to gather and organize notes, which I’m making public in case it helps someone else. These notes (which will keep changing over coming months) are an input to a further process of synthesis.

“in a complex system target setting and measurement of progress against those targets is pointless, and people will corrupt the system (change behaviour or full on cheat), and you are probably going to get unintended negative consequences — like a plague of poisonous snakes”

https://whatsthepont.com/2019/05/19/campbells-law-and-why-outcome-measurement-is-a-dead-cobra/amp/

Developmental evaluation in-box

My notes start after this section. I use the in-box to collect links I aim to get to in the future.

The link above includes a very interesting-looking video: a full 90-minute “M&E Sandbox” conversation among a great group of participants, followed by a list of resources mentioned in the session.

New book from Michael Quinn Patton:
https://www.amazon.com/Facilitating-Evaluation-Principles-Practice-Book-ebook/dp/B07C6NYXGY/

And that’s part of a series that looks great:
https://www.amazon.com/gp/product/B08CYWGNDN?ref_=dbs_dp_rwt_sb_tkin&binding=kindle_edition

Chris Corrigan has several posts on his blog; here’s a link to posts tagged with ‘evaluation’:
http://www.chriscorrigan.com/parkinglot/category/evaluation/

Duncan Green ponders the challenge:
https://oxfamblogs.org/fp2p/how-do-you-measure-the-difficult-stuff-empowerment-resilience-and-whether-any-change-is-attributable-to-your-role/

Equitable Evaluation Framework
https://www.equitableeval.org/

Podcast — looks juicy! Tamarack in general is a helpful source
http://www.tamarackcommunity.ca/library/applying-principles-focused-evaluation

Collection of material on EvalCafe
https://evalcafe.wordpress.com/

Developmental Evaluation Institute
Great resource list, including a recent anthology, Developmental evaluation for social justice.
https://www.developmentalevaluation.institute/resource-library/

Michael Quinn Patton has a more recent book:
https://www.amazon.com/Principles-Focused-Evaluation-Michael-Quinn-Patton/dp/1462531822

A conversation with someone who has applied these ideas was compelling, particularly a story about drawing the language and core of the principles from the people who are affected by the work. In the case of that conversation, the principles came from native people in Western Canada, whose mythic tradition was a rich source of principles for desired outcomes and for the way in which process and outcomes are entangled.

Also: https://www.amazon.com/Facilitating-Evaluation-Principles-Practice/dp/1506347614/

Good stuff recently from Dave Snowden on measurement in different domains of Cynefin:
http://cognitive-edge.com/blog/a-sense-of-direction-2-2/

SITE: Better evaluation

The Approaches section is interesting, and worth a good dive. These notes are about developmental evaluation, but there are other approaches listed here that are worth exploring, and perhaps deserving of their own notes.

Evaluation for the way we work

Michael Quinn Patton, Nonprofit Quarterly, Spring 2006, pp. 28–33

“The very possibility articulated in the idea of making a major difference in the world ought to incorporate a commitment to not only bring about significant social change, but also think deeply about, evaluate, and learn from social innovation as the idea and process develops. However, because evaluation typically carries connotations of narrowly measuring predetermined outcomes achieved through a linear cause-effect intervention, we want to operationalize evaluative thinking in support of social innovation through an approach we call developmental evaluation. Developmental evaluation is designed to be congruent with and nurture developmental, emergent, innovative, and transformative processes.”

“…Not all forms of evaluation are helpful. Indeed, many forms of evaluation are the enemy of social innovation. This distinction is especially important at a time when funders are demanding accountability and shouting the virtues of “evidence-based” or “science-based” practice. The right purpose and goal of evaluation should be to get social innovators who are, often by definition, ahead of the evidence and in front of the science, to use tools like developmental evaluation to have ongoing impact and disseminate what they are learning.”

“…Adding a complexity perspective to developmental evaluation helps those involved in or leading innovative efforts incorporate rigorous evaluation into their dialogic and decision-making processes as a way of being mindful about and monitoring what is emerging. Such social innovators and change agents are committed to grounding their actions in the cold light of reality-testing.”

“Complexity-based, developmental evaluation is decidedly not blame-oriented. Removing blame and judgment from evaluation frees sense and reason to be aimed at the light — the riddled light — for emergent realities are not clear, concrete, and certain.”

“As a complexity-based, developmental evaluation unfolds, social innovators observe where they are at a moment in time and make adjustments based on dialogue about what’s possible and what’s desirable, though the criteria for what’s “desirable” may be quite situational and always subject to change.”

Summative evaluation

“Summative judgment about a stable and fixed program intervention is traditionally the ultimate purpose of evaluation. Summative evaluation makes a judgment of merit or worth based on efficient goal attainment, replicability, clarity of causal specificity, and generalizability. None of these traditional criteria are appropriate or even meaningful for highly volatile environments, systems-change-oriented interventions, and emergent social innovation.”

“Developmentally-oriented leaders in organizations and programs don’t expect (or even want) to reach the state of “stabilization” required for summative evaluation. Staff in such efforts don’t aim for a steady state of programming because they’re constantly tinkering as participants, conditions, learnings, and context change. They don’t aspire to arrive at a fixed model that can be generalized and disseminated. At most, they may discover and articulate principles of intervention and development, but not a replicable model that says “do X and you’ll get Y.” Rather, they aspire to continuous progress, ongoing adaptation and rapid responsiveness. No sooner do they articulate and clarify some aspect of the process than that very awareness becomes an intervention and acts to change what they do. They don’t value traditional characteristics of summative excellence such as standardization of inputs, consistency of treatment, uniformity of outcomes and clarity of causal linkages. They assume a world of multiple causes, diversity of outcomes, inconsistency of interventions, interactive effects at every level — and they find such a world exciting and desirable. They never expect to conduct a summative evaluation because they don’t expect the change initiative — or world — to hold still long enough for summative review. They expect to be forever developing and changing — and they want an evaluation approach that supports development and change.”

“Moreover, they don’t conceive of development and change as necessarily improvements. In addition to the connotation that formative evaluation (improvement-oriented evaluation) is ultimately meant to lead to summative evaluation (Scriven, 1991), formative evaluation carries a bias about making something better rather than just making it different. From a complexity-sensitive developmental perspective, you do something different because something has changed — your understanding, the characteristics of participants, technology, or the world.”

“Those changes are dictated by your latest understandings and perceptions, but the commitment to change doesn’t carry a judgment that what was done before was inadequate or less effective. Change is not necessarily progress. Change is adaptation. Assessing the cold reality of change, social innovators can be heard to say:”

“We did the best we knew how with what we knew and the resources we had. Now we’re at a different place in our development — doing and thinking different things. That’s development. That’s change. But it’s not necessarily improvement.”

Jean Gornick, ED, Damiano, Duluth, MN

“The thrust of developmental evaluation as an approach to operationalizing the evaluative thinking mindset involves integrating hope and reality-testing, simultaneously and, perhaps paradoxically, embracing getting-to-maybe optimism and reality-testing skepticism.”

Case story

p. 31ff — good stuff

“Complexity-based developmental evaluation shifts the locus and focus of accountability. Traditionally accountability has focused on and been directed to external authorities and funders. But for value-driven social innovators the highest form of accountability is internal. Are we walking the talk? Are we being true to our vision? Are we dealing with reality? Are we connecting the dots between here-and-now reality and our vision? And how would we know? What are we observing that’s different, that’s emerging? These become internalized questions, asked ferociously, continuously, because they want to know.”

“That doesn’t mean that asking such questions and engaging the answers, as uncertain as they may be, is easy. It takes courage to face the possibility that one is deluding oneself. Here the individual’s sense of internal and personal accountability connects with a group’s sense of collective responsibility and ultimately connects back to the macro, to engage the question of institutional and societal accountability.”

A Developmental Evaluation Primer

Jamie Gamble, The McConnell Foundation

Dominant approach

“The dominant approach to solving problems is that of logic. There is a natural sequence of steps that moves us from problem to solution. We move methodically from assessing the situation to gathering and analyzing data, formulating a solution and then implementing that solution (see Figure 1). This linear logical approach works very well when the problem is well understood; there are clear boundaries and there is a limited set of possible solutions, of which there is likely one that is optimal. [I.E., this is a “complicated domain” approach. -mr] Current evaluation is generally built around supporting this kind of problem solving. Summative evaluations render judgments about the merit, worth and value of a standardized program. Formative evaluations help a program become an effective and dependable model.”

Typical evaluation approach in the complicated domain

“The very techniques that enable evaluation excellence in more static situations — standardization of inputs, consistency of treatment, uniformity of outcomes and clarity of causal linkages — are unhelpful, even harmful, to situations where there is a lot of uncertainty and ‘moving goalposts’. Making a judgment of merit or worth based on efficient goal attainment, replicability and clarity of causal links works for a well-defined technology or intervention. With dynamic and unpredictable phenomena, however, these same criteria can actually so narrowly define and structure the evaluative questions as to interfere with learning and adaptability. Innovation is often about breaking previous boundaries. Developmental evaluation is more suitable in such situations because it supports the process of innovation in ways that enable exploration and development.”

“Developmental evaluation applies to an ongoing process of innovation in which both the path and the destination are evolving. It differs from making improvements along the way to a clearly defined goal. Where more traditional approaches to evaluation try to predict the outcomes of the innovation and focus measurement on those goals, developmental evaluation is intended to support innovation within a context of uncertainty. The ‘developmental’ in developmental evaluation is based on the innovation driving change.”

“Evaluation is about critical thinking; development is about creative thinking. Often these two types of thinking are seen to be mutually exclusive, but developmental evaluation is about holding them in balance. What developmental evaluation does is combine the rigor of evaluation, being evidence-based and objective, with the role of organizational development coaching, which is change-oriented and relational.”

“To do this, the evaluator is positioned as a part of the team that is working to conceptualize, design and test new approaches. The evaluator’s primary role is to bring evaluative thinking into the process of development and intentional change. The developmental evaluator is there to introduce reality testing into the process of innovation. Feedback is supported by data and is delivered in an interactive way that helps the innovator(s) to fine-tune what is going on, consider and adapt to uncertainties and inform decisions. Developmental evaluation facilitates assessments of where things are and reveals how things are unfolding; helps to discern which directions hold promise and which ought to be abandoned; and suggests what new experiments should be tried.”

Skills of a developmental evaluator

Page 42 of the document

  • Truth to power
  • Process facilitation
  • Pattern recognition
  • Listening and communicating
  • Tolerance for ambiguity

Tools of developmental evaluation

Page 48 of the document

  • Network mapping
  • Revised and emergent modeling [??]
  • Simulations and rapid reconnaissance [??]
  • Appreciative inquiry
  • Visual language
  • [Surprised they didn’t list narrative inquiry!]

Issues and challenges

Page 54

  • Power
  • Perceptions of credibility
  • Ambiguity and uncertainty
  • Volume of data
  • Sustainability: building evaluation capacity
  • Keeping a results focus

Developmental Evaluation Toolkit

Is DE right for your setting?

Here’s an assessment tool / checklist (PDF). It helps inquire into these questions:

  • Is it a developmental situation?
  • Are you ready for developmental evaluation?
  • Do you have adaptive capacity?

Understanding complexity

DE is suited to environments with a high level of complexity:

  • social embeddedness
  • emergent properties
  • non-linear causality
  • lack of central control and a high level of spontaneity

(pointer to resources on Tamarack site; there are of course many more)

Understanding systems approaches

“Developmental evaluation also elucidates how the issues at hand are embedded in systems and how systems can be part of the solution. Developmental evaluators need to understand the concepts of systems and be able to observe, document, diagram, and discuss systems, as they are often the frame through which problems and solutions can be understood in a complex environment.”

Points to:

Soft skills of developmental evaluation

They kind of punt on this, aside from mentioning the great importance of process, perception [and communication?] skills. Points for that. They point to a document called The Art of the Nudge by Langlois, Blanchet-Cohen, and Beer, which discusses five practices for the developmental evaluator:

  1. Practicing servant leadership: using an appreciative lens, listening deeply and actively, integrating reflection into practice.
  2. Sensing program energy: opening channels of communication, bringing interpersonal dynamics to the surface.
  3. Supporting common spaces: to identify observations, to prioritize interventions.
  4. Untying knots iteratively
  5. Paying attention to structure

[This paper looks like a great read. I’d argue those are core skills for all work in social complexity, not just evaluation. Come back to this!]

Readiness for developmental evaluation

“Not all settings are equally ready for developmental evaluation. In an environment where failure is not an option, developmental evaluation may not be a good choice.”

Points to:

  • a Readiness Assessment Protocol [an input for future attempts at a “maturity scale” for participatory, systemic, emergent, human approaches to work in social complexity]
    From http://tools.sparkpolicy.com/wp-content/uploads/2014/10/ReadinessAssessmentProtocol.pdf

Other tool sets

I’m not ready to make notes on everything that’s in the Spark Toolkit just now. But to make these notes more helpful for searching and scanning, here’s the outline.

Introducing developmental evaluation practices

  • Identify your team (points to an FSG blog post)
  • Establish a foundation
  • Begin the learning process
  • Collect data and generate findings
  • Keep responding and adapting
  • Other tips for ongoing practice

Staying on top of the action (including sample protocols)

  • Data collection tool: One-on-ones
  • Data collection tool: Observation
  • Data collection tool: Right Now Survey

Structure for the learning process

From collecting data to generating findings

  • Prioritizing questions to answer
  • Identifying data to collect and analyze
  • Generating your “findings”
  • The format for findings

Example frameworks (necessarily short, but still helpful?)

Example methods

  • Story formulation methods
  • Mapping techniques
  • Exploring the future
  • Uncovering underlying themes, causes and relationships

Case studies

[Generally a good resource, but all toolkits suffer from listicle syndrome: breadth without enough room to qualify or to give depth. So much is left as an exercise for the reader, with so many necessary assumptions about the reader’s skill in discerning tool choice and wisdom in application.]


Marc Rettig
Fit Associates, SVA Design for Social Innovation, Okay Then