Evaluation: Driving Toward Impact While Encouraging Learning and Flexibility

CASE at Duke
Dec 11, 2020

A funder’s role in evaluation, reporting, and metrics requires a delicate balance between adding value and creating distraction. Consider the following advice to ensure that you hold your grantees/investees accountable to greater impact without taking them off track.


Measure organizations’ progress and results against a few key milestones as opposed to a more granular list of specific activities. See the Mulago Foundation’s suggestions on specific and quantitative milestones across the areas of delivery, organizational capacity, and impact.

Many social enterprises prioritize data that holds them accountable to you, as a funder. Work with social enterprises to understand which KPIs would best hold them accountable to their clients, ensuring that client voice is captured in that process, and then hold them accountable to those metrics.

When scaling through partnerships or systems change, success extends beyond the contributions of any one entity. Yet funder requirements can create perverse incentives for enterprises to maintain ownership in order to claim direct attribution for impacts achieved. Funders should instead seek to understand and reward contribution, allowing social enterprises to minimize the credit they seek and empowering others to sustain the change.

Scaling through partners, such as government, is often appealing due to increased reach and speed. However, achieving this reach may come with tradeoffs, such as decreased impact per unit. Discuss this with grantees/investees to avoid surprises, and develop metrics to determine acceptable levels of tradeoff.

The Equitable Evaluation Initiative describes this challenge best: “Certain kinds of data and evidence have come to be viewed with value and legitimacy in philanthropy. Many foundation boards have come to expect simple quantitative dashboards, and those with particular academic backgrounds often value experimental research designs regardless of their fit to the situation. The field has come to treat with suspicion what is often called ’self-reported data’ and to dismiss even systematically collected and analyzed qualitative data as merely ’stories.’” Consider the types of data you value from the enterprises in your portfolio, ask why that is, and determine how you can prioritize both quantitative and qualitative data that allow different voices to be represented.

There are several emerging standards that can be adopted by funders as a baseline to ensure evaluation efforts — by you and your portfolio enterprises — are equitable. The Skoll Foundation is exploring the equitable evaluation principles to inform how measuring and evaluating impact can “advance progress towards equity; answer critical questions about how historical and structural decisions have contributed to the social change being addressed; and foster community participation in defining success and shaping how evaluation happens.” Other standards include the Responsible Data framework.

Funders can play a role in encouraging organizations to seek out storylines in the data that challenge existing organizational narratives about what works and why. If you and your portfolio organizations are open to such inquiry, it could result in significant improvements, but it also requires you to be open to pivots as needed, and to ensure that your funding allows for such data-driven changes.

The collection and analysis of an organization’s core data must be simple and repeatable to allow for scale. Adding too many reporting metrics from external stakeholders, including funders, can burden the data collection process and undermine the ability of strategic data to help power scale. Before asking for additional data, ask yourself the three questions that social enterprise Harambee Youth Employment Accelerator uses before adding new metrics:

  1. Why are we tracking that?
  2. What behavior will it drive internally?
  3. Does it keep the client/beneficiary at the center?

Some funders, including the Skoll Foundation, do not require grantees to report on a set of common measures, because common measures would not truly capture the impact of each unique organization. Instead, the Skoll Foundation allows grantees to define their own metrics of success, which they regularly report to the foundation. This approach is similar to the “bring your own lunch” approach described in MIT D-Lab’s “The Metrics Café.”

FSG’s six conditions of systems change (policies, practices, resource flows, relationships & connections, power dynamics, and mental models), as described in “The Water of Systems Change,” provide a helpful framework for organizations and funders to map metrics. Work with organizations to identify the metrics that best align with the short-term and long-term change they are seeking.

As the Skoll Foundation’s Liz Diebold and Anna Zimmerman Jin shared in a Devex article, “Systems change takes many years, but we look for shorter-term signals to spot it. For example, successful replication of a solution by government, the private sector, or civil society; leveraging a network of actors to achieve common impact goals; or contributing to changes in policies governing an ecosystem.”

Scaling Pathways

Hard-won insights on the path to impact at scale

Scaling Pathways is a partnership between the Skoll Foundation, USAID, Mercy Corps Ventures, and CASE at Duke to curate and share scaling insights from the world’s leading social entrepreneurs.

Written by CASE at Duke

The Center for the Advancement of Social Entrepreneurship (CASE) at Duke University leads the authorship for the Scaling Pathways series.