Traditional evaluation looks backward; innovation looks forward. How do we evaluate innovation in real time?

Published in Caribou Digital · Sep 30, 2024

Written by Elise Montano and Niamh Barry on the Measurement & Impact team at Caribou Digital.

Innovation programs cultivate an environment of experimentation and continuous improvement in developing, implementing, and scaling new ideas, products, or processes to drive growth. These programs depend on rapid, actionable insights to stay ahead and be ready to pivot strategies and optimize outcomes in real time. However, many traditional evaluation approaches are neither responsive nor adaptive to the speed and focus of insights needed in innovation programs.

In Caribou Digital’s work with Mastercard Strive, we sought opportunities to break away from traditional evaluation models and to try a new approach that retained timeliness, flexibility, agility, and rigor, with a clear understanding of real-world constraints. From this experience, we devised an evaluative approach that helps programs working in dynamic systems generate incisive insights that enhance performance and impact.

Traditional evaluation is failing innovation programs.

Evaluations are usually conducted at predetermined moments in a program — for example, at the mid- or endpoint — rather than when stakeholders need information. Such evaluations focus on pre-established questions and do not respond to program stakeholders’ dynamic and complex insight needs. They typically emphasize accountability and documenting processes, not learning and improving program performance.

Innovative programs need real-time information that supports dynamic learning, rapid response, and experimentation for continuous improvement. Traditional evaluation approaches are simply too rigid to address these core needs.

Organizations that deliver complex programs need a better way of getting incisive insights at critical moments while maintaining evaluative rigor.

Our modular evaluation approach works with innovation programs.

Building on formative and developmental evaluation principles, we developed a flexible and agile approach to generating evaluative insights within the Mastercard Strive program. We call this “modular evaluation.” The characteristics of this approach include (see the illustrative sketch after the list):

  • Embedded: Work is led and conducted by evaluation specialists immersed in program delivery.
    >> The Caribou Measurement and Impact team is part of Mastercard Strive program delivery, working daily with program directors, grantees, and partners. We used our detailed knowledge of the program and its complexities, constraints, and learning objectives in evaluations.
  • Modular: Evaluations are conducted in thematic modules that enable faster, more focused, and concise work.
    >> We deployed three thematic modules — 1) small business outcomes, 2) program strategy and governance, and 3) partner management — allowing us to focus entirely on each module in turn.
  • Flexible deployment: Evaluations are delivered as and when insights are needed to support strategic decision-making, not according to a prescribed timeline.
    >> We delivered the partner management module during our first phase of programs, before designing the second phase, so that insights from one could be rolled into the next. We also conducted our small business outcomes module twice, nine months apart, to generate insights when grantees had the most data available.
  • Lean: Evaluations focus only on pertinent questions and data collection methods. They enhance existing data collected through regular reporting with lean data collection where it counts.
    >> For each module, we used grantee data from existing reports and filled the information gaps through focused interviews.
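
To make these characteristics concrete, here is a minimal sketch of how a modular evaluation plan could be expressed as data. This is a hypothetical Python illustration written purely for exposition; the class, field names, and question text are our own inventions, not part of any Strive tooling.

```python
# Hypothetical sketch only: a data-structure view of a modular evaluation plan.
# Class and field names are illustrative inventions, not real Strive tooling.
from dataclasses import dataclass

@dataclass
class EvaluationModule:
    """One self-contained evaluation unit with its own questions and timing."""
    theme: str               # thematic focus, e.g. "small business outcomes"
    questions: list[str]     # module-specific evaluation questions
    data_sources: list[str]  # existing reporting first, lean collection second
    informs: str             # the decision the module's insights must feed
    iterations: int = 1      # modules can be re-run as new data becomes available

# The three thematic modules described above, expressed as a flexible plan.
plan = [
    EvaluationModule(
        theme="small business outcomes",
        questions=["Which engagement strategies deepen small business uptake?"],
        data_sources=["grantee reports", "focused interviews"],
        informs="portfolio pivots and gap analysis",
        iterations=2,  # run twice, nine months apart, as grantee data matured
    ),
    EvaluationModule(
        theme="program strategy and governance",
        questions=["How well are granting processes supporting delivery?"],
        data_sources=["grantee reports", "focused interviews"],
        informs="the second granting phase",
    ),
    EvaluationModule(
        theme="partner management",
        questions=["What should change in how partners are selected and managed?"],
        data_sources=["grantee reports", "focused interviews"],
        informs="phase-two partner selection and management",
    ),
]

# Flexible deployment: each module runs when the decision it informs is near,
# rather than on a fixed mid- or end-point evaluation schedule.
for module in plan:
    print(f"{module.theme}: informs {module.informs} "
          f"({module.iterations} iteration(s))")
```

The point of the sketch is the shape: each module carries its own questions, data sources, and decision trigger, so it can be scoped, budgeted, and scheduled independently of the others.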

The benefits of this approach were immediately evident to our team and clients. We lined up evaluation modules to deploy throughout the project, providing insights at the moments they had the most strategic value.

We identified five key outcomes of this approach based on our experience.

1. Modular evaluations enable precision and flexibility, supporting insights at decisive times.

Our approach acknowledges that some modules or topics may require faster, more focused, and more concise work, or may serve different internal and external stakeholders who rely on the insights. Each module can be managed independently, with its own evaluation questions and analytical frameworks, on a timeline that best supports decision-making.

In Mastercard Strive, our small business outcomes module was adapted based on the outcomes expected at specific points. For example, the first iteration delivered insights on the impact of strategies for engaging small businesses with various solutions. It suggested where pivots could support deeper engagement and what other types of programs would address gaps in our portfolio. The second iteration — conducted nine months later — assessed early outcomes from our first phase of grantees (e.g., on small business capabilities and uptake of new business practices, products, and services) and revisited solution engagement data to incorporate new results and grantees. Future outcomes modules toward the end of the program will look at long-term outcomes and the sustainability of impacts for small businesses.

Figure: Mastercard Strive small business outcomes evaluation module

2. Focused modules support rapid delivery of insights.

Each evaluation module took at most three months to complete, and interim insights were often available within a month of launching data collection. In contrast, traditional evaluations often take more than six months to deliver final insights: collecting and combining data across multiple themes from a wide range of sources adds complexity to analyzing and presenting that data. In addition to lean data collection, a focused, agile approach lets evaluators concentrate on specific topics, dig into the details, and identify more nuanced and detailed insights.

3. Rapid insights support adaptive strategies.

Access to real-time learning enables grant and fund managers, working with our Measurement and Impact team, to be dynamic and responsive and to make evidence-based decisions. Our granting strategy evaluation module built on insights gleaned through ad hoc meetings and reporting, leading to a quick but structured approach to collecting and analyzing primary data. Within four weeks, our team had mapped the strengths and weaknesses of the granting and grantee management processes. We delivered concise recommendations that immediately fed into our second granting phase, including how we selected, developed, and managed programs.

4. Flexible timing and focused modules support stakeholder recall.

Traditional evaluations often interview stakeholders once on a wide range of topics, making for unwieldy interviews that ask questions about decisions made over a year before. A more flexible approach allowed our teams to conduct shorter, more focused interviews with stakeholders. The interviews were concise, asked questions about recent decisions, and allowed participants to prepare more effectively.

5. Modular evaluations are more cost-efficient.

We found this evaluative approach more cost-efficient than traditional evaluations for three reasons. First, the rapid, iterative nature of modular evaluations supports learning and continuous improvement, revealing opportunities for experimentation and adaptation early and avoiding costly mistakes. Second, modular evaluations are inherently lean. Data collection builds on existing knowledge and respects participants’ time, giving them clear boundaries around the scope of each module; and because evaluation teams are embedded within the programs, they don’t need to spend time learning the program’s context. Finally, the modular structure supports scalability: program managers can choose what is included and how much budget to dedicate to evaluations, ensuring that each module delivers adequate value for money.

Deploying modular evaluations in innovation programs

Modular evaluations are distinct from ongoing monitoring or measurement. They draw on and optimize the insights from monitoring systems, but couple them with rigorous evaluative approaches that ask how and why a particular outcome has been observed. To deploy modular evaluations, organizations need budget flexibility, a willingness to embrace uncertainty about evaluation timing and focus, and a team that is open to and supportive of real-time learning.

At Caribou Digital, we’ve seen the value of flexible, innovation-supportive approaches and are excited to promote a method that works with and for technology-focused innovation programs. We continue to deploy modular evaluations in our work and to collaborate with others who are similarly interested in ensuring that evaluations are candid, purposeful, and timely. If you are interested in this approach, please contact Elise Montano or Niamh Barry.
