Executive Summary: Meta-Evaluation of the First Round of the Humanitarian Education Accelerator (HEA)

Lessons learned on scaling education innovations in humanitarian settings

--

American Institutes for Research (AIR) researcher during the evaluation of an HEA grantee’s project © WUSC/Lorenzo Moscia

Background

Identifying and scaling effective education innovations could rapidly increase both access to and the quality of education in humanitarian crisis settings. In protracted crises, however, many children and youth still lack access to high-quality education.

Recognising that insufficient evidence was available on how to scale education innovations in humanitarian settings (Elrha, 2018), the Department for International Development (DFID), the United Nations Children’s Fund (UNICEF), and the United Nations High Commissioner for Refugees (UNHCR) established the Humanitarian Education Accelerator (HEA), with the aim of understanding how to create the conditions necessary to scale existing pilot programmes.

The HEA had two major components:

  1. Research:
  • Conducting rigorous process and/or impact evaluations of selected education innovations’ effectiveness and potential to scale;
  • Collating findings from HEA research and existing evidence to examine and summarise barriers to and facilitators of education innovations’ journeys to scale, and to identify impacts on learning, other education outcomes, and psychosocial outcomes.

  2. Support and Mentorship:
  • Leveraging a mentorship model that pairs subject matter experts with implementers to mentor, guide and support them through the scaling process;
  • Improving monitoring and evaluation (M&E) capacity through capacity building, mentorship and M&E funding for selected implementing organisations, to improve evidence-based decision making.

Five implementing agencies (“innovation teams”) that were in the process of scaling education innovations in humanitarian crisis settings were selected for the first round of the HEA: Kepler, Libraries Without Borders (LWB), War Child Holland, WUSC and Caritas.


Top left to right: Students at Kepler’s Kiziba campus in Rwanda © UNHCR/Antoine Tardy; LWB’s Ideas Box © UNHCR/Charlotte Jenner; War Child’s CWTL project in Sudan © War Child Holland. Bottom left to right: WUSC’s remedial education programme in Kakuma refugee camp © WUSC/Lorenzo Moscia; Caritas’s Essence of Learning project in Bangladesh © UNHCR/Antoine Tardy.

To begin building the evidence base, the HEA engaged an external evaluation firm, American Institutes for Research (AIR), to produce a meta-evaluation synthesising the findings of mixed-methods process evaluations of all five innovation teams, as well as impact evaluations of three teams’ work.

The meta-evaluation research findings include:

  • Most innovation teams started multiple pilot projects in different contexts, rather than scaling up in one context. Teams indicated this approach was due, in part, to strategic considerations such as the generation of evidence; however, some teams did not have sufficient time or resources to build in key components for scaling their model in one context. Restricted funding limited their ability to create business systems, such as documentation of organisational, financial and partnership management, as well as other elements of project management that guide programmes as they scale. As a result, some teams operated in “Perpetual Pilot” mode (Gray, 2019), wherein pilot innovations are implemented in new contexts in response to a new emergency or funding opportunity, without the requisite sustainability components in place.
  • All five innovation teams considered it critical to partner with local and national governments, but the comprehensiveness of the partnerships varied. The nature of the programme and the stage of its scaling influenced government engagement strategies. Two teams reported that concerted efforts were required from the outset to maintain relationships with governments. Other teams preferred to resolve organisational and design-related issues to solidify the pilot programme, prior to engaging with governments on the potential to scale.
  • Innovation teams were flexible and often adapted their programmes based on community demands and in response to donor priorities. As a result, the five innovation teams were able to secure community support and generate demand for their programmes.
  • Although innovation teams used evidence to varying degrees during pivotal implementation moments, the availability of evidence to inform decision-making about education programming remains limited in humanitarian contexts. Given this gap, using evidence from development settings in low- and middle-income countries and investing in M&E systems can help strengthen theories of change and encourage evidence-based programming.
  • Remedial education programmes in Dadaab and Kakuma did not, on average, show statistically significant effects on learning outcomes, but combining remedial education with cash transfers or school feeding programmes may create synergies. WUSC’s remedial education programme showed positive and statistically significant effects on learning outcomes for girls in food-secure households who attended at least 50 hours of remedial education.
  • The effects of technology-in-education programmes on learning outcomes likely depend on contextual characteristics, including baseline levels of learning and whether children attend school. The evaluation of CWTL’s digital game-based learning approach found positive effects on learning outcomes among out-of-school children in Sudan. In Jordan, however, replacing 40% of mathematics and Arabic reading lesson time with the CWTL programme resulted in learning gains similar to those made by students at comparison schools that received the standard government curriculum. In Lebanon, students participating in the CWTL programme showed improvements in numeracy and psychosocial outcomes that key informants attributed to the programme, but the study did not include a comparison group.
  • Providing information about how to recruit refugees may help improve refugees’ labour market outcomes. Employers in Rwanda, a context in which refugees have the right to work, were 7 percentage points less likely to report that they would hire relatively well-educated refugees than nationals with identical characteristics. Analyses suggested that this finding was driven by employers who lacked knowledge about how to recruit refugees. Employers who reported sufficient knowledge about hiring and recruiting refugees stated that they were just as likely to hire refugees as Rwandans.
  • The impact evaluation findings represent a substantial increase in the evidence base on the impact of education innovations in humanitarian contexts. It is uncertain, however, whether the findings apply to other education programmes in different contexts. While innovators and other key stakeholders in humanitarian contexts can learn much from the findings, it is critical to continue investing in the generation of evidence on education in emergencies to examine the external validity of these results.

Based on the meta-evaluation findings, there are seven main recommendations:

  1. Donors should allocate core funding that is not linked to a specific programme but enables implementers to dedicate time and resources to building all the organisational capabilities and activities needed for scaling. These activities include developing formalised business systems to improve management, financial and administrative processes, and demonstrating the effects of innovations on learning outcomes.
  2. Donors should continue providing large-scale funding for education innovations that are further along in the scaling process and for which sufficient evidence exists of improved learning outcomes in complex emergencies. Donors should also continue funding smaller-scale innovations to reduce the risk of disincentivising early-stage innovation.
  3. The international community should encourage and provide pathways to test scaled-up education innovations with governments, in addition to piloting innovations to help accelerate learning for children and youth in humanitarian settings. Education innovators should start engaging with Ministry of Education (MoE) staff prior to and during implementation, to learn about the MoE strategy, structure and priorities, as well as options for sustainability.
  4. At the time of its inception, the HEA was one of very few initiatives that aimed to generate rigorous evidence about what works to improve learning outcomes in humanitarian settings. Evidence building should remain a priority for education in emergencies practitioners and donors, reflected in adequate investment and incentives to generate rigorous evidence.
  5. Innovators that focus on education in humanitarian settings should inform their programme design and strategies with evidence from development settings in low- and middle-income countries in order to overcome the current evidence gaps in humanitarian contexts.
  6. Education innovators in humanitarian contexts should continue to build and strengthen beneficiary feedback mechanisms to improve accountability, focusing not only on communication but also on full disclosure and “bottom-up” dialogue.
  7. When initiating programmes in multiple new contexts, education in emergencies practitioners should prioritise the early development of management systems capable of supporting operations across contexts, including streamlined administrative processes, financial tracking and partnership procedures.

For more detailed information on research design, findings and recommendations, you can read the full meta-evaluation report here.

--

Humanitarian Education Accelerator
HEA Learning Series

An Education Cannot Wait-funded programme, led by UNHCR, generating evidence, building evaluation capacity and guiding the effective scaling of education innovations.