Learning from an employment randomized control trial in Jordan
Applying the gold standard of research to innovative programming in a refugee context
In 2017, the International Rescue Committee took on one of the biggest problems facing the more than 650,000 Syrian refugees living in Jordan: the inability to find employment and build a sustainable livelihood while waiting to return home.
The Government of Jordan has shown, and continues to show, great support for the refugees it hosts, and international aid agencies have developed truly creative approaches to the problem. One of the most visible new solutions has been the Jordan Compact of 2016, which was meant to increase formal employment for Syrians and Jordanians, but has had only partial success.
The Compact was notable for putting Syrian refugee employment at the forefront of the humanitarian response agenda, but the IRC’s goal has been to go further: to identify the persistent challenges hamstringing efforts to improve employment outcomes, and to develop creative solutions through consistent and controlled trials, innovation, and failure. Over two years of work with 8,000 job seekers, 200 employers, and more than 1,000 successful job placements, we at Project Match learned a lot about employment in Jordan, the limits of policy change, and the promise of technology and human-centered design. Most importantly, we learned what “innovation” looks like on the ground. Here are some of the expected and unexpected lessons we took away from that experience.
All in Good Time
From the beginning, we envisioned our project as fast-moving; employment is, after all, critical both to refugees’ ability to rebuild their lives and to host communities’ economic health. We wanted to establish the project on the ground fast, experiment fast, learn fast, fail and iterate fast, and scale fast. And with funding to carry us only through two years, we had to move with purpose. What we quickly realized, however, was that we were squarely in the middle of a long-recognized problem for researchers working with crisis-affected populations: how to provide timely, effective interventions in dynamic and sometimes chaotic contexts while maintaining the highest research and ethical standards, most of which were designed for academic rather than humanitarian settings. There is always value in pushing hard for results in an emergency context, but testing interventions and prototyping solutions take time and benefit from deliberate investigation. This may simply be par for the course: the ideal conditions for unhurried trials rarely exist, especially in an environment in which the plight of refugees grows more desperate every day.
We had set aside the latter half of 2018 for prototyping and small-scale testing of interventions. One of the most important aspects of programming that we tested during this time was the distribution of cash support. In small-scale prototypes, we investigated the use of cash to overcome different challenges, given at different times and in different amounts. We compared “flexible” cash packages that job seekers could spend as they saw fit, “retention bonuses” for job seekers who stayed in their jobs, and targeted cash packages to meet transportation and childcare needs. As we experimented, we realized that there are hundreds of possible ways to deliver cash support — using different eligibility criteria, making receipt of the cash conditional on labor market engagement, or simply changing the timing or amounts of distributions. With the limited time assigned for prototyping, we could not adequately test every type of cash delivery — even some that showed promise. For example, cash may have a very positive impact on people’s ability to retain a job longer; however, the only way to really test this is to have enough time to observe long-term effects.
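The combinatorial explosion described above can be made concrete with a toy sketch. The dimension names and options below are purely illustrative, not the project’s actual menu of cash-package choices; the point is how quickly a few independent design choices multiply.

```python
from itertools import product

# Hypothetical design dimensions for a cash-support package.
# These option lists are illustrative only.
eligibility = ["all_job_seekers", "vulnerability_based", "referral_only"]
conditionality = ["unconditional", "on_job_search", "on_retention"]
timing = ["upfront", "monthly", "on_milestone"]
amount = ["transport_only", "childcare_only", "flexible"]

# Every combination of one option per dimension is a distinct package design.
designs = list(product(eligibility, conditionality, timing, amount))
print(len(designs))  # 81 designs from just four 3-option dimensions
```

Adding even one more three-option dimension (say, duration of support) would triple the count to 243, which is why a short prototyping window cannot adequately cover the design space.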
That said, were we to do it all over again, we would put much more time into prototyping and testing.
Living Up to Expectations
We had decided early on that we wanted to test different interventions and approaches using the most rigorous method available: the randomized controlled trial (RCT). An RCT is an experiment that randomly allocates subjects to two or more groups, applies a different treatment to each, and compares outcomes against a control group that receives no intervention. The RCT has come to be known as the “gold standard” in medical, social science, and economic research; but there are several potential drawbacks to using it in a humanitarian context.
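The allocation step at the heart of an RCT is mechanically simple. The sketch below illustrates the general idea with hypothetical arm names (not Project Match’s actual study arms): shuffle the participant list with a fixed seed, then deal participants round-robin across arms so group sizes stay balanced. Seeding the generator makes the assignment reproducible and auditable.

```python
import random

def assign_groups(participant_ids, arms, seed=42):
    """Randomly allocate participants across treatment arms and a control group.

    Returns a dict mapping each participant id to an arm name. The fixed
    seed makes the allocation reproducible, which supports transparency
    with participants and auditors alike.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    # Deal shuffled participants round-robin so arm sizes differ by at most one.
    return {pid: arms[i % len(arms)] for i, pid in enumerate(ids)}

# Illustrative arm names; a real study would pre-register its arms.
assignment = assign_groups(range(12), ["cash_flexible", "retention_bonus", "control"])
```

Real trials often add stratification (e.g., by gender or district) so that key subgroups are balanced across arms, but the core randomization logic looks much like this.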
We were nervous about how communities of refugees and other people in need would react to receiving services on a random (what could be perceived as arbitrary) basis. Would those assigned to the control group (i.e., those receiving no support) be upset? Would the fact that we were delivering support as part of a trial make clients feel like guinea pigs?
Most organizations assign services based on a “vulnerability scale” that prioritizes (or attempts to prioritize) people with the greatest needs, based on income, family size, and other factors. Many refugees have, over the long years, come to understand this system intimately and, more importantly, to resent it as unfair, overly complicated, or opaque. Though targeting based on vulnerability is no doubt useful, we spoke to many clients who related stories similar to this: “I have four kids and no job. My humanitarian aid assistance was cut last month, but my neighbor, who works and has a new car, is still receiving his aid.” These observations are most likely the exception rather than the rule, but stories like this came up consistently in our qualitative work, and they point to the importance of perception in aid delivery.
To our surprise, the communities we spoke to about the random allocation of services felt it to be more just and fair than the current system. We prototyped different ways to communicate our approach, using language that framed it as a kind of “lottery” and emphasizing how their participation would help us generate long-term impact. Though we had prepared carefully to avoid miscommunication, we had underestimated our clients’ comfort with (and in some cases preference for) randomization and, crucially, the transparency that comes with it.
Resistance to Change
Though we had handled the challenge of “randomization,” we could not escape the difficulties of “control.” Once we began the RCT, the key to getting usable, accurate data was our ability to hold constant as many variables as possible. In other words, we had to remain disciplined throughout. This meant we were locked into a pure control group when, were we to do it again, we might have planned for different formulations of the control group: allowing cross-over, for example, or referring our job-seeking clients to a job fair in their local community. Prior to the RCT, we chased down every possibility: new data sets, referrals from different sources, and combined interventions. But during the ten months of the RCT phase of the project, we had to accept that we could not make drastic changes. This required discipline not only from the project’s leadership but, most of all, from our front-line field officers. It also meant a lot of forbearance from our donors: an understanding that though new and potentially promising employment opportunities arose all the time, we often had to decline them in pursuit of the research agenda, which we knew would deliver the real, long-term benefit to clients.
Appetite for Failure
The key to innovation is the acceptance of the idea that you not only must fail, and fail often, but that you must have a culture and a process in place to deliberately learn from failure in order to arrive at a new understanding of a problem. In other words, plenty of projects fail, but few have the ability to adapt to failure.
Doing this requires not only becoming comfortable with the idea of failure, but also cultivating a culture within your team that embraces it, because failure strengthens the work going forward, and setting expectations so that everyone is prepared for it. This is difficult at multiple levels. In our project, it was hard at first to get junior and mid-level team members comfortable discussing initiatives that went wrong. At the same time, we had to build an expectation among those who funded our project (often with public money) that failure and learning were valuable results in and of themselves.
In late 2018, we participated in a job fair for blue-collar workers to which we invited dozens of employers and hundreds of job seekers. The high expectations for the event were slowly dashed: six months later, fewer than ten of our beneficiaries had found work through the fair. By emphasizing values of curiosity and learning with our team, we were able to dig into this experience and arrive at a deeper understanding. What we learned was that, though companies in manufacturing sectors often advertise large numbers of job vacancies, their production schedules are often erratic and unpredictable. A firm trying to fill 20 open positions for next month may have only two positions open when the time comes to hire. Moving forward, we found more success in working with a smaller number of larger employers and building relationships with them to truly understand their hiring needs.
“Innovation”, “innovative programming”, “iterative design”, “flexibility”, and “failing fast” have all become buzz phrases in the international humanitarian and development world. As more organizations are realizing, breakthrough solutions to the world’s most intractable problems require an adaptive approach that emphasizes small-scale prototyping of solutions, the gathering of comprehensive data, fast learning, and nimble scale-up. An entrepreneurial mindset is welcome in contexts in which poorly designed or slow-moving programs have real, negative effects on the most vulnerable.
However, what is not always clear to many who advocate this approach are the on-the-ground challenges that quickly appear during implementation. Many questions arise: How do we balance serving the vulnerable as quickly as possible with the need to find the right solution? How do we communicate to communities that they are not only recipients of a service, but participants in a trial, and one that we don’t know will work? How do we maintain strict parameters for testing while remaining flexible enough to take advantage of opportunities? How do we condition ourselves to acknowledge failure and learn from it, rather than explain it away?
These few examples are only a sample of the challenges we confronted over the course of Project Match. The questions they raise are not easy to answer, but they must be recognized in order to provide more impactful and more efficient support to those who need it most.