You Have the Evidence…Now What?
An Evaluation’s Endgame for Programs: Challenges to Implementation
Haidee Cabusora | Chief Program Officer | The Financial Clinic
Editor: Kristen McGuire | Development and Communications | The Financial Clinic
Evidence-based impact. Third-party evaluation. Rigorous evidence.
In the words of Billy Joel, the phrases fill you with either sadness or euphoria. Nonprofits sit on one side of the spectrum — all doing amazing and impactful work, but not always prepared for the data collection, staffing, resources, or logistics of research and evaluation. On the other side, external stakeholders like government and funders increasingly demand impact evidence for continued investment, to ensure that dollars are spent on quality programs.
In 2012, the Urban Institute selected The Financial Clinic and Branches in Miami to participate in a randomized controlled trial (RCT), commissioned by the Consumer Financial Protection Bureau, on the impact of financial education and coaching on household balance sheets and well-being.
When its results were released, we were thrilled to see that the data conclusively demonstrated that our financial coaching model and programs make a measurable difference in the lives of low- to moderate-income individuals and families. If you have a few hours and love appendices, you can find the full report here. In sum, financial coaching works across a broad range of financial outcomes (i.e., savings and debt levels, credit scores) and well-being outcomes (confidence, attitudes, stress levels). Surprisingly, there were no heterogeneous differences in outcome achievement based on baseline financial or demographic characteristics. Most importantly for our mission, even the 40 percent of our participants on fixed incomes showed statistically significant results in savings frequency and amounts.
Studying the results of the RCT provided a unique opportunity to further the Clinic’s vision of a financially secure America — lifting the proverbial hood of our programming and demonstrating what worked, for which low-income populations, and why. We hoped that we could make a contribution to this evolving field through scientific evidence and expand the boundaries for the new generation of financial coaching programs.
However, we soon realized that actually implementing the results would be a whole new challenge, not just in our financial coaching program, but in our technical assistance and training and in the Clinic’s social enterprise, Change Machine. While the end goal is easy to understand (improving the capacity to deliver high-quality programs), reading the RCT results and translating its findings into actionable items can be complicated for any organization:
Lack of formal experience.
Nonprofit program managers may not have the education, know-how, or personal experience with concepts like intent-to-treat versus treatment-on-the-treated effects, or effect magnitudes and scales. Likewise, even organizations with dedicated staff members who focus on data analysis may not have this specialized research knowledge. Without this base, the process of assigning weight to findings can be difficult. There is always the additional risk that organizations will focus on the findings that reinforce existing assumptions or desires.
The Solution: We institutionalized the implementation of the RCT results by quickly following up their public release with a prominent place in the annual organizational workplan. Within weeks, we began marshaling resources to ask ourselves which findings we would focus on, how they would impact our programs and capacity building, and where we should start. We developed a continuous quality improvement process, D3, named for John D. Rockefeller’s practice of keeping his accounting figures to the third decimal for an additional level of granular information. The D3 process focuses on identifying a problem, uses powerful questions to narrow down a set of hypotheses, and creates a collaborative forum to discuss, test, and collect data. Lather, rinse, repeat.
In this way, our team methodically worked through the 34 findings of the RCT to identify the next steps for our program. We asked:
- Is this metric important for our work?
- Should the result change the way we do services?
- Is our financial coaching platform (Change Machine) set up to facilitate progress in this area?
- Are there specific questions (or other data collection changes) that we should be asking individuals in subsequent meetings?
Staff expectations and fatigue.
Even when results can be immediately applied, there remains a lengthy process of helping staff understand the results and preparing them for new changes. Programs are acutely aware of the dangers of “air dropping” new procedures, so even changes that come from evaluations must compete for spots in a long line of improvements driven by contract deadlines, capacity issues, and the many other factors that daily programming encounters. With attention toward minimizing staff turnover and burnout, it’s challenging to keep up morale when staff realize the work isn’t over once RCT results are received — the real finish line is optimizing programs based on the results.
The Solution: First, leadership, from the top of the organizational chart down, consistently clarified and reinforced the motivation for participation, within the context of organizational mission and vision, throughout the entire period of participation and implementation. The talking points were simple:
Financial insecurity is pervasive but resources are limited.
Mission: For those we serve directly, how can our model reach more customers in a better way?
Vision: For those we cannot serve but who we can impact (1,000,000 by 2020), how can leveraging our experiences be helpful to other programs and stakeholders?
Within this framework, it was easier to implement the results because the connection to our everyday work was clear and consistent. It also allowed us to work collaboratively across departments on a single finding. For example, if the RCT showed coaching could improve debt repayment on 90-to-180-day balances, we could begin by ensuring that financial coaches highlighted or prioritized this step with customers, then expand that to refine our training on specialty debt topics and add new measures to track in our data collection, so those beyond the Clinic’s own coaching customers could benefit from this finding.
Scarce resources.
Resources to support implementation of results can be scarce. It is an exaggeration to say that once the lights flip on, the party is done, but the focus may remain on the static results rather than on what should happen next. For programs and fundraising, there is pressure for brand-new innovation rather than implementation. While programs typically refine their models as part of their day-to-day work, the additional resources needed to incorporate a large study’s findings across different departments may not be planned for or preemptively funded.
The Solution: We emerged from our D3 process with a roadmap for improving our models, programs and resources with evidence from the RCT to keep the Clinic on the cutting-edge of innovation and outcomes, and relevant for funding:
- Well-being metrics. Prior to the findings, the Clinic hadn’t formally incorporated any explicit outcomes on improved attitudes, feelings, or levels of confidence. There were concerns about their subjectivity, and we believed that the coaching process was already focused on empowering customers, so pure financial indicators would reflect this process. As a result of the RCT, we decided not only to permanently embed the Center for Financial Security’s Financial Capability Scale (which includes a question on confidence), but also to incorporate the Consumer Financial Protection Bureau’s Financial Well-Being Scale.
- Goals calculator. The Clinic’s most emphasized coaching mantra is that “goals are the driver.” One of the findings that we were especially pleased about was the positive impact on confidence in achieving goals. The tech team created a significantly improved goals calculator on Change Machine that allows customers to play with date ranges and savings amounts to create an action plan that best meets both motivation and capacity.
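The core arithmetic behind a calculator like this is simple: spread the gap between a goal amount and current savings across the months remaining until a target date. The sketch below is a minimal, hypothetical illustration (the function and field names are ours, not Change Machine’s actual implementation):

```python
from datetime import date

def monthly_savings_needed(goal_amount, current_savings, start, target):
    """Evenly spread the remaining goal amount across the months available.

    Returns the dollar amount to save each month; at least one month is
    assumed so a same-month target doesn't divide by zero.
    """
    months = max((target.year - start.year) * 12 + (target.month - start.month), 1)
    remaining = max(goal_amount - current_savings, 0)
    return remaining / months

# A customer saving toward a $1,200 goal over one year:
plan = monthly_savings_needed(1200, 0, date(2019, 1, 1), date(2020, 1, 1))
# plan == 100.0 dollars per month
```

A customer “playing with date ranges,” as described above, amounts to re-running this calculation with different target dates until the monthly amount matches their capacity.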
- Customer dashboards. Program managers need help synthesizing all the data that coaches input. The Clinic has come a long way in continuing to develop robust dashboards within Change Machine for tracking individual customer progress and a coach’s overall performance, plus a new manager portal for access to easily customizable reports. We are now moving toward using the RCT results as benchmarks for coaching performance by building them into the logic of the system.
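Building evidence-based benchmarks into a dashboard’s logic can be as simple as comparing each coach’s aggregate metrics against target values and flagging shortfalls. This is a hypothetical sketch, and the benchmark figures are illustrative placeholders, not the study’s actual results:

```python
# Illustrative benchmark targets (placeholder values, not RCT figures):
# the share of customers saving regularly, and average credit score gain.
BENCHMARKS = {"savings_rate": 0.50, "credit_score_gain": 20}

def flag_below_benchmark(coach_metrics, benchmarks=BENCHMARKS):
    """Return the names of metrics where a coach trails the benchmark."""
    return [name for name, target in benchmarks.items()
            if coach_metrics.get(name, 0) < target]

# A coach whose customers save often but see smaller credit gains:
gaps = flag_below_benchmark({"savings_rate": 0.62, "credit_score_gain": 12})
# gaps == ["credit_score_gain"]
```

A manager portal could then surface only the flagged metrics, turning static study results into an ongoing coaching-performance signal.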
- Ecosystem. We also launched a financial security ecosystem last year, thanks to JPMorgan Chase, with New York City workforce development programs. Its theory of change is simple: since financial insecurity is pervasive, a broad range of nonprofits and agencies that focus on low-income populations can better achieve their respective missions by incorporating financial coaching techniques. In workforce development, the job placement specialist knows that a jobseeker will get more employment offers with better credit, and will stay on the job longer by dealing with wage garnishment before the first paycheck, so we are teaching those specialists the promising practices highlighted by the RCT. We reserve the most acute and persistent cases for an on-site financial coach who can provide the high-impact, intensive coaching that the RCT proved effective.
Conducting an RCT and implementing its results is extremely challenging, but possible — and worthwhile. After more than a year, we can see the extent to which the RCT has shaped our work by using the results to define improvements in all levels of the organization. We won’t stop there! With a new and improved model for our programs and partnerships, the Clinic continues to grow rapidly. As a practitioner-driven organization, we hope our experience can support other organizations to make research a part of their DNA.