Improving Lives Through Research: Q&A with Justine Hastings, Founding Director of RIPL

--

How can innovations in health and education make a difference in millions of lives? At the Inaugural Market Shaping Conference, participants highlighted innovative and successful applications of large-scale data to improve social welfare. The conference was supported by Schmidt Futures and took place on September 21 and 22, 2018 at Stanford University’s Koret-Taube Conference Center.

Science and technology research can make drastic improvements to health, education, and well-being, but far too few innovations are reaching people who need them most.

Justine Hastings, professor of economics and international and public affairs at Brown University and a speaker at the Inaugural Market Shaping Conference, is working to change that for the citizens of Rhode Island. She is the founding director of Research Improving People’s Lives (RIPL), a nonprofit research institute that uses data science to improve policy; it grew out of her work at the Rhode Island Innovative Policy Lab. The lab, which uses research-based methods to improve policy decisions and outcomes, has significant methodological overlap with market shaping. Hastings discussed her experience using data to improve lives in Rhode Island.

RIPL promises that its “unique approach turns facts into results by combining high-powered research, policy expertise, and technical know-how to give governments the tools they need to better serve their communities.” How is your research improving the lives of the people of Rhode Island?

We have used research methods first to learn about, and then to improve, social services. To do this, we applied methods from data science, economics, and machine learning. This means we can help Rhode Island offer state-of-the-art programs that are simply more effective.

We specialize in the research and data analysis that can measure the success of current programs. Then we identify ways we can improve them and partner with organizations to implement those recommendations.

But for us to get to this stage, we first had to establish relationships with the right organizations and build a rich database.

What did you need to do to make the existing data useful for your research?

We knew we could develop better interventions and programs if we could access more comprehensive and relevant data. We also knew data was out there in administrative records that span birth, health, education, labor, human services, crime, and civic participation, but it was stored in a lot of different places. And to get access to each dataset, we needed other groups to support us.

Gina Raimondo, the governor of Rhode Island, was interested in what we wanted to do, so with the help of her office we were able to combine hundreds of datasets of administrative records, stretching back more than 20 years, and create one big database from them.

We also wanted to respect the privacy of the citizens of Rhode Island, so we developed an algorithm to de-identify individuals. We also put a lot of effort into making sure that data was secure. Then the final step was to optimize the database structure so that it was suitable for data mining and rapid machine learning.
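RIPL's actual de-identification algorithm is not described in the interview; a common approach for linking records across datasets without storing raw identifiers is a keyed hash, sketched here with invented field names:

```python
import hashlib
import hmac

# Hypothetical salted-hash de-identification sketch. This is NOT
# RIPL's algorithm, only an illustration of the general technique:
# map identifying fields to a stable pseudonymous ID so the same
# person links across datasets while raw identifiers are never stored.

SECRET_KEY = b"replace-with-a-key-held-only-by-the-data-steward"

def deidentify(ssn: str, dob: str, name: str) -> str:
    """Map personally identifying fields to a stable pseudonymous ID."""
    raw = f"{ssn}|{dob}|{name.lower().strip()}".encode()
    # Keyed HMAC (not a bare hash) so the mapping cannot be reversed
    # by brute-forcing the small space of possible identifiers.
    return hmac.new(SECRET_KEY, raw, hashlib.sha256).hexdigest()

# The same person yields the same ID in every dataset, enabling linkage
# even when names are formatted inconsistently.
id_a = deidentify("123-45-6789", "1990-01-01", "Jane Doe")
id_b = deidentify("123-45-6789", "1990-01-01", "jane doe ")
assert id_a == id_b
```

Because the key is held separately from the data, an analyst with database access alone cannot recover identities.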

What can you learn from this type of database about how to implement policy?

We have learned a lot, in many areas that we think are important to people who live in Rhode Island.

For example, we already know that reading proficiency by the end of third grade is a crucial indicator in a child’s development. Students who do not read at grade level in third grade are four times more likely to drop out of high school than those who do. Gov. Gina Raimondo has a goal of raising the share of students who are proficient at third grade levels to 75 percent by 2025.

One existing policy for child development in Rhode Island is that babies who weigh less than 1.5 kilograms at birth receive a $4,000 investment toward their healthcare and toward training for their parents.

We wanted to understand the impact of this investment later in life. So, we looked at how children whose birth weight had been just above this cutoff were doing at school compared to children who had been just below this cutoff. We found that children below the cutoff — who had received the extra investment — did significantly better in school afterward. Those who weighed slightly more than 1.5 kilograms but did not receive the investment were more likely to need the support of social programs. This additional support cost the state far more than $4,000 per child.
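The comparison Hastings describes is a regression discontinuity design. A minimal sketch on synthetic data (all numbers are invented, not RIPL's records):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative regression discontinuity at the 1,500-gram cutoff on
# synthetic data. Below the cutoff, infants receive the extra
# investment; we fit separate linear trends on each side and read the
# treatment effect off the jump at the threshold.
CUTOFF = 1500.0
n = 5000
birth_weight = rng.uniform(1300, 1700, n)
treated = birth_weight < CUTOFF           # investment goes to <1.5 kg infants
true_effect = 0.10                        # assumed jump in enrollment probability
latent = 0.2 + 0.0004 * (birth_weight - CUTOFF) + true_effect * treated
enrolled = (rng.random(n) < latent).astype(float)  # 4-year college enrollment by 22

# Separate linear fits on each side of the cutoff, as in RIPL's figure.
sl_lo, ic_lo = np.polyfit(birth_weight[treated] - CUTOFF, enrolled[treated], 1)
sl_hi, ic_hi = np.polyfit(birth_weight[~treated] - CUTOFF, enrolled[~treated], 1)

# The RD estimate is the gap between the two fitted lines at the cutoff.
rd_estimate = ic_lo - ic_hi
print(f"RD estimate of the investment's effect: {rd_estimate:.3f}")
```

The key assumption is that families just above and just below 1,500 grams are otherwise comparable, so the jump at the cutoff isolates the effect of the investment.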

So, if we change the way we determine who receives extra post-childbirth services, we get closer to meeting this target. Our policy prescription was to raise the birth-weight cutoff below which babies qualify for these additional services. Based on our evidence, the share of children who achieve third-grade proficiency is likely to increase as a result.

[Figure: Four-year college enrollment by birth weight, with a discontinuity at 1,500 grams, indicating that policy investments in very-low-birth-weight children lead to improved outcomes later in life. X-axis: birth weight in grams.]
Notes: The panel shows the relationship between birth weight and four-year college enrollment by age 22. Dots represent means within 20-gram bins of the running variable. The dark lines are predictions from a linear model using the individual-level data.

Can this be used to create better evidence for existing policies?

Rhode Island has a child welfare services program to ensure that every child is protected from child abuse and neglect. It tries to keep children safely at home whenever possible, but it also has a process to remove children if it believes those children are at risk of maltreatment. But the providers of the services were “flying blind” in the implementation of the policy. They had no concrete evidence that this policy was improving outcomes for children, or even that it was being applied to the right families.

We wanted to learn how to do a better job of identifying families in which a child is at risk — and have better data on the effects of removing those children from the family home on their long-term outcomes. Drawing on our database, we used all the available information about each family to better identify the children at highest risk, who should be visited most often.

We also examined the impact of removing a child on that child’s subsequent academic performance. To do this, we used what is called an instrumental variable approach to ensure that the effect we found was causal. Interestingly, the removal effect differed for boys and girls. For boys, removing them from the home caused a decrease in their academic performance, while it improved test scores for girls. We hope that this more nuanced understanding of the long-term consequences of removals can help to improve the outcomes when the Department of Children, Youth & Families has to intervene.
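The interview does not name the instrument RIPL used; analyses in this literature often use the leniency of the randomly assigned investigator, which shifts the chance of removal but is assumed to affect test scores only through removal. A stylized two-stage least squares (2SLS) sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stylized instrumental-variables sketch, all numbers invented.
# Removal is not random: riskier home environments make both removal
# and poor outcomes more likely, so a naive comparison is biased.
n = 20000
leniency = rng.normal(0, 1, n)            # instrument Z: investigator's removal propensity
risk = rng.normal(0, 1, n)                # unobserved confounder
removed = (0.5 * leniency + risk + rng.normal(0, 1, n)) > 0
true_effect = -0.3                        # assumed causal effect on test scores
scores = true_effect * removed - 0.8 * risk + rng.normal(0, 1, n)

# Naive comparison is biased because `risk` drives both removal and scores.
naive = scores[removed].mean() - scores[~removed].mean()

# Wald / 2SLS estimator: cov(Y, Z) / cov(D, Z) recovers the causal effect.
iv_estimate = np.cov(scores, leniency)[0, 1] / np.cov(removed, leniency)[0, 1]

print(f"naive: {naive:.2f}, IV: {iv_estimate:.2f}")
```

The naive estimate badly overstates the harm of removal, while the IV estimate recovers something close to the assumed causal effect.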

How is Rhode Island improving the way it targets training and learning?

Gov. Raimondo has also said she wants to increase job skills and employment for Rhode Islanders, so we examined how to improve the targeting of labor training programs administered by the RI Department of Labor and Training. We conducted a field experiment and then applied machine learning to identify the types of people the training program had helped get a new job. We found that the training program had different effects for different people. Identifying those effects will help us to improve the way the training program works in the future, because we can target it to people who would benefit the most from it.

Also, students in low-income households are less likely to enroll in four-year colleges compared to students in high-income households. So we also want to increase college enrollment among students from low-income households.

The standard intervention is to award need-based scholarships, but those scholarships are expensive. Instead, working with the Chan Zuckerberg Initiative, we developed a program called Rhode2College, in which students enroll at the start of their high school junior year. It uses an approach based on small nudges and is administered through an app.

For example, the program pays students up to $2,000 if they complete milestones toward applying to college, such as updating college plans, completing the Common App, or preparing for the SAT. The mobile app also has a chatbot that answers student questions about college.

What do you do to ensure that interventions and policies based on this work will be transparent and fair?

We learned that it is important to use interpretable algorithms if we want to inform the policy discussion. Policymakers want to be able to understand not just that something works, but also why, and so they need to see the relationships between the factors. Algorithms whose output is based on a “black box” are not satisfactory in this context.

We also need to be careful about the features and variables we include in our models so we do not draw conclusions that reinforce disadvantage or create political hurdles. For example, we took race predictors out of our models.

And finally, not all the insights we uncovered can be turned into policy. For example, we cannot create policies that treat boys differently than girls, so any evidence on differential effects that are based on gender can only inform the advice we give and cannot be made into new service requirements.

What advice do you have for someone who wants to do similar work in another state?

The most important lesson is that we could not have done this work alone.

We needed to partner with a lot of groups and advocacy organizations to build our database. Thus, making the extra effort to communicate clearly with partners — and being collaborative with them in return — has been vital to our success.

Is there a role for academic research to help initiatives like RIPL?

Yes. We used methods developed by academics at Stanford to improve our analysis of the labor training program. In particular, we measured the impact of the program using the causal forest algorithm. If we had not used this machine learning technique, we would not have been able to understand the varying success of the program in different groups. Using it meant that we could identify how to improve the way program spending was allocated, which has ultimately made the program more effective.
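The causal forest (developed by Wager and Athey, and implemented in the grf and econml packages) learns how a treatment effect varies with covariates using adaptive, honest splits. A minimal illustration of the underlying idea on synthetic data, with fixed covariate bins standing in for the forest's learned splits (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simplified heterogeneous-treatment-effect sketch: estimate each
# group's benefit from a training program, then target the program
# where the estimated benefit is largest. This is a stand-in for the
# causal forest, not the algorithm itself.
n = 20000
x = rng.uniform(0, 1, n)                  # covariate, e.g. prior earnings
treated = rng.random(n) < 0.5             # randomized training assignment
tau = 2.0 * x                             # assumed effect grows with x
outcome = 0.5 + tau * treated + rng.normal(0, 0.5, n)

# Difference in mean outcomes, treated vs. control, within each bin.
edges = np.linspace(0, 1, 5)[1:-1]        # 4 equal-width bins of x
bins = np.digitize(x, edges)
cate_by_bin = np.array([
    outcome[(bins == b) & treated].mean() - outcome[(bins == b) & ~treated].mean()
    for b in range(4)
])
print("estimated effect by covariate bin:", np.round(cate_by_bin, 2))

# Target program spending to the group with the largest estimated benefit.
best_bin = int(cate_by_bin.argmax())
```

Where these bins are fixed in advance, the causal forest chooses the splits adaptively from the data while using sample-splitting ("honesty") to keep the effect estimates valid.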

——
Learn more about the Golub Capital Social Impact Lab at Stanford Graduate School of Business.

Follow us @GSBsiLab.

Learn more about Justine Hastings.

With writing help from Erin Fahrenkopf and Tim Phillips.


Golub Capital Social Impact Lab @ Stanford GSB

Led by Susan Athey, the Golub Capital Social Impact Lab at Stanford GSB uses tech and social science to improve the effectiveness of social sector organizations.