An Interview with Susan Athey, Founding Director of the Golub Capital Social Impact Lab at Stanford​

Stanford GSB’s Golub Capital Social Impact Lab uses technology and social science to improve the effectiveness of social sector organizations.

But how?

Learn more about our focus and work in this audio interview (transcript below) with Dr. Susan Athey, the lab's Founding Director.


Stay updated on our news and ongoing projects by following us on Medium, X, and Threads!


A Conversation with Susan Athey (Transcript)

David Finegold: Hello, my name is David Finegold, and I have the pleasure of serving as the program officer for the three social impact labs that Golub Capital has funded at the Stanford, Northwestern, and Chicago business schools. Each of these labs is taking a different approach to trying to help nonprofits solve important social problems. I’m delighted to be here today with Dr. Susan Athey, the Economics of Technology Professor at the Graduate School of Business at Stanford and the founding director of Stanford’s Golub Capital Social Impact Lab. Susan, it’s great to be with you today.

Susan Athey: Great to be here. Thanks.

David Finegold: Could you start out by just sharing a little bit about the work that the Golub Capital Social Impact Lab does here at Stanford?

Susan Athey: Yes. Here at Stanford, what we’re doing is using the tech toolkit to try to make social impact organizations more effective, and that work is carried out by bringing new technologies like AI, machine learning, and advanced experimentation capabilities in partnership with social impact organizations, or sometimes for-profit organizations that have access to a customer base that might be of interest. We carry out that work by bringing together faculty, postdocs, and students across a range of disciplines, from engineering to psychology, in order to partner with these organizations to solve their problems. For example, we do a lot of prototyping and early implementation, measurement, and experimentation to establish what works and whether that’s something the organization should further invest in. The students involved with this are really excited about the fact that they can go from abstract technical work in their classes to real-world implementation. And there’s so much that falls between the theoretical idea or the hypothetical use case of a neural network and actually putting it into practice. So they’re really excited to have that opportunity to learn and practice, while the organizations are often short on people, and it’s hard for them to invest in prototyping things they haven’t done before. So they’re getting the benefit of Stanford students who, of course, are the best and brightest.

David Finegold: Can you give, just to make it concrete for folks, a few examples of some projects you’ve been really excited about that the lab’s been able to do?

Susan Athey: Yes, one I’m really excited about is a project we did with a small nonprofit in Poland. When we got involved with them, they were running a program that mentored women who wanted to get into the IT industry. And Poland has a lot of IT outsourcing. So there were actually a lot of jobs in very high-growth areas, but there were not very many women going into those careers. So there was this mentoring program, and we did an evaluation of it, and we showed that it was working really well, but it was just very limited in how it could scale because they were limited by one-on-one mentoring. After talking with the nonprofit and the employers, we realized that one of the really big problems for women making this transition was just to demonstrate their work. We created, from scratch right out of the lab, a brand-new program where the women would work together in teams to create portfolios, and the employers told us what kind of portfolios would be useful, so we created these teams. The women worked together, they met every other week, and by the end, they had these portfolios, and this program was very much in demand. So we randomized admission to the program, and that way, we were able to compare the outcomes for folks who had the program and didn’t have the program. And we found a very large increase in employment. We estimated that we were able to get a woman a job for a cost of about $15 a person, which is pretty good.

David Finegold: Oh, that’s a great return on investment. Yeah, absolutely. How about some other areas in which the lab has worked?

Susan Athey: So, another project we worked on, which I think is a really good example of how technology can be used beyond just directly interfacing with constituents, was with a hospital in Cameroon, where we developed a tablet application to help nurses counsel patients. In a lot of developing countries, it can be challenging to ensure that medical professionals have all the latest and most up-to-date information. And, for counseling anywhere, it’s especially challenging to make sure you’re getting the right information to the patient in a reasonable amount of time. The tablet application helped the nurses walk through eliciting the patients’ desires and concerns and then prioritize potential treatments, a course that would be the best fit for the patients. We found that letting the patients know that these were recommended choices actually led the patients to be more engaged in their own healthcare, to spend more time talking about things and to consider more alternatives, and also to have a higher uptake of the alternatives that were most consistent with the medical knowledge.

This was a very popular application. The nurses liked it, the patients liked it, and it led to good outcomes. Again, it’s very inexpensive, and I like it because it’s also an example of how we can deploy technology, as AI gets better and better, not necessarily to go directly to the patient, but to help a provider help the patient; it expands the knowledge and helps the provider give a more customized experience without having to memorize everything. But still, there’s a human in the loop making sure that the patient’s need is met, that everything makes sense, and that you’re not making mistakes.

David Finegold: And in addition to these two, I know you’ve also worked on really topical things like misinformation. How do you decide, given the breadth of applicability of the lab’s toolkit and the things that can work, which projects are really worth the lab’s time to focus on?

Susan Athey: In trying to figure out the best things to work on, we have to start with: Do we have value to add that matches the organization’s need? But it’s a little more nuanced than that: Does the organization have the capacity to absorb help? Because in the very early stages, you may not even have the people and the time to adopt something new or to try something new, so you have to have enough infrastructure to be able to absorb that input and be able to act on it. Of course, sometimes, when you’re too far along, things are set in stone, and it’s actually very hard to change. So there’s a Goldilocks point to come in when you can really have a high impact, and an organization is ready for you. Then, in terms of the substance, I’m really looking for use cases that will demonstrate something generalizable so that other organizations will be able to see these case studies and get an idea of what might work for them.

This example of having a tablet that helps nurses is great. There are many places where we might have coaches, teachers, or other counselors trying to help someone. Their job is to help someone; technology could help the coach or service provider help the constituent. So I really liked that as a demonstration of a concept. In other cases, there can be a demonstration of a methodology so that we can provide notebooks and code and visualizations of data that other folks could build on if they were trying to understand their own data and measure success.

David Finegold: Great. One of the common underlying things in a lot of the approaches you’ve had is the use of machine learning and AI. Of course, that’s all over the news these days with ChatGPT and
OpenAI. As one of the early and leading academic experts on this, what are your views on the potential of this technology, both the things we should worry about and the opportunities it creates for nonprofits or higher education?

Susan Athey: In terms of adopting these technologies, I should say that even if you look back over just the last decade, we’ve had folks coming in for executive education and for consulting in these areas. First, it was big data, then machine learning, and now AI; there’s often a lot of fear of missing out. And then, when organizations started trying to adopt these things, they realized they often had a years-long journey just to get the data organized and the technology set up so they could do something. Then they also faced a lot of challenges in terms of hiring people who knew what to do. And if you get the wrong people, you might build the wrong thing, and you could even waste all that money. So it’s been quite a big startup cost, I would say, to adopt these modern technologies.

This latest incarnation with large language models is a bit different because you can use them even if you aren’t a very large organization with a huge amount of data yourself. So that’s one big breakthrough: in the past, a company might say, “Hey, I want to use neural nets,” but without enough data to train one, it just wasn’t going to work for their application. But now, smaller organizations can use these things, and there’s a much wider set of use cases. I think that’s exciting, and you can use a lot of these things without a lot of coding, but that doesn’t mean that it’s completely easy, and it also doesn’t mean that you’re actually going to get value out of every application. So one of the things that needs to happen next is to see in which use cases this really helps a nonprofit.

Is it using AI to directly provide information to your final constituents, the people you’re serving? Is it to help coaches help your constituents or help your volunteers help your constituents? Is it going to be to help your other employees do their jobs better? Is it in helping in your operations? Is it something on your website? Is it in fundraising? We also have some projects in the lab actually using chatbots for fundraising, and we have experiments running right now. So, chatbots can have a more personalized conversation, providing customized information to your donors. These are all just some of the many use cases. And until we see these case studies written up and played out, organizations are going to be guessing a little bit about where they’re going to get the most value. And then one other issue is that we don’t have off-the-shelf tools yet to really make sure that the implementation is safe or meets the needs of your users.

Is it going to give incorrect information? Is it going to give offensive information? Is it going to make mistakes? I have confidence that over the next few years, we will see more and more tools that make it easier for companies to figure out how to check their work. But right now, it’s kind of the wild west, and you have to kind of make it up as you go along. You have to think of what the problems could be and assess whether those problems are occurring with a large language model. Still, you shouldn’t underestimate the work it will take to audit, check, monitor, and make sure that it’s working as intended.

David Finegold: It’s an exciting time at Stanford because now the university as a whole is launching an initiative to do more work related to the kinds of problems you’ve been tackling in the lab on business, government, and society. Can you say a little bit about how that’s unfolding and what you think the implications will be for the lab's work?

Susan Athey: Yeah, so there’s a lot happening at Stanford right now. We have a university-wide initiative on social impact labs, and here at the GSB, we have a Business, Government & Society initiative. I’m actually co-leading the Beneficial Technology pillar of that. The Beneficial Technology pillar includes the kind of work that we’re doing in the lab. Our lab is a part of that pillar, and really, the theme of the pillar is using technology constructively. All of these things happening at Stanford are really helpful for the lab. Our lab very much has a build-it ethos: let’s make things, let’s use technology for good, let’s show what’s possible. Then, through the process of using technology for good, let’s learn how to make technology do good and avoid unintended consequences. So, in the process of our work, we’re learning how to measure and keep that technology effective and beneficial for society.

That’s one part of what we have going on in this pillar. The pillar also has been working a lot on AI leadership generally, trying to get new classes organized at the school and trying to help the school and researchers get more involved in this whole idea of bringing AI for impact. I feel like there’s a much larger community of scholars, researchers, and staff, and a lot of energy among the students to take these ideas and make sure that we’re creating this technology here in Silicon Valley. We’re educating the students right here at Stanford who are going out and building this stuff in for-profit companies and social impact organizations. And so we, as a university and across all of these different initiatives, are better preparing our students to be leaders, so that hopefully all of this technological innovation benefits society, and our students recognize when there might be unintended consequences.

David Finegold: So you’re just finishing your tenure as president of the American Economic Association, and you’ll be giving the presidential address to the annual gathering, which is, I think, the most economists that come together any time of the year, thousands of them in San Antonio, Texas. Can you give us a little preview of some of the things you’ll be sharing with your colleagues?

Susan Athey: In my presidential lecture, I’m going to take a look at some of the implications of the digitization of the economy for how economists can do research, especially research for social impact. Pretty much every part of the economy becoming digitized means that when students are getting their education, when patients are getting healthcare, when consumers are finding products, when firms are producing things, all of that is getting digitized. But it’s not just that we are creating data that can be analyzed later.

It’s also the case that we can intervene in how those services are delivered. We can change the technology delivering services to make it more personalized, to get people the right information in the right place at a convenient time for them. We can change what people are receiving. That puts the social scientist potentially into part of an engineering loop, instead of just coming up with an idea and testing it and getting the results two years later. In principle, you can change things in software code and see the results very quickly, measure it, figure out what’s working, what’s not working, how to make it better, how to make it more personalized, and iterate again. So this type of very fast iteration cycle is something that’s new, and it creates a lot of different opportunities for how we intervene, how we measure, how we experiment, how we analyze the results, and even what kinds of interventions are possible to study.

So, I’ll give examples of some of the research we’ve done in the lab that has shown that things like personalization can improve outcomes for people, whether it’s patients being counseled or students interacting with an educational technology app. I’ll show these case studies as well as some of the ways in which carrying out those case studies has posed new research questions, new social science research questions, behavioral science research questions, and statistics and methodology research questions. All of these new questions emerge when you start becoming an economist in the loop of this sort of delivery of digital services.

David Finegold: Great. Well, Susan, thank you so much for taking the time from that busy schedule to talk to us today. It’s been a real pleasure.

Susan Athey: Thank you.



Golub Capital Social Impact Lab @ Stanford GSB

Led by Susan Athey, the Golub Capital Social Impact Lab at Stanford GSB uses tech and social science to improve the effectiveness of social sector organizations.