Summer Research Experiences: AI and the Criminal Justice System

AI4ALL Team
Published in AI4ALL · Nov 26, 2018

AI4ALL Editor’s note: Meet Rebekah Agwunobi, a 2016 Stanford AI4ALL alumna, an NCWIT 2018 National Award Winner, a Caroline D. Bradley Scholar and a senior at Choate Rosemary Hall in Wallingford, CT. In this article, we interview Rebekah about her experience at MIT, where she interned this past summer with researcher Chelsea Barabas, using AI and machine learning techniques as a lens to look at the criminal justice system.

As told to Eunice Poon of AI4ALL by Rebekah Agwunobi

I understand that you were exposed to JavaScript at a young age, and that’s where you got your initial exposure to CS. Can you talk about how you became more involved in AI and machine learning work?

I’ve always been interested in CS, but I think my interest in AI really wouldn’t have happened without AI4ALL. At the time [as a freshman in high school], AI wasn’t something that was accessible for me, so AI4ALL’s program was essential in terms of closing that gap.

I also think collaboration was a huge part of it for me. Learning alongside girls the same age really has an important psychological impact. Before Stanford AI4ALL, I don’t think I had an established community of women I knew who also did CS and AI, with a myriad of different personalities and backgrounds.

It was powerful to be exposed to a community of people with similar interests who were applying AI to social issues. It made me want to learn and be part of something broader.

So this summer you had the opportunity to complete an internship at MIT. Can you share your experience: what you did during the internship and your big takeaways?

This summer I was working as a researcher at MIT, and I am still there as an offsite assistant. At the lab, I am part of a team that uses AI and machine learning techniques to examine the methodology currently used in the criminal justice system and to come up with other data-driven narratives that could be more effective. We are really trying to push back on the ways AI is currently being used within the criminal justice system, primarily risk assessments, because they are typically based on datasets that aren’t the most ethical to begin with.

So a lot of the research that I did and am still doing has to do with ethical AI: the way algorithms are being used to oppress people and how to counter those narratives.

During the internship, I was doing CS research, but I also got to do fieldwork like court watching, which became an integral part of my research too. It was such a great experience to be able to create tangible work that combines academic disciplines and quantitative, qualitative, and perhaps philosophical ways of thinking.

I think the biggest takeaways for me were the idea of missing datasets and the realization that we assume data is being collected all the time. In reality, organizations aren’t incentivized to collect data that makes them look less favorable, or sometimes it is less advantageous for them to publish data. But this lack of data causes huge gaps in what we know about our systems and in the data we have collected, and that’s really harmful to a lot of sociopolitical research, because a lot of the time there aren’t any systems keeping track of statistics that have to do with underrepresented or disenfranchised populations.

In a way, researchers are trying to quantify conditions they know little about.

That’s why it’s really dangerous to treat CS as objective truth. There are so many ways to interpret problems, and so many ways to interpret solutions.

Especially when we’re quantifying an experience like being jailed pre-trial or what happens in a courtroom hearing, we really have to think about how our internal biases are reflected in our research. Seeing how unclear some data was, and how absent it was in other cases, really made me question the notion of science as fact, the assumption that science is always true. It makes me want to create more informed and more pragmatic solutions moving forward.

After your experience working as a researcher this past summer, what kind of impactful research do you think can be done with AI in the criminal justice system?

I think it’s really important to ask for transparency from those in power in systems like these, and I think that comes in the form of checks and balances and publishing research. I also think that some of the existing solutions presented from a criminology/sociology standpoint need to be questioned, verified, and validated before being implemented, simply because of the drastic impact they have on people’s lives.

In terms of AI, it should be used to create more people-oriented solutions by including voices from diverse disciplines, so we can work together to find what would be constructive and helpful. I think AI would be more helpful if it wasn’t just making assumptions about the criminal justice system, but rather taking a more ethical approach: for example, recording accounts from various courtroom players and combining those to create different types of solutions.

From a creative perspective, how can AI be combined with social justice or an individual’s personal interests?

I think AI has been really important to my growth because, throughout school and my independent studies, I’ve applied it to things like art, music, and various academic disciplines. AI is a very interdisciplinary tool, and it can be used in any field.

It was interesting to observe the application of AI to the criminal justice system in this internship, because it is maybe less obvious how AI could benefit a field that has been practiced in much the same way for decades or centuries. But using AI in the criminal justice system, or in other fields and academic disciplines that have stayed fairly unchanged for many years, is really cool, because when we combine AI with the old, we can make something new.

What are the next steps for you in terms of goals, aspirations, internships, and projects for the future?

I’m currently finishing my senior year, but I already want to continue doing research next summer, just because research has been so important in shaping my experience in school. Doing CS and research really does completely change the way you look at yourself and the field you’re studying, because it requires you to think about questions in different ways. That’s what I’ve grown to love about CS and research.

I’m not sure yet whether I want to stay in academia or work in industry, but I’m going to college next year, and I want to major in CS and political science. I really want to explore which social problems should be targeted using CS and AI. I want to ask questions like: When are certain methods more or less effective? Which communities benefit when certain methods are used? I think sometimes the way CS is applied, either directly or indirectly, doesn’t acknowledge the voices of the populations targeted, or interprets or quantifies an experience incorrectly, so I want to do more research on that and other social issues.

About Rebekah

Rebekah Agwunobi is a high school senior in Wallingford, CT. She is an NCWIT 2018 National Award Winner, Caroline D. Bradley Scholar, and Stanford AI4ALL alumna. She is unbelievably passionate about applying STEM to solve issues ranging from women’s healthcare access to disaster relief. In her free time, she enjoys playing guitar, singing, wakeboarding, skiing, and musical theatre. In addition, she teaches all-girl coding classes and is invested in making STEM accessible to underrepresented minorities and women. She is an organizer for MAHacks, co-president of Choate Diversity Student Association, co-president of Choate Programming Union, as well as Senior Editor for three publications. After high school, she hopes to pursue research to solve complex humanitarian issues with creative CS solutions.

AI4ALL is a US nonprofit working to increase diversity and inclusion in artificial intelligence.