Role Models in AI: Amy Jin

AI4ALL Team · Published in AI4ALL
Feb 28, 2018 · 8 min read

Meet Amy Jin, a high school senior at The Harker School and a Stanford AI4ALL (formerly SAILORS) alumna. Her avid interest in developing computer vision applications to solve healthcare problems led to co-authoring an award-winning research paper, “Tool Detection and Operative Skill Assessment in Surgical Videos Using Region-Based Convolutional Neural Networks,” which you can read here.

Learn about what got her started in AI, who inspires her, and where she plans to go next. We interviewed Amy as a special edition of AI4ALL’s Role Models in AI series, where we feature the perspectives of people working in AI. Usually we interview adults working in AI, but Amy proves that you can make contributions to AI at any age. Check back here on Wednesdays for new interviews.

As told to Nicole Halmi of AI4ALL by Amy Jin; edited by Panchami Bhat

NH: What was your experience at NIPS 2017? Did you know that your paper had won when you went?

AJ: I actually didn’t know that my paper was up for the best award [in the Machine Learning for Health workshop]. I found out the day of. There’s a short 20-minute awards session and they just announced the top two papers, which was really exciting.

About my experience at NIPS (Neural Information Processing Systems) and at the Machine Learning for Health workshop, I thought it was a really amazing experience. The poster sessions and the talks were super interesting, and it was great to hear about all the projects and initiatives that other people were working on, across the country and around the world. One person standing next to me at the poster session was from MIT, and he did a project on unsupervised medical image segmentation, which was really cool to learn about. Professor Fei-Fei Li’s talk, hearing about the Stanford Vision Lab’s initiatives in monitoring hand hygiene and senior well-being, and a talk on using AI for drug repurposing were other highlights.

I also got to talk with people and share our research, which gave us a lot of useful feedback. I was speaking with a graduate student who asked whether we’d ever considered using voice input in our deep learning model, which was something I’d never thought about before.

How did you get interested in the work you’re doing now, the work that’s in your paper? How did you get interested in AI generally?

Stanford AI4ALL (formerly SAILORS) is what really got me interested in AI and computer vision especially. At Stanford AI4ALL I was a part of the hand hygiene monitoring research group, where we tried to create a computer vision system to monitor hand hygiene and combat hospital-acquired infections.

That showed me how powerful AI and computer vision are in addressing real world problems.

Stanford AI4ALL was the springboard into my research journey now.

I reached out to Serena Yeung, one of my project mentors at Stanford AI4ALL, asking how I could learn more about computer vision. She recommended that I look at a Stanford course, CS 131, on the foundations and applications of computer vision. Another Stanford AI4ALL alumna, Vivian, and I worked through the course under Serena’s guidance, and along the way we created a computer vision tutorial website.

All of this gave me the foundations to pursue research in this area. From there I continued to talk with Serena, and eventually decided to go with the surgical video analysis project that I’m working on right now.

Amy presenting her co-authored paper at the Machine Learning for Health workshop and poster session at NIPS 2017

Can you describe the research in simple terms for people who might not be familiar with what you’re working on?

Every year, millions of patients suffer complications from surgery, and it’s actually estimated that half of them are preventable, but surgeons don’t really receive any formal feedback on their performance.

We wanted to use deep learning to more easily assess operative skills so we can give feedback to surgeons and reduce patient complication rates.

There are two parts to our project. First, we use deep learning to detect and classify surgical instruments in videos. Then we use our tool detection model to track and characterize tool movements, extracting assessment metrics that reflect surgical skill. For example, we looked at motion economy, tool trajectories, and tool usage patterns.
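To make those two stages concrete, here is a minimal, hypothetical sketch in PyTorch. It is not the paper’s actual code: the pretrained COCO Faster R-CNN stands in for a region-based CNN fine-tuned on surgical tool classes, the cross-frame association of detections is assumed to be done, and the simple path-length metric is an illustrative stand-in for the paper’s skill metrics.

```python
import torch
import torchvision

# A pretrained Faster R-CNN (COCO weights) stands in for a detector
# fine-tuned on surgical tool classes -- an assumption for illustration.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def detect_tools(frames, score_threshold=0.8):
    """Detect tools in each frame (3xHxW float tensors scaled to [0, 1])."""
    detections = []
    with torch.no_grad():
        for frame in frames:
            out = model([frame])[0]  # dict with "boxes", "labels", "scores"
            keep = out["scores"] > score_threshold
            detections.append({"boxes": out["boxes"][keep],
                               "labels": out["labels"][keep]})
    return detections

def box_centers(boxes):
    """Centers of [x1, y1, x2, y2] boxes, one (x, y) row per box."""
    return torch.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], dim=1)

def motion_economy(centers_over_time):
    """Total path length of one tracked tool's center across frames.

    A shorter path for the same task suggests more economical motion;
    this is a simplified stand-in for the paper's assessment metrics."""
    return sum(float(torch.dist(a, b))
               for a, b in zip(centers_over_time, centers_over_time[1:]))
```

The idea is that stage one turns each video frame into labeled tool boxes, and stage two strings those boxes together over time so that scalar skill metrics can be read off the resulting trajectories.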

What were some of your results from the research?

For the first part, where we classified and tracked surgical instruments in videos, there had been previous research on tool presence detection in laparoscopic cholecystectomy (surgical removal of the gallbladder) videos. We were able to improve on those previous results pretty significantly, by 28%. Previous studies hadn’t really explored using tool movements and patterns to analyze surgical skill, so that was a unique contribution of our research.

Are you working on any more research now or do you have any other big projects going on?

I definitely want to continue this research into next year as well. For our research we looked at 15 videos of the surgical procedure, and we’re hoping to expand the scope of our project and look at a dataset of 80 videos. We also want to automate the process of surgical skill analysis and assessment. Right now we’re extracting assessment metrics from the videos, but we want to see how we can automatically score a procedure and a surgeon’s skill based on a standardized surgery assessment rubric.

After I self-studied Stanford’s CS 131 computer vision course, I saw an opportunity to apply the image processing and machine learning techniques that I learned from the class. Around my neighborhood there are many citrus trees that I noticed had these really strange black spots. I started to wonder if they were diseased, and whether they had different diseases. That developed into a side project where I tried to automatically detect and classify plant diseases. I collected an image dataset of diseased or potentially diseased tree leaves, and created a machine learning algorithm to automatically classify them.
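To give a flavor of what a leaf-classification side project like this can look like, here is a minimal transfer-learning sketch in PyTorch. The `leaves/` folder layout, the class names, and the ResNet-18 backbone are hypothetical choices for illustration, not details from Amy’s actual project.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Images resized to the input size the pretrained backbone expects.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical layout: one folder per class,
# e.g. leaves/healthy/*.jpg, leaves/black_spot/*.jpg
dataset = datasets.ImageFolder("leaves", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

# Pretrained ResNet-18 with a new classification head sized to our classes;
# only the head is trained here, the backbone stays frozen in effect.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

With a small hand-collected dataset, reusing pretrained features like this is usually far more practical than training a network from scratch.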

The computer vision hand hygiene monitoring group at Stanford AI4ALL 2015

What has been the proudest or most exciting moment in your work so far?

All of it has been really exciting. I think attending the Machine Learning for Health workshop was really the highlight since I’d never really been to a conference before.

Being surrounded by such an amazing community of like-minded people who are interested in doing research and pushing the frontiers of AI was really exciting.

Who are your role models?

Serena, one of my project mentors, is actually one of my biggest role models, because throughout this research process she’s been super supportive. Her dedication and commitment to what she loves really inspires me. No matter how busy she is, she always finds time to meet with me and discuss the project’s next steps. I really hope I can follow in her footsteps in this field.

What advice do you have for other young people who are interested in artificial intelligence and are just getting started?

I’d say first don’t be afraid and don’t let anything limit you if you’re interested in AI research. I learned the importance of having a “commit first” mindset. If you see an opportunity, you should take advantage of it. Reach out to mentors. I found that working with Serena and Jeff, who are my two main mentors on this project, was really helpful. I really benefited from their guidance and mentorship.

Be persistent, as well. Try to take advantage of any resources that you see around you. For me, that was not only working with mentors and meeting with them, but also making use of online resources. For example, whenever I ran into a bug that I couldn’t figure out how to resolve, I would turn to Stack Overflow, GitHub, and other online tutorials.

What’s next for you? Do you have college plans? Are you planning to major in computer science or AI-related fields in college?

I’m still in the college application process, but I definitely look forward to continuing AI research in college and beyond. Hopefully I can intern in a lab and continue this type of research in the future.

I plan to major in computer science, but I’m definitely open to exploring other fields. One other interest of mine is the ethical dimension of artificial intelligence. There’s been a lot of discussion going on about the safety of AI in the future. That’s an area I want to learn more about and take classes on in college to further guide my interest in computer science and in AI.

About Amy

Amy Jin is a senior at The Harker School in San Jose, California. She likes science research and is interested in developing computer vision applications to solve healthcare problems. At school, she serves as president of the Women in STEM Club and French National Honor Society and is Co-Editor-in-Chief of Horizon, Harker’s student-published science research journal. In her free time, she enjoys playing the violin, dancing, and studying the ethics of science and technology.

Follow along with AI4ALL’s Role Models in AI series on Twitter and Facebook at #rolemodelsinAI. We’ll be publishing a new interview with an AI expert on Wednesdays this winter. The experts we feature are working in AI in a variety of roles and have taken a variety of paths to get there. They bring to life the importance of including a diversity of voices in the development and use of AI.


AI4ALL is a US nonprofit working to increase diversity and inclusion in artificial intelligence.