An interview with Dr. Patterson on our National Science Foundation Research Grant

Team TRASH
Published in Future Labs
Mar 4, 2019

Back in the summer, our co-founder Geneviève suggested we apply for an NSF SBIR grant (US government funding for new science research and invention) and guess what!? We got it! 😇

With thousands of companies applying for roughly 300 awards, it’s a tough race to qualify, so we’re pretty proud of this achievement and recognition!

We’re in excellent company. Since 2014, NSF SBIR-backed companies have raised $6.5B in investment, and 87 have had exits 🚀

The exciting part is that being awarded Phase I unlocks the opportunity to apply for Phase II, for a total of up to $1M in funding. This funding goes directly to our science team, led by Dr. Geneviève Patterson. She was interviewed by NYU’s Future Labs about our NSF research proposal on a new branch of computer vision at the intersection of computational photography and video understanding: computational cinematography. Computational cinematography is an emerging topic at SIGGRAPH, and the graphics and computational photography groups at Stanford, MIT, and Adobe Research have been recent contributors. TRASH is one of the first startups to work on it!

Q&A with NYU Future Labs and Dr. Patterson

Dr. Geneviève Patterson, Co-founder & CSO, TRASH

Tell us about your research!

We’re developing tools for mobile video editing. We do this by taking successful machine learning and computer vision techniques such as face detection, activity recognition, and sequence modeling, and using them to turn everyday videos shot on an iPhone into exciting, impactful mini-movies.
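For readers curious what "finding the best moments" can look like in practice, here is a minimal, purely illustrative sketch (not our actual pipeline): it samples frames from a clip and scores each one by how many faces an off-the-shelf OpenCV detector finds. The file name clip.mp4 and the sampling rate are placeholders.

```python
# Illustrative sketch only: scoring video frames by face presence with OpenCV.
# This is not TRASH's production system; it just shows the flavor of the idea.
import cv2

def score_frames(video_path, sample_every=15):
    """Return (frame_index, face_count) pairs for sampled frames."""
    # Haar cascade face detector bundled with OpenCV; any off-the-shelf
    # detector would do for this sketch.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    scores = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            scores.append((idx, len(faces)))
        idx += 1
    cap.release()
    return scores

# Frames with the most detected faces become candidate "best moments" to cut around.
best = sorted(score_frames("clip.mp4"), key=lambda s: s[1], reverse=True)[:5]
```

In a real system the per-frame score would combine many more signals (activity recognition, sequence models over shots, aesthetic cues), but the shape of the problem is the same: score moments, then edit around the highest-scoring ones.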

How will you spend the grant money? What are your goals?

We’re growing our AI team this summer with three new researchers. Our main goal is to develop new ML algorithms that find the best moments in your videos. We’ll also be researching how to automatically recreate edited content in different genres and aesthetic styles.

Who will you be working with?

I’ll be advising two graduate interns, PhD candidates from RPI and Georgia Tech, as well as two data scientists who recently earned degrees from MIT and Columbia.

I believe in consulting with domain experts too, so we regularly have NYU film students interning with us, as well as professional video editors advising our team, sharing their techniques and insights and gut-checking our creative ML decisions.

What excites you most about what you’re doing?

Combining machine learning and creativity is a fantastic challenge. I’m excited about using precise and deterministic techniques to make a huge variety of artistic output for our community of users!

How do you see your research contributing to the broader field of computer vision?

The field of computational photography (a subfield of computer vision) completely revolutionized both image post-processing and the way cameras capture photos. Iconic products like Photoshop wouldn’t even exist without discoveries in this subfield! I feel lucky to be at a point in history where computational cinematography is just starting to develop as a research topic, and I’m excited to be a part of making that happen!

We intend to publish our research and contribute to the computer vision field as we build machine-assisted creative tools for our nascent community!

Thank you to the National Science Foundation SBIR and NYU Future Labs for helping make our dreams possible with funding and benefits!
