YOLO Creator Joseph Redmon Stopped CV Research Due to Ethical Concerns

Synced
Feb 24, 2020 · 4 min read

Joseph Redmon, creator of the popular object detection algorithm YOLO (You Only Look Once), tweeted last week that he had ceased his computer vision research to avoid enabling potential misuse of the tech — citing in particular “military applications and privacy concerns.”

His comment emerged from a Twitter discussion on last Wednesday’s announcement of revised NeurIPS 2020 paper submission guidelines, which now ask authors to add a section on the broader impact of their work “including possible societal consequences — both positive and negative.”

University of Cambridge probabilistic machine learning PhD student Maria Skoularidou tweeted, “I think that broader impacts statements might also help authors rethink/realise whether their work is worth submitting.” That prompted University of Toronto Assistant Professor of Computer Science and Vector Institute Co-founder Roger Grosse to challenge Skoularidou to provide “an example of a situation where you think someone should decide not to submit their paper due to Broader Impacts reasons?”

That’s where Redmon stepped in to offer his own experience. Despite enjoying his work, Redmon tweeted, he had stopped his CV research because he found that the related ethical issues “became impossible to ignore.”

A current graduate student at the University of Washington’s programming languages and software engineering lab, Redmon proposed the YOLO model in a CVPR 2016 paper that won the OpenCV People’s Choice Award. YOLO was hailed as a milestone in object detection research and led to better, faster and more accurate computer vision algorithms. Redmon’s updated YOLO9000 earned a Best Paper Honorable Mention at CVPR 2017, and he was also part of the team that proposed XNOR-Net using Binary Convolutional Neural Networks for ImageNet classification.

Grosse argued that predicting the societal impacts of AI is a tough area that requires expertise and should be dealt with by professional researchers and organizations instead of the paper authors themselves. This drew a prompt Redmon counter: “‘We shouldn’t have to think about the societal impact of our work because it’s hard and other people can do it for us’ is a really bad argument.”

Stanford Computer Science Master’s student and former Google Brain intern Kevin Zakka meanwhile chimed in that rather than abandoning his research out of fear of potential misuse, Redmon might have used his respected position in the CV community to raise awareness. Others suggested Redmon confine his work to, for example, the medical imaging domain.

Redmon said he felt a certain degree of humiliation for ever believing “science was apolitical and research objectively moral and good no matter what the subject is.” He said he had come to realize that facial recognition technologies have more downside than upside, and that they would not be developed if enough researchers considered the broader impact of their enormous downside risks.

Ethical discussions around AI are not new and will undoubtedly intensify as the technologies move from labs to the streets. This new attention from a high-profile conference like NeurIPS and Redmon’s recent reveal suggest that experts in particular fields will join the broader ML community and the general public in this ongoing process.

Journalist: Yuan Yuan | Editor: Michael Sarazen

