ICCV 2019 Best Papers Announced

Synced · Published in SyncedReview · 5 min read · Oct 29, 2019

ICCV 2019 today announced its Best Paper Awards in three categories. The ICCV (IEEE International Conference on Computer Vision) is a top international biennial computer vision gathering comprising a main conference and several co-located workshops and tutorials. ICCV 2019 received 4,303 papers — more than twice the number submitted to ICCV 2017 — and accepted 1,075, for an acceptance rate of roughly 25 percent.

Best Paper Award (Marr Prize): SinGAN: Learning a Generative Model from a Single Natural Image

Authors: Tamar Rott Shaham and Tomer Michaeli from the Technion (Israel Institute of Technology), and Tali Dekel from Google Research.

Abstract: We introduce SinGAN, an unconditional generative model that can be learned from a single natural image. Our model is trained to capture the internal distribution of patches within the image, and is then able to generate high quality, diverse samples that carry the same visual content as the image. SinGAN contains a pyramid of fully convolutional GANs, each responsible for learning the patch distribution at a different scale of the image. This allows generating new samples of arbitrary size and aspect ratio, that have significant variability, yet maintain both the global structure and the fine textures of the training image. In contrast to previous single image GAN schemes, our approach is not limited to texture images, and is not conditional (i.e. it generates samples from noise). User studies confirm that the generated samples are commonly confused to be real images. We illustrate the utility of SinGAN in a wide range of image manipulation tasks.
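For readers who want a feel for the architecture, below is a minimal, unofficial PyTorch sketch of a SinGAN-style generator pyramid: a stack of small fully convolutional generators, where each scale upsamples the previous scale's output and adds a noise-driven residual. The layer sizes, the 4/3 upscaling factor, and all names are illustrative assumptions; the authors' actual model trains each scale adversarially against patches of the single training image with a per-scale patch discriminator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, hidden_ch):
    """A small fully convolutional generator used at one scale (illustrative sizes)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, hidden_ch, 3, padding=1),
        nn.BatchNorm2d(hidden_ch),
        nn.LeakyReLU(0.2),
        nn.Conv2d(hidden_ch, 3, 3, padding=1),
        nn.Tanh(),
    )

class SinGANStylePyramid(nn.Module):
    """Pyramid of per-scale generators: each scale refines an upsampled version
    of the previous scale's output by adding a noise-driven residual."""
    def __init__(self, num_scales=5, base_size=(25, 25)):
        super().__init__()
        self.generators = nn.ModuleList(conv_block(3, 32) for _ in range(num_scales))
        self.base_size = base_size

    def forward(self, noise_scale=1.0):
        h, w = self.base_size
        img = torch.zeros(1, 3, h, w)            # coarsest scale starts from pure noise input
        for s, gen in enumerate(self.generators):
            if s > 0:                            # move to the next (finer) scale
                h, w = int(h * 4 / 3), int(w * 4 / 3)
                img = F.interpolate(img, size=(h, w), mode="bilinear", align_corners=False)
            z = noise_scale * torch.randn_like(img)   # per-scale spatial noise map
            img = img + gen(img + z)                  # residual refinement at this scale
        return img

sample = SinGANStylePyramid()(noise_scale=1.0)
print(sample.shape)   # output size follows from base_size, so arbitrary sizes are possible
```

Because every module is fully convolutional, changing base_size at sampling time yields outputs of arbitrary size and aspect ratio, which is the property the abstract highlights.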

Best Student Paper Award: PLMP — Point-Line Minimal Problems in Complete Multi-View Visibility

Authors: Timothy Duff and Anton Leykin from the School of Mathematics, Georgia Tech; Kathlén Kohn from KTH Royal Institute of Technology; and Tomas Pajdla from CIIRC, Czech Technical University in Prague.

Abstract: We present a complete classification of all minimal problems for generic arrangements of points and lines completely observed by calibrated perspective cameras. We show that there are only 30 minimal problems in total, no problems exist for more than 6 cameras, for more than 5 points, and for more than 6 lines. We present a sequence of tests for detecting minimality starting with counting degrees of freedom and ending with full symbolic and numeric verification of representative examples. For all minimal problems discovered, we present their algebraic degrees, i.e. the number of solutions, which measure their intrinsic difficulty. It shows how exactly the difficulty of problems grows with the number of views. Importantly, several new minimal problems have small degrees that might be practical in image matching and 3D reconstruction.
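The first of the authors' tests, counting degrees of freedom, can be illustrated in a few lines of Python. The sketch below uses standard counts (6 DOF per calibrated camera, 3 per 3D point, 4 per 3D line, a 7-DOF gauge, and 2 image constraints per observed point or line) and assumes free points and lines with no incidences, all visible in every view. It is only the coarsest filter, not the paper's full classification with symbolic and numeric verification.

```python
from itertools import product

def is_balanced(cameras, points, lines):
    """Degree-of-freedom balance for a candidate minimal problem (illustrative
    simplification: free points and free lines only, complete visibility)."""
    unknowns = 6 * cameras + 3 * points + 4 * lines - 7   # poses + geometry - gauge
    constraints = 2 * cameras * (points + lines)          # 2 per feature per view
    return unknowns == constraints

# Sanity check: the classical 5-point relative-pose problem (2 cameras, 5 points).
assert is_balanced(2, 5, 0)

# Enumerate small balanced configurations within the bounds quoted in the abstract
# (at most 6 cameras, 5 points, 6 lines); the paper then verifies minimality properly.
balanced = [(m, p, l)
            for m, p, l in product(range(2, 7), range(0, 6), range(0, 7))
            if is_balanced(m, p, l)]
print(balanced)
```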

Best Paper Honorable Mentions

Paper: Asynchronous Single-Photon 3D Imaging

Authors: Anant Gupta, Atul Ingle, and Mohit Gupta from the University of Wisconsin-Madison.

Abstract: Single-photon avalanche diodes (SPADs) are becoming popular in time-of-flight depth-ranging due to their unique ability to capture individual photons with picosecond timing resolution. However, ambient light (e.g., sunlight) incident on a SPAD-based 3D camera leads to severe non-linear distortions (pileup) in the measured waveform, resulting in large depth errors. We propose asynchronous single-photon 3D imaging, a family of acquisition schemes to mitigate pileup during data acquisition itself. Asynchronous acquisition temporally misaligns SPAD measurement windows and the laser cycles through deterministically predefined or randomized offsets. Our key insight is that pileup distortions can be “averaged out” by choosing a sequence of offsets that span the entire depth range. We develop a generalized image formation model and perform theoretical analysis to explore the space of asynchronous acquisition schemes and design high-performance schemes. Our simulations and experiments demonstrate an improvement in depth accuracy of up to an order of magnitude as compared to the state-of-the-art, across a wide range of imaging scenarios, including those with high ambient flux.
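To see why cycling the window offsets helps, here is a small, self-contained NumPy toy (not the authors' model). It simulates a detector that keeps only the first photon per cycle under strong ambient light, once with the measurement window always aligned to the laser and once with offsets sweeping the full depth range. The flux values, bin counts, and the simple Bernoulli detection model are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

B = 100          # time bins per laser cycle
true_bin = 70    # bin containing the laser return (i.e., the scene depth)
signal = 0.05    # extra detection probability at the true bin
ambient = 0.05   # ambient detection probability in every bin
cycles = 5000

def first_photon_histogram(offsets):
    """Toy model: each cycle, only the first photon detected inside a window
    starting at the given offset is recorded (this is what causes pileup)."""
    hist = np.zeros(B)
    for c in range(cycles):
        o = offsets[c % len(offsets)]
        flux = np.full(B, ambient)
        flux[true_bin] += signal
        detections = rng.random(B) < flux          # photon detected in each bin?
        order = (np.arange(B) + o) % B             # bins in window (offset) order
        hits = order[detections[order]]
        if hits.size:
            hist[hits[0]] += 1                     # keep first detection, in absolute bin
    return hist

sync_hist = first_photon_histogram([0])             # conventional: window aligned to laser
async_hist = first_photon_histogram(list(range(B))) # offsets spanning the full depth range

print("synchronous  argmax:", sync_hist.argmax())   # pileup biases this toward early bins
print("asynchronous argmax:", async_hist.argmax())  # recovers the true bin far more reliably
```

In the synchronous run, early bins soak up most first detections, so the histogram peak lands near the start of the cycle; sweeping the offsets gives every bin an equal chance of being "first", so the true return at bin 70 stands out again.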

Paper: Specifying Object Attributes and Relations in Interactive Scene Generation

Authors: Oron Ashual from Tel Aviv University, and Lior Wolf from Tel Aviv University and Facebook AI Research.

Abstract: We introduce a method for the generation of images from an input scene graph. The method separates between a layout embedding and an appearance embedding. The dual embedding leads to generated images that better match the scene graph, have higher visual quality, and support more complex scene graphs. In addition, the embedding scheme supports multiple and diverse output images per scene graph, which can be further controlled by the user. We demonstrate two modes of per-object control: (i) importing elements from other images, and (ii) navigation in the object space, by selecting an appearance archetype.
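As a rough illustration of the dual-embedding idea (not the paper's architecture), the toy PyTorch module below gives each object a layout code computed from its class and bounding box, plus a separate appearance code selected from a small set of learned archetypes; swapping the archetype changes the appearance input while the layout input stays fixed. All dimensions, names, and the archetype count are made up for the example.

```python
import torch
import torch.nn as nn

class PerObjectDualEmbedding(nn.Module):
    """Toy separation of per-object layout and appearance codes. A downstream
    generator (not shown) would consume the concatenated per-object codes."""
    def __init__(self, num_classes=20, num_archetypes=8, dim=16):
        super().__init__()
        self.layout_net = nn.Sequential(
            nn.Linear(num_classes + 4, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.appearance_archetypes = nn.Embedding(num_archetypes, dim)

    def forward(self, class_onehot, box, archetype_id):
        layout_code = self.layout_net(torch.cat([class_onehot, box], dim=-1))
        appearance_code = self.appearance_archetypes(archetype_id)
        return torch.cat([layout_code, appearance_code], dim=-1)

model = PerObjectDualEmbedding()
cls = torch.zeros(1, 20); cls[0, 3] = 1.0            # object of class "3"
box = torch.tensor([[0.2, 0.3, 0.4, 0.5]])           # normalized x, y, w, h
look_a = model(cls, box, torch.tensor([0]))          # one appearance archetype
look_b = model(cls, box, torch.tensor([5]))          # new appearance, same layout
print(look_a.shape, look_b.shape)
```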

The ICCV also announced its Pattern Analysis and Machine Intelligence Technical Committee (PAMI TC) Awards for fundamental contributions in computer vision. The Helmholtz Prize recognizes ICCV papers from ten years ago that have since had a significant impact on computer vision research. This year’s co-winners are Building Rome in a Day by Sameer Agarwal, Noah Snavely, Ian Simon, Steven M. Seitz, and Richard Szeliski from the University of Washington, Cornell University, and Microsoft Research respectively; and Attribute and Simile Classifiers for Face Verification by Neeraj Kumar, Alexander C. Berg, Peter N. Belhumeur, and Shree K. Nayar of Columbia University.

The PAMI TC Azriel Rosenfeld Lifetime Achievement Award, which honors outstanding researchers who have made significant contributions to computer vision over the course of their careers, went to Shimon Ullman, a professor of computer science at the Weizmann Institute of Science in Israel and an adjunct professor in the Department of Brain and Cognitive Sciences at MIT.

More than 7,500 people are expected to attend the weeklong ICCV conference, which features 72 exhibitors, 60 workshops, and 12 tutorials. A list of best paper nominations, including seven nominated papers that did not receive a final award, has been published on the ICCV 2019 website.

Journalist: Yuan Yuan | Editor: Michael Sarazen

We know you don’t want to miss any stories. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.

Need a comprehensive review of the past, present and future of modern AI research development? Trends of AI Technology Development Report is out!

2018 Fortune Global 500 Public Company AI Adaptivity Report is out!
Purchase a Kindle-formatted report on Amazon.
Apply for Insight Partner Program to get a complimentary full PDF report.
