Filling Holes: Adobe Proposes Foreground-Aware Image Inpainting
Digital editing has dramatically changed the way humans interact with images, so much so that “to photoshop” has become a common verb in everyday speech. People often photoshop an image in order to remove undesirable compositional elements. This process leaves empty areas behind, which computer scientists refer to as “holes.”
Filling these holes is the job of image inpainting, an important computer vision task with applications ranging from image editing to compositing and restoration. Existing image inpainting methods fill holes by borrowing information from surrounding image regions. These methods, however, produce unsatisfactory results when holes overlap with foreground objects, suffering from a “lack of information about the actual extent of foreground and background regions within the holes.”
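The “borrowing” idea behind classical methods can be illustrated with a toy fill (a hypothetical helper, not from the paper) that copies each missing pixel from its nearest known neighbor in the same row. It works fine inside uniform regions, but where a hole crosses a foreground boundary, the method has no idea where the object actually ends:

```python
def fill_from_surroundings(row):
    """Toy stand-in for patch-based inpainting on one image row.

    Hole pixels (value None) are filled by borrowing the nearest
    known pixel, with no notion of whether that pixel belongs to
    the foreground object or the background.
    """
    filled = list(row)
    for i, v in enumerate(filled):
        if v is None:
            # Search outward for the closest non-hole pixel in the original row.
            for d in range(1, len(row)):
                left, right = i - d, i + d
                if left >= 0 and row[left] is not None:
                    filled[i] = row[left]
                    break
                if right < len(row) and row[right] is not None:
                    filled[i] = row[right]
                    break
    return filled

# A row crossing a foreground/background edge: 9s = object, 1s = background.
# The hole straddles the edge, so where the fill places the boundary is
# arbitrary — the method never knew the object's true extent.
print(fill_from_surroundings([9, 9, None, None, 1, 1]))  # → [9, 9, 9, 1, 1, 1]
```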
To solve this problem, a research team from the University of Rochester, the University of Illinois at Urbana-Champaign, and Adobe Research has proposed a foreground-aware image inpainting system that achieves superior results on challenging cases with complex compositions.
How This Is Achieved
The research team’s model first learns to predict the foreground contour, then inpaints the missing region guided by this prediction. The overall architecture of the inpainting system is shown in the diagram below, comprising a contour detection module, a contour completion module, and an image completion module.
As the paper explains: “We automatically detect the contour of the incomplete image using the contour detection module. Then the contour completion module is adopted to predict the missing parts of the contour. Finally, we input both the incomplete image and the completed contour to the image completion module to predict the final inpainted image.”
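The data flow the paper describes can be sketched as a three-stage pipeline. In this minimal sketch, each stage is a plain function standing in for a trained network; the function names and dictionary fields are illustrative placeholders, not the authors’ implementation:

```python
# Hypothetical sketch of the three-module, foreground-aware pipeline.

def detect_contour(incomplete_image):
    """Contour detection module: find object contours visible
    in the regions of the image outside the holes."""
    return {"contour": "partial", "source": incomplete_image}

def complete_contour(partial_contour):
    """Contour completion module: predict the missing contour
    segments inside the holes."""
    return {**partial_contour, "contour": "completed"}

def complete_image(incomplete_image, completed_contour):
    """Image completion module: inpaint the holes, conditioned on
    both the incomplete image and the completed contour."""
    return {"image": "inpainted", "guided_by": completed_contour["contour"]}

def foreground_aware_inpaint(incomplete_image):
    partial = detect_contour(incomplete_image)        # stage 1
    contour = complete_contour(partial)               # stage 2
    return complete_image(incomplete_image, contour)  # stage 3

result = foreground_aware_inpaint("image_with_holes")
print(result)
```

The key design choice this mirrors is that the final image completion stage never sees the raw hole alone: it is always conditioned on a completed contour, which is what gives the system its awareness of where the foreground actually ends.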
Comparison Between Different Approaches
The researchers conducted a qualitative comparison with other state-of-the-art methods, including the widely used patch-based PatchMatch and deep-network-based models such as Global&Local, ContextAttention, PartialConv and GatedConv. The results show their model producing completions that closely match the ground-truth images.
The team also conducted a user study of the methods. They randomly selected 50 images from the test set, corrupted them with random holes, and obtained inpainted results from each method. Users were asked to select the single best result for each image. Of 1,099 total votes, 877 (roughly 80 percent) favored the team’s method.
The research paper Foreground-aware Image Inpainting is on arXiv.
Author: Jessie Geng | Editor: Michael Sarazen