The #paperoftheweek 11 is: TraVeLGAN: Image-to-image Translation by Transformation Vector Learning
Have you ever tried to turn a volcano into a Jack O’Lantern? In deep learning, this problem is known as image-to-image translation. This particular case is an unsupervised problem, since there are no matched image pairs between volcanoes and corresponding pumpkins; it is not even clear what “matching” would mean here. With “TraVeLGAN: Image-to-image Translation by Transformation Vector Learning”, the authors tackle this and similar challenges with a novel GAN system. The well-known CycleGAN can map between similar domains (apples → oranges, horses → zebras, and so on) but fails on more complex, heterogeneous domain pairs. The authors of TraVeLGAN therefore add a siamese network to the classical GAN setup. This network enforces semantic consistency in a latent vector space, analogous to how word2vec preserves relationships between word embeddings.
“Interest in image-to-image translation has grown substantially in recent years with the success of unsupervised models based on the cycle-consistency assumption. The achievements of these models have been limited to a particular subset of domains where this assumption yields good results, namely homogeneous domains that are characterized by style or texture differences. We tackle the challenging problem of image-to-image translation where the domains are defined by high-level shapes and contexts, as well as including significant clutter and heterogeneity. For this purpose, we introduce a novel GAN based on preserving intra-domain vector transformations in a latent space learned by a siamese network. The traditional GAN system introduced a discriminator network to guide the generator into generating images in the target domain. To this two-network system, we add a third: a siamese network that guides the generator so that each original image shares semantics with its generated version. With this new three-network system, we no longer need to constrain the generators with the ubiquitous cycle-consistency restraint. As a result, the generators can learn mappings between more complex domains that differ from each other by large differences — not just style or texture.”
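To make the idea of “preserving intra-domain vector transformations” concrete, here is a minimal sketch of what such a transformation-vector loss could look like. This is an illustrative reconstruction, not the authors’ code: `travel_loss`, its arguments, and the pairwise squared-error form are assumptions for the sketch; the paper’s siamese embeddings are stood in for by plain NumPy arrays.

```python
import numpy as np

def travel_loss(s_orig, s_gen):
    """Hypothetical sketch of a transformation-vector loss.

    s_orig: siamese embeddings of a batch of source images, shape (B, D)
    s_gen:  siamese embeddings of the corresponding generated images, shape (B, D)

    The intuition from the paper: for every pair of images (i, j), the vector
    pointing from embedding i to embedding j in the source domain should match
    the vector between the same pair after translation. We penalize the squared
    difference of these transformation vectors, averaged over all pairs.
    """
    n = len(s_orig)
    loss = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            v_src = s_orig[j] - s_orig[i]  # transformation vector between originals
            v_gen = s_gen[j] - s_gen[i]    # transformation vector between translations
            loss += np.sum((v_src - v_gen) ** 2)
    return loss / (n * (n - 1) / 2)
```

Note that this loss constrains only *relative* arrangements: shifting every generated embedding by the same offset leaves it unchanged, which is exactly the word2vec-style property the post describes.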
You can read the full article here.
About the author:
Patrick Kern, CTO & Co-founder at Brighter AI.