Hi Michael, thank you for sharing this interesting information.
Hamdi Bouchech

I have not tried this out with driving scenes. But in theory SimGANs should perform well with these more complex scenes because:

  1. SimGANs ‘minimize the difference between the refined and synthetic data with a self-regularization loss term.’ This means the refiner isn’t really inferring the contents of the scene itself, so you shouldn’t see some of the problems common to other types of GANs (e.g. a generated image of a face that has two different eye colors).
  2. SimGANs ‘average local adversarial losses for a more balanced global adversarial loss.’

So really the SimGAN gets the global structure from the synthetic image (e.g. there is a stop sign in the upper right-hand corner, a sky and clouds on the horizon, a car to the left, etc.) and then locally refines patches of the image to make it look realistic.
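To make those two properties concrete, here is a minimal PyTorch-style sketch of how the two loss terms could be computed. The function names, the `lam` weight, and the assumption that the discriminator outputs one logit per local patch are illustrative, not taken from the paper’s or the article’s code.

```python
import torch
import torch.nn.functional as F

def self_regularization_loss(refined, synthetic, lam=0.5):
    # Hypothetical sketch of the self-regularization term: an L1 penalty that
    # keeps the refined image close to the original synthetic image, so the
    # scene's global content is preserved. `lam` is an illustrative weight.
    return lam * torch.mean(torch.abs(refined - synthetic))

def local_adversarial_loss(patch_logits, target_is_real):
    # Hypothetical sketch of the local adversarial loss: the discriminator is
    # assumed to emit one logit per local patch (an (N, 1, H', W') map), and
    # the per-patch cross-entropy losses are averaged into one balanced loss.
    target = (torch.ones_like(patch_logits) if target_is_real
              else torch.zeros_like(patch_logits))
    return F.binary_cross_entropy_with_logits(patch_logits, target)

# Illustrative refiner objective: keep the synthetic scene layout
# (self-regularization) while fooling the discriminator on every local patch.
# refined = refiner(synthetic)
# loss = (local_adversarial_loss(discriminator(refined), target_is_real=True)
#         + self_regularization_loss(refined, synthetic))
```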
