StyleGAN v2: notes on training and latent space exploration
What: in this entry I’m going to present notes, thoughts and experimental results I collected while training multiple StyleGAN models and exploring the learned latent space.
Why: this is a dump of ideas and considerations, ranging from the obvious to the “holy moly”, meant to provide insight (or a starting point for discussion) for other people out there with similar interests and intents. As such, it can be skimmed to see if anything is of interest.
I’m also here heavily leveraging Cunningham’s Law.
Who: many considerations apply to both StyleGAN v1 and v2, but all generated results are from v2 models, unless explicitly specified.
In terms of code resources:
- StyleGAN v1 and v2 official repos
- Encoder for v1 (+ directions learning) and encoder for v2
All of the content here has been generated on top of such repos via my customized Jupyter notebooks.
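As an illustration of the kind of latent space exploration done in those notebooks, here is a minimal NumPy sketch of interpolating between two Z-space latents. Spherical interpolation (slerp) is a common choice for GAN latents because high-dimensional Gaussian samples concentrate near a hypersphere; the function and variable names below are my own illustrative choices, not code from the repos above.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors.

    Stays closer to the Gaussian prior's typical set than plain
    linear interpolation, so intermediate frames tend to look sharper.
    """
    z0_n = z0 / np.linalg.norm(z0)
    z1_n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0_n, z1_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        # vectors are (nearly) parallel: fall back to linear interpolation
        return (1.0 - t) * z0 + t * z1
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.RandomState(0)
z0, z1 = rng.randn(512), rng.randn(512)  # StyleGAN's Z space is 512-dimensional
path = np.stack([slerp(z0, z1, t) for t in np.linspace(0.0, 1.0, 8)])
# each row of `path` would then be fed to the generator to render one frame
```

Feeding each interpolated latent through the generator (e.g. `Gs.run` in the official TensorFlow repos) produces the smooth morphing videos commonly shared for these models.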
It is worth mentioning the people who diligently shared code, experiments and valuable suggestions: Gwern (TWDNE), pbaylies, gpt2ent, xsteenbrugge, Veqtor, Norod78, roadrunner01. All are more than worth following.