PyTorch Implementation: Optimizing the Latent Space of Generative Networks

My PyTorch implementation of the paper “Optimizing the Latent Space of Generative Networks” by Piotr Bojanowski, Armand Joulin, David Lopez-Paz, and Arthur Szlam. It is a very interesting and approachable read!

My personal goal with this project was to practice reimplementing a paper in order to gain more experience. This paper is not completely trivial, but its approach is also refreshingly non-standard, which made it a good exercise.
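
For a quick overview of what the paper does: GLO learns one latent vector per training image jointly with the generator, with no encoder and no discriminator. The latent codes are free parameters, updated by the same gradient descent as the generator weights and projected back onto the unit ball after every step. The sketch below shows one such training step with illustrative names and shapes (not the actual code in `main.py`); a plain MSE reconstruction loss stands in for the Laplacian pyramid L1 loss that the paper (and the `laploss` part of this repository) uses.

```python
import torch
import torch.nn.functional as F

# Illustrative GLO training step: one learnable latent code per training image,
# optimized jointly with the generator. Shapes assume flattened 28x28 images.
latent_dim, n_images = 100, 10_000
Z = torch.randn(n_images, latent_dim, requires_grad=True)  # one code per image
generator = torch.nn.Sequential(                           # stand-in generator
    torch.nn.Linear(latent_dim, 784),
    torch.nn.Tanh(),
)
optimizer = torch.optim.SGD(
    [{"params": generator.parameters()}, {"params": [Z]}], lr=0.1
)

def project(z):
    # Keep each latent code inside the unit ball, as proposed in the paper.
    return z / z.norm(dim=1, keepdim=True).clamp(min=1.0)

def training_step(indices, images):
    # indices: dataset indices of this batch, needed to pick the matching codes.
    optimizer.zero_grad()
    reconstruction = generator(Z[indices])
    loss = F.mse_loss(reconstruction, images.reshape(images.size(0), -1))
    loss.backward()
    optimizer.step()                       # updates generator weights and codes
    with torch.no_grad():
        Z[indices] = project(Z[indices])   # re-project the updated codes
    return loss.item()
```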

Setup and Installation

Install the dependencies, ideally in a virtual environment:

pip install -r requirements.txt

Install PyTorch as described on its website, choosing the build that matches your Python version, CUDA setup, etc.

Train the model

python main.py

You can also list all available options, e.g. to choose the dataset, the path where the dataset is stored, or the training parameters, with:

python main.py -h

TensorboardX
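
The training script logs to TensorBoard via tensorboardX (a proper explanation of the exact usage is still on the to-do list below). As a rough sketch with placeholder values standing in for the real training loop, the kind of logging used here — scalars, parameter histograms, and a fixed batch of images — looks like this:

```python
import torch
import torchvision
from tensorboardX import SummaryWriter

writer = SummaryWriter()                 # event files go to ./runs/ by default

model = torch.nn.Linear(100, 784)        # stand-in for the generator
fixed_batch = torch.rand(16, 1, 28, 28)  # in practice: always the same held-out images

for step in range(10):
    loss = torch.rand(1).item()                                   # placeholder scalar
    writer.add_scalar("train/loss", loss, step)                   # loss curve
    for name, param in model.named_parameters():                  # parameter histograms
        writer.add_histogram(name, param.detach().cpu().numpy(), step)
    grid = torchvision.utils.make_grid(fixed_batch)               # image grid (C, H, W)
    writer.add_image("reconstructions", grid, step)

writer.close()
```

View the results with `tensorboard --logdir runs` and open the URL it prints.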

To-Do

  • [ ] explanation for the tensorboard usage
  • [X] logging of the model parameters for nice histograms
  • [X] for visual testing: evaluate always the same images to send to tensorboard
  • [X] rename model
  • [ ] Describe the `plac` parameters well
  • [X] Cleanup the laploss and pca parts
  • [X] Store the PCA part locally to save some time
  • [X] Use ignite
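
The `laploss` mentioned in the to-do list refers to the Laplacian pyramid L1 loss that the paper uses as its reconstruction loss. A simplified sketch of the idea, with average pooling standing in for the Gaussian blur and one common per-level weighting:

```python
import torch.nn.functional as F

def laplacian_pyramid_l1(x, y, levels=3):
    # Simplified Laplacian-pyramid L1 loss for (N, C, H, W) tensors.
    # Assumes height/width divisible by 2**levels.
    loss = 0.0
    for level in range(levels):
        x_low, y_low = F.avg_pool2d(x, 2), F.avg_pool2d(y, 2)
        # One pyramid band = image minus its upsampled low-pass version.
        x_band = x - F.interpolate(x_low, scale_factor=2, mode="nearest")
        y_band = y - F.interpolate(y_low, scale_factor=2, mode="nearest")
        loss = loss + 2.0 ** (-2 * level) * F.l1_loss(x_band, y_band)
        x, y = x_low, y_low
    return loss + F.l1_loss(x, y)  # include the final low-pass residual
```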

Related:

Another implementation I found on the topic: https://github.com/tneumann/minimal_glo
