Questions regarding evaluation #1

Open
Seth-Park opened this issue Nov 14, 2021 · 1 comment

Comments

@Seth-Park

Hi,

Thanks for sharing the codebase and the pretrained models! Your work is very impressive :)
I just wanted to ask a few questions regarding quantitative evaluation:

  1. How many images were used for computing the FID scores in Table 1 of the main paper?
  2. How are the reference images that guide the generation process chosen?
  3. What codebase did you use to compute the FID scores?

Regards,
Seth

@jychoi118
Owner

Hi, thank you for your interest in our paper.

  1. We used 50k generated samples and 50k random real images from the training set. (Generating 50k images with the full 1000 sampling steps takes a long time.)
  2. For the FID score, the 50k reference images are randomly chosen from the training set.
  3. We used https://github.com/mseitzer/pytorch-fid to compute FID scores. Note that the experiments in the paper were conducted with this codebase: https://github.com/rosinality/denoising-diffusion-pytorch , so FID scores were measured with that codebase's data pre-processing.
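
For anyone reproducing this setup, below is a minimal sketch of how such an FID computation could be run with pytorch-fid. The folder names are placeholders, this is not the authors' exact evaluation script, and the `calculate_fid_given_paths` signature may differ slightly between pytorch-fid versions.

```python
# Minimal sketch (not the authors' exact script): compute FID with
# https://github.com/mseitzer/pytorch-fid between a folder of 50k generated
# samples and a folder of 50k real training images.
# Folder names below are hypothetical placeholders.
import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

device = "cuda" if torch.cuda.is_available() else "cpu"

fid = calculate_fid_given_paths(
    ["samples_50k", "train_real_50k"],  # two folders of saved images
    batch_size=50,
    device=device,
    dims=2048,  # default InceptionV3 pool3 feature dimension
)
print(f"FID: {fid:.2f}")
```

The same computation can also be run from the command line with `python -m pytorch_fid samples_50k train_real_50k`.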
