About training dataset and model parameter details #17

Open
Wp-Zhang opened this issue Dec 1, 2022 · 1 comment

Comments


Wp-Zhang commented Dec 1, 2022

Hi, I can now successfully train the model with my own training code, but I found that the dataset composition is critical to model performance.

The original Adobe5K dataset contains 5000 × 5 = 25000 image pairs, and according to the paper and the published code, you used hue shift and random crop for augmentation (see the sketch below for how I currently implement them).
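
For context, this is roughly how I apply those two augmentations in my own training code. It is only a minimal sketch: the function name, parameter ranges, and the choice to hue-shift only the source image are my assumptions, not necessarily what your published code does.

```python
import random

import torchvision.transforms.functional as TF

def augment_pair(src, ref, crop_size=256, max_hue_shift=0.1):
    """Hue-shift + random-crop augmentation for one (source, reference) pair.

    crop_size and max_hue_shift are my guesses, not values from the paper.
    Assumes both PIL images are at least crop_size x crop_size.
    """
    # Random crop: sample one window and apply it to both images so the
    # pair stays pixel-aligned.
    w, h = src.size
    top = random.randint(0, h - crop_size)
    left = random.randint(0, w - crop_size)
    src = TF.crop(src, top, left, crop_size, crop_size)
    ref = TF.crop(ref, top, left, crop_size, crop_size)

    # Hue shift: jitter the hue of the source only, so the model learns to
    # map a color-perturbed input back to the reference style (applying it
    # to the source only is my assumption).
    src = TF.adjust_hue(src, random.uniform(-max_hue_shift, max_hue_shift))
    return src, ref
```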

My questions are:

  1. Did you perform the augmentations on the original pairs, so that the length of your full dataset is 50000 (including the identical pairs)?
  2. Did you apply other commonly used image augmentation methods, such as flipping and rotation?
  3. Did the identical pairs contain only pairs built from the raw images, or also from the images retouched by the different experts? (I sketch the two readings I can imagine after this list.)
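
To make questions 1 and 3 concrete, here is how I currently enumerate the pairs in my own code. This is a sketch of my setup, not the repo's code; the helper name and the `identity` options are mine.

```python
def build_pairs(raw_images, expert_images, identity="all"):
    """Enumerate Adobe5K training pairs.

    raw_images:    list of the 5000 raw photos
    expert_images: dict mapping expert name (A-E) to its 5000 retouched photos
    identity:      which identical (input == target) pairs to add --
                   my guess at the two readings of question 3.
    """
    # 5000 raws x 5 experts = 25000 (raw, retouched) pairs
    pairs = [(raw, ret)
             for retouched in expert_images.values()
             for raw, ret in zip(raw_images, retouched)]

    if identity == "raw":
        # Only raw images paired with themselves: +5000 pairs
        pairs += [(img, img) for img in raw_images]
    elif identity == "all":
        # Every retouched image paired with itself: +25000 pairs,
        # which would give the 50000 total mentioned in question 1
        for retouched in expert_images.values():
            pairs += [(img, img) for img in retouched]
    return pairs
```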

Thank you in advance!

Wp-Zhang changed the title from "About training dataset details" to "About training dataset and model parameter details" on Dec 1, 2022

Wp-Zhang commented Dec 1, 2022

Here's another question:
You stated in the paper that the optimal bin_num for both the l-channel and the ab-channel is 64, and you set both to 64 in the experiments. Why, then, is the bin_num of the l-channel set to 8 in the pre-trained model? Is it a trade-off for training efficiency?
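
For reference, this is how I currently understand the binning. It is a minimal sketch assuming bin_num controls the quantization of Lab-space histograms; the OpenCV-based helper and its names are mine, and I may well be misreading your code.

```python
import cv2
import numpy as np

def lab_histograms(img_bgr, l_bins=8, ab_bins=64):
    """Histograms of the L and (a, b) channels of an 8-bit Lab image.

    l_bins=8 matches the pre-trained model; the paper reports 64 as
    optimal for both channels, which is exactly what this question is about.
    """
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    # OpenCV stores all three 8-bit Lab channels in [0, 255]
    l_hist = np.histogram(l, bins=l_bins, range=(0, 255))[0]
    ab_hist = np.histogram2d(a.ravel(), b.ravel(),
                             bins=ab_bins,
                             range=[(0, 255), (0, 255)])[0]
    return l_hist, ab_hist
```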
