Training dataset #4
We train our teacher model on a mixture of LoLv1 and LoLv2-real because LoLv1 has a limited number of training samples. Note that there is an overlap between the LoLv1 test set and the LoLv2-real training set, so redundant samples should be removed during merging to prevent data leakage. For the other datasets, we train the teacher model separately. During distillation, we train the student model on each training dataset separately.
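For illustration, here is a minimal sketch (not part of the repository) of how the overlap could be filtered out when merging the two training sets; the directory layout and the byte-level hash matching are assumptions made for this example:

```python
# Sketch: drop LoLv2-real training images that duplicate LoLv1 test images,
# then merge the remaining files with the LoLv1 training set.
import hashlib
from pathlib import Path

def file_md5(path: Path) -> str:
    """Hash the raw bytes of an image so identical files can be detected."""
    return hashlib.md5(path.read_bytes()).hexdigest()

# Hypothetical dataset locations.
lolv1_train = Path("data/LOLv1/our485/low")
lolv1_test  = Path("data/LOLv1/eval15/low")
lolv2_train = Path("data/LOLv2/Real_captured/Train/Low")

# Hashes of every LoLv1 *test* image: any LoLv2-real training image that
# matches one of these would leak test data into the merged training set.
test_hashes = {file_md5(p) for p in lolv1_test.glob("*.png")}

merged_train = list(lolv1_train.glob("*.png"))
for p in lolv2_train.glob("*.png"):
    if file_md5(p) in test_hashes:
        continue  # skip the duplicate to prevent leakage
    merged_train.append(p)

print(f"Merged training set: {len(merged_train)} low-light images")
```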
How should I go about retraining and testing on a new dataset? I want to retrain on the SID dataset, but after training, the PSNR on the test set is only around 6, while the PSNR obtained with the trained LOL model is around 17. I find this very confusing.
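For context, PSNR here is presumably the standard per-image metric 10·log10(MAX²/MSE); a value around 6 dB implies a very large mean squared error for images scaled to [0, 1], which often points to a range or pairing mismatch in the evaluation pipeline rather than the model alone. A minimal sketch of the metric, assuming [0, 1] images (this is not the repository's evaluation script):

```python
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray, max_val: float = 1.0) -> float:
    """PSNR in dB between a restored image and its ground truth."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)
```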
Hi! Thank you for your interest in our work! To use our method, you first need a well-pretrained teacher model. Our approach focuses on distillation for efficient low-light image enhancement. This repository contains only the distillation code; the pretraining code is not included. In our paper, the teacher model is based on GSAD and SR3. Please refer to these pioneering works for more details.
Did you train your student model and teacher model each on a single training dataset (say, LoLv1), or on a mixture of datasets (say, LoLv1 + LoLv2-syn + LoLv2-real)?