Loss of sharpness with larger resolutions #6
@sapoluri We have not tried larger resolutions. Can you post some example input and output images?
share.zip I have trained the model with 250 images of 800x600 size. Metrics such as SSIM and PSNR are close to the results reported in the paper. The only issue is that the overall sharpness of the images degrades.
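This is the crux of the report: whole-image metrics can stay close to the paper's numbers even when fine detail is visibly blurred, because PSNR averages error over every pixel. A minimal PSNR sketch for sanity-checking results (this is not the project's evaluation code; the function name and `data_range` parameter are my own):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between two images scaled to [0, data_range]."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)
```

A mild uniform blur can cost only a few dB here while clearly degrading perceived sharpness, which is consistent with the behavior described above.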
@sapoluri If the above information is correct, a potential problem is in
I do set the projector to 800x600. Regarding the resized images: I initially trained on the original 800x600 images, but with only 75 images rather than the 250 resized ones. I will retry with 250 original 800x600 images and see how the results look. I assume the problem with the resized images is that the autoencoder may be learning the blurred look introduced by resizing. Is that your thought as well?
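The resize hypothesis is easy to check in isolation: a downscale/upscale round trip strips high-frequency content, and an autoencoder trained on such images could learn to reproduce that blur. A small sketch using a crude block-average resize (`down_up` and `hf_energy` are hypothetical helpers, not anything from this repository):

```python
import numpy as np

def down_up(img, factor=2):
    # Downscale by block-averaging, then upscale by nearest-neighbour repeat --
    # a crude stand-in for whatever resize produced the training images.
    h, w = img.shape
    img = img[:h - h % factor, :w - w % factor]
    small = img.reshape(img.shape[0] // factor, factor,
                        img.shape[1] // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

def hf_energy(img):
    # Mean absolute gradient: a simple proxy for sharpness.
    return np.abs(np.diff(img, axis=0)).mean() + np.abs(np.diff(img, axis=1)).mean()
```

If `hf_energy` drops sharply after the round trip, the training set itself is blurred, and the network can only learn to match that softened target.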
@sapoluri
Will do |
The compensated images obtained after inference lose their sharpness. As I understand it, autoencoders, because they encode and then decode, inevitably lose some detail. Do you observe this behavior in your experiments as well? Is it possible to reduce this loss, for example by preserving edge detail, or by combining the original and the generated compensation image to get the best of both worlds?
I am curious what the results in your lab looked like at larger resolutions. Although compensation hides the screen imperfections, the loss of sharpness means an objective observer prefers the sharp uncompensated image over the compensated one.
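One hedged way to try the "combine the original and the generated compensation image" idea is unsharp-mask-style detail transfer: add the high-frequency residual of the uncompensated input back onto the network output. A sketch under those assumptions (`box_blur`, `restore_detail`, and the `alpha` weight are illustrative names and values, not part of the project):

```python
import numpy as np

def box_blur(img, k=5):
    # Separable box blur as a stand-in for any low-pass filter
    # (hypothetical helper, not the project's code).
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 0, padded)
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, out)
    return out

def restore_detail(compensated, original, alpha=0.8):
    # High-frequency residual of the uncompensated input, added back onto
    # the compensated output; alpha trades sharpness against compensation.
    detail = original - box_blur(original)
    return np.clip(compensated + alpha * detail, 0.0, 1.0)
```

The caveat is that the residual also carries the surface texture the compensation is trying to cancel, so `alpha` would need tuning (or the residual masking to edges only) to avoid reintroducing the imperfections.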