
Is code version correct? #1

Open
TDZ-z opened this issue Mar 23, 2024 · 7 comments

@TDZ-z commented Mar 23, 2024

Hello, normal should be False when calculating D1_PSNR in test.py, but it is True in the code.
Is the source code version correct?

@aprilbian (Owner) commented

Thanks for the question.

Setting 'normal = True' means that the return of the function 'pc_error' will also include the point-to-plane distance, i.e., D2_PSNR. It does not affect the D1_PSNR results. You can look into 'utils/pc_error_wrapper.py' for more info.
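
To illustrate why the flag does not change D1: below is a minimal numpy sketch of the two metrics. This is not the repo's 'utils/pc_error_wrapper.py' (nor the MPEG pc_error tool it presumably wraps); the peak convention and the one-directional nearest-neighbour matching are simplifications. The point-to-point (D1) error only uses distances to the nearest reference point, while the point-to-plane (D2) error additionally projects the error onto the reference normals, which is why normals are only needed for D2.

```python
import numpy as np
from scipy.spatial import cKDTree

def d1_d2_psnr(ref, rec, ref_normals=None, peak=1.0):
    """ref, rec: (N, 3) point arrays; ref_normals: per-point unit normals of ref, or None."""
    tree = cKDTree(ref)
    _, idx = tree.query(rec)                 # nearest reference point for each reconstructed point
    err = rec - ref[idx]                     # error vectors

    d1_mse = np.mean(np.sum(err ** 2, axis=1))           # point-to-point (D1) MSE
    d1_psnr = 10 * np.log10(peak ** 2 / d1_mse)          # peak convention varies between tools

    d2_psnr = None
    if ref_normals is not None:                          # normals only matter here
        proj = np.sum(err * ref_normals[idx], axis=1)    # project error onto the reference normal
        d2_mse = np.mean(proj ** 2)                      # point-to-plane (D2) MSE
        d2_psnr = 10 * np.log10(peak ** 2 / d2_mse)
    return d1_psnr, d2_psnr

# toy usage
ref = np.random.rand(1000, 3)
rec = ref + 0.01 * np.random.randn(1000, 3)
normals = np.tile([0.0, 0.0, 1.0], (1000, 1))
print(d1_d2_psnr(ref, rec, normals))
```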

@TDZ-z (Author) commented Mar 23, 2024

Thanks for your answer, I got it. I have another question:
With SNR = 5 dB and bottleneck_size = 200, testing with your source code gives D1_PSNR = 18.79 and D2_PSNR = 20.22, which are very different from the results in the paper.
I also found an overfitting problem during the training phase, even though I didn't modify your core code.
Is there a problem?

@aprilbian (Owner) commented

Did you modify any hyperparameters? If you run the code with the hyperparameters (e.g., learning rate, batch size) unchanged, it should produce the same results. Why not first try reducing the learning rate to see if it helps?
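
For example (PyTorch assumed; the model below is just a stand-in, and 5e-4 is an illustrative value rather than a setting from the paper), the quick check would be to keep everything else fixed and only lower the base learning rate:

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 3)  # stand-in for the actual network
# the thread's default is lr = 0.001; try a smaller value as a first check
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
```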

@TDZ-z (Author) commented Mar 23, 2024

I didn't modify your hyperparameters (e.g., learning rate = 0.001, batch_size = 32); I only set SNR = 5 and bottleneck_size = 200 according to your paper.
Does anything else need to be changed when SNR = 5 and bottleneck_size = 200?
And is your transformer block module consistent with what is stated in the paper?

@aprilbian (Owner) commented

No, nothing needs to be changed.
Yes, it is based on the Point Transformer paper cited in the references of our paper.
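
For readers following along: a rough, self-contained PyTorch sketch of a Point Transformer-style vector-attention layer is shown below, only to illustrate the general idea from that paper. It is not taken from this repository; the layer name, shapes, and the choice of k are all assumptions.

```python
import torch
import torch.nn as nn

def index_points(x, idx):
    # gather rows of x (B, N, C) with idx (B, N, k) -> (B, N, k, C)
    batch = torch.arange(x.shape[0], device=x.device).view(-1, 1, 1)
    return x[batch, idx]

class PointTransformerLayer(nn.Module):
    def __init__(self, dim, k=16):
        super().__init__()
        self.k = k
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        # positional encoding delta(p_i - p_j)
        self.pos_mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        # attention mapping gamma(.)
        self.attn_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, feats, xyz):
        # feats: (B, N, C) point features, xyz: (B, N, 3) coordinates
        knn_idx = torch.cdist(xyz, xyz).topk(self.k, largest=False).indices  # (B, N, k)

        q = self.to_q(feats)                                   # (B, N, C)
        k = index_points(self.to_k(feats), knn_idx)            # (B, N, k, C)
        v = index_points(self.to_v(feats), knn_idx)            # (B, N, k, C)
        pos = self.pos_mlp(xyz.unsqueeze(2) - index_points(xyz, knn_idx))  # (B, N, k, C)

        # vector attention: softmax over the k neighbours
        attn = torch.softmax(self.attn_mlp(q.unsqueeze(2) - k + pos), dim=2)
        return (attn * (v + pos)).sum(dim=2)                   # (B, N, C)

# toy usage
pts = torch.rand(2, 64, 3)
feats = torch.rand(2, 64, 32)
print(PointTransformerLayer(dim=32, k=8)(feats, pts).shape)    # torch.Size([2, 64, 32])
```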

@aprilbian (Owner) commented

BTW, please make sure that you decrease the learning rate over the epochs; this is quite important for good performance.
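
A minimal sketch of what an epoch-wise learning-rate decay could look like (PyTorch assumed; the StepLR schedule, step size, and gamma below are illustrative choices, not the settings used for the paper):

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 3)                    # stand-in for the actual network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# illustrative schedule: halve the learning rate every 50 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

for epoch in range(200):
    # ... run the usual training epoch here (forward/backward + optimizer.step per batch) ...
    optimizer.step()                       # placeholder for the per-batch updates
    scheduler.step()                       # decay the learning rate once per epoch
    # print(epoch, scheduler.get_last_lr())
```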

I can provide you with the checkpoints when it gets accepted.

Best,
Chenghong

@TDZ-z (Author) commented Mar 23, 2024

Ok, thanks for your help.😊

Best wishes,
Vi
