
Question about result #24

Open

Zoro1092000 opened this issue Nov 5, 2022 · 2 comments

Comments

@Zoro1092000
Hi, thanks for your repo.
I have one question. When I run the model on the Chord, Leet, Debru, Kadem, and C2 datasets, the results match those described in the README. However, on the effective P2P dataset the model yields an F1-score >= 99.1%, which is higher than the 98.692% you report. I didn't change your settings; I only removed the fill value and changed the versions of some libraries so the code could run. In short, why does the model give significantly better results when I run it again?
Thanks!

@jzhou316
Collaborator

jzhou316 commented Nov 11, 2022

Hi, thanks for your observation! The library does come with some small randomness, resulting from non-deterministic behavior in, for example, PyTorch (https://pytorch.org/docs/stable/notes/randomness.html) and pytorch_scatter (rusty1s/pytorch_scatter#226); depending on the platform and environment, this can only be limited, not fully removed. My guess is that there may also be changes between package versions that lead to the small differences. Also, do you observe the same number between different runs, or is there also small variation between runs?
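As an aside, a minimal sketch of how one might probe this run-to-run variation, assuming a generic PyTorch setup (this is not code from the repo; the seed value and the `warn_only` flag, which needs a reasonably recent PyTorch, are illustrative assumptions):

```python
import random
import numpy as np
import torch


def set_deterministic(seed: int = 0) -> None:
    """Seed common RNG sources and request deterministic kernels.

    Note: some ops (e.g. the scatter-style GPU reductions that
    pytorch_scatter relies on) have no deterministic implementation,
    so small differences across runs, platforms, or library versions
    can remain even with this in place.
    """
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Warn (rather than error) when a non-deterministic op is hit.
    torch.use_deterministic_algorithms(True, warn_only=True)
    # Disable cuDNN autotuning, which can pick different kernels per run.
    torch.backends.cudnn.benchmark = False
```

Calling `set_deterministic()` before two identical evaluation runs and comparing the F1-scores would separate genuine run-to-run randomness from differences caused by the changed package versions or the removed fill value.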

@Zoro1092000
Author

My guess is the same as yours. Thanks a lot!
