I have some questions about the experiments of self negative sampling.
In Table 4, the results show that removing self negative sampling hurts the performance. Have you tried sampling negative entities from both Gx and Gy?
Since the alignment accuracy is quite good on the examined datasets, have you tried using the alignment result to reduce the number of conflicts when sampling from the other KG?
For the first question, we don’t think it’s necessary to sample from both KGs because, just as you said, negative samples drawn from the other KG can be false negatives and hurt the performance, which is also explained in the third paragraph of Section 4.2 of our paper. Therefore, it’s better to sample only from the same KG, i.e., self negative sampling, to guarantee the performance in our self-supervised setting.
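For concreteness, here is a minimal sketch of the difference between the two sampling strategies discussed above. This is not the SelfKG implementation; the integer entity ids, function names, and default number of negatives are assumptions for illustration only.

```python
import random

def sample_self_negatives(anchor_id, same_kg_entity_ids, num_negatives=5):
    """Self negative sampling: negatives are drawn only from the anchor's own KG,
    excluding the anchor itself, so the true cross-KG counterpart can never be
    picked as a (false) negative."""
    candidates = [e for e in same_kg_entity_ids if e != anchor_id]
    return random.sample(candidates, min(num_negatives, len(candidates)))

def sample_cross_kg_negatives(other_kg_entity_ids, num_negatives=5):
    """Cross-KG sampling (the alternative raised in the question): negatives come
    from the other KG. In a self-supervised setting the true counterpart is
    unknown, so it may slip in as a false negative."""
    return random.sample(other_kg_entity_ids,
                         min(num_negatives, len(other_kg_entity_ids)))

# Toy usage with hypothetical entity id ranges for Gx and Gy.
gx_entities = list(range(0, 100))      # entities in Gx
gy_entities = list(range(100, 200))    # entities in Gy
anchor = 7                             # an entity in Gx
print(sample_self_negatives(anchor, gx_entities))
print(sample_cross_kg_negatives(gy_entities))
```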
For the second question, if the “alignment result” you mention refers to the alignment labels, we have done such experiments, which are reported in Section 4.3 of our paper. If it refers to the alignment pseudo labels produced during training, we have tried an algorithm similar to self-training, but the result is not as good as SelfKG, which may result from error cascades.