Hi,

First of all, thanks for making your great code and models available.

I am currently trying out two of your models (MP-CNN and VDPWI) and noticed that when evaluating trained models (via `--skip-training`), different batch sizes give different results. For example, evaluating with one batch size returns different results than evaluating with another.

Have you encountered this behavior before, and do you know what the reason might be? Which would be the correct result?

Thanks for your interest; I've confirmed this issue. My guess is that the amount of padding depends on the batch size due to varying sentence lengths, and the resulting padding is not implemented as a no-op. Using a batch size of 1 should be the correct thing to do during inference (for now).
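To illustrate the diagnosis, here is a minimal, dependency-free Python sketch (not the repo's actual code) of how zero padding that is not masked out can change a pooled sentence representation. When a short sentence is batched with longer ones, it gets padded to the batch's max length; a naive mean over all positions then includes the padding zeros, so the same sentence yields a different value depending on how much it was padded:

```python
def mean_pool(token_features):
    # Naive mean over all positions, including any padded ones.
    # A correct implementation would mask out padding before pooling.
    return sum(token_features) / len(token_features)

# Toy 1-d "token features" for one sentence of length 3.
sentence = [1.0, 2.0, 3.0]

# Batch size 1: no padding needed, pooled value is the true mean.
print(mean_pool(sentence))            # 2.0

# Larger batch: the sentence is zero-padded to the batch max length 5,
# and the padding zeros leak into the pooled value.
padded = sentence + [0.0, 0.0]
print(mean_pool(padded))              # 1.2
```

Since the pooled value differs (2.0 vs. 1.2) purely because of padding, any downstream score computed from it will also depend on the batch size, which matches the behavior reported above; evaluating with batch size 1 avoids padding entirely.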