Result accuracy is different with PyTorch #6077
Is 1.13.1 the PyTorch version being used? If so, please test with torch>2.0. It would also be helpful to upload the resulting ONNX model.
It would also help to create the issue in the PyTorch repo. It could be an issue in the PyTorch-to-ONNX exporter, in the onnxruntime implementation, or a potential mismatch in the ONNX op spec. But it is a bit odd, since the model uses only a linear layer and a ReLU, both of which must be very well tested by now in all three components.
System information
My sample code contains only an FFN. I computed the output from both the .pth model and the ONNX model,
then computed np.mean of each output:
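When comparing the two backends, np.mean alone can be misleading, because signed differences cancel in the mean. A sketch of a more informative comparison (the arrays here are synthetic stand-ins for the saved PyTorch and onnxruntime outputs, not the issue's real tensors):

```python
import numpy as np

# Hypothetical stand-ins: out_torch / out_onnx represent the outputs
# saved from the PyTorch model and the onnxruntime session.
rng = np.random.default_rng(0)
out_torch = rng.standard_normal((1, 256)).astype(np.float32)
out_onnx = (out_torch
            + rng.normal(0, 1e-4, size=out_torch.shape).astype(np.float32))

# Means can agree (or disagree) by cancellation; per-element metrics
# show the actual discrepancy.
print("mean(torch):", out_torch.mean())
print("mean(onnx): ", out_onnx.mean())
print("max abs diff:", np.abs(out_torch - out_onnx).max())
print("allclose(atol=1e-3):", np.allclose(out_torch, out_onnx, atol=1e-3))
```

Reporting the max absolute (or relative) difference, rather than the difference of means, makes it much easier to judge whether the gap is ordinary float32 noise or a real bug.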
(The image originally posted here was wrong; I have deleted it. Please ignore it.)
You can see the precision difference between the two models. Why does this happen?
The size of the accuracy gap also changes with the input.
In this sample model the results differ by only about 0.001, but my Mask2Former model shows the same problem.
Test details:
class MLP(nn.Module):
""" Very simple multi-layer perceptron (also called FFN)"""
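Only the class header is shown in the issue; a plausible completion, following the common DETR-style definition of this MLP, would look like the sketch below. The layer sizes in the usage line are assumptions for illustration, not the reporter's actual configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    """Very simple multi-layer perceptron (also called FFN).

    Assumed DETR-style completion; the issue only shows the header.
    """

    def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
        super().__init__()
        self.num_layers = num_layers
        h = [hidden_dim] * (num_layers - 1)
        # Chain of Linear layers: input_dim -> hidden_dim ... -> output_dim
        self.layers = nn.ModuleList(
            nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim])
        )

    def forward(self, x):
        # ReLU between layers, no activation on the final layer.
        for i, layer in enumerate(self.layers):
            x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
        return x

# Example usage with assumed dimensions:
model = MLP(input_dim=256, hidden_dim=256, output_dim=4, num_layers=3).eval()
with torch.no_grad():
    y = model(torch.randn(1, 256))
print(y.shape)  # torch.Size([1, 4])
```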
I reran this several times; you can see that the values change slightly every time they pass through a linear layer.
After applying nn.Linear() one more time, the accuracy problem is exacerbated.
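The observation that the gap grows with each extra Linear layer is consistent with ordinary float32 accumulation effects: floating-point addition is not associative, so two mathematically identical matmuls can return slightly different results if the backends accumulate partial sums in a different order. A small numpy-only demonstration of the underlying effect (not the issue's actual model):

```python
import numpy as np

# Float32 addition is not associative: summing the same numbers in a
# different grouping can change the last few bits of the result. This
# mirrors how different matmul kernels (PyTorch vs onnxruntime) may
# accumulate partial sums in different orders.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000).astype(np.float32)

s_fwd = np.float32(x.sum())                          # one accumulation order
s_pair = np.float32(x.reshape(-1, 2).sum(axis=1).sum())  # another grouping

print("sum A:", s_fwd)
print("sum B:", s_pair)
print("difference:", abs(float(s_fwd) - float(s_pair)))
```

Each Linear layer performs many such accumulations, so tiny per-layer discrepancies compound with depth, matching the report that stacking more nn.Linear() calls widens the gap. That does not prove there is no bug, but it explains why a ~0.001 difference on float32 is not by itself conclusive.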
The attached zip contains a tensor I saved: x.pt