ONNXRuntimeError with a model that passes onnx.checker and was supported on previous ort versions #20641
Comments
Please use the following file: https://github.com/cestpasphoto/alpha-zero-general/blob/master/splendor/example_onnx_file.onnx To answer the auto-label: CUDA is NOT involved here; it happens in a CPU-only environment.
I get a similar error after switching from onnxruntime 1.17.1 to 1.18.0. The error log can be found at
(Note that the only change is upgrading onnxruntime; everything else is kept the same.)
Describe the issue
I have a model exported by PyTorch that was supported up to ort 1.16.3 but now fails with ort 1.17.0:
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (MatMulBnFusion_Gemm) Op (Gemm) [ShapeInferenceError] First input does not have rank 2
I have tried several versions of PyTorch and several configurations of torch.onnx.export, but it always fails. As soon as I downgrade to 1.16.3 it works, whatever the PyTorch version, which makes me think ort is the culprit.
Plus I've run
onnx.checker.check_model(mymodel, full_check=True)
and it raised no issue. It happens even in a pure-CPU environment.
The following code passes with ort 1.16.3 but returns an error with 1.17.0.
To reproduce
Urgency
No response
Platform
Linux
OS Version
Debian stable
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
1.17.0
ONNX Runtime API
Python
Architecture
X64
Execution Provider
Default CPU
Execution Provider Library Version
No response