
ONNX inference issue #134

Open · YenYunn opened this issue Mar 11, 2024 · 3 comments

@YenYunn commented Mar 11, 2024

When I run this code, I get a warning:

import torch
import onnx
from strhub.models.utils import load_from_checkpoint  # helper from the parseq repo

# Load the pretrained PARSeq model and disable iterative refinement and
# autoregressive decoding so the exported graph is static
parseq = load_from_checkpoint('pretrained=parseq').eval()
parseq.refine_iters = 0
parseq.decode_ar = False

# Export to ONNX using a dummy input with the model's expected image size
image = torch.rand(1, 3, *parseq.hparams.img_size)
parseq.to_onnx('parseq.onnx', image, do_constant_folding=True, opset_version=14)

# Validate the exported model
onnx_model = onnx.load('parseq.onnx')
onnx.checker.check_model(onnx_model, full_check=True)

[screenshot: export warning]

Then, when I use this ONNX model for inference, I encounter the following error:

[screenshot: inference error]

And this is my inference code:

[screenshot: inference code]
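(Since the code above is only a screenshot, here is a minimal ONNX Runtime inference sketch of the same shape for reference; the file name, provider, and input size are assumptions, with (32, 128) being PARSeq's default img_size.)

import numpy as np
import onnxruntime as ort

# Hypothetical reconstruction of the inference step shown in the screenshot
session = ort.InferenceSession('parseq.onnx', providers=['CPUExecutionProvider'])
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 32, 128).astype(np.float32)  # assumed (32, 128) input size
logits = session.run(None, {input_name: dummy})[0]
print(logits.shape)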

@YenYunn (Author) commented Mar 12, 2024

My versions:
onnx==1.15.0
onnxruntime-gpu==1.17.1
torch==2.1.1+cu118
pytorch-lightning==2.1.0

@IceboxDev commented

Exactly the same issue with:

  • onnx==1.15.0
  • torch==2.2.1
  • pytorch-lightning==2.2.1

Stacktrace:

(.venv) [mantas@WS21 parseq]$ python onnx_runtime.py
Traceback (most recent call last):
  File "/home/mantas/Documents/Projects/parseq/onnx_runtime.py", line 4, in <module>
    session = ort.InferenceSession(model_path)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mantas/Documents/Projects/parseq/.venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/mantas/Documents/Projects/parseq/.venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 472, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from modified.onnx failed:Type Error: Type parameter (T) of Optype (Where) bound to different types (tensor(bool) and tensor(float) in node (/Where_23).
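(As a debugging aid, not from this thread: a short sketch that locates the Where node named in the error, so one can see which of its inputs carries the mismatched type; the file name is whatever was exported, e.g. parseq.onnx.)

import onnx

# Print the inputs/outputs of the failing Where node ('/Where_23' per the error).
# Where expects a bool condition plus X and Y of a single shared type T, so one
# of X/Y is presumably bool while the other is float.
model = onnx.load('parseq.onnx')
for node in model.graph.node:
    if node.op_type == 'Where' and node.name == '/Where_23':
        print('inputs:', list(node.input))    # condition, X, Y
        print('outputs:', list(node.output))

Inserting an explicit Cast before export, or a pass with a graph simplifier such as onnxsim, is a common workaround for this class of type mismatch.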

@YenYunn (Author) commented Mar 22, 2024

@baudm
Hello, author,

I am currently running into some technical issues and would appreciate your help. First, how can the error described above be resolved? Second, after converting an older version of the project to ONNX, I noticed a significant discrepancy between its outputs and those of the pre-trained model. Could you offer any suggestions for addressing this?
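(A minimal sketch for quantifying such a discrepancy, assuming the export snippet from the first comment has already been run in the same session:)

import numpy as np
import onnxruntime as ort
import torch

# Compare PyTorch and ONNX outputs on the same input; `parseq` and `image`
# are the objects from the export snippet above
with torch.no_grad():
    torch_logits = parseq(image).numpy()

session = ort.InferenceSession('parseq.onnx', providers=['CPUExecutionProvider'])
input_name = session.get_inputs()[0].name
onnx_logits = session.run(None, {input_name: image.numpy()})[0]

# A small max absolute difference (e.g. < 1e-4) indicates a faithful export
print('max abs diff:', np.abs(torch_logits - onnx_logits).max())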

Thank you very much for taking the time to respond amidst your busy schedule.
