How to create a TRT Model from ONNX or directly from PyTorch to TRT #62
Hi @baudm, why am I not getting any response? Can you solve this quickly?
Hello, I think you need to convert the ONNX model into a simplified ONNX model using onnxsim, and then convert that into TRT. Example:
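A minimal sketch of that flow (the file names here are placeholders, not from this thread):

```python
# Hedged sketch: simplify the ONNX graph with onnxsim before building the TRT engine.
import onnx
from onnxsim import simplify

model = onnx.load('parseq.onnx')              # path assumed
model_sim, ok = simplify(model)               # returns (simplified model, validation flag)
assert ok, 'onnxsim could not validate the simplified model'
onnx.save(model_sim, 'parseq_sim.onnx')
```

Then build the engine from the simplified file, e.g. `trtexec --onnx=parseq_sim.onnx --saveEngine=parseq.engine`.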
Actually, I have been trying the same thing. If you know how to load a TRT model and predict the output, please let me know. Thanks in advance.
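For reference, a minimal sketch of loading an engine and running one inference with the TensorRT 8.x Python API and pycuda; the engine name and the (1, 3, 32, 128) input shape come from this thread, everything else is an assumption:

```python
# Hedged sketch: deserialize a TRT engine and run a single synchronous inference.
import numpy as np
import pycuda.autoinit  # noqa: F401 -- importing this creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open('parseq_simple.engine', 'rb') as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# One input binding and one output binding, per the export above.
image = np.random.rand(1, 3, 32, 128).astype(np.float32)  # replace with a preprocessed image
output = np.empty(tuple(context.get_binding_shape(1)), dtype=np.float32)

d_input = cuda.mem_alloc(image.nbytes)
d_output = cuda.mem_alloc(output.nbytes)
cuda.memcpy_htod(d_input, np.ascontiguousarray(image))
context.execute_v2([int(d_input), int(d_output)])
cuda.memcpy_dtoh(output, d_output)

pred = output.argmax(-1)  # character ids; decode them with the PARSeq tokenizer
```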
If you share the ONNX conversion code, we may find the problem. To use a different batch size, you need to use dynamic axes during the ONNX conversion. Please share your whole code so we can figure out the problem and solve it. I converted to ONNX following the author's guidelines. You can check the onnxruntime output before converting to TRT to verify that the ONNX model is working correctly. Compare the results of PyTorch and onnxruntime, and then try to convert to TRT. Thank you.
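A minimal sketch of that check, plus an export with a dynamic batch axis; the tensor names 'image' and 'logits' are assumptions, not from this thread:

```python
# Hedged sketch: verify the ONNX export against PyTorch before building a TRT engine.
import numpy as np
import onnxruntime as ort
import torch

parseq = torch.hub.load('baudm/parseq', 'parseq', pretrained=True, refine_iters=0).eval()
x = torch.rand(1, 3, *parseq.hparams.img_size)

# Export with a dynamic batch axis so the engine can serve other batch sizes.
parseq.to_onnx('parseq.onnx', x, opset_version=14,
               input_names=['image'], output_names=['logits'],
               dynamic_axes={'image': {0: 'batch'}, 'logits': {0: 'batch'}})

with torch.no_grad():
    ref = parseq(x).numpy()

sess = ort.InferenceSession('parseq.onnx', providers=['CPUExecutionProvider'])
out = sess.run(None, {'image': x.numpy()})[0]
print('max abs diff:', np.abs(ref - out).max())  # expect roughly 1e-5 or smaller
```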
Hi @naveenkumarkr723. Did you deal with this issue?
Yes.
It is working smoothly for me with the following software versions: NVIDIA SMI Driver Version: 525.85.12
I converted parseq.
I'm facing an issue processing the output: the TensorRT raw output is always the same. I have cross-checked the preprocessing steps, since I ran the ONNX inference with the same preprocessing. So far I am not able to get the correct output from the TensorRT engine.
ONNX and TRT.
Hello, did you successfully convert to TRT with these versions without any inference problems?
I fixed it by modifying this line as follows:
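The exact patched line is not quoted above, but the trtexec error ("/ArgMax: at least 2 dimensions are required for input") suggests the general shape of the fix; a hedged illustration:

```python
# Hedged illustration, not the author's exact patch: TRT's ONNX parser requires
# the input of ArgMax to have at least 2 dimensions, so an unqualified squeeze()
# that collapses a batch-1 tensor to 1-D breaks the engine build.
import torch

p_i = torch.rand(1, 1, 95)        # (batch=1, step=1, num_classes) -- names assumed
bad = p_i.squeeze().argmax(-1)    # squeeze() -> shape (95,): 1-D ArgMax input, TRT rejects it
good = p_i.squeeze(1).argmax(-1)  # squeeze(1) -> shape (1, 95): 2-D input, parses fine
```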
This issue has been solved; check this out: https://github.com/gaurav-g-12/parses_jetson_porting.git
Hi @baudm, @huyhoang17,
I am using parseq and converting it to an ONNX model like this:
```python
import torch

parseq = torch.hub.load('baudm/parseq', 'parseq', pretrained=True, refine_iters=0).eval()
dummy_input = torch.rand(1, 3, *parseq.hparams.img_size)  # (1, 3, 32, 128) by default

# To ONNX
parseq.to_onnx('parseq.onnx', dummy_input, opset_version=14)  # opset v14 or newer is required
```
but when I try to convert the ONNX model to a TRT model like this:
```
trtexec --onnx=/workspace/data/NaveenJadi/ParSeq/onnx-simplifier/parseq_ref_sim.onnx --saveEngine=parseq_simple.engine --exportProfile=parseq_simple.json --separateProfileRun > parseq_simple.log
```
I get this error:
```
[12/08/2022-08:01:19] [W] [TRT] parsers/onnx/onnx2trt_utils.cpp:367: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/08/2022-08:01:19] [W] [TRT] parsers/onnx/onnx2trt_utils.cpp:395: One or more weights outside the range of INT32 was clamped
[12/08/2022-08:01:19] [W] [TRT] Tensor DataType is determined at build time for tensors not marked as input or output.
[12/08/2022-08:01:19] [E] Error[3]: /ArgMax: at least 2 dimensions are required for input.
[12/08/2022-08:01:19] [E] [TRT] parsers/onnx/ModelImporter.cpp:773: While parsing node number 614 [ArgMax -> "/ArgMax_output_0"]:
[12/08/2022-08:01:19] [E] [TRT] parsers/onnx/ModelImporter.cpp:774: --- Begin node ---
[12/08/2022-08:01:19] [E] [TRT] parsers/onnx/ModelImporter.cpp:775: input: "/Squeeze_output_0"
output: "/ArgMax_output_0"
name: "/ArgMax"
op_type: "ArgMax"
attribute {
  name: "axis"
  i: -1
  type: INT
}
attribute {
  name: "keepdims"
  i: 0
  type: INT
}
attribute {
  name: "select_last_index"
  i: 0
  type: INT
}
[12/08/2022-08:01:19] [E] [TRT] parsers/onnx/ModelImporter.cpp:776: --- End node ---
[12/08/2022-08:01:19] [E] [TRT] parsers/onnx/ModelImporter.cpp:778: ERROR: parsers/onnx/ModelImporter.cpp:163 In function parseGraph:
[6] Invalid Node - /ArgMax
cannot create std::vector larger than max_size()
[12/08/2022-08:01:19] [E] Failed to parse onnx file
[12/08/2022-08:01:19] [E] Parsing model failed
[12/08/2022-08:01:19] [E] Failed to create engine from model or file.
[12/08/2022-08:01:19] [E] Engine set up failed
```
Environment
TensorRT Version: 8.4.1.5-1
NVIDIA GPU: tensorrt
NVIDIA Driver Version:
CUDA Version: 11.6
cuDNN Version: 8.6
Operating System: Linux
Python Version (if applicable): 3.8.13
TensorFlow Version (if applicable): None
PyTorch Version (if applicable): 1.13.0a0+08820cb
Baremetal or Container (if so, version): No
Could someone go through this and provide a piece of code to convert the model from ONNX to TRT?
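For completeness, a hedged sketch of doing the same conversion with the TensorRT 8.x Python API instead of trtexec; the file names are taken from the command above, the workspace size is an assumption:

```python
# Hedged sketch: build a serialized TRT engine from an ONNX file (TensorRT 8.x).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open('parseq_ref_sim.onnx', 'rb') as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError('Failed to parse the ONNX file')

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)  # 1 GiB workspace

engine_bytes = builder.build_serialized_network(network, config)
with open('parseq_simple.engine', 'wb') as f:
    f.write(engine_bytes)
```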