
How to create a TRT Model from ONNX or direct from PyTorch to TRT #62

Open

Description

@naveenkumarkr723

Hi @baudm, @huyhoang17,

I'm using PARSeq and exporting it to ONNX like this:

import torch
parseq = torch.hub.load('baudm/parseq', 'parseq', pretrained=True, refine_iters=0).eval()
dummy_input = torch.rand(1, 3, *parseq.hparams.img_size) # (1, 3, 32, 128) by default

To ONNX

parseq.to_onnx('parseq.onnx', dummy_input, opset_version=14) # opset v14 or newer is required
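
The trtexec command below points at a simplified model (parseq_ref_sim.onnx under an onnx-simplifier directory), so the exported graph was first run through onnx-simplifier. For reference, that step looks roughly like this minimal sketch, assuming the onnxsim package and the file names used here:

import onnx
from onnxsim import simplify

# Load the exported graph and run onnx-simplifier over it.
model = onnx.load('parseq.onnx')
model_simp, check = simplify(model)
assert check, 'Simplified ONNX model could not be validated'
onnx.save(model_simp, 'parseq_ref_sim.onnx')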

but when I tried to convert the ONNX model to a TRT engine with trtexec:

trtexec --onnx=/workspace/data/NaveenJadi/ParSeq/onnx-simplifier/parseq_ref_sim.onnx --saveEngine=parseq_simple.engine --exportProfile=parseq_simple.json --separateProfileRun > parseq_simple.log

I got the following error:
[12/08/2022-08:01:19] [W] [TRT] parsers/onnx/onnx2trt_utils.cpp:367: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/08/2022-08:01:19] [W] [TRT] parsers/onnx/onnx2trt_utils.cpp:395: One or more weights outside the range of INT32 was clamped
[12/08/2022-08:01:19] [W] [TRT] Tensor DataType is determined at build time for tensors not marked as input or output.
[12/08/2022-08:01:19] [E] Error[3]: /ArgMax: at least 2 dimensions are required for input.
[12/08/2022-08:01:19] [E] [TRT] parsers/onnx/ModelImporter.cpp:773: While parsing node number 614 [ArgMax -> "/ArgMax_output_0"]:
[12/08/2022-08:01:19] [E] [TRT] parsers/onnx/ModelImporter.cpp:774: --- Begin node ---
[12/08/2022-08:01:19] [E] [TRT] parsers/onnx/ModelImporter.cpp:775: input: "/Squeeze_output_0"
output: "/ArgMax_output_0"
name: "/ArgMax"
op_type: "ArgMax"
attribute {
name: "axis"
i: -1
type: INT
}
attribute {
name: "keepdims"
i: 0
type: INT
}
attribute {
name: "select_last_index"
i: 0
type: INT
}

[12/08/2022-08:01:19] [E] [TRT] parsers/onnx/ModelImporter.cpp:776: --- End node ---
[12/08/2022-08:01:19] [E] [TRT] parsers/onnx/ModelImporter.cpp:778: ERROR: parsers/onnx/ModelImporter.cpp:163 In function parseGraph:
[6] Invalid Node - /ArgMax
cannot create std::vector larger than max_size()
[12/08/2022-08:01:19] [E] Failed to parse onnx file
[12/08/2022-08:01:19] [E] Parsing model failed
[12/08/2022-08:01:19] [E] Failed to create engine from model or file.
[12/08/2022-08:01:19] [E] Engine set up failed
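
For reference, a small sketch (assuming only the onnx Python package) to locate the ArgMax node that the parser rejects, node number 614 in the log above:

import onnx

model = onnx.load('parseq_ref_sim.onnx')
for node in model.graph.node:
    if node.op_type == 'ArgMax':
        # Print the node name, its inputs, and attributes such as axis/keepdims.
        print(node.name, list(node.input), node.attribute)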

Environment
TensorRT Version: 8.4.1.5-1
NVIDIA GPU: tensorrt
NVIDIA Driver Version:
CUDA Version: cuda11.6
CUDNN Version: 8.6
Operating System: Linux
Python Version (if applicable): 3.8.13
Tensorflow Version (if applicable): No
PyTorch Version (if applicable): '1.13.0a0+08820cb'
Baremetal or Container (if so, version): No

Could someone please go through this and provide the piece of code to convert the model from ONNX to TRT?
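
For reference, the equivalent conversion via the TensorRT Python API would look roughly like the sketch below (a minimal example against TensorRT 8.4; it presumably reports the same error for this model, since the failure comes from the ONNX parser itself):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, engine_path):
    builder = trt.Builder(TRT_LOGGER)
    # Explicit-batch network, required when importing ONNX models.
    network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, 'rb') as f:
        if not parser.parse(f.read()):
            # Print all parser errors (e.g. the ArgMax error above) before giving up.
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError('Failed to parse ONNX model')
    config = builder.create_builder_config()
    # 1 GiB workspace; newer TensorRT releases use config.set_memory_pool_limit instead.
    config.max_workspace_size = 1 << 30
    serialized = builder.build_serialized_network(network, config)
    with open(engine_path, 'wb') as f:
        f.write(serialized)

build_engine('parseq_ref_sim.onnx', 'parseq_simple.engine')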
