
Support dynamic shape mechanism for TensorRT #266

Open
luohao123 opened this issue Jan 9, 2022 · 4 comments
Labels: deployment (Inference acceleration for production) · enhancement (New feature or request) · help wanted (Extra attention is needed)

Comments

@luohao123

🐛 Describe the bug

error:

Error Code 4: Internal Error (Network must have at least one output)

I can visualize the yolov5 outputs using Netron, but when going through the whole ONNX model, TensorRT cannot detect the nodes marked as outputs, and parsing fails with the error above.

However, onnx2trt seems to be able to parse it.

Versions

Please help.

@zhiqwang
Owner

zhiqwang commented Jan 9, 2022

Hi @luohao123 ,

Thanks for creating this ticket. Which TensorRT version are you using? We only support TensorRT 8.0+ at the moment. Could you also supply a more detailed example that reproduces this error, so that we can debug it?

@luohao123
Author

@zhiqwang I think I forgot to initialize the plugins (initPlugin) when parsing with TensorRT, since the model contains a BatchedNMSPlugin.
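For reference, a minimal sketch of that plugin-initialization step, assuming the TensorRT Python API (the function name and ONNX path here are illustrative, not yolort's actual code):

```python
def parse_onnx_with_plugins(onnx_path):
    """Parse an ONNX model that references TensorRT plugins such as
    BatchedNMS_TRT.

    The key step is trt.init_libnvinfer_plugins(), which registers the
    built-in plugin creators before the parser runs; without it the plugin
    nodes cannot be resolved and the build can fail with errors like
    "Network must have at least one output".
    """
    import tensorrt as trt  # imported lazily so this sketch loads without TensorRT

    logger = trt.Logger(trt.Logger.INFO)
    trt.init_libnvinfer_plugins(logger, "")  # register BatchedNMS_TRT etc.

    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    return builder, network
```

The plugin registration must happen before `parser.parse()` is called, otherwise the parser cannot look up the `BatchedNMS_TRT` creator.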

However, I actually want to set the model input width and height at conversion time rather than at export time. So I set enable_dynamic to True, but got an error like this:

[TensorRT] INFO: Searching for plugin: BatchedNMS_TRT, plugin_version: 1, plugin_namespace: 
[TensorRT] WARNING: builtin_op_importers.cpp:4779: Attribute scoreBits not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
[TensorRT] INFO: Successfully created plugin: BatchedNMS_TRT
[TensorRT] ERROR: batched_nms: PluginV2Layer must be V2DynamicExt when there are runtime input dimensions.

How can I change the model input width and height after converting to ONNX?

@zhiqwang
Owner

zhiqwang commented Jan 9, 2022

Hi @luohao123 ,

I actually want to set the model input width and height at conversion time rather than at export time.

Got it. Currently we don't support the dynamic shape mechanism for TensorRT in yolort. We will implement this feature as soon as possible, and a PR for this is welcome.
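For anyone exploring an implementation: dynamic shapes in TensorRT are expressed through optimization profiles attached to the builder config. A minimal sketch, assuming the TensorRT 8.x Python API (the input name "images" and the shape ranges are hypothetical, and every plugin in the network would need to implement IPluginV2DynamicExt, per the error above):

```python
def make_profile_shapes(batch, min_hw, opt_hw, max_hw):
    """Build the (min, opt, max) NCHW shape tuples for one optimization profile."""
    return (
        (batch, 3, *min_hw),
        (batch, 3, *opt_hw),
        (batch, 3, *max_hw),
    )


def build_dynamic_engine(builder, network, input_name="images"):
    """Attach an optimization profile so the engine accepts a range of input
    resolutions at runtime.

    Requires the ONNX model to have been exported with dynamic height/width
    axes, and every layer (including plugins such as BatchedNMS_TRT) to
    support dynamic dimensions via IPluginV2DynamicExt.
    """
    import tensorrt as trt  # imported lazily so this sketch loads without TensorRT

    config = builder.create_builder_config()
    profile = builder.create_optimization_profile()
    min_s, opt_s, max_s = make_profile_shapes(1, (320, 320), (640, 640), (1280, 1280))
    profile.set_shape(input_name, min=min_s, opt=opt_s, max=max_s)
    config.add_optimization_profile(profile)
    return builder.build_engine(network, config)
```

At inference time the actual input shape would then be selected with `context.set_binding_shape()` before executing, as long as it falls inside the profile's min/max range.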

@zhiqwang zhiqwang added enhancement New feature or request help wanted Extra attention is needed labels Jan 9, 2022
@zhiqwang zhiqwang changed the title ONNXGraphsurgeon output model can not detect output Support dynamic shape mechanism for TensorRT Jan 9, 2022
@zhiqwang zhiqwang added the deployment Inference acceleration for production label Feb 12, 2022
@omair18

omair18 commented Nov 12, 2022

Hi @zhiqwang. Is there any update on the dynamic shape mechanism for TensorRT?
