
Add qualcomm SNPE compatible ONNX #248

Open
hansoullee20 opened this issue Dec 14, 2021 · 4 comments
Labels: deployment (Inference acceleration for production), enhancement (New feature or request)

Comments

@hansoullee20

🚀 The feature

Hello again

Do you offer an ONNX model that can be simplified?

Motivation, pitch

I am trying to use YOLOv5 on Qualcomm SNPE (https://developer.qualcomm.com/sites/default/files/docs/snpe/overview.html) and would need the ONNX model to be compatible.

Alternatives

No response

Additional context

No response

@zhiqwang (Owner) commented Dec 14, 2021

Hi @hansoullee20

Sorry, we do not currently offer an ONNX model that is compatible with Snapdragon Neural Processing Engine.

@hansoullee20 (Author)

@zhiqwang Thank you for the reply. Is there still a way to simplify the ONNX model using onnxsim?

@zhiqwang zhiqwang added the enhancement New feature or request label Dec 14, 2021
@zhiqwang (Owner) commented Dec 14, 2021

Hi @hansoullee20 ,

We currently place both the pre-processing (interpolation operator) and the post-processing (NMS) into the ONNX graph. I did a quick check of SNPE's documentation, and it seems they don't support these ops very well.

I guess we can use the export.py in ultralytics/yolov5 with

python export.py --weights path/to/your/model.pt --include onnx --simplify --train

to get an SNPE-compatible ONNX model.
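As a sketch of the onnxsim route asked about above: an already-exported ONNX model can also be simplified with the standalone onnxsim CLI from the onnx-simplifier package, separately from ultralytics' export.py. The file paths below are illustrative placeholders, and whether the simplified graph ultimately passes SNPE's converter still depends on which ops (e.g. NMS, Resize) remain in it:

```shell
# Install the standalone simplifier (provides the `onnxsim` CLI).
pip install onnx-simplifier

# Simplify an already-exported model; paths here are placeholders.
# onnxsim folds constants and removes redundant nodes, writing the
# simplified graph to the second path.
onnxsim path/to/your/model.onnx path/to/your/model-sim.onnx
```

Exporting with `--train` (as in the export.py command above) is still the likelier fix for SNPE, since it avoids baking the post-processing into the graph in the first place; onnxsim alone cannot remove ops that the graph genuinely depends on.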

@hansoullee20 (Author)

@zhiqwang Thank you for your kind response.

@zhiqwang zhiqwang changed the title simplifying onnx model Add qualcomm SNPE compatible ONNX Dec 15, 2021
@zhiqwang zhiqwang added the deployment Inference acceleration for production label Feb 12, 2022

2 participants