
Would you support conversion from torch to onnx and ncnn? #46

Open
stereomatchingkiss opened this issue Jul 22, 2024 · 14 comments
Labels: enhancement (New feature or request)

Comments

@stereomatchingkiss

As the title says, one of the strengths of YOLOv9 is its relatively high accuracy at a smaller size and faster speed, which makes it a great fit for embedded devices. I think it would be a nice feature for this project.

stereomatchingkiss added the enhancement (New feature or request) label on Jul 22, 2024
@ramonhollands
Contributor

@henrytsui000 I would like to contribute to this one. Is there any work done on exporting to different formats like ONNX, CoreML and TFLite?

@henrytsui000
Member

> @henrytsui000 I would like to contribute to this one. Is there any work done on exporting to different formats like ONNX, CoreML and TFLite?

Thanks a lot!

You may find some existing code here:
https://github.com/WongKinYiu/YOLO/blob/dc88787a7f6fff89d6e60fb63ee20a4be1721b34/yolo/utils/deploy_utils.py#L11

To be honest, I'm not sure if the code is robust enough, but you can activate it using the following command:

python yolo/lazy.py task=inference task.fast_inference=onnx 

Currently, it only supports ONNX and TensorRT.
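
For TensorRT, the analogous command should be the following (judging from the "trt" compiler string in deploy_utils.py, so treat it as an educated guess rather than documented usage):

python yolo/lazy.py task=inference task.fast_inference=trt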

If you're willing, you can add support for CoreML and TFLite, and help make the code more robust.

Best regards,
Henry Tsui

@ramonhollands
Contributor

Thanks for your reply. I'm going to check it out and try to contribute to this in the coming weeks!

@ramonhollands
Contributor

@henrytsui000
I think we want to remove the auxiliary branch in all export formats, right?

'''
if self.compiler == "onnx":
    return self._load_onnx_model(device)
elif self.compiler == "trt":
    return self._load_trt_model().to(device)
elif self.compiler == "deploy":
    self.cfg.model.model.auxiliary = {}
'''

@henrytsui000
Member

Yes, the auxiliary head is only used to train the model.
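
If you want to export without the FastModelLoader helper, a plain torch.onnx.export after clearing the auxiliary config would look roughly like this (an untested sketch; create_model is a hypothetical stand-in for however the repo builds the model from its config):

'''
import torch

cfg.model.model.auxiliary = {}       # drop the aux head, as in the snippet above
model = create_model(cfg.model)      # hypothetical: build the model from the config
model.eval()

dummy = torch.zeros(1, 3, 640, 640)  # NCHW dummy input
torch.onnx.export(model, dummy, "yolov9.onnx",
                  input_names=["input"], opset_version=17)
'''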

@ramonhollands
Contributor

@henrytsui000

Thanks for your reply. I managed to export to TFLite and am now in the process of getting everything running on the GPU.

For that, can we get rid of the Conv3ds in the network? Can we rewrite them as Conv2ds in some manner? I think I can find a workaround for the other dimension-related problems, since they are all in the detection head.

'''
09-11 16:31:08.935 26380 26380 E tflite  : Following operations are not supported by GPU delegate:
09-11 16:31:08.935 26380 26380 E tflite  : CONV_3D: Operation is not supported.
09-11 16:31:08.935 26380 26380 E tflite  : GATHER: Only support 1D indices
09-11 16:31:08.935 26380 26380 E tflite  :
09-11 16:31:08.935 26380 26380 E tflite  : RESHAPE: OP is supported, but tensor type/shape isn't compatible.
09-11 16:31:08.935 26380 26380 E tflite  : SOFTMAX: OP is supported, but tensor type/shape isn't compatible.
09-11 16:31:08.935 26380 26380 E tflite  : TRANSPOSE: OP is supported, but tensor type/shape isn't compatible.
09-11 16:31:08.935 26380 26380 E tflite  : 763 operations will run on the GPU, and the remaining 15 operations will run on the CPU.
'''
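
One direction I'm considering: if those Conv3d layers use 1x1x1 kernels (an assumption I still need to verify against the head), they can be folded into a Conv2d by moving the depth dimension into the batch dimension, for example:

'''
import torch
import torch.nn as nn

def conv3d_1x1_to_conv2d(conv3d: nn.Conv3d) -> nn.Conv2d:
    """Build a Conv2d carrying the same weights as a 1x1x1 Conv3d."""
    assert conv3d.kernel_size == (1, 1, 1)
    conv2d = nn.Conv2d(conv3d.in_channels, conv3d.out_channels,
                       kernel_size=1, bias=conv3d.bias is not None)
    conv2d.weight.data = conv3d.weight.data.squeeze(2)  # (O, I, 1, 1, 1) -> (O, I, 1, 1)
    if conv3d.bias is not None:
        conv2d.bias.data = conv3d.bias.data
    return conv2d

# sanity check: fold the depth dim into the batch dim around the 2D conv
x = torch.randn(1, 16, 4, 80, 80)  # (N, C, D, H, W)
c3 = nn.Conv3d(16, 1, kernel_size=1)
c2 = conv3d_1x1_to_conv2d(c3)
n, c, d, h, w = x.shape
y = c2(x.permute(0, 2, 1, 3, 4).reshape(n * d, c, h, w))
y = y.reshape(n, d, -1, h, w).permute(0, 2, 1, 3, 4)
assert torch.allclose(c3(x), y, atol=1e-6)
'''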

@SamSamhuns

Any updates on this? I am trying to export the model to ONNX/TensorRT as well, but I don't see any explicit export file/module other than the unused export inside FastModelLoader in YOLO/yolo/utils/deploy_utils.py.

Are we supposed to use deploy_utils.py inside our own code to export the model properly right now?

@HopeSuffers

@SamSamhuns Same issue here. I'm able to run inference with ONNX but have found nothing on exporting to ONNX. @henrytsui000 Do you have any info on whether this is currently supported?

Lightning can export to ONNX in general:
https://lightning.ai/docs/pytorch/stable/deploy/production_advanced.html
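
For example, something like this (a sketch; lit_model stands in for whatever LightningModule the trainer uses here):

'''
import torch

# `lit_model` is a stand-in for the repo's LightningModule wrapping YOLOv9
input_sample = torch.zeros(1, 3, 640, 640)
lit_model.to_onnx("yolov9.onnx", input_sample, export_params=True)
'''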

@mgreyes427

Any update on this? I converted the model to ONNX using FastModelLoader, and inference runs correctly using the InferenceSession. But I don't know how to parse the output of the network. The output layers and shapes:

'''
[('output', (1, 80, 80, 80)), ('2801', (1, 16, 4, 80, 80)), ('2806', (1, 4, 80, 80)), ('2824', (1, 80, 40, 40)), ('2836', (1, 16, 4, 40, 40)), ('2841', (1, 4, 40, 40)), ('2859', (1, 80, 20, 20)), ('2871', (1, 16, 4, 20, 20)), ('2876', (1, 4, 20, 20)), ('3176', (1, 80, 80, 80)), ('3188', (1, 16, 4, 80, 80)), ('3193', (1, 4, 80, 80)), ('3211', (1, 80, 40, 40)), ('3223', (1, 16, 4, 40, 40)), ('3228', (1, 4, 40, 40)), ('3246', (1, 80, 20, 20)), ('3258', (1, 16, 4, 20, 20)), ('3263', (1, 4, 20, 20))]
'''
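
My best guess so far, assuming the (1, 16, 4, H, W) tensors are DFL bin logits for the l/t/r/b distances and the (1, 80, H, W) tensors are class logits (which matches common YOLOv9-style heads, but I haven't confirmed it for this export), is to decode one scale like this:

'''
import numpy as np

def decode_scale(cls_logits, dfl_logits, stride, reg_max=16):
    # cls_logits: (1, 80, H, W) class logits        (assumption)
    # dfl_logits: (1, 16, 4, H, W) DFL bin logits   (assumption)
    _, nc, h, w = cls_logits.shape
    # softmax over the 16 bins; the expected bin index is the distance in cells
    z = dfl_logits - dfl_logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    bins = np.arange(reg_max).reshape(1, reg_max, 1, 1, 1)
    dist = (p * bins).sum(axis=1)[0]  # (4, H, W): l, t, r, b
    # anchor centers on the feature grid
    xs, ys = np.meshgrid(np.arange(w) + 0.5, np.arange(h) + 0.5)
    boxes = np.stack([(xs - dist[0]) * stride,
                      (ys - dist[1]) * stride,
                      (xs + dist[2]) * stride,
                      (ys + dist[3]) * stride], axis=-1).reshape(-1, 4)
    scores = 1.0 / (1.0 + np.exp(-cls_logits[0]))  # sigmoid, (80, H, W)
    return boxes, scores.reshape(nc, -1).T         # then threshold + NMS
'''

With a 640 input, the 80/40/20 grids would use strides 8/16/32. The two blocks of nine outputs (2801... vs 3176...) might be the auxiliary and main branches; if so, only one set is needed at inference (see the auxiliary-head discussion above).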

@verrassendhollands

@henrytsui000
I think it would be nice to create a separate export task. Do you agree? I created a placeholder to show the direction I'm aiming for:
https://github.com/ramonhollands/YOLO/tree/add-export-task

Besides that: on torch.jit.trace I get a strange error message. Any idea what is causing it?

'''
RuntimeError: Encountering a dict at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for list, use a tuple instead. for dict, use a NamedTuple instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.
'''
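
The message itself suggests two ways out: pass strict=False, or wrap the model so it returns a tuple instead of a dict (an illustrative sketch):

'''
import torch

class TupleOutput(torch.nn.Module):
    # illustrative wrapper: give the tracer a constant container structure
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        out = self.model(x)
        return tuple(out.values()) if isinstance(out, dict) else out

# traced = torch.jit.trace(TupleOutput(model).eval(), example_input)
# or accept the caveat the message describes:
# traced = torch.jit.trace(model, example_input, strict=False)
'''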

@verrassendhollands

PS: I'm not able to create a WIP PR?


@verrassendhollands

Found a workaround for the torch.jit.trace issue and added CoreML export:
ramonhollands@6554034

The next step will be the TFLite export.
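
For anyone wanting to follow along, the coremltools side of such a conversion typically looks like this (a sketch under my assumptions, not the exact code in that commit; wrapped_model and example_input are stand-ins):

'''
import coremltools as ct
import torch

# trace first (see the dict-output workaround above); names are stand-ins
traced = torch.jit.trace(wrapped_model, example_input)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="image", shape=(1, 3, 640, 640))],
)
mlmodel.save("yolov9.mlpackage")
'''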

@ProfessorHT

Hello,

I am currently working on an iOS application that requires object detection, and I'm interested in using the YOLOv9 model for this purpose. I would like to ask whether there is any update on complete code to load and export the YOLOv9 model to CoreML.

Thank you in advance!

@ramonhollands
Contributor

I managed to add a pull request: #174, including CoreML and TFLite export. Besides that, you can also run inference with CoreML and TFLite.

To use the CoreML export in Swift, you have to implement the 'Vec2Box' class in Swift code. I will dive into this myself in the coming month as well.
