Would you support conversion from torch to onnx and ncnn? #46
@henrytsui000 I would like to contribute to this one. Is there any work done on exporting to different formats like ONNX, CoreML, and TFLite?
Thanks a lot! You may find some existing code here: To be honest, I'm not sure if the code is robust enough, but you can activate it using the following command:

```shell
python yolo/lazy.py task=inference task.fast_inference=onnx
```

Currently, it only supports ONNX and TensorRT. If you're willing, you can add support for CoreML and TFLite, and help make the code more robust. Best regards,
Thanks for your reply. Going to check it out and try to contribute to this in the next weeks!
@henrytsui000

```python
if self.compiler == "onnx":
```
Yes, the auxiliary head is only used to train the model.
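Since the auxiliary head is training-only, an export path can simply drop it. Below is a hedged sketch of that idea; the attribute names (`backbone`, `main_head`, `aux_head`) and the `deploy` flag are illustrative assumptions, not the repo's actual structure:

```python
import torch
import torch.nn as nn

class DetectModel(nn.Module):
    """Toy detector: the auxiliary head exists only when not deploying."""

    def __init__(self, deploy: bool = False):
        super().__init__()
        self.backbone = nn.Conv2d(3, 8, 3, padding=1)
        self.main_head = nn.Conv2d(8, 4, 1)
        # Skip building the aux head entirely for deployment/export,
        # so it never appears in the traced/exported graph.
        self.aux_head = None if deploy else nn.Conv2d(8, 4, 1)

    def forward(self, x):
        feat = self.backbone(x)
        out = self.main_head(feat)
        if self.aux_head is not None and self.training:
            return out, self.aux_head(feat)  # training: both heads
        return out                           # inference: main head only
```

With `deploy=True`, the exported graph contains only the main head, which is what an ONNX/CoreML/TFLite conversion needs.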
Thanks for your reply. I managed to export to TFLite and am now working on getting everything running on GPU. For that, can we get rid of the Conv3d layers in the network? Can we rewrite them as Conv2d in some manner? I think I can find a workaround for the other dimension-related problems, since they are all in the detection head.
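One common rewrite, when it applies: a `Conv3d` whose kernel size is 1 along the depth axis is mathematically equivalent to a `Conv2d` applied with the depth dimension folded into the batch. A sketch of the equivalence (whether the repo's Conv3d layers all have this shape is an assumption to verify):

```python
import torch
import torch.nn as nn

# A Conv3d with kernel (1, k, k) only mixes channels per depth slice,
# so it equals a Conv2d run independently on each slice.
conv3d = nn.Conv3d(4, 6, kernel_size=(1, 3, 3), padding=(0, 1, 1))
conv2d = nn.Conv2d(4, 6, kernel_size=3, padding=1)
with torch.no_grad():
    # (out, in, 1, k, k) -> (out, in, k, k)
    conv2d.weight.copy_(conv3d.weight.squeeze(2))
    conv2d.bias.copy_(conv3d.bias)

x = torch.randn(2, 4, 5, 16, 16)  # (N, C, D, H, W)
ref = conv3d(x)

# Fold depth into batch, run Conv2d, unfold back.
n, c, d, h, w = x.shape
y = conv2d(x.permute(0, 2, 1, 3, 4).reshape(n * d, c, h, w))
y = y.reshape(n, d, 6, h, w).permute(0, 2, 1, 3, 4)

assert torch.allclose(ref, y, atol=1e-5)
```

Kernels that actually span the depth axis (size > 1) cannot be folded this way without extra gather/concat logic.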
Any updates on this? I am trying to export the model to ONNX/TensorRT as well, but I do not see any explicit export script. Are we supposed to use the `task.fast_inference` flag mentioned above?
@SamSamhuns Same issue here. I'm able to run inference with ONNX but have found nothing on exporting to ONNX. @henrytsui000 Do you have any info on whether this is currently supported? Lightning has the ability to export to ONNX in general.
Any update about this? I converted the model to ONNX using
@henrytsui000 Next to that:
Found a workaround for the `torch.jit.trace` issue and added CoreML export. Next step will be the TFLite export.
Hello, I am currently working on an iOS application that requires object detection functionality, and I'm interested in using the YOLOv9 model for this purpose. I would like to ask if there is any update on complete code to load and export the YOLOv9 model to CoreML. Thank you in advance!
I managed to add a pull request: #174, including CoreML and TFLite export. Next to that, you can also run inference with CoreML and TFLite. To use the CoreML export in Swift, you have to implement the `Vec2Box` class in Swift code. In the coming month I will dive into this myself as well.
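To get a feel for what a `Vec2Box`-style decode has to do before porting it to Swift, here is a hedged, dependency-free sketch: turn a cell's predicted (left, top, right, bottom) distances into an absolute xyxy box around its anchor point. The function name and signature are illustrative; the repo's actual `Vec2Box` handles anchor grids per stride and batched tensors:

```python
def vec2box(anchor_xy, ltrb, stride):
    """Decode (l, t, r, b) distances around an anchor point into an
    (x1, y1, x2, y2) box in image pixels.

    anchor_xy: anchor center in grid units, e.g. (col + 0.5, row + 0.5)
    ltrb:      predicted distances to the four box edges, in grid units
    stride:    pixels per grid cell for this feature level
    """
    ax, ay = anchor_xy
    l, t, r, b = ltrb
    return ((ax - l) * stride, (ay - t) * stride,
            (ax + r) * stride, (ay + b) * stride)

print(vec2box((4.5, 4.5), (1.0, 1.0, 2.0, 2.0), 8))
# -> (28.0, 28.0, 52.0, 52.0)
```

The same arithmetic translates line-for-line into Swift, which is why moving this decode out of the exported graph and into app code is a workable split.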
As the title mentions, one of the strengths of YOLOv9 is its relatively high accuracy at a smaller size and faster speed, making it a great fit for embedded devices. I think this would be a nice feature for the project.