Thank you for excellent work. How about TRT batch inference? #123

Open
tungdq212 opened this issue Jan 3, 2024 · 1 comment
tungdq212 commented Jan 3, 2024

Thank you for the excellent work.

Detection models can now be exported to a TRT engine with batch size > 1. The inference code doesn't support batched input yet, though the exported engines can already be used in Triton Inference Server without issues.
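
For the Triton path, a batched request only changes the first dimension of the input tensor. A minimal sketch with the `tritonclient` HTTP API, assuming a SCRFD-style export; the model name `scrfd`, input name `input.1`, output name `score_8`, and the 640x640 input size are placeholders that depend on the actual model:

```python
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Eight preprocessed frames stacked into an NCHW batch (assumed 640x640 input).
batch = np.random.rand(8, 3, 640, 640).astype(np.float32)

inp = httpclient.InferInput("input.1", list(batch.shape), "FP32")
inp.set_data_from_numpy(batch)

result = client.infer(model_name="scrfd", inputs=[inp])
# Outputs keep the batch dimension; index per image when decoding detections.
scores = result.as_numpy("score_8")
```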

Is there any plan for this? Or how can I implement batch inference myself?
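
For reference, a minimal sketch of running such an engine yourself, assuming it was built with a dynamic batch dimension, using the TensorRT 8.x binding-index Python API with pycuda. The engine path is hypothetical, and binding 0 is assumed to be the (only) input:

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 - creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(path):
    # Deserialize a prebuilt TRT engine from disk.
    with open(path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

def infer_batch(engine, batch):
    # batch: contiguous float32 NCHW array; N must fit within the
    # optimization profile the engine was built with.
    context = engine.create_execution_context()
    # With a dynamic batch dimension, fix the input shape to the actual
    # batch size before querying output shapes and allocating buffers.
    context.set_binding_shape(0, batch.shape)
    bindings, outputs = [], []
    for i in range(engine.num_bindings):
        shape = tuple(context.get_binding_shape(i))
        dtype = trt.nptype(engine.get_binding_dtype(i))
        dev = cuda.mem_alloc(trt.volume(shape) * np.dtype(dtype).itemsize)
        bindings.append(int(dev))
        if engine.binding_is_input(i):
            cuda.memcpy_htod(dev, np.ascontiguousarray(batch))
        else:
            outputs.append((np.empty(shape, dtype=dtype), dev))
    context.execute_v2(bindings)
    for host, dev in outputs:
        cuda.memcpy_dtoh(host, dev)
    return [host for host, _ in outputs]

engine = load_engine("scrfd_batched.trt")  # hypothetical path
outs = infer_batch(engine, np.zeros((8, 3, 640, 640), np.float32))
```

Per-image post-processing (anchor decoding, NMS) then runs over each slice of the batched outputs.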

SthPhoenix (Owner) commented

Hi! Batch inference is already supported for all recognition models and for SCRFD and YOLOv5 family detection models.
