I have a fine-tuned TrOCR model, and I'm loading it with `from optimum.onnxruntime import ORTModelForVision2Seq`.
How can I make inference faster when someone sends a request to an API endpoint? I'm already using async to handle multiple requests.
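One thing to check first: `async` alone doesn't speed up CPU-bound inference, because a blocking `model.generate(...)` call holds the event loop and serializes all requests. A common fix is to offload each inference call to a worker thread. Below is a minimal sketch of that pattern; `blocking_ocr` is a placeholder standing in for the real call (e.g. `processor.batch_decode(model.generate(pixel_values))` on the `ORTModelForVision2Seq` instance), and the function names and worker count are illustrative assumptions, not part of the Optimum API.

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

# Placeholder for the blocking inference call. In the real service this
# would run the ONNX Runtime session (model.generate + decoding), which
# releases the GIL while computing, so threads overlap usefully.
def blocking_ocr(image_id: int) -> str:
    return f"text-for-image-{image_id}"

# One shared executor for the whole app; size max_workers to how many
# concurrent inference calls the machine can actually sustain.
executor = ThreadPoolExecutor(max_workers=4)

async def handle_request(image_id: int) -> str:
    loop = asyncio.get_running_loop()
    # Offload the CPU-bound work so the event loop stays free to
    # accept and serve other requests while this one computes.
    return await loop.run_in_executor(executor, blocking_ocr, image_id)

async def main() -> list:
    # Simulate several requests arriving concurrently.
    return await asyncio.gather(*(handle_request(i) for i in range(3)))

if __name__ == "__main__":
    print(asyncio.run(main()))
```

In a FastAPI endpoint the `handle_request` body goes inside the route handler. Beyond this, the usual ONNX-side levers are quantizing the model (e.g. with Optimum's `ORTQuantizer`), enabling graph optimizations in the session options, and batching requests that arrive close together.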