Is your feature request related to a problem?
Following up on #2891, I’d like to request support for an asynchronous batch ingestion API that handles embedding ingestion after inference completes, instead of requiring the client to perform the ingestion.
What solution would you like?
An API that automatically ingests embeddings once they are generated by the model.
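To illustrate the requested behavior, here is a minimal sketch of the desired flow. All names (`run_inference`, `EmbeddingStore`, `batch_inference_with_auto_ingest`) are hypothetical stand-ins, not actual OpenSearch APIs; the point is that inference and ingestion happen in one server-side step, with no client round-trip:

```python
# Hypothetical sketch only -- these names are illustrative, not real APIs.

def run_inference(documents):
    # Stand-in for the ML model producing one embedding per document.
    return [{"doc_id": i, "embedding": [0.0] * 4}
            for i, _ in enumerate(documents)]

class EmbeddingStore:
    """Stand-in for the vector index that receives the embeddings."""
    def __init__(self):
        self.index = {}

    def bulk_ingest(self, records):
        for record in records:
            self.index[record["doc_id"]] = record["embedding"]

def batch_inference_with_auto_ingest(documents, store):
    """Desired behavior: the service ingests the embeddings itself
    as soon as batch inference finishes, instead of returning them
    for the client to ingest."""
    embeddings = run_inference(documents)
    store.bulk_ingest(embeddings)  # done by the service, not the client
    return len(embeddings)
```

Under the current behavior, the client would have to call the equivalent of `bulk_ingest` itself after receiving the batch inference results; the request is to fold that step into the service.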
What alternatives have you considered?
Allowing the client to handle the embedding ingestion after the embeddings are generated.
Do you have any additional context?
N/A