
[FEATURE] Document ingestion with offline batch inference #3428

Open · heemin32 opened this issue Jan 24, 2025 · 0 comments
Labels: enhancement (New feature or request)

Is your feature request related to a problem?
Following up on #2891, I’d like to request support for the asynchronous batch ingestion API so that embeddings are ingested automatically once offline batch inference completes, instead of requiring the client to perform the ingestion itself.

What solution would you like?
An API that automatically ingests the embeddings into the target index once they are generated by the model during offline batch inference.
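
For illustration, here is a minimal sketch of how a client might trigger such an API, assuming an endpoint along the lines of the async batch ingestion API proposed in #2891 (`_plugins/_ml/_batch_ingestion`). The endpoint, request fields, JSONPath expressions, and S3 path below are illustrative assumptions, not a settled design:

```python
# Hedged sketch: triggering automatic embedding ingestion from the output of
# an offline batch inference job. Endpoint name, request fields, and paths
# are illustrative assumptions based on #2891, not a final API.
import requests

OPENSEARCH = "https://localhost:9200"  # assumed cluster endpoint

# Point the ingestion job at the batch inference output so the cluster
# ingests the generated embeddings itself, instead of the client
# downloading the output and re-uploading documents.
resp = requests.post(
    f"{OPENSEARCH}/_plugins/_ml/_batch_ingestion",
    json={
        "index_name": "my-nlp-index",                   # target index (illustrative)
        "field_map": {
            "title": "$.content[0]",                    # source text field (illustrative JSONPath)
            "title_embedding": "$.SageMakerOutput[0]",  # generated embedding (illustrative JSONPath)
        },
        "data_source": {
            "type": "s3",                               # assumed: batch output stored in S3
            "source": ["s3://my-bucket/batch-output/results.json.out"],
        },
    },
    auth=("admin", "admin"),  # placeholder credentials
    verify=False,
)
print(resp.json())  # presumably returns a task ID for the async ingestion job
```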

What alternatives have you considered?
Leaving the client to handle the embedding ingestion after the embeddings are generated, as is the case today.
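
For comparison, a minimal sketch of this client-side alternative using the standard `_bulk` API; the index name, credentials, and document shape are illustrative:

```python
# Sketch of the status quo: the client fetches the batch inference output
# itself and ingests the embeddings through the standard _bulk API.
import json
import requests

OPENSEARCH = "https://localhost:9200"  # assumed cluster endpoint

# Assume the client has already downloaded the batch inference results,
# e.g. one record per document with its generated embedding.
results = [
    {"_id": "doc-1", "title": "first doc", "title_embedding": [0.1, 0.2, 0.3]},
    {"_id": "doc-2", "title": "second doc", "title_embedding": [0.4, 0.5, 0.6]},
]

# Build the newline-delimited _bulk payload: an action line followed by
# the document source for each record.
lines = []
for doc in results:
    lines.append(json.dumps({"index": {"_index": "my-nlp-index", "_id": doc["_id"]}}))
    lines.append(json.dumps({"title": doc["title"], "title_embedding": doc["title_embedding"]}))
payload = "\n".join(lines) + "\n"

resp = requests.post(
    f"{OPENSEARCH}/_bulk",
    data=payload,
    headers={"Content-Type": "application/x-ndjson"},
    auth=("admin", "admin"),  # placeholder credentials
    verify=False,
)
print(resp.json())
```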

Do you have any additional context?
N/A
