
Python: Nvidia Embedding Connector #10410

Draft · wants to merge 4 commits into base: main

Conversation

@raspawar commented Feb 5, 2025

Nvidia Embedding Connector

Description

Contribution Checklist

@markwallace-microsoft markwallace-microsoft added the python Pull requests for the Python Semantic Kernel label Feb 5, 2025
@github-actions github-actions bot changed the title Nvidia Embedding Connector Python: Nvidia Embedding Connector Feb 5, 2025

"""Specific settings for the text embedding endpoint."""

input: str | list[str] | list[int] | list[list[int]] | None = None

the OpenAI Embedding API supports list[int] and list[list[int]] to accept pre-tokenized input. the NeMo Retriever Embedding API does not support this.


@mattf left a comment

this is looking good. i've added some comments and questions.

"""Send a request to the OpenAI embeddings endpoint."""
try:
# exclude input-type from main body
response = await self.client.embeddings.create(**settings.prepare_settings_dict(exclude="input_type"))

why exclude input_type?

Author

Because the OpenAI client does not allow an input_type parameter, I excluded it from the main dict and added it to extra_body.
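The pattern described here can be sketched as a plain dict split: the NVIDIA-specific key is popped out of the OpenAI-compatible kwargs and routed through extra_body, which the OpenAI client forwards verbatim in the request JSON. Model name and values below are illustrative only:

```python
# Settings as they would come out of prepare_settings_dict (illustrative values).
settings = {
    "model": "nvidia/nv-embedqa-e5-v5",
    "input": ["some text"],
    "input_type": "passage",
}

# embeddings.create() rejects unknown keyword arguments, so strip the
# NVIDIA-only parameter out of the main kwargs...
input_type = settings.pop("input_type", None)

# ...and re-attach it via extra_body, which the client merges into the
# request payload without validating it.
request_kwargs = {**settings, "extra_body": {"input_type": input_type}}

# request_kwargs can now be passed as:
# await client.embeddings.create(**request_kwargs)
```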

As well as any fields that are None.
"""
return self.model_dump(
exclude={"service_id", "extension_data", "structured_json_response", "input_type"},

does the input_type need to be passed as extra_body?

for i in range(0, len(texts), batch_size):
batch = texts[i : i + batch_size]
settings.input = batch
raw_embedding = await self._send_request(settings=settings)

it looks like batch_size is controlling the number of texts that are sent for embedding in a single call to the service.

each of those batches can also be sent in parallel.

Member

indeed, batch size refers to how many characters are included in each call; if this can be parallelized, please do. Also, moving the actual embedding calls here makes this simpler (vs the handler, see earlier comment).
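The sequential loop quoted above can be parallelized with asyncio.gather. A minimal sketch, with a stand-in `_send_request` replacing the real service call (the helper name and dummy vectors are hypothetical):

```python
import asyncio


async def _send_request(batch: list[str]) -> list[list[float]]:
    # Stand-in for the real embeddings call; returns one dummy vector per text.
    await asyncio.sleep(0)
    return [[float(len(text))] for text in batch]


async def embed_all(texts: list[str], batch_size: int) -> list[list[float]]:
    batches = [texts[i : i + batch_size] for i in range(0, len(texts), batch_size)]
    # Fire all batch requests concurrently instead of awaiting them one by one.
    results = await asyncio.gather(*(_send_request(b) for b in batches))
    # Flatten per-batch results back into one list, preserving input order.
    return [vec for batch_result in results for vec in batch_result]


embeddings = asyncio.run(embed_all(["a", "bb", "ccc"], batch_size=2))
```

Note that the quoted loop mutates a shared `settings.input` between awaits; in a concurrent version each batch needs its own settings copy (or, as here, the batch is passed directly).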

def to_dict(self) -> dict[str, str]:
"""Create a dict of the service settings."""
client_settings = {
"api_key": self.client.api_key,

what are the security implications of including the api_key here?
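One mitigation, if the key must live on a serializable settings object at all, is pydantic's SecretStr, which masks the value in dumps and repr and only reveals it on an explicit call. A hypothetical sketch (class and field names are illustrative, not the PR's actual API):

```python
from pydantic import BaseModel, SecretStr


class NvidiaServiceSettings(BaseModel):
    """Hypothetical sketch: store the key as SecretStr so casual
    serialization and logging do not leak it."""

    api_key: SecretStr
    base_url: str


settings = NvidiaServiceSettings(
    api_key="nvapi-secret-value",
    base_url="https://integrate.api.nvidia.com/v1",
)

# model_dump() keeps the SecretStr wrapper; str() of it is masked.
masked = str(settings.model_dump()["api_key"])

# The raw value is only available via an explicit accessor.
raw = settings.api_key.get_secret_value()
```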

(Env var NVIDIA_BASE_URL)
- chat_model_id: The NVIDIA chat model ID to use see https://docs.api.nvidia.com/nim/reference/llm-apis.
(Env var NVIDIA_CHAT_MODEL_ID)
- text_model_id: str | None - The NVIDIA text model ID to use, for example, nvidia/nemotron-4-340b-reward.

there are also vision language models (VLMs), which take text or images as input and produce text

Author
@raspawar commented Feb 6, 2025

Yes, will add when working on VLM support.

Member

those models are supported with the ChatCompletionClientBase as well

model=ai_model_id,
dimensions=embedding_dimensions,
encoding_format="float",
extra_body={"input_type": "passage"},

this demonstrates there are two ways to pass input_type. i suggest only providing one.

Author

I think this is right, as I am validating the request body, and in that validation input_type goes in extra_body.

@raspawar raspawar marked this pull request as ready for review February 6, 2025 11:12
@raspawar raspawar requested a review from a team as a code owner February 6, 2025 11:12
@raspawar raspawar mentioned this pull request Feb 6, 2025
10 tasks
@raspawar raspawar marked this pull request as draft February 6, 2025 16:26
Member
@eavanvalkenburg left a comment

Left a bunch of comments; the most important is to simplify this by a lot!

class NvidiaModelTypes(Enum):
"""Nvidia model types, can be text, chat or embedding."""

TEXT = "text"
Member

if we don't support these modalities, they shouldn't be in here (yet)

model: str = None
encoding_format: Literal["float", "base64"] | None = "float" # default to float
truncate: Literal[None, "NONE", "START", "END"] | None = None
input_type: Literal["passage", "query"] | None = "passage" # default to passage
Member

it is preferred to set defaults to None if the service itself has default values; that way we don't get out of sync with the defaults documented by the services

"""Specific settings for the text embedding endpoint."""

input: str | list[str] | list[int] | list[list[int]] | None = None
model: str = None
Member

model is a reserved term for pydantic; please use an alternative with an alias. For instance, for OpenAI we use:
ai_model_id: Annotated[str | None, Field(serialization_alias="model")] = None

class NvidiaEmbeddingPromptExecutionSettings(NvidiaPromptExecutionSettings):
"""Settings for NVIDIA embedding prompt execution."""

"""Specific settings for the text embedding endpoint."""
Member

please combine into one docstring

RESPONSE_TYPE = Union[list[Any],]


class NvidiaHandler(KernelBaseModel, ABC):
Member

is the goal to also add chat and/or text completions or other modalities? If not, or if only one other, then we don't need all this extra stuff that we have for OpenAI (since that supports a lot of things and has two configs, Azure and OpenAI); we could simplify this PR by a lot and only extend the shared pieces later on.

# move input_type to extra-body
if not settings.extra_body:
settings.extra_body = {}
settings.extra_body.setdefault("input_type", settings.input_type)
Member

can't we move this logic into the prepare_settings_dict?


and more information refer https://docs.api.nvidia.com/nim/reference/
use endpoint if you only want to supply the endpoint.
(Env var NVIDIA_BASE_URL)
- chat_model_id: The NVIDIA chat model ID to use see https://docs.api.nvidia.com/nim/reference/llm-apis.
Member

since chat and text are not supported yet, please remove.


Labels
documentation python Pull requests for the Python Semantic Kernel
4 participants