TL;DR: When the target HuggingFace API backend is H2O's ModelHub, automatically pass along the user's access token.
Full info below.
Background
ModelHub (MH) is the H2O service that implements a subset of the HuggingFace APIs on top of services in the H2O platform. It's the successor to Models Hub (...yes, the name overlap is a bit confusing at first).
MH is meant to run securely in customer environments (both airgapped and non-airgapped) and therefore implements several "enterprise" features (e.g., authentication, authorization, scalability, some auditing).
Differences from (legacy) Models Hub (limited to what's relevant for this issue)
Authentication. Requests to MH are verified to come from legitimate users (or from apps forwarding user tokens). Just as H2O services like DAI/Appstore/Drive/etc. require that a valid access token be passed in, so does MH.
Authorization. MH verifies that the user has permission to access the resources they're requesting. MH integrates with H2O AuthZ and is H2O workspace-aware. This also requires that a valid access token be passed in.
Issue
LLM Studio supports (legacy) "Models Hub" by way of `HF_ENDPOINT` being set in the environment. Because ModelHub (new) enforces security, it also requires the equivalent of an `HF_TOKEN`.
While the app does have a field in the UI for setting this token, it is cumbersome for users to locate the access token and set it before each operation (access tokens expire every few minutes).
Since users are unlikely to do that, and access tokens are currently required, calls to ModelHub will effectively fail.
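As a minimal sketch, the app could detect whether the configured endpoint is a ModelHub instance before deciding to attach a token. The substring heuristic below is a hypothetical placeholder; a real implementation would use whatever marker the deployment actually provides:

```python
import os


def is_modelhub_endpoint(endpoint=None):
    """Return True if the HF endpoint appears to be an H2O ModelHub instance.

    The "modelhub" substring check is a placeholder heuristic, not the
    real detection mechanism.
    """
    if endpoint is None:
        endpoint = os.getenv("HF_ENDPOINT", "")
    return "modelhub" in endpoint.lower()
```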
Proposal
When the target HuggingFace API backend is H2O's ModelHub, automatically pass along the user's access token.
We likely want to pass the token explicitly in the HF API call (e.g., via the optional `token` parameter that HF APIs accept) rather than set a global `HF_TOKEN`, since every H2O user has a different access token.
The access token that Wave provides is a valid H2O access token and can be used for this purpose. Access tokens can be retrieved from Wave via `await q.auth.ensure_fresh_token()` (asynchronous) or `q.auth.ensure_fresh_token_sync()` (synchronous).