Describe the bug
AI Playground incorrectly constructs local file paths when attempting to add GGUF models from Hugging Face using the "Add Model" feature with the Llama.cpp-GGUF backend. This prevents users from loading GGUF models hosted on Hugging Face.
To Reproduce
Steps to reproduce the behavior:
1. Open AI Playground.
2. Select the Llama.cpp-GGUF backend.
3. Click the "+" button ("Add Model").
4. Enter a valid Hugging Face model identifier for a GGUF model, for example: Ttimofeyka/MistralRP-Noromaid-NSFW-Mistral-7B-GGUF/MistralRP-Noromaid-NSFW-7B-Q8_0.gguf
5. Click "Add."
6. Observe the error message dialog, which shows an incorrect local file path.
Expected behavior
AI Playground should recognize the input string as a Hugging Face model identifier, download the specified .gguf file from Hugging Face, and load the model into the Llama.cpp-GGUF backend. It should not attempt to find a local file.
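The expected handling could be sketched as follows. This is an illustrative suggestion, not AI Playground's actual code: the `parse_hf_gguf_ref` helper name and the split-at-the-first-two-segments rule are assumptions about how the input string could be recognized as a Hugging Face reference rather than a local path.

```python
def parse_hf_gguf_ref(ref: str):
    """Split 'org/repo/file.gguf' into (repo_id, filename).

    A Hugging Face GGUF reference carries the repo id in the first two
    path segments and the target file in the remainder.
    """
    parts = ref.split("/")
    if len(parts) < 3 or not parts[-1].endswith(".gguf"):
        raise ValueError(f"not a Hugging Face GGUF reference: {ref!r}")
    repo_id = "/".join(parts[:2])
    filename = "/".join(parts[2:])
    return repo_id, filename

repo_id, filename = parse_hf_gguf_ref(
    "Ttimofeyka/MistralRP-Noromaid-NSFW-Mistral-7B-GGUF/"
    "MistralRP-Noromaid-NSFW-7B-Q8_0.gguf"
)
# The file could then be fetched with, e.g.,
# huggingface_hub.hf_hub_download(repo_id=repo_id, filename=filename)
# before handing the downloaded path to the Llama.cpp-GGUF backend.
```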
Screenshots
The screenshots show:
- the Models/ prefix being added to the path,
- the use of backslashes (\) instead of forward slashes (/),
- the resulting incorrect local file path.
Environment:
OS: Windows 11
GPU: Intel Arc A770 Bifrost 16G
CPU: Intel i7-14700K
Version: AI Playground v2.0.0-alpha-preview
Additional context
The issue appears to stem from incorrect path handling within AI Playground: the input string, a valid Hugging Face model identifier, is misinterpreted as a local file path. AI Playground prepends resources/service\models\llm\ggufLLM\ (or similar) to the input and joins path components with backslashes \ instead of forward slashes /, resulting in an invalid local file path.
This issue is specific to adding GGUF models using the Llama.cpp-GGUF backend. It is not related to the transformers version issue that affects PyTorch models and the IPEX-LLM backend.
The incorrect path in the error message looks something like this:
resources/service\models\llm\ggufLLM\Ttimofeyka/MistralRP-Noromaid-NSFW-Mistral-7B-GGUF\MistralRP-Noromaid-NSFW-7B-Q8_0.gguf
The correct input provided by the user is:
Ttimofeyka/MistralRP-Noromaid-NSFW-Mistral-7B-GGUF/MistralRP-Noromaid-NSFW-7B-Q8_0.gguf
This shows that AI Playground is constructing a local file path from the provided string instead of treating it as a Hugging Face identifier.
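The malformed path can be reproduced by joining the raw input onto the local models directory with Windows path semantics. The sketch below is purely speculative: the base-directory string is copied from the error message, and the split at the last slash is an assumption made to match the observed separator pattern; AI Playground's actual code may differ.

```python
import ntpath  # Windows path rules, so this reproduces on any host OS

# Base directory as it appears in the reported error message.
base = "resources/service"
user_input = ("Ttimofeyka/MistralRP-Noromaid-NSFW-Mistral-7B-GGUF/"
              "MistralRP-Noromaid-NSFW-7B-Q8_0.gguf")

# Suspected construction: the input is split at its last slash and both
# halves are joined onto the models directory. ntpath.join inserts "\"
# between components but leaves the "/" inside repo_id untouched, which
# matches the mixed separators in the error dialog exactly.
repo_id, _, filename = user_input.rpartition("/")
broken = ntpath.join(base, "models", "llm", "ggufLLM", repo_id, filename)
print(broken)
```

Running this prints the same mixed-separator path shown in the error message, which supports the theory that the identifier is being fed straight into a local path join.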
This bug prevents the intended functionality of easily adding GGUF models from Hugging Face.