
Unable to chat with GGUF model. After downloading, duplicates and asks to download again. #139

Open
Algolgaming opened this issue Jan 4, 2025 · 1 comment
Labels
bug Something isn't working

Comments


Describe the bug
AI Playground incorrectly constructs local file paths when attempting to add GGUF models from Hugging Face using the "Add Model" feature with the Llama.cpp-GGUF backend. This prevents users from loading GGUF models hosted on Hugging Face.

To Reproduce
Steps to reproduce the behavior:

Open AI Playground.
Select the Llama.cpp-GGUF backend.
Click the "+" button ("Add Model").
Enter a valid Hugging Face model identifier for a GGUF model, for example: Ttimofeyka/MistralRP-Noromaid-NSFW-Mistral-7B-GGUF/MistralRP-Noromaid-NSFW-7B-Q8_0.gguf
Click "Add."
Observe the error message dialog, which shows an incorrect local file path.
Expected behavior
AI Playground should recognize the input string as a Hugging Face model identifier, download the specified .gguf file from Hugging Face, and load the model into the Llama.cpp-GGUF backend. It should not attempt to find a local file.

Screenshots
The attached screenshots show:
- The Models/ prefix being prepended to the path.
- Backslashes (\) used instead of forward slashes (/).
- The resulting invalid local file path.
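A speculative reconstruction of the bug (not the actual AI Playground source): joining the two halves of the identifier onto a Windows-style base directory with `ntpath.join` reproduces the exact mixed-separator path shown in the error dialog.

```python
import ntpath  # Windows path semantics regardless of host OS

# Hypothetical base directory, as it appears in the error message.
base = r"resources/service\models\llm\ggufLLM"
# The user's input, split into repo id and file name.
repo_id = "Ttimofeyka/MistralRP-Noromaid-NSFW-Mistral-7B-GGUF"
filename = "MistralRP-Noromaid-NSFW-7B-Q8_0.gguf"

# ntpath.join inserts backslashes between segments but leaves the
# forward slash inside repo_id untouched, yielding the mixed separators.
broken = ntpath.join(base, repo_id, filename)
print(broken)
# resources/service\models\llm\ggufLLM\Ttimofeyka/MistralRP-Noromaid-NSFW-Mistral-7B-GGUF\MistralRP-Noromaid-NSFW-7B-Q8_0.gguf
```

This matches the incorrect path quoted below character for character, which supports the theory that the input is being treated as a local file name rather than a remote identifier.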
Environment:
OS: Windows 11
GPU: Intel Arc A770 Bifrost 16G
CPU: Intel i7-14700K
Version: AI Playground v2.0.0-alpha-preview
Additional context
The issue appears to be related to incorrect path handling within AI Playground. The input string, which is a valid Hugging Face model identifier, is being misinterpreted as a local file path. AI Playground is prepending resources/service\models\llm\ggufLLM\ (or similar) to the input and using backslashes \ instead of forward slashes /, resulting in an invalid local file path.

This issue is specific to adding GGUF models using the Llama.cpp-GGUF backend. It is not related to the transformers version issue that affects PyTorch models and the IPEX-LLM backend.

The incorrect path in the error message looks something like this:

resources/service\models\llm\ggufLLM\Ttimofeyka/MistralRP-Noromaid-NSFW-Mistral-7B-GGUF\MistralRP-Noromaid-NSFW-7B-Q8_0.gguf
The correct input provided by the user is:

Ttimofeyka/MistralRP-Noromaid-NSFW-Mistral-7B-GGUF/MistralRP-Noromaid-NSFW-7B-Q8_0.gguf
This clearly shows that AI Playground is constructing an incorrect local file path instead of using the provided string as a Hugging Face identifier.
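A minimal sketch of the expected behavior, assuming the "user/repo/file.gguf" input convention described above (the function name and validation rules are illustrative, not taken from the AI Playground codebase): split the input into a Hugging Face repo id (the first two segments) and a file name, rather than joining the whole string onto a local models directory.

```python
def parse_hf_gguf_ref(ref: str) -> tuple[str, str]:
    """Split 'user/repo/path/to/file.gguf' into (repo_id, filename).

    Hypothetical helper: raises ValueError if the input does not look
    like a Hugging Face GGUF reference.
    """
    parts = ref.split("/")
    if len(parts) < 3 or not parts[-1].endswith(".gguf"):
        raise ValueError(f"not a Hugging Face GGUF reference: {ref!r}")
    repo_id = "/".join(parts[:2])      # e.g. "user/repo"
    filename = "/".join(parts[2:])     # remainder, kept with forward slashes
    return repo_id, filename

repo_id, filename = parse_hf_gguf_ref(
    "Ttimofeyka/MistralRP-Noromaid-NSFW-Mistral-7B-GGUF/"
    "MistralRP-Noromaid-NSFW-7B-Q8_0.gguf"
)
# The (repo_id, filename) pair could then be handed to a downloader such
# as huggingface_hub.hf_hub_download(repo_id=..., filename=...) instead
# of being prepended with the local ggufLLM models directory.
```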

This bug prevents the intended functionality of easily adding GGUF models from Hugging Face.

@Algolgaming Algolgaming added the bug Something isn't working label Jan 4, 2025

iMonZ commented Feb 7, 2025

Same issue...

Model: Qwen2.5-Coder-32B-Instruct-GGUF
