Triton? #1
Comments
Sorry for this; we should have added that it's better to create a virtual environment and install all the packages there.
Triton is notoriously hard to install (if even possible) on Windows: triton-inference-server/server#4737
Open a Linux or WSL CLI and install the requirements.
Running pip install on the requirements results in the Triton error message. I can open a WSL window in Windows, but how do I install the requirements there? And why would this resolve the Triton error?
I got the same error message in an earlier project. Just as the previous commenter and ChatGPT said, it's virtually impossible to install Triton on Windows but not on Linux, and WSL is Linux for Windows. I use VS Code: just open a new terminal, then choose an Ubuntu WSL terminal and follow the project instructions. It works. Correction: I got it installed, but ran out of memory before I could use it.
The instructions seem to say that I just need to install requirements.txt from a WSL window. Is that right, and what exactly is the command to install requirements.txt?
And what is this error? ValueError: Non-consecutive added token '<extra_id_99>' found. Should have index 32100 but has index 32000 in saved vocabulary
No, I was responding to the Triton installation problem. The command is the same, except I use python3 instead of python. My suggestion is to use an LLM to help with your issues. I gave Copilot your error message and this is what it responded:

It seems like you're encountering an error related to the tokenization process in a language model, possibly while using Hugging Face's Transformers library. The error message suggests that there's an issue with the indices of the added tokens: the token <extra_id_99> should have an index of 32100, but it has an index of 32000 in the saved vocabulary. This could be due to a mismatch between the pre-trained model's tokenizer and the one you're using. If you've added new tokens to the tokenizer, you need to make sure that the model is aware of these new tokens. Here's a general way to add new tokens:

tokenizer = AutoTokenizer.from_pretrained('model_name')
# Add new tokens
new_tokens = ['<extra_id_99>', '<extra_id_98>', ...]  # Add your new tokens here
tokenizer.add_tokens(new_tokens)
# Resize the token embeddings of the model
model.resize_token_embeddings(len(tokenizer))
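For what it's worth, the index check behind that ValueError can be sketched in plain Python. This is an illustrative sketch, not the actual transformers internals; the function name check_added_tokens and its exact logic are assumptions made for the example:

```python
# Illustrative sketch (NOT the real transformers code) of the consistency
# check behind the "Non-consecutive added token" error: tokens recorded in
# the saved vocabulary's added-tokens file are expected to occupy indices
# immediately after the base vocabulary, in order.

def check_added_tokens(base_vocab_size, added_tokens):
    """added_tokens maps a token string to the index recorded in the
    saved vocabulary; raise if any token is not in its expected slot."""
    # Walk the saved tokens in index order and compare each with the
    # slot it should occupy (base vocab size + position).
    for offset, (token, index) in enumerate(
        sorted(added_tokens.items(), key=lambda kv: kv[1])
    ):
        expected = base_vocab_size + offset
        if index != expected:
            raise ValueError(
                f"Non-consecutive added token '{token}' found. "
                f"Should have index {expected} but has index {index} "
                f"in saved vocabulary."
            )

# A saved vocabulary whose added token landed inside the base vocab range
# reproduces the error reported in this thread:
try:
    check_added_tokens(32100, {"<extra_id_99>": 32000})
except ValueError as e:
    print(e)
```

So the error usually means the tokenizer files on disk were produced with a different base vocabulary size than the tokenizer class now expects, which is why re-adding the tokens and resizing the embeddings (as in the snippet above) is the usual fix.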
Guys, have you just tried ignoring the Triton installation?
And skipping the installation of all the torch dependencies too.
I haven't, but it's good to have WSL installed for other Linux projects, or for projects that don't have Windows compatibility yet, like ollama when it first came out. I also use Copilot or the internet for most of my installation issues.
Please try just ignoring Triton and running the code again.
any suggestions?
CPU: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz, 8 cores / 16 logical processors
GPU: Intel(R) UHD Graphics, driver version 30.0.101.2079
GPU: NVIDIA GeForce RTX 3050 Ti Laptop GPU, driver version 32.0.15.6081