Why always Downloading the tokenizer of seamlessM4T_v2_large #409
If I understand correctly, it looks like you're using:

```python
from seamless_communication.models.unity import (
    load_unity_model,
    load_unity_text_tokenizer,
    load_unity_unit_tokenizer,
)

model = load_unity_model(model_name_or_card)
# Use distinct names so the second tokenizer doesn't overwrite the first:
unit_tokenizer = load_unity_unit_tokenizer(model_name_or_card)
text_tokenizer = load_unity_text_tokenizer(model_name_or_card)
```
How do I load the checkpoints that I got from fine-tuning?
You can start by loading the original model (e.g. …). Also, please take a look at the excellent note from Alisamar Husain about fine-tuning M4T models.
Thank you very much.
Hi, I have fine-tuned the model using the notes from Alisamar, but the model cannot be loaded: it throws an error that some weights are missing (`final_proj.weight`). I modified `seamlessm4t_v2_large.yaml` to point to my model checkpoint, but I still get this error. Do fine-tuned models have different weights compared to the original model?
If you're having trouble loading checkpoints saved after fine-tuning, you can use the …
Hi, I followed the steps you mentioned, but as I said, it throws an error at `final_proj.weight`. This is my question: do the fine-tuned model's weights differ from the original model's? If so, how can we use our fine-tuned model with `m4t_evaluate`?
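One way to work around a missing `final_proj.weight` is to back-fill any absent keys from the base model's state dict before loading the fine-tuned one. A minimal sketch with plain dicts standing in for tensors; the helper `merge_missing_keys` is hypothetical and not part of `seamless_communication`:

```python
def merge_missing_keys(finetuned_state, base_state):
    """Return a copy of finetuned_state with any keys that exist only in
    base_state copied over, so a strict load_state_dict no longer
    reports them as missing."""
    merged = dict(finetuned_state)
    for key, tensor in base_state.items():
        if key not in merged:
            merged[key] = tensor
    return merged

# Illustration with plain dicts (real code would use model.state_dict()):
base = {"encoder.w": 1.0, "final_proj.weight": 2.0}
finetuned = {"encoder.w": 1.5}  # fine-tuning run dropped final_proj
patched = merge_missing_keys(finetuned, base)
print(sorted(patched))  # ['encoder.w', 'final_proj.weight']
```

With real checkpoints, you would apply the same merge to the two state dicts and then call `model.load_state_dict(patched)`; passing `strict=False` is an alternative if you are sure the missing parameters can stay at their initial values.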
I already set up CHECKPOINTS_PATH and the cards, but why does it always download the tokenizer of seamlessM4T_v2_large when I run `python app.py`? Please help, thanks.
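One common cause of repeated downloads is that the model card still points its tokenizer entry at a remote URL even though the checkpoint entry is local. A sketch of what a locally overridden card might look like; the field names and paths below are assumptions based on typical fairseq2-style cards, not verified against the actual `seamlessm4t_v2_large.yaml`:

```yaml
# Hypothetical local override of the seamlessM4T_v2_large card.
# Point both the checkpoint and the tokenizer at file:// URIs so
# nothing needs to be fetched from the network.
name: seamlessM4T_v2_large
checkpoint: "file:///path/to/local/seamlessM4T_v2_large.pt"
char_tokenizer: "file:///path/to/local/tokenizer.model"
```

If only the checkpoint URI is overridden, the tokenizer entry still resolves to its default remote location, which would explain a download on every start.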