Hi Wuwei,

Thanks for sharing the code; I learned a lot from it! I have a small question:

I found that for the other models you use from torchtext.vocab import load_word_vectors to load pretrained word vectors, while for the SSE model you directly use torch.load(EMB_FILE) to load pickled embeddings. However, I could not find the embedding file in the data folder you provided. Could you please upload it?

Thanks!

Regards,
Shuailong
When you specify the embedding path and name, torchtext will automatically download the data from the GloVe website (https://nlp.stanford.edu/projects/glove/) and save it to your local machine.
Just run this command: wv_dict, wv_arr, wv_size = load_word_vectors('/your/local/path/', 'glove.840B', 300)
It will download glove.840B.300d.zip.
Note: the torchtext version in my code is 0.1.1
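For context, here is a minimal sketch of turning the downloaded vectors into an embedding matrix. It assumes torchtext 0.1.1, where load_word_vectors returns a word-to-row-index dict, a FloatTensor of vectors, and the vector size; the vocab list below is just a hypothetical example.

```python
import torch
from torchtext.vocab import load_word_vectors

# Downloads glove.840B.300d.zip into the given path on first use, then loads it.
wv_dict, wv_arr, wv_size = load_word_vectors('/your/local/path/', 'glove.840B', 300)

# Build an embedding matrix for your own vocabulary (hypothetical example);
# words missing from GloVe are left as zero vectors.
vocab = ['the', 'movie', 'premise']
emb = torch.zeros(len(vocab), wv_size)
for i, word in enumerate(vocab):
    idx = wv_dict.get(word)
    if idx is not None:
        emb[i] = wv_arr[idx]

# emb can then initialize an nn.Embedding layer, e.g.:
# embedding = torch.nn.Embedding(len(vocab), wv_size)
# embedding.weight.data.copy_(emb)
```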