
Word2Vec #11

Open
YiQ-Zhao opened this issue Jun 25, 2018 · 5 comments


@YiQ-Zhao

Hi,

It seems that the example doesn't use Word2Vec. I am trying to load pre-trained vectors to represent the words. Do you think it would help improve the bot's performance? I can't figure out how to do it. Do you have any examples showing a similar procedure?

Thank you in advance!

@zsdonghao
Member

If you don't have the word2vec parameters and lookup table, you can train one here: https://github.com/tensorlayer/tensorlayer/blob/master/example/tutorial_word2vec_basic.py

Then you can use the pre-trained parameters like this: https://github.com/tensorlayer/tensorlayer/blob/master/example/tutorial_generate_text.py#L371

Hope it helps.
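For reference, a minimal sketch of that pattern, assuming the 50,000-word, 128-dimensional table from tutorial_word2vec_basic.py (I believe the tutorial saves the layer weights as an .npz and the dictionaries as the .npy; the placeholder shape and layer name below are assumptions, not the tutorial's exact code):

import tensorflow as tf
import tensorlayer as tl

# Build the embedding layer with the same vocabulary size and embedding
# dimension as the pre-trained table (50000 x 128 in the word2vec tutorial).
x = tf.placeholder(tf.int32, shape=[None], name="word_ids")
emb_net = tl.layers.EmbeddingInputlayer(
    inputs=x, vocabulary_size=50000, embedding_size=128, name="emb")

sess = tf.InteractiveSession()
tl.layers.initialize_global_variables(sess)

# Load the saved parameters and copy the embedding matrix (the first
# array) into the layer, as tutorial_generate_text.py does.
load_params = tl.files.load_npz(name="model_word2vec_50k_128.npz")
tl.layers.assign_params(sess, [load_params[0]], emb_net)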

@YiQ-Zhao
Author

YiQ-Zhao commented Jun 26, 2018

Thank you for your reply. I'm a newbie to deep learning, and I didn't quite understand the code in those two examples. What I want is to use GloVe to represent the words and train the seq2seq model:

glove_weights_initializer = tf.constant_initializer(weights)
embedding_weights = tf.get_variable(
    name='embedding_weights',
    shape=(VOCAB_LENGTH, EMBEDDING_DIMENSION),
    initializer=glove_weights_initializer,
    trainable=False)

net_encode = tl.layers.EmbeddingInputlayer(
    inputs=encode_seqs,
    vocabulary_size=xvocab_size,
    embedding_size=emb_dim,
    E_init=embedding_weights)

I'm quite sure this would not work. I saw that in tutorial_generate_text you used
tl.layers.assign_params(sess, [load_params[0]], emb_net) to load the existing embedding matrix. However, net_encode is part of the model() function, so how do I load the pre-trained parameters into it? Also, I don't know where model_word2vec_50k_128.npy comes from. Is it possible to generate such a .npy file from the glove.42B.300d.txt file?

Any help will be greatly appreciated!

@zsdonghao
Member

Hi, you need to make sure the lookup table you are using has the same dimensions as your layer; otherwise the parameters can't be assigned to the layer.

model_word2vec_50k_128.npy comes from the word2vec tutorial I linked above; you can train one by simply running that code.
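On generating such a file from GloVe: one hedged sketch, not from the tutorials, is to build a (vocab_size, 300) matrix from glove.42B.300d.txt using your own word-to-id dictionary (word2idx below is an assumed name for whatever vocabulary the chatbot uses), save it with numpy, and assign it into the layer the same way:

import numpy as np

def build_glove_matrix(glove_path, word2idx, emb_dim=300):
    # Rows for words missing from GloVe keep a small random initialization.
    emb = np.random.uniform(-0.1, 0.1, (len(word2idx), emb_dim)).astype(np.float32)
    with open(glove_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, vec = parts[0], parts[1:]
            if word in word2idx and len(vec) == emb_dim:
                emb[word2idx[word]] = np.asarray(vec, dtype=np.float32)
    return emb

# word2idx: the chatbot's word -> id dictionary (assumed to exist).
# emb = build_glove_matrix("glove.42B.300d.txt", word2idx)
# np.save("glove_300d_for_vocab.npy", emb)
# Build net_encode with embedding_size=300 to match, then in the session:
# tl.layers.assign_params(sess, [emb], net_encode)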

@YiQ-Zhao
Author

Thanks. Your suggestions were all very helpful to me.
Thank you!

@zsdonghao
Member

You're welcome.
