A problem about __init__() got an unexpected keyword argument 'name' #28
Please forgive any rudeness: my English is poor, so this comment was translated by Google.
Looks like we are using different versions of TensorFlow. I'm using a TensorFlow 1.4 nightly.
I removed the `name="{}_cudnn_bi_lstm".format(scope_name)` argument and got the following error:

TypeError Traceback (most recent call last)
in main(FLAGS)
/notebooks/BiMPM1/src/SentenceMatchModelGraph.pyc in __init__(self, num_classes, word_vocab, char_vocab, is_training, options, global_step)
/notebooks/BiMPM1/src/SentenceMatchModelGraph.pyc in create_model_graph(self, num_classes, word_vocab, char_vocab, is_training, global_step)
/notebooks/BiMPM1/src/layer_utils.py in my_lstm_layer(input_reps, lstm_dim, input_lengths, scope_name, reuse, is_training, dropout_rate, use_cudnn)
TypeError: __init__() takes at least 4 arguments (5 given)
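One version-agnostic workaround for this kind of API mismatch is to pass optional keyword arguments only when the installed constructor actually accepts them. This is a sketch, not the project's code: `OldCudnnLSTM` is a hypothetical stand-in for the pre-1.5 `tf.contrib.cudnn_rnn.CudnnLSTM` signature (which had no `name` keyword), and `build_lstm` is a hypothetical helper.

```python
import inspect

# Hypothetical stand-in for the old CudnnLSTM constructor,
# which accepts no `name` keyword.
class OldCudnnLSTM(object):
    def __init__(self, num_layers, num_units,
                 direction="unidirectional", dropout=0.0):
        self.num_layers = num_layers
        self.num_units = num_units
        self.direction = direction

def build_lstm(cls, num_layers, num_units, **kwargs):
    """Construct cls, silently dropping kwargs its __init__ rejects."""
    accepted = inspect.signature(cls.__init__).parameters
    filtered = {k: v for k, v in kwargs.items() if k in accepted}
    return cls(num_layers, num_units, **filtered)

# `name` is dropped for the old API instead of raising TypeError.
lstm = build_lstm(OldCudnnLSTM, 1, 100, direction="bidirectional",
                  name="char_lstm_cudnn_bi_lstm")
```

With a newer class whose constructor does accept `name`, the same call would pass it through unchanged.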
I plan to change my TensorFlow to the same version as yours. My current version is also 1.4.
I just checked my version. I'm using tensorflow 1.5.0-dev20171106. Sorry for the inconvenience.
On Tue, Jan 30, 2018, lqjsurpass wrote:
> The version I use now is also tensorflow 1.4
I met the same problem. I think the "Requirements" section in this project's README.md should be updated.
I have changed my TensorFlow version to 1.5.0.
Looks like you are not using a GPU, or don't have cuDNN installed. If you are running on CPU, you can change
https://github.com/zhiguowang/BiMPM/blob/master/configs/snli.sample.config#L46
to false and try again.
On Mon, Feb 5, 2018, vacingFang wrote:
I have changed the TensorFlow version to 1.5.0, but encountered another error.
2018-02-05 14:37:26.647844: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
Traceback (most recent call last):
File "src/SentenceMatchTrainer.py", line 253, in <module>
main(FLAGS)
File "src/SentenceMatchTrainer.py", line 191, in main
sess.run(initializer)
File "/usr/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 895, in run
run_metadata_ptr)
File "/usr/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1128, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1344, in _do_run
options, run_metadata)
File "/usr/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1363, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'CudnnRNNCanonicalToParams' with these attrs. Registered devices: [CPU], Registered kernels:
<no registered kernels>
[[Node: Model_1/aggregation_layer/right_layer-0/right_layer-0_cudnn_bi_lstm/right_layer-0_cudnn_bi_lstm/CudnnRNNCanonicalToParams = CudnnRNNCanonicalToParams[T=DT_FLOAT, direction="bidirectional", dropout=0, input_mode="linear_input", num_params=16, rnn_mode="lstm", seed=0, seed2=0](… input tensors elided …)]]
It works, many thanks!
TypeError Traceback (most recent call last)
in ()
251 FLAGS = enrich_options(FLAGS)
252
--> 253 main(FLAGS)
254
in main(FLAGS)
173 with tf.variable_scope("Model", reuse=None, initializer=initializer):
174 train_graph = SentenceMatchModelGraph(num_classes, word_vocab=word_vocab, char_vocab=char_vocab,
--> 175 is_training=True, options=FLAGS, global_step=global_step)
176
177 with tf.variable_scope("Model", reuse=True, initializer=initializer):
/notebooks/BiMPM1/src/SentenceMatchModelGraph.pyc in __init__(self, num_classes, word_vocab, char_vocab, is_training, options, global_step)
8 self.options = options
9 self.create_placeholders()
---> 10 self.create_model_graph(num_classes, word_vocab, char_vocab, is_training, global_step=global_step)
11
12 def create_placeholders(self):
/notebooks/BiMPM1/src/SentenceMatchModelGraph.pyc in create_model_graph(self, num_classes, word_vocab, char_vocab, is_training, global_step)
95 (question_char_outputs_fw, question_char_outputs_bw, _) = layer_utils.my_lstm_layer(in_question_char_repres, options.char_lstm_dim,
96 input_lengths=question_char_lengths,scope_name="char_lstm", reuse=False,
---> 97 is_training=is_training, dropout_rate=options.dropout_rate, use_cudnn=options.use_cudnn)
98 question_char_outputs_fw = layer_utils.collect_final_step_of_lstm(question_char_outputs_fw, question_char_lengths - 1)
99 question_char_outputs_bw = question_char_outputs_bw[:, 0, :]
/notebooks/BiMPM1/src/layer_utils.py in my_lstm_layer(input_reps, lstm_dim, input_lengths, scope_name, reuse, is_training, dropout_rate, use_cudnn)
18 inputs = tf.transpose(input_reps, [1, 0, 2])
19 lstm = tf.contrib.cudnn_rnn.CudnnLSTM(1, lstm_dim, direction="bidirectional",
---> 20 name="{}_cudnn_bi_lstm".format(scope_name), dropout=dropout_rate if is_training else 0)
21 outputs, _ = lstm(inputs)
22 outputs = tf.transpose(outputs, [1, 0, 2])
TypeError: __init__() got an unexpected keyword argument 'name'
Thanks very much!
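The takeaway from this thread is that the `name` keyword on `tf.contrib.cudnn_rnn.CudnnLSTM` only exists in newer TensorFlow builds, such as the 1.5.0 nightlies. A small helper could compare version strings like the ones reported here before deciding whether to pass it (pure Python, no TensorFlow required; the 1.5.0 threshold is an assumption based on this thread, not an official changelog):

```python
import re

def version_tuple(v):
    """Parse a TF version string such as '1.4.0' or '1.5.0-dev20171106'
    into a comparable tuple of its numeric parts, ignoring '-dev' suffixes."""
    base = v.split("-")[0]
    return tuple(int(p) for p in re.findall(r"\d+", base))

# Assumed threshold for when CudnnLSTM gained the `name` kwarg.
NAME_KWARG_MIN = (1, 5, 0)

supports_name = version_tuple("1.5.0-dev20171106") >= NAME_KWARG_MIN
print(supports_name)
```

In real code the string would come from `tf.__version__`, and the guarded call would either include or omit `name=` accordingly.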