AttributeError: 'tuple' object has no attribute 'backward' #15
Comments
(1) This message means that there are no initial weights for the current task; the BERT pretrained weights themselves have been loaded successfully. Newer versions of pytorch-transformers cannot reproduce the author's results, for reasons that are unclear, so it is recommended to run the program with the versions the author specifies.
Newer versions of pytorch-transformers are also compatible, but modeling.py/modeling_bert.py needs some small modifications. Contributions are welcome (I've been busy lately...), like this pull.
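The small modification boils down to this: in newer pytorch-transformers, a model's `forward` returns a tuple (e.g. `(loss, logits)` when labels are passed) instead of a bare loss tensor, so `train.py` must unpack the loss before calling `.backward()`. A minimal sketch of the pattern, using a hypothetical `ToyTokenClassifier` stand-in rather than the real BertForTokenClassification:

```python
import torch
import torch.nn as nn

# Toy stand-in for a pytorch-transformers model whose forward returns a
# tuple (loss, logits), mimicking the 1.x API (hypothetical class).
class ToyTokenClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.classifier = nn.Linear(4, 2)

    def forward(self, x, labels=None):
        logits = self.classifier(x)
        if labels is not None:
            loss = nn.functional.cross_entropy(
                logits.view(-1, 2), labels.view(-1))
            return loss, logits  # tuple, like pytorch-transformers 1.x
        return (logits,)

model = ToyTokenClassifier()
x = torch.randn(3, 4)
labels = torch.randint(0, 2, (3,))

outputs = model(x, labels=labels)
# outputs.backward()      # raises AttributeError: 'tuple' object has no attribute 'backward'
loss = outputs[0]         # unpack the loss tensor from the tuple first
loss.backward()           # now backpropagation works
```

The same one-line change (`loss = outputs[0]`) in the training loop of train.py is what the compatibility fix amounts to.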
When running train.py, the following error appears during training:
loading weights file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-pytorch_model.bin from cache at /home/xinliu/.cache/torch/pytorch_transformers/b1b5e295889f2d0979ede9a78ad9cb5dc6a0e25ab7f9417b315f0a2c22f4683d
Weights of BertForTokenClassification not initialized from pretrained model: ['classifier.weight', 'classifier.bias']
Weights from pretrained model not used in BertForTokenClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
Starting training for 20 epoch(s)
Epoch 1/20
0%| | 0/1400 [00:00<?, ?it/s]
Traceback (most recent call last):
File "train.py", line 220, in
train_and_evaluate(model, train_data, val_data, optimizer, scheduler, params, args.model_dir, args.restore_file)
File "train.py", line 99, in train_and_evaluate
train(model, train_data_iterator, optimizer, scheduler, params)
File "train.py", line 64, in train
loss.backward()
AttributeError: 'tuple' object has no attribute 'backward'
There seem to be two problems here:
(1) The BertForTokenClassification model did not initialize its weights from the pretrained model, even though the pretrained model was downloaded successfully.
(2) AttributeError: 'tuple' object has no attribute 'backward'