Hello. I am trying to train on a custom dataset with 6 classes, but here is the error I'm facing when loading the pretrained decoder weights. Any help is appreciated.
Exception has occurred: RuntimeError
Error(s) in loading state_dict for C1DeepSup:
size mismatch for conv_last.weight: copying a param with shape torch.Size([150, 80, 1, 1]) from checkpoint, the shape in current model is torch.Size([6, 80, 1, 1]).
size mismatch for conv_last.bias: copying a param with shape torch.Size([150]) from checkpoint, the shape in current model is torch.Size([6]).
size mismatch for conv_last_deepsup.weight: copying a param with shape torch.Size([150, 80, 1, 1]) from checkpoint, the shape in current model is torch.Size([6, 80, 1, 1]).
size mismatch for conv_last_deepsup.bias: copying a param with shape torch.Size([150]) from checkpoint, the shape in current model is torch.Size([6]).
File "/home/irl/sem_seg _custom/mit_semseg/models/models.py", line 160, in build_decoder
torch.load(weights, map_location=lambda storage, loc: storage), strict=False)
File "/home/irl/sem_seg _custom/train.py", line 155, in main
weights=cfg.MODEL.weights_decoder)
File "/home/irl/sem_seg _custom/train.py", line 288, in
main(cfg, gpus)
RuntimeError: Error(s) in loading state_dict for C1DeepSup:
size mismatch for conv_last.weight: copying a param with shape torch.Size([150, 80, 1, 1]) from checkpoint, the shape in current model is torch.Size([6, 80, 1, 1]).
size mismatch for conv_last.bias: copying a param with shape torch.Size([150]) from checkpoint, the shape in current model is torch.Size([6]).
size mismatch for conv_last_deepsup.weight: copying a param with shape torch.Size([150, 80, 1, 1]) from checkpoint, the shape in current model is torch.Size([6, 80, 1, 1]).
size mismatch for conv_last_deepsup.bias: copying a param with shape torch.Size([150]) from checkpoint, the shape in current model is torch.Size([6]).
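For context, the mismatch is between the pretrained decoder checkpoint, whose final conv_last / conv_last_deepsup layers are sized for 150 output classes (the ADE20K label set), and the current model configured for 6 classes. load_state_dict still raises here even with strict=False, because strict=False only tolerates missing or unexpected keys, not shape mismatches. A minimal sketch of one common workaround, assuming the checkpoint file is a plain state_dict as in this repo, is to drop the mismatched keys before loading (the helper name and checkpoint path below are illustrative, not part of the repo):

```python
import torch

def load_matching_weights(model, checkpoint_path):
    # Assumes the checkpoint is a plain state_dict (tensor name -> tensor),
    # as the decoder checkpoints in this repo are.
    checkpoint = torch.load(checkpoint_path, map_location="cpu")
    model_state = model.state_dict()
    # Keep only entries that exist in the current model with identical shapes;
    # this drops conv_last / conv_last_deepsup, which are sized for 150 classes.
    filtered = {
        k: v for k, v in checkpoint.items()
        if k in model_state and v.shape == model_state[k].shape
    }
    skipped = [k for k in checkpoint if k not in filtered]
    print(f"Skipping {len(skipped)} mismatched or unknown keys: {skipped}")
    model.load_state_dict(filtered, strict=False)
    return model

# Illustrative usage:
# net_decoder = load_matching_weights(net_decoder, "ckpt/decoder_epoch_20.pth")
```

The skipped layers are then randomly initialized and trained from scratch for the 6-class head. Alternatively, if the decoder doesn't need the pretrained head at all, leaving cfg.MODEL.weights_decoder empty should skip loading the decoder checkpoint entirely while still using the pretrained encoder.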