Hello all!
Thanks for all your work, and for the help in all the issues I have been reading!
Unfortunately, I can't find a solution to my problem; it is well beyond my skills.
No one cares, but here's a bit of my life:
I am training a French voice, having used Piper Recording Studio earlier. I ran into the same thing as issue #745, which I got past by resuming from a US checkpoint even though I am training a French voice (issue #108).
I fiddled with the batch size until the training stopped getting killed by the system.
So now I am hitting a "RuntimeError: expected scalar type BFloat16 but found Float" error.
The same error comes up in other issues around the internet, but none of them are about Piper, and I can't manage to understand the problem from them.
I am using data from Piper Recording Studio and a valid checkpoint from Hugging Face, so my data should be of the right type.
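For what it's worth, here is the kind of quick sanity check I could run on the checkpoint itself; it just counts the tensor dtypes stored inside (I am assuming the usual Lightning layout with a "state_dict" key):

```python
import torch
from collections import Counter

# Count the dtypes of the tensors stored in the checkpoint.
# Assumes a standard Lightning checkpoint with a "state_dict" key.
ckpt = torch.load("epoch=2164-step=1355540.ckpt", map_location="cpu")
dtypes = Counter(t.dtype for t in ckpt["state_dict"].values() if torch.is_tensor(t))
print(dtypes)  # I would expect something like Counter({torch.float32: ...})
```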
Here's what I have in my console:
(.venv) bu@PC2STEPH:/training/piper/src/python$ python3 -m piper_train --dataset-dir /training/train-me --accelerator 'cpu' --batch-size 8 --validation-split 0.0 --num-test-examples 0 --max_epochs 6000 --resume_from_checkpoint "/training/piper/src/python/epoch=2164-step=1355540.ckpt" --checkpoint-epochs 1 --precision 16 --max-phoneme-ids 400 --quality medium
DEBUG:piper_train:Namespace(dataset_dir='/home/bu/training/train-me', checkpoint_epochs=1, quality='medium', resume_from_single_speaker_checkpoint=None, logger=True, enable_checkpointing=True, default_root_dir=None, gradient_clip_val=None, gradient_clip_algorithm=None, num_nodes=1, num_processes=None, devices=None, gpus=None, auto_select_gpus=False, tpu_cores=None, ipus=None, enable_progress_bar=True, overfit_batches=0.0, track_grad_norm=-1, check_val_every_n_epoch=1, fast_dev_run=False, accumulate_grad_batches=None, max_epochs=6000, min_epochs=None, max_steps=-1, min_steps=None, max_time=None, limit_train_batches=None, limit_val_batches=None, limit_test_batches=None, limit_predict_batches=None, val_check_interval=None, log_every_n_steps=50, accelerator='cpu', strategy=None, sync_batchnorm=False, precision=16, enable_model_summary=True, weights_save_path=None, num_sanity_val_steps=2, resume_from_checkpoint='/training/piper/src/python/epoch=2164-step=1355540.ckpt', profiler=None, benchmark=None, deterministic=None, reload_dataloaders_every_n_epochs=0, auto_lr_find=False, replace_sampler_ddp=True, detect_anomaly=False, auto_scale_batch_size=False, plugins=None, amp_backend='native', amp_level=None, move_metrics_to_cpu=False, multiple_trainloader_mode='max_size_cycle', batch_size=8, validation_split=0.0, num_test_examples=0, max_phoneme_ids=400, hidden_channels=192, inter_channels=192, filter_channels=768, n_layers=6, n_heads=2, seed=1234)
/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:718: UserWarning: You passed Trainer(accelerator='cpu', precision=16) but native AMP is not supported on CPU. Using precision='bf16' instead.
rank_zero_warn(
Using bfloat16 Automatic Mixed Precision (AMP)
/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/checkpoint_connector.py:52: LightningDeprecationWarning: Setting Trainer(resume_from_checkpoint=) is deprecated in v1.5 and will be removed in v1.7. Please pass Trainer.fit(ckpt_path=) directly instead.
rank_zero_deprecation(
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
DEBUG:piper_train:Checkpoints will be saved every 1 epoch(s)
DEBUG:vits.dataset:Loading dataset: /home/bu/training/train-me/dataset.jsonl
WARNING:vits.dataset:Skipped 1 utterance(s)
/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:731: LightningDeprecationWarning: trainer.resume_from_checkpoint is deprecated in v1.5 and will be removed in v2.0. Specify the fit checkpoint path with trainer.fit(ckpt_path=) instead.
ckpt_path = ckpt_path or self.resume_from_checkpoint
Restoring states from the checkpoint path at ~/training/piper/src/python/epoch=2164-step=1355540.ckpt
DEBUG:fsspec.local:open file: /home/bu/training/piper/src/python/epoch=2164-step=1355540.ckpt
/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:1659: UserWarning: Be aware that when using ckpt_path, callbacks used to create the checkpoint need to be provided during Trainer instantiation. Please add the following callbacks: ["ModelCheckpoint{'monitor': None, 'mode': 'min', 'every_n_train_steps': 0, 'every_n_epochs': 1, 'train_time_interval': None}"].
rank_zero_warn(
DEBUG:fsspec.local:open file: /home/bu/training/train-me/lightning_logs/version_8/hparams.yaml
Restored all states from the checkpoint file at ~/training/piper/src/python/epoch=2164-step=1355540.ckpt
/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/utilities/data.py:153: UserWarning: Total length of DataLoader across ranks is zero. Please make sure this was your intention.
rank_zero_warn(
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/bu/training/piper/src/python/piper_train/main.py", line 147, in
main()
File "/home/bu/training/piper/src/python/piper_train/main.py", line 124, in main
trainer.fit(model)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 696, in fit
self._call_and_handle_interrupt(
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 735, in _fit_impl
results = self._run(model, ckpt_path=self.ckpt_path)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1166, in _run
results = self._run_stage()
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1252, in _run_stage
return self._run_train()
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1283, in _run_train
self.fit_loop.run()
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
self.advance(*args, **kwargs)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 271, in advance
self._outputs = self.epoch_loop.run(self._data_fetcher)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
self.advance(*args, **kwargs)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 203, in advance
batch_output = self.batch_loop.run(kwargs)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
self.advance(*args, **kwargs)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 87, in advance
outputs = self.optimizer_loop.run(optimizers, kwargs)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
self.advance(*args, **kwargs)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 201, in advance
result = self._run_optimization(kwargs, self._optimizers[self.optim_progress.optimizer_position])
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 248, in _run_optimization
self._optimizer_step(optimizer, opt_idx, kwargs.get("batch_idx", 0), closure)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 358, in _optimizer_step
self.trainer._call_lightning_module_hook(
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1550, in _call_lightning_module_hook
output = fn(*args, **kwargs)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/core/module.py", line 1705, in optimizer_step
optimizer.step(closure=optimizer_closure)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/core/optimizer.py", line 168, in step
step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 216, in optimizer_step
return self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/native_amp.py", line 80, in optimizer_step
return super().optimizer_step(model, optimizer, optimizer_idx, closure, **kwargs)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 153, in optimizer_step
return optimizer.step(closure=closure, **kwargs)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 68, in wrapper
return wrapped(*args, **kwargs)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/torch/optim/optimizer.py", line 140, in wrapper
out = func(*args, **kwargs)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/torch/optim/adamw.py", line 120, in step
loss = closure()
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 138, in _wrap_closure
closure_result = closure()
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 146, in call
self._result = self.closure(*args, **kwargs)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 132, in closure
step_output = self._step_fn()
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 407, in _training_step
training_step_output = self.trainer._call_strategy_hook("training_step", *kwargs.values())
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1704, in _call_strategy_hook
output = fn(*args, **kwargs)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 358, in training_step
return self.model.training_step(*args, **kwargs)
File "/home/bu/training/piper/src/python/piper_train/vits/lightning.py", line 191, in training_step
return self.training_step_g(batch)
File "/home/bu/training/piper/src/python/piper_train/vits/lightning.py", line 214, in training_step_g
) = self.model_g(x, x_lengths, spec, spec_lengths, speaker_ids)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/bu/training/piper/src/python/piper_train/vits/models.py", line 654, in forward
l_length = self.dp(x, x_mask, w, g=g)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/bu/training/piper/src/python/piper_train/vits/models.py", line 69, in forward
x = self.convs(x, x_mask)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/bu/training/piper/src/python/piper_train/vits/modules.py", line 122, in forward
y = self.norms_1[i](y)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/home/bu/training/piper/src/python/piper_train/vits/modules.py", line 25, in forward
x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
File "/home/bu/training/piper/src/python/.venv/lib/python3.10/site-packages/torch/nn/functional.py", line 2515, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: expected scalar type BFloat16 but found Float
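From the warning above I gather that precision=16 is not supported on CPU, so Lightning silently falls back to bfloat16 AMP. My guess (and it is only a guess) is that the LayerNorm in the duration predictor then receives bfloat16 activations while its gamma/beta parameters are still float32. Here is a minimal sketch of that mismatch; the shapes are made up for illustration, but I believe it fails the same way on my PyTorch version:

```python
import torch
import torch.nn.functional as F

channels = 192  # hidden_channels from my --quality medium config

# Activations in bfloat16, as produced under the CPU bf16 AMP fallback...
x = torch.randn(8, 50, channels, dtype=torch.bfloat16)
# ...while the LayerNorm parameters are still plain float32.
gamma = torch.ones(channels)
beta = torch.zeros(channels)

# I expect this to raise:
#   RuntimeError: expected scalar type BFloat16 but found Float
y = F.layer_norm(x, (channels,), gamma, beta, 1e-5)
```

If that is what is happening, should I simply drop --precision 16 when training on CPU, or is there a supported way to make the two dtypes agree?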
I hope you can help me. I'll be glad to answer questions about my software versions or hardware if needed.
Please forgive my utterly profound ignorance and my bad English (frog person that I am).