Low step count? 200 steps per 50 epochs. #478
It also randomly stops working and I need to restart the WSL environment.
Hey guys,

I'm pretty new here, just trying to figure all this out. I finally managed to get my first finetuning run going, but I'm kind of confused.

I'm using the thomas-medium model (German) for finetuning. It starts at 3135 epochs with 2,702,056 steps, which is about 43,000 steps per 50 epochs. But my run only advances by about 200 steps per 50 epochs.

Am I doing something significantly wrong?
I've got a 4090 in my PC, which runs at about 70% utilization (3D and VRAM).

My command to start the finetuning:

```
python3 -m piper_train \
  --dataset-dir /home/tts/dataset/output/ \
  --accelerator 'gpu' \
  --devices 1 \
  --batch-size 24 \
  --validation-split 0.2 \
  --num-test-examples 1 \
  --max_epochs 10000 \
  --resume_from_checkpoint /home/tts/dataset/output/lightning_logs/version_2/checkpoints/epoch\=3249-step\=2702512.ckpt \
  --checkpoint-epochs 50 \
  --precision 32 \
  --max-phoneme-ids 400
```
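For what it's worth, the step rate you see is driven by the size of *your* finetuning dataset, not by the base checkpoint's history. A rough sanity check, assuming one optimizer step per batch as in a standard PyTorch Lightning training loop (the utterance count of 120 below is a hypothetical example, not a value from the post):

```python
import math

batch_size = 24          # from the --batch-size flag in the command above
validation_split = 0.2   # from --validation-split

def steps_per_epoch(num_utterances: int) -> int:
    """Estimate optimizer steps per epoch for a given dataset size."""
    train_size = math.floor(num_utterances * (1 - validation_split))
    return math.ceil(train_size / batch_size)

# ~200 steps per 50 epochs means ~4 steps per epoch, which with batch
# size 24 corresponds to a finetuning dataset of roughly 100 utterances.
print(steps_per_epoch(120) * 50)  # -> 200
```

By contrast, the checkpoint's own numbers (step 2702512 at epoch 3249, i.e. ~830 steps per epoch) reflect the much larger dataset the base model was originally trained on.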
Kind regards.