
Question about SyncNet training #89

Open

IzayaZ opened this issue Jan 15, 2025 · 2 comments

Comments

@IzayaZ

IzayaZ commented Jan 15, 2025

Hi! Does training SyncNet really require 400_0000 (i.e., 4,000,000) steps?
I processed the LRS3 dataset with your code, and after 170,000+ steps the sync_loss is still above 0.4. Is this normal?
How low should sync_loss get before it is safe to move on to the next training stage?
Many thanks; looking forward to your reply.

```yaml
base_config:
  - egs/egs_bases/syncnet/base.yaml

init_from_ckpt: ''
binary_data_dir: data/binary/th1kh
task_cls: tasks.os_avatar.audio_lm3d_syncnet.SyncNetTask
use_kv_dataset: true
num_workers: 8 # 4

syncnet_num_clip_pairs: 8192
max_sentences_per_batch: 1024
max_tokens_per_batch: 20000
sample_min_length: 64
max_updates: 400_0000
```
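For reference, sync_loss values in this range can be sanity-checked against a minimal sketch of a SyncNet-style contrastive loss: binary cross-entropy on the cosine similarity between audio and motion embeddings (the Wav2Lip formulation). This is an illustrative sketch in NumPy, not the loss implementation used in this repo; the function name `sync_loss` and the embedding shapes are assumptions.

```python
import numpy as np

def sync_loss(audio_emb, motion_emb, labels, eps=1e-8):
    """Sketch of a SyncNet-style sync loss (hypothetical, not this repo's code).

    audio_emb, motion_emb: (batch, dim) embeddings from the two branches.
    labels: 1.0 for in-sync pairs, 0.0 for off-sync (shuffled) pairs.
    Returns the mean BCE over the batch.
    """
    # L2-normalize each embedding so the dot product is a cosine similarity
    a = audio_emb / (np.linalg.norm(audio_emb, axis=1, keepdims=True) + eps)
    m = motion_emb / (np.linalg.norm(motion_emb, axis=1, keepdims=True) + eps)
    # map cosine similarity from [-1, 1] to (0, 1) as a "sync probability"
    sim = (np.sum(a * m, axis=1) + 1.0) / 2.0
    sim = np.clip(sim, eps, 1.0 - eps)
    # binary cross-entropy against the sync/off-sync labels
    return float(np.mean(-(labels * np.log(sim)
                           + (1.0 - labels) * np.log(1.0 - sim))))
```

Under this formulation a random, untrained model sits near -log(0.5) ≈ 0.69, so a plateau above 0.4 suggests the embeddings are only weakly discriminative; repos of this kind often gate the next stage on the loss dropping well below that.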

@Prsaro

Prsaro commented Feb 6, 2025

I trained on the CMLR dataset; the syncnet loss bottomed out at around 0.3. Then, during the final VAE training, the loss basically stopped decreasing after about 20,000 steps, and the resulting expression coefficients look very poor. I'm not sure whether this is caused by the CMLR data itself being of relatively low quality; I'll have to keep experimenting.

@IzayaZ
Author

IzayaZ commented Feb 20, 2025

> I trained on the CMLR dataset; the syncnet loss bottomed out at around 0.3. Then, during the final VAE training, the loss basically stopped decreasing after about 20,000 steps, and the resulting expression coefficients look very poor. I'm not sure whether this is caused by the CMLR data itself being of relatively low quality; I'll have to keep experimenting.

What can we do about this, then, just brute-force the training? Have you managed to solve it?
