High-pitched noise in the background when using old GPUs #13

Open
danielmsu opened this issue Oct 11, 2023 · 7 comments
Labels
bug Something isn't working

Comments

@danielmsu
Contributor

Previously discussed here: #1 (comment)

The model produces some high-pitched noise in the background when I use my old GPU for inference (NVIDIA Quadro P5000, Driver Version: 515.105.01, CUDA Version: 11.7)

Audio examples:

I solved this problem by switching to the CPU device, so this issue is just for reference, as requested by the author.
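For context, "switching to the CPU device" just means forcing the device string before the models are loaded. A minimal sketch (the selection line is only how my own script picks the device, adjust to yours):

import torch

# Force CPU inference instead of the old GPU; everything downstream that
# uses `device` (model loading, input tensors) then stays on the CPU.
device = 'cpu'  # instead of: 'cuda' if torch.cuda.is_available() else 'cpu'
print('running inference on:', device)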

Thank you for your work!

yl4579 pinned this issue Oct 11, 2023
@yl4579
Owner

yl4579 commented Oct 11, 2023

I'm gonna pin this in case someone else has similar problems. I don't know how to deal with this because I can't reproduce the problem with the oldest GPU I have access to right now (GTX 1080 Ti).

@ruby11dog

ruby11dog commented Nov 1, 2023

The same problem happens to me; my GPU is an A100. And using the CPU for inference does not help in my case, the noise is still there on the CPU as well.

@yl4579
Owner

yl4579 commented Nov 2, 2023

@ruby11dog could you please share the details to reproduce this problem? It seems it’s not related to the version of GPUs then?
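In case it helps, a small sketch like this (standard torch calls only, nothing specific to this repo) prints the details that matter here:

import torch

# Print the environment details relevant to this issue.
print('torch      :', torch.__version__)
print('CUDA build :', torch.version.cuda)
print('cuDNN      :', torch.backends.cudnn.version())
if torch.cuda.is_available():
    print('GPU        :', torch.cuda.get_device_name(0))
    print('capability :', torch.cuda.get_device_capability(0))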

@ruby11dog

> @ruby11dog could you please share the details to reproduce this problem? It seems it’s not related to the version of GPUs then?

The noise appears when running inference with your pretrained model "epoch_2nd_00100.pth". But with a model I trained myself, the noise seems to fade away as the number of second-stage epochs increases.
Here is my relevant Python package version: torch 2.1.0

@yl4579
Owner

yl4579 commented Nov 3, 2023

So weird. I tried it in Colab (T4, V100 and A100) without pinning any library versions and it works perfectly fine: https://colab.research.google.com/drive/1k5OqSp8a-x-27xlaWr2kZh_9F9aBh39K
I'm really wondering what the reason behind this problem is. It doesn't seem to be just the GPU version, though.

yl4579 added the bug label Nov 20, 2023
Akito-UzukiP pushed a commit to Akito-UzukiP/StyleTTS2 that referenced this issue Jan 13, 2024
feat: drop cython monotonic_align
Akito-UzukiP pushed a commit to Akito-UzukiP/StyleTTS2 that referenced this issue Jan 13, 2024
constrain the dataset.slice arguments to prevent recursive retrieval from the repository root
@MarkH1994

MarkH1994 commented Apr 11, 2024

I have just run into the same problem after training the model on the cloud using 4x A40 GPUs. I ran inference both locally (RTX 3060 + CPU) and on the cloud machine (4x A40 + CPU). Inference on the cloud works fine without any background pitches. However, running it locally produces the background pitches (this also happens when running the models on CPU, both on the cloud and locally).

After doing some investigation, it seems that the sampler is the culprit. Since it produces random output and my tests run either on the cloud or locally, it is not possible to set a seed that makes the sampler produce the same output in both environments.

A quick way to test the bug was to take the sampler output produced on the cloud and copy it to my local PC, then use that tensor locally for the rest of the inference. After that, the background pitch was gone and the sound was produced exactly as it should be.

# Run the diffusion sampler to predict the style vector
s_pred = sampler(noise,
                 embedding=bert_dur[0].unsqueeze(0),
                 num_steps=diffusion_steps,
                 embedding_scale=embedding_scale).squeeze(0)

# Save the tensor on the cloud machine
# torch.save(s_pred, 'sampled_tensor.pt')

# Copy the tensor over (e.g. via rsync) and load it locally
# s_pred = torch.load('sampled_tensor.pt')
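Since the sampler is stochastic and the two machines can't share a seed, an element-wise comparison of the tensors isn't meaningful, but comparing simple statistics already shows whether the local sampler drifts. A rough sketch (the file name follows the snippet above, s_pred is the locally sampled tensor):

import torch

def describe(name, t):
    # Print simple statistics of a sampled style vector.
    t = t.detach().float().cpu()
    print(f'{name}: mean={t.mean().item():.4f} std={t.std().item():.4f} '
          f'min={t.min().item():.4f} max={t.max().item():.4f}')

describe('cloud', torch.load('sampled_tensor.pt'))  # tensor saved on the cloud
describe('local', s_pred)                           # tensor sampled locally as above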

At the moment, I don't have time to look into the sampler, but I think closer inspection of the sampler could lead to fixing this bug.

@MarkH1994

After more investigation, I have solved the problem for myself. The culprit for me was sigma_data in the config. In the default config it is set to 0.2. During training this value changes and is written to a new config file, which is stored in your Models folder. Doing inference with the default config file, I got the high pitch. Using the config file written during training, the pitch was gone and the sound was good. So this is the solution that works for me.
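For anyone hitting the same thing, a quick check is to diff sigma_data between the default config and the one written next to your checkpoint. A rough sketch (the paths and the config layout are just how they look in my checkout, adjust as needed):

import yaml

def sigma_data(path):
    # In my copy sigma_data sits under model_params -> diffusion -> dist;
    # adjust the keys if your config is laid out differently.
    with open(path) as f:
        cfg = yaml.safe_load(f)
    return cfg['model_params']['diffusion']['dist']['sigma_data']

print('default config :', sigma_data('Configs/config.yml'))          # repo default
print('trained config :', sigma_data('Models/LJSpeech/config.yml'))  # written during training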

Btw, @yl4579, thank you for your great work, it's really awesome that you built this and made your code open source!
