libgomp: Thread creation failed: Resource temporarily unavailable #1798

Open
loxs123 opened this issue Jun 11, 2024 · 1 comment
Labels
question Further information is requested

Comments

loxs123 commented Jun 11, 2024

What is your question?

Calling the same model repeatedly raises an error: the first call to generate succeeds, but the second or third call fails.

libgomp: Thread creation failed: Resource temporarily unavailable
libgomp: Thread creation failed: Resource temporarily unavailable
libgomp: Thread creation failed: Resource temporarily unavailable
(... the same message repeated several more times, interleaved across threads ...)
Segmentation fault (core dumped)

Code

import glob
import json
import os
import time

from funasr import AutoModel


def worker_process():
    # Load the model once and reuse it for every file.
    model = AutoModel(model="paraformer-zh", vad_model="fsmn-vad",
                      punc_model="ct-punc", spk_model="cam++")
    wav_files = glob.glob('data/*/*/*.wav')
    for wav_file in wav_files:
        print('processing', wav_file)
        output_path = f'{wav_file[:-4]}.json'
        start_time = time.time()
        ans = model.generate(input=wav_file, batch_size_s=300, hotword='')
        os.system(f'rm "{wav_file}"')  # delete the input wav once transcribed
        end_time = time.time()
        with open(output_path, 'w') as f:
            json.dump(ans, f, ensure_ascii=False)


if __name__ == '__main__':
    worker_process()
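
For context, libgomp prints "Thread creation failed: Resource temporarily unavailable" when pthread_create returns EAGAIN, i.e. the process has hit its thread or resource limit; with four sub-models (ASR, VAD, punctuation, speaker) each spinning up OpenMP/PyTorch worker pools, repeated generate calls apparently keep adding threads until that limit is hit. A minimal sketch of a possible mitigation (an assumption, not an official FunASR fix) is to cap the thread pools before the model is built:

import os

# Cap OpenMP threads before libgomp / torch are initialised in this process.
# (Assumption: a smaller pool keeps the process under its thread limit.)
os.environ.setdefault("OMP_NUM_THREADS", "4")

import torch
from funasr import AutoModel

torch.set_num_threads(4)  # limit PyTorch's intra-op CPU thread pool

model = AutoModel(model="paraformer-zh", vad_model="fsmn-vad",
                  punc_model="ct-punc", spk_model="cam++")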

What's your environment?

OS: CentOS Linux release 7.9.2009
FunASR Version: 1.0.27
ModelScope Version: 1.14.0
PyTorch Version: 2.1.2+cu118
How you installed funasr: pip
Python version: 3.8.19
GPU: V100
CUDA/cuDNN version: 11.4
CPU (lscpu output):

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                64
On-line CPU(s) list:   0-63
Thread(s) per core:    2
Core(s) per socket:    32
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
Stepping:              4
CPU MHz:               2394.374
BogoMIPS:              4788.74
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
L3 cache:              28160K
NUMA node0 CPU(s):     0-63
loxs123 added the question label on Jun 11, 2024

loxs123 commented Jun 11, 2024

Current workaround:

import glob
import json
import os
import time

from funasr import AutoModel


def worker_process():
    wav_files = glob.glob('data/*/*/*.wav')
    for wav_file in wav_files:
        print('processing', wav_file)
        output_path = f'{wav_file[:-4]}.json'
        start_time = time.time()
        # Re-create the model for every file instead of reusing one instance.
        model = AutoModel(model="paraformer-zh", vad_model="fsmn-vad",
                          punc_model="ct-punc", spk_model="cam++")
        ans = model.generate(input=wav_file, batch_size_s=300, hotword='')
        os.system(f'rm "{wav_file}"')  # delete the input wav once transcribed
        end_time = time.time()
        with open(output_path, 'w') as f:
            json.dump(ans, f, ensure_ascii=False)


if __name__ == '__main__':
    worker_process()
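
A related variant (also an assumption, not verified against FunASR internals) is to keep the per-file model but release each instance explicitly before building the next one, so its worker threads and GPU memory can be reclaimed between files:

import gc
import glob
import json

import torch
from funasr import AutoModel


def worker_process():
    for wav_file in glob.glob('data/*/*/*.wav'):
        model = AutoModel(model="paraformer-zh", vad_model="fsmn-vad",
                          punc_model="ct-punc", spk_model="cam++")
        ans = model.generate(input=wav_file, batch_size_s=300, hotword='')
        with open(f'{wav_file[:-4]}.json', 'w') as f:
            json.dump(ans, f, ensure_ascii=False)
        # Drop the instance and force collection so its threads and memory
        # can be freed before the next model is constructed (hypothetical step).
        del model
        gc.collect()
        if torch.cuda.is_available():
            torch.cuda.empty_cache()

Re-creating the model per file adds weight-loading overhead for every wav, so if capping thread counts (see the sketch after the first code block) is enough, reusing a single instance stays cheaper.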
