
error while executing the scripts for BEAT #10

Open

VishnuSai87 opened this issue Jan 31, 2024 · 6 comments

@VishnuSai87

I am getting the error below while executing the recently released scripts for BEAT. I am not able to find this file in the test folder of the BEAT dataset:

self.lmdb_env = lmdb.open(preloaded_dir, readonly=True, lock=False)
lmdb.Error: /home/LivelySpeaker_beat/datasets/BEAT/finaltest/my6d_bvh_rot_2_4_6_8_cache: No such file or directory

Can you tell me whether I have to download the whole zip folder of the BEAT dataset, or what exactly I should download from the BEAT dataset to run this code?
Thank you.
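
For reference, this error only means that the preprocessed LMDB cache directory does not exist at that path; it is not shipped with the raw BEAT download and has to be generated (see the comments below). A minimal sanity check, assuming the path from the traceback above:

```python
import os
import lmdb

# Path copied from the traceback above; adjust to your setup.
preloaded_dir = "/home/LivelySpeaker_beat/datasets/BEAT/finaltest/my6d_bvh_rot_2_4_6_8_cache"

if not os.path.isdir(preloaded_dir):
    raise FileNotFoundError(
        f"{preloaded_dir} is missing; it has to be generated from bvh_rot_cache "
        "and is not part of the raw BEAT download."
    )

# Same read-only open as in the dataset loader from the traceback.
env = lmdb.open(preloaded_dir, readonly=True, lock=False)
with env.begin() as txn:
    print("number of cached samples:", txn.stat()["entries"])
```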

@nehaksheerasagar

I am facing the same issue. Can you tell me how to extract speakers 2, 4, 6, and 8 from the cache file and create bvh_rot_2_4_6_8_cache from bvh_rot_cache?
Thank you.

@fcchit

fcchit commented Feb 5, 2024

@VishnuSai87 @nehaksheerasagar I used tmp/process_cache.py to generate my6d_bvh_rot_2_4_6_8_cache from the bvh_rot_cache produced by the BEAT preprocessing tools. I'm training the model now.
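
For anyone hitting the same step: "2_4_6_8" refers to BEAT speaker IDs 2, 4, 6 and 8. Assuming the raw BEAT recordings are named with a leading speaker ID (that naming convention is an assumption on my part; the real logic lives in the BEAT preprocessing tools and tmp/process_cache.py), one way to get that subset is to filter the raw file list before building the cache:

```python
import glob
import os

# Assumption: raw BEAT files are named "<speaker_id>_<...>.bvh", so the leading
# number identifies the speaker. Paths are placeholders for your local layout.
BEAT_DIR = "datasets/BEAT/train"
KEEP_SPEAKERS = {"2", "4", "6", "8"}

kept = [
    path for path in sorted(glob.glob(os.path.join(BEAT_DIR, "*.bvh")))
    if os.path.basename(path).split("_", 1)[0] in KEEP_SPEAKERS
]
print(f"{len(kept)} bvh files belong to speakers 2/4/6/8")
```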

@nehaksheerasagar

I ran this command: python tmp/process_cache.py
The my6d_bvh_rot_2_4_6_8_cache directory got created, but the script gave me this output:

train, 0/64
Traceback (most recent call last):
  File "tmp/process_cache.py", line 58, in <module>
    build_data_with_beat("train")
  File "tmp/process_cache.py", line 32, in build_data_with_beat
    sample = pyarrow.deserialize(sample)
  File "pyarrow/serialization.pxi", line 461, in pyarrow.lib.deserialize
  File "pyarrow/serialization.pxi", line 423, in pyarrow.lib.deserialize_from
  File "pyarrow/serialization.pxi", line 400, in pyarrow.lib.read_serialized
  File "pyarrow/error.pxi", line 87, in pyarrow.lib.check_status
pyarrow.lib.ArrowIOError: Cannot read a negative number of bytes from BufferReader.

When I run the code for train it runs fine, but when I run the code for test it says number of samples = 0.
Thank you.
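
One thing worth checking here (this is an assumption, not confirmed by the maintainers): pyarrow.deserialize is pyarrow's legacy serialization API (deprecated since pyarrow 2.0), and it can only read values that were written by pyarrow.serialize with a compatible version. "Cannot read a negative number of bytes from BufferReader" usually means the stored bytes are not in that format, for example because the source bvh_rot_cache was written with a different pyarrow version or a different serializer. A quick way to inspect one entry of the source cache:

```python
import pickle
import lmdb
import pyarrow

# Path is a placeholder for the cache produced by the BEAT preprocessing tools.
env = lmdb.open("datasets/BEAT/train/bvh_rot_cache", readonly=True, lock=False)
with env.begin() as txn:
    key, value = next(iter(txn.cursor()))
    print("key:", key, "value size (bytes):", len(value))
    try:
        sample = pyarrow.deserialize(value)   # legacy pyarrow serialization
        print("pyarrow.deserialize OK:", type(sample))
    except Exception as err:
        print("pyarrow.deserialize failed:", err)
        try:
            sample = pickle.loads(value)      # maybe the cache was pickled instead
            print("pickle.loads OK:", type(sample))
        except Exception as err2:
            print("pickle.loads failed:", err2)
```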

@VishnuSai87
Author

After training, should we place the my6d_bvh_rot_2_4_6_8_cache directory in the test folder of the dataset to run the test code? I did that and also got number of samples = 0.

@zyhbili
Owner

zyhbili commented Feb 5, 2024

I updated the data scripts in data_libs; please see the README in that subfolder for details. In short, we mostly follow the original BEAT processing procedure to generate bvh_rot_2_4_6_8_cache. We then patch it with the 6D rotation representation (rot6d) and mel spectrograms to produce my6d_bvh_rot_2_4_6_8_cache.
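
For intuition only, a rough sketch of the two extra features mentioned above. The exact layout and function names in data_libs are not reproduced here; the definitions below are the standard 6D-rotation trick and a librosa log-mel spectrogram, which I am assuming is what "rot6d" and "mel" refer to:

```python
import numpy as np
import librosa

def matrix_to_rot6d(rot_mats: np.ndarray) -> np.ndarray:
    """6D rotation representation (Zhou et al., 2019): the first two columns
    of each 3x3 rotation matrix, concatenated into a 6-vector."""
    cols = rot_mats[..., :, :2]                              # (..., 3, 2)
    return np.swapaxes(cols, -1, -2).reshape(*rot_mats.shape[:-2], 6)

def audio_to_mel(wav_path: str, sr: int = 16000, n_mels: int = 80) -> np.ndarray:
    """Log-mel spectrogram of one audio file; parameter values are illustrative."""
    y, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel)
```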

@zyhbili
Owner

zyhbili commented Feb 5, 2024

> After training, should we place the my6d_bvh_rot_2_4_6_8_cache directory in the test folder of the dataset to run the test code? I did that and also got number of samples = 0.

We use the finaltest cache for testing. We split the original long test sequences into 34-frame clips, following the same operation as in TED. Thus, you should split the test dataset with the dataloader first.
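
A minimal sketch of that splitting step, assuming non-overlapping 34-frame windows (the clip length comes from the comment above; the stride and the exact dataloader hook are assumptions):

```python
import numpy as np

def split_into_clips(seq: np.ndarray, clip_len: int = 34) -> np.ndarray:
    """Cut a long (T, D) pose/feature sequence into non-overlapping clips of
    `clip_len` frames, dropping the incomplete tail."""
    n_clips = seq.shape[0] // clip_len
    return seq[: n_clips * clip_len].reshape(n_clips, clip_len, *seq.shape[1:])

# Example: a 9000-frame sequence with 141 feature dims yields 264 test clips.
clips = split_into_clips(np.zeros((9000, 141)), clip_len=34)
print(clips.shape)  # (264, 34, 141)
```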
