Revisit rotation conversion to csv. It's way too slow! #6

Open
OlafHaag opened this issue Jan 10, 2020 · 1 comment
Assignees: OlafHaag
Labels: enhancement (New feature or request)

Comments

@OlafHaag (Owner)

The conversion walks the BVH tree for every frame, but the file already contains all the rotation values; they only need to be filtered out of the motion data.

The values may still need conversion because of the rotation order or something else; that should be checked. Either way, there's definitely much room for performance improvement.
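A minimal sketch of that idea, assuming an already parsed MOTION block; the names channel_names, frames, and rotations_to_csv are hypothetical, not this project's actual API. Selecting the rotation columns is then a single pass over the frame table:

import csv

def rotations_to_csv(channel_names, frames, out_path):
    """Write only the rotation channels of a parsed BVH MOTION block to CSV.

    channel_names: flat list such as ['Hips_Xposition', ..., 'Hips_Zrotation'].
    frames: one list of floats per frame, aligned with channel_names.
    """
    # BVH rotation channels are named Xrotation/Yrotation/Zrotation.
    rot_cols = [i for i, name in enumerate(channel_names) if name.endswith('rotation')]
    with open(out_path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow([channel_names[i] for i in rot_cols])
        for frame in frames:
            writer.writerow([frame[i] for i in rot_cols])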

@OlafHaag OlafHaag added the enhancement New feature or request label Jan 10, 2020
@OlafHaag OlafHaag self-assigned this Jan 10, 2020
@koya-ken

After profiling with cProfile, I found that the majority of the time was spent in the search function of bvh.py.
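For reference, one way to reproduce this kind of measurement; the workload function below is a stand-in for the actual conversion call, not the toolbox's real entry point:

import cProfile
import pstats

def workload():
    # Stand-in for the slow conversion call being profiled.
    sum(i * i for i in range(1_000_000))

with cProfile.Profile() as profiler:
    workload()

# Show the ten costliest entries by cumulative time.
pstats.Stats(profiler).sort_stats(pstats.SortKey.CUMULATIVE).print_stats(10)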

Watching the calls during execution, I noticed that the argument patterns were limited to a few variations of joint definitions, so most calls were repeating exactly the same work.

def search(self, *items):
    found_nodes = []

    def check_children(node):
        # A node matches when its value starts with all the requested items.
        if len(node.value) >= len(items):
            failed = False
            for index, item in enumerate(items):
                if node.value[index] != item:
                    failed = True
                    break
            if not failed:
                found_nodes.append(node)
        # Recurse into every child, match or not.
        for child in node:
            check_children(child)

    check_children(self.root)
    return found_nodes

As a result, simply caching this function significantly improves performance. The fix only requires adding an import and a decorator; please consider applying it.

import functools  # added at the top of bvh.py

    @functools.lru_cache(maxsize=None)  # memoize results per (self, *items)
    def search(self, *items):
        found_nodes = []
        # ... rest of the function unchanged ...

I wanted to use functools.cache, but since it requires Python 3.9 or higher, I used lru_cache(maxsize=None) instead.

https://docs.python.org/3.13/library/functools.html
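A self-contained toy showing the effect of the same pattern; nothing here is bvh.py's actual code, the counter just makes the cache hits visible:

import functools

class Tree:
    def __init__(self):
        self.walks = 0

    @functools.lru_cache(maxsize=None)
    def search(self, *items):
        self.walks += 1  # runs only on a cache miss
        return items     # stand-in for the real traversal result

t = Tree()
for _ in range(1000):
    t.search('JOINT', 'Hips')
print(t.walks)  # -> 1: the tree is walked once, 999 calls hit the cache

Two caveats worth noting: the cache keys on self, so it keeps every instance alive for as long as the class exists, and all callers receive the same cached list object, which therefore must not be mutated. Neither should matter for a one-shot bvh2csv run.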

With this modification, performance improves as shown below when running the bvh2csv command on a BVH file I have at hand. Both runs were performed on the same PC.

Before:

bvh2csv  test.BVH
Converting 1 file...
Processing took: 42.38 seconds

After:

bvh2csv test.BVH
Converting 1 file...
Processing took: 1.43 seconds

The BVH file used for the test has 2130 frames; the speedup here is roughly 30×, and the effect becomes more pronounced as the number of frames increases.

Due to certain circumstances, I cannot upload the BVH file, but you should be able to verify the behavior quickly with files you have on hand.

(I am using ChatGPT for the English translation.)
