I have successfully compiled index_max, and the module is installed in 'xx/xx/python3.6/site-packages/index_max-0.0.0-py3.6-linux-x86_64.egg/index_max.cpython-36m-x86_64-linux-gnu.so'.
However, when I try to import index_max, it raises an error:
I also have another question. I am not familiar with CUDA, but index_max seems to be used to perform max-pooling along the feature dimension over all points within one SOM cluster. Do I understand that correctly? How much speedup does a CUDA implementation give compared with a Python implementation?
Please make sure that the CUDA environment is configured properly, in particular the relevant environment variables. For example, you may consider adding the CUDA paths to ~/.bashrc.
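The exact lines depend on where CUDA is installed on your machine; a typical sketch for a CUDA 10.0 installation under /usr/local (adjust the paths to your system) looks like:

```bash
# add the CUDA toolkit binaries and libraries to the environment
export PATH=/usr/local/cuda-10.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:$LD_LIBRARY_PATH
```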
Run `nvcc -V` to confirm that CUDA is working properly.
After making sure that CUDA and PyTorch are installed properly, delete the dist and build folders and index_max.egg-info, then run python setup.py install to build and install the index_max module again.
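For example, from the directory containing setup.py (folder names taken from the step above):

```bash
rm -rf build dist index_max.egg-info
python setup.py install
```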
Finally, please make sure that the index_max source folder itself is not on the Python path; otherwise there may be linking problems, because Python can pick up the source folder instead of the installed extension module.
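A quick way to check which copy actually gets imported (run this from a directory other than the repository root):

```python
import index_max
print(index_max.__file__)  # should point into site-packages, not the source folder
```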
Regarding speed: an alternative implementation is to run a for loop in PyTorch, computing the max-pooling for each node. When the number of nodes is small, e.g., fewer than 10, this is fine. However, in most cases the number of nodes is 64 or 128, so a Python for loop would be quite slow.
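For reference, a minimal pure-PyTorch sketch of that for-loop alternative; the tensor shapes are assumptions for illustration only (per-point features of shape (B, C, N), a per-point node assignment of shape (B, N), and M SOM nodes):

```python
import torch

def node_max_pool_loop(features, node_idx, num_nodes):
    """Max-pool point features over the points assigned to each SOM node, using a Python loop.

    features:  (B, C, N) per-point features (hypothetical shapes for illustration)
    node_idx:  (B, N) long tensor giving the SOM node each point belongs to
    num_nodes: M, the number of SOM nodes
    returns:   (B, C, M) per-node max-pooled features
    """
    B, C, N = features.shape
    pooled = features.new_full((B, C, num_nodes), float('-inf'))
    for m in range(num_nodes):  # one masking + max pass per node; slow when M is 64 or 128
        mask = (node_idx == m).unsqueeze(1)                   # (B, 1, N), True for points in node m
        masked = features.masked_fill(~mask, float('-inf'))   # hide points not assigned to node m
        pooled[:, :, m] = masked.max(dim=2).values            # max over the points in node m
    return pooled
```

Roughly speaking, a dedicated CUDA kernel can replace these M sequential masking-and-max passes with a single kernel launch, which is why the speedup over the loop grows with the number of nodes.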
My environment:
CUDA 10.0
PyTorch 1.2/1.4
Python 3.6