
Cannot import index_max module #15

Open · zeal-up opened this issue Mar 9, 2020 · 2 comments

@zeal-up commented Mar 9, 2020

My environment:
CUDA 10.0
PyTorch 1.2/1.4
Python 3.6

I have successfully compiled index_max, and the module is installed at 'xx/xx/python3.6/site-packages/index_max-0.0.0-py3.6-linux-x86_64.egg/index_max.cpython-36m-x86_64-linux-gnu.so'.
However, when I try to import index_max, it raises an error:

python3.6/site-packages/index_max-0.0.0-py3.6-linux-x86_64.egg/index_max.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZTIN3c1021AutogradMetaInterfaceE

I also have another question. I am not familiar with CUDA, but index_max seems to be used to perform max-pooling along the feature dimension over all points belonging to one SOM cluster. Do I understand correctly? How much does a CUDA implementation speed this up compared with a Python implementation?

@lijx10 (Owner) commented Mar 11, 2020

Please make sure that the CUDA environment is configured properly, in particular the environment variables. For example, you may consider adding these lines to ~/.bashrc:

# cuda-10.1
export CUDA_HOME=/usr/local/cuda-10.1
export PATH=/usr/local/cuda-10.1/bin:${PATH}
export LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64:${LD_LIBRARY_PATH}

Run nvcc -V to confirm that CUDA is working properly.

After making sure that CUDA and PyTorch are installed properly, delete the dist and build folders and the index_max.egg-info file, then run python setup.py install to install the index_max module again.

Finally, please make sure that the index_max source folder is not on the Python path, since importing from the source tree instead of the installed egg can cause linking problems.
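
A quick way to check which copy of the module is actually being imported (a minimal sketch; none of these print statements are part of the index_max API):

import torch        # import torch first so its shared libraries are loaded
import index_max    # fails with 'undefined symbol' on a PyTorch version mismatch

print(torch.__version__, torch.version.cuda)  # runtime PyTorch and its CUDA version
print(index_max.__file__)  # should point into site-packages, not the source tree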

@lijx10 (Owner) commented Mar 11, 2020

Regarding speed: an alternative implementation is to run a for loop in PyTorch, computing a max-pooling for each node, as sketched below. When the number of nodes is small, e.g., fewer than 10, this is fine. However, in most cases the node number is something like 64 or 128, which makes the Python for loop quite slow.
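
For reference, here is a minimal pure-PyTorch sketch of that loop-based alternative (the function name and tensor shapes are assumptions for illustration, not the actual index_max interface):

import torch

def node_max_pool(features, node_idx, num_nodes):
    # features:  (B, C, N) per-point features
    # node_idx:  (B, N) long tensor assigning each point to a SOM node
    # num_nodes: K, the number of SOM nodes
    # returns:   (B, C, K) per-node max-pooled features
    B, C, N = features.shape
    out = features.new_full((B, C, num_nodes), float('-inf'))
    for k in range(num_nodes):  # Python-level loop: slow when K is 64 or 128
        mask = (node_idx == k).unsqueeze(1)                  # (B, 1, N)
        masked = features.masked_fill(~mask, float('-inf'))  # hide other nodes' points
        out[:, :, k] = masked.max(dim=2).values              # max over points of node k
    return out

A fused CUDA kernel avoids launching K separate kernels from Python by parallelizing over the batch, channel, and point dimensions at once, which is where the speedup comes from. Note that in this sketch, nodes with no assigned points are left at -inf.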
