No attribute "device" when using Numpy backend #545
Comments
Yes, currently the context of a tensor depends on the backend. I did think about making it backend agnostic; I guess that is one option. We would then need to normalize the device across all backends. We could introduce e.g. tl.cpu and tl.gpu if necessary. What do you think @aarmey @cohenjer @MarieRoald @yngvem ?
I agree @yngvem - actually really excited about the cleaned-up API, it's amazing to have such a big project do such a large cleanup!
Describe the bug
When using the NumPy backend, TensorLy tensors have no "device" attribute.
Steps or Code to Reproduce
Expected behavior
It would be nice to have a "device" attribute set to "cpu" when NumPy is used as the backend. The reason is that we sometimes want to create a new tensor with, e.g., the same 'device' and 'dtype' as another tensor. Even though a tensor with the NumPy backend obviously lives on the CPU, this would avoid having to add an if/else block in the code.
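Since NumPy-backend TensorLy tensors are plain `numpy.ndarray` objects, one workaround is a small helper that falls back to `"cpu"` when the attribute is missing. This is a hedged sketch, not TensorLy API; the helper name `tensor_device` is hypothetical. (Note that NumPy 2.0 itself added an array-API-style `.device` attribute, so on recent NumPy the attribute already exists.)

```python
import numpy as np

def tensor_device(t, default="cpu"):
    # Hypothetical helper: return t.device when the backend provides it
    # (e.g. PyTorch tensors, NumPy >= 2.0 arrays); otherwise assume CPU.
    return getattr(t, "device", default)

A = np.zeros((3, 3), dtype=np.float64)
# Create a tensor with the same dtype as A; the device is implicitly CPU.
B = np.zeros_like(A)
print(tensor_device(A))
```

This keeps calling code backend-agnostic without the if/else block the report mentions.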
Note also that

```python
A = tl.tensor((3, 3), dtype=tl.float64, device="cuda")
```

after `tl.set_backend('numpy')` raises no warning/error; perhaps it should?
Versions
Windows-10-10.0.19045-SP0
Python 3.11.4 | packaged by Anaconda, Inc. | (main, Jul 5 2023, 13:38:37) [MSC v.1916 64 bit (AMD64)]
NumPy 1.26.4
SciPy 1.10.1
TensorLy 0.8.1
PyTorch 2.2.0+cu121
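The report's closing note suggests that passing `device="cuda"` under the NumPy backend should at least warn instead of being silently ignored. A minimal sketch of such a guard, using a hypothetical wrapper name (`numpy_tensor`); TensorLy's actual backend internals differ:

```python
import warnings
import numpy as np

def numpy_tensor(data, dtype=None, device=None):
    # Hypothetical wrapper for the NumPy backend: it only supports CPU,
    # so warn rather than silently dropping a GPU device request.
    if device not in (None, "cpu"):
        warnings.warn(
            f"NumPy backend ignores device={device!r}; the tensor is on CPU."
        )
    return np.array(data, dtype=dtype)
```

With such a check, the `device="cuda"` call above would emit a visible warning instead of passing silently.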