Hello,
I am working on a project involving tensor decompositions and came across this library while surveying existing software tools. I am very interested in using TensorLy in my research, and I have a few questions:
I was looking at the tensor formats supported by the library, and the one I need is the Hierarchical Tucker (H-Tucker) format. I see that standard Tucker is supported, but it does not look like H-Tucker is. Is there interest in supporting this format? If new features are developed "as needed," I would be happy to contribute to this effort.
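For context, here is a minimal sketch of the standard Tucker format I am comparing against, written in plain NumPy via truncated HOSVD. This is purely illustrative and is not TensorLy's implementation (TensorLy exposes Tucker through `tensorly.decomposition.tucker`); function names here are my own.

```python
import numpy as np

def mode_unfold(t, mode):
    """Matricize tensor t along the given mode (mode-n unfolding)."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def hosvd_tucker(t, ranks):
    """Sketch of Tucker decomposition via truncated HOSVD.

    Returns a core tensor of shape `ranks` and one orthonormal
    factor matrix per mode.
    """
    # Factor matrices: leading left singular vectors of each unfolding.
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(mode_unfold(t, mode), full_matrices=False)
        factors.append(u[:, :r])
    # Core tensor: project t onto each factor subspace, mode by mode.
    core = t
    for mode, u in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode
        )
    return core, factors

def tucker_to_tensor(core, factors):
    """Reconstruct the full tensor from a Tucker (core, factors) pair."""
    t = core
    for mode, u in enumerate(factors):
        t = np.moveaxis(
            np.tensordot(u, np.moveaxis(t, mode, 0), axes=1), 0, mode
        )
    return t
```

In H-Tucker, by contrast, the modes are organized in a binary dimension tree and the core is replaced by small transfer tensors at the internal nodes, which is what gives the format its better scaling in the number of modes.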
Does TensorLy support multi-GPU or distributed-memory parallelism? Are these features backend-dependent, or is there a recommended tool for this, e.g., mpi4py or Dask?
Thank you in advance for taking the time to review my questions and for any clarification you can provide.