Description
Dear all,
I have a question regarding the execution of single tasks on different machines running Pydra.
More specifically, I was wondering whether tasks requiring high computational power can be distributed to more powerful (HPC) machines.
Reading the API documentation and following the interactive tutorial, I understood that experimental distributed execution of single tasks can be achieved using ConcurrentFutures, SLURM, or Dask. However, I was not able to find any specific example of how to use them.
By setting the Submitter parameter plugin="dask", for example, a DaskWorker can be instantiated. However, it is not clear whether this can be used to achieve my goal.
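
For reference, this is roughly the pattern I mean (a minimal sketch following the interactive tutorial; the heavy_computation task and its input value are just illustrative placeholders, not a real workload):

```python
import pydra

@pydra.mark.task
def heavy_computation(x):
    # placeholder for a computationally expensive step
    return x ** 2

task = heavy_computation(x=4)

# plugin can be "cf" (concurrent.futures), "slurm", or "dask";
# "dask" is supposed to instantiate a DaskWorker for the task
with pydra.Submitter(plugin="dask") as sub:
    task(submitter=sub)

print(task.result())
```

What is unclear to me is whether this Dask plugin can actually send the task to a remote, more powerful machine, or whether it only parallelizes execution locally.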
Thanks in advance for your help!