
Distributed single task execution #399

Closed
@amarzullo24

Description


Dear all,
I have a question about executing single tasks on different machines running Pydra.
More specifically, I was wondering whether tasks that require high computational power can be distributed to more powerful (HPC) machines.

Reading the API documentation and following the interactive tutorial, I understood that experimental distributed execution of single tasks can be achieved using ConcurrentFutures, SLURM, or Dask. However, I was not able to find any specific example of how to use it.

By setting the Submitter parameter plugin="dask", for example, a DaskWorker can be instantiated. However, it is not clear whether this helps achieve my goal.
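
For concreteness, this is roughly the pattern I am referring to, adapted from the interactive tutorial (a minimal sketch; the toy task and the plugin choice are just placeholders for my actual use case):

```python
import pydra

# Toy task standing in for a computationally heavy step
@pydra.mark.task
def heavy_computation(x):
    return x ** 2

task = heavy_computation(x=10)

# Run the task through a Submitter; the plugin could be "cf", "slurm", or "dask"
with pydra.Submitter(plugin="dask") as sub:
    sub(task)

print(task.result())
```

Is this the intended way to dispatch a single heavy task to a more powerful (HPC) machine, or is additional configuration needed?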

Thanks in advance for your help!
