evaluator column for future_lapply parallelism? #540
Comments
See #169, #259, and futureverse/future#172. I will reopen #169 if/when future supports this functionality.
Interesting that there's no general way to tell whether a future is both "unresolved" and hasn't somehow failed. Then again, it might be exceedingly difficult to tell in general whether a future has failed. A minimal API would have to distinguish states like "unresolved and running", "unresolved and failed", "unresolved and not started yet", and "unresolved and I can't tell what's actually happening". Another issue is that many workflows taking this approach would end up with some types of workers existing but sitting idle for long periods of time.
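For reference, a minimal sketch of what the current future API exposes (assuming a multisession plan; the computation is made up): resolved() reports only TRUE/FALSE, and an error inside a future only surfaces when value() is called, so the finer-grained states above cannot be told apart today.

```r
library(future)
plan(multisession)

# Launch an asynchronous future.
f <- future({
  Sys.sleep(5)
  42
})

# resolved() returns a single TRUE/FALSE. While it is FALSE, there is no
# obvious way to tell "still running" from "not started yet" from
# "the worker died".
resolved(f)

# A failure only becomes visible once we block on the result.
v <- tryCatch(value(f), error = identity)
```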
Yeah, it has been hard to work around this limitation.
True. However, for
Confused by this:
The motivation here isn't load balancing - it's about matching memory / CPU / walltime resources to targets without waste. clustermq doesn't do its own resource management.

Or do you mean you wouldn't be able to get clustermq to allocate properly?
Sorry, my point was that I am not sure we will be able to assign targets to specific workers.
Right now, a worker just reports it's ready and is then assigned the next target. In principle, it could signal ready with the resources it has available and hence only get fitting targets assigned. However, for now, all workers are homogeneous, so there's no real upside to handling this just yet. It's a possible extension. Please file an issue if you've got a good use case.
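To make that extension concrete, here is a rough, hypothetical sketch of the matching step such a scheduler could perform. None of the function or field names below are clustermq API; they only illustrate giving a ready worker the first pending target whose requirements fit the resources it reports.

```r
# Hypothetical resource-aware assignment: pick the next pending target
# whose memory and walltime requirements fit within what a ready worker
# reports. Purely illustrative, not clustermq code.
assign_fitting_target <- function(worker_resources, pending_targets) {
  fits <- vapply(
    pending_targets,
    function(req) {
      req$memory   <= worker_resources$memory &&
        req$walltime <= worker_resources$walltime
    },
    logical(1)
  )
  if (!any(fits)) return(NULL)  # nothing this worker can run right now
  names(pending_targets)[which(fits)[1]]
}

pending <- list(
  small_fit = list(memory = 2,  walltime = 1),
  big_mcmc  = list(memory = 64, walltime = 24)
)
assign_fitting_target(list(memory = 8, walltime = 4), pending)
#> [1] "small_fit"
```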
Would be great to be able to specify types of workers for future_lapply parallelism. I can imagine having 9 types of workers with low, medium, and high memory and wall time.
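As a rough illustration of the request (not actual drake API), one could imagine tagging each target in the plan with the class of worker it needs. The evaluator column, the worker labels, and the commands below are all hypothetical.

```r
library(drake)

# Hypothetical: tag each target with the class of worker it needs.
# drake does not support an "evaluator" column as written, and the
# commands (summarize_data(), fit_big_model()) are placeholders.
plan <- drake_plan(
  small_summary = summarize_data(raw_data),
  big_model     = fit_big_model(raw_data)
)
plan$evaluator <- c("low_mem_short", "high_mem_long")

plan
# Each label could then map to a future plan with matching resources,
# e.g. three memory tiers x three walltime tiers = the 9 worker types
# mentioned above.
```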