Set $GALAXY_MEMORY_GB and allow for lowering values by an overhead. #21735

natefoo wants to merge 2 commits into galaxyproject:dev

Conversation
bernt-matthias left a comment:
Wondering if an additive or multiplicative overhead is better (or both). In my setup I use -Xmx$JAVAMEM, where $GALAXY_MEMORY_MB*2/3 seemed to be a value that always works for me, but I never tested with additive overheads.
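The multiplicative scheme described in this comment can be sketched as follows. This is a hypothetical illustration, not code from the PR; the example allocation value is made up:

```shell
# Multiplicative overhead as described above: give the Java heap 2/3 of
# the total allocation, reserving the remaining third for non-heap use.
GALAXY_MEMORY_MB=12288                     # example allocation (12 GB)
JAVAMEM_MB=$((GALAXY_MEMORY_MB * 2 / 3))   # heap gets 2/3 of the total
echo "-Xmx${JAVAMEM_MB}m"                  # → -Xmx8192m
```

Because shell arithmetic is integer-only, the fraction floors toward zero, which is usually the safe direction for a heap limit.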
```sh
fi

if [ -n "$GALAXY_MEMORY_MB" -a -n "${GALAXY_MEMORY_MB_OVERHEAD:-}" ]; then
    GALAXY_MEMORY_MB=$(($GALAXY_MEMORY_MB - GALAXY_MEMORY_MB_OVERHEAD))
```
This will only work for SLURM jobs (for which we only set GALAXY_MEMORY_MB so far); for SGE jobs we only have GALAXY_MEMORY_MB_PER_SLOT.
I'll reorder -- I originally had this below the next section, where we set $GALAXY_MEMORY_MB from $GALAXY_MEMORY_MB_PER_SLOT, but moved it up so that $GALAXY_MEMORY_MB_PER_SLOT would take the overhead into account. I can move it back down and then just recalculate $GALAXY_MEMORY_MB_PER_SLOT accordingly.
```sh
fi
if [ -n "$GALAXY_MEMORY_GB" -a -z "$GALAXY_MEMORY_GB_PER_SLOT" ]; then
    GALAXY_MEMORY_GB_PER_SLOT=$(($GALAXY_MEMORY_GB / $GALAXY_SLOTS))
    [ "$GALAXY_MEMORY_GB_PER_SLOT" -gt 0 ] || GALAXY_MEMORY_GB_PER_SLOT=1
```
For GALAXY_SLOTS > 1 this would mean that the total memory is larger than the floor value? Why are you checking for greater than 0 anyway (also in the previous if) -- is it to accommodate rounding problems?
Yeah, the idea here is that if $GALAXY_MEMORY_GB / $GALAXY_SLOTS is less than 1, the integer division results in 0, so this ensures $GALAXY_MEMORY_GB_PER_SLOT is not set to 0, since I imagine most tools are not going to react well to that. I admit that setting it to 1 is also wrong, but I don't have a better solution for this case.
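The zero-floor case being discussed is easy to demonstrate. With a made-up allocation of 1 GB spread over 2 slots, shell integer division yields 0, and the guard bumps the per-slot value to 1:

```shell
# Demonstrating the guard: 1 GB / 2 slots floors to 0 under integer
# division, so the fallback sets the per-slot value to 1 instead.
GALAXY_MEMORY_GB=1
GALAXY_SLOTS=2
GALAXY_MEMORY_GB_PER_SLOT=$((GALAXY_MEMORY_GB / GALAXY_SLOTS))     # 0
[ "$GALAXY_MEMORY_GB_PER_SLOT" -gt 0 ] || GALAXY_MEMORY_GB_PER_SLOT=1
echo "$GALAXY_MEMORY_GB_PER_SLOT"                                  # → 1
```

Note the trade-off raised in the review: with the fallback, 2 slots each claiming 1 GB nominally exceeds the actual 1 GB allocation.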
So I did think about additive vs. multiplicative beforehand, and the issue I saw is that at large memory allocations you waste a lot of memory with a multiplicative overhead. I think it's more often the case that you want to leave 1-2 GB of overhead than you want to leave 25%, especially when you're giving a job 1 TB.
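The trade-off can be made concrete with a quick calculation. This is an illustrative sketch (the 25% multiplicative rate and 2 GB additive value are assumed for the comparison, not taken from the PR):

```shell
# Compare reserved overhead for a 1 TB allocation under each scheme.
TOTAL_MB=1048576                      # 1 TB in MB
ADDITIVE_MB=2048                      # flat 2 GB overhead
MULTIPLICATIVE_MB=$((TOTAL_MB / 4))   # 25% overhead: 262144 MB (256 GB)
echo "additive reserves ${ADDITIVE_MB} MB; multiplicative reserves ${MULTIPLICATIVE_MB} MB"
```

At this scale the multiplicative scheme withholds 256 GB from the tool, which is the waste the comment is pointing at.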
Fixes #15952.
How to test the changes?
(Select all options that apply)
$GALAXY_MEMORY_GB is set properly, as verified using metrics or by inspecting the tool_script.sh. Set $GALAXY_MEMORY_MB as an env var and the others should be set accordingly.