Snakemake fails to parse sbatch output to get slurm_jobid #146
This is the problem when providing a piece of software which should ease execution on HPC clusters: the never-ending creativity of admins who tinker with the defaults. Usually, sbatch prints nothing more than the job id. Your patch is not production-ready. Do you want to submit a pull request, or should I go forward? Please note: I am terribly busy these days, and it is not likely that I will get to this before October. Also: in-job submission of Snakemake jobs is currently not well supported; I am working on improvements.
@kennethabarr Could you share the output you are seeing? I think admins should not change the behavior of these kinds of things. I think you can create a workaround without changing the executor code, by defining a wrapper for the sbatch command in your bashrc that keeps only the stdout. Would this work for you?

```bash
function sbatch() {
    # Call the real sbatch (bypassing this function) and discard stderr.
    command sbatch "$@" 2> /dev/null
}
```

On the other hand, do we really need to merge stdout and stderr in the executor?
I still receive the same "sbatch: ..." lines, but they are part of stderr, not stdout. My understanding is that the reason to merge stdout and stderr is to propagate errors encountered in the sbatch script. I think some version of your solution is a far better hack than what I have done, and I will implement it right away. I don't think I'm experienced enough with Python to implement a production-ready fix in the code that can handle our creative cluster admins.
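For illustration, here is a minimal sketch of the alternative raised above: capturing the two streams separately, so errors can still be propagated from stderr while the job id is parsed from stdout alone. This is a hypothetical simplification, not the actual executor code, and the job script name is a placeholder.

```python
import subprocess

# Sketch: capture stdout and stderr separately. Verbose "sbatch: ..."
# messages (stderr) then never pollute the job-id line (stdout), while
# stderr remains available for error reporting.
proc = subprocess.run(
    ["sbatch", "job.sh"],  # "job.sh" is a placeholder job script
    capture_output=True,
    text=True,
)
job_id_line = proc.stdout.strip()  # should contain only the job id
diagnostics = proc.stderr          # "sbatch: ..." chatter, if any
```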
On my system, Slurm reports submission info even at the lowest level of verbosity. This leads to problems parsing the job id, and it cannot be resolved with --quiet because then the job id is not returned either. The lines beginning with "sbatch: " are printed to stderr, while the job id is printed to stdout, but in the code these two streams are merged at `__init__.py:207`.
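As a rough illustration of the failure mode (a hypothetical simplification, not the executor's actual code), merging the two streams at submission time looks something like this:

```python
import subprocess

# With stderr redirected into stdout, any "sbatch: ..." lines emitted by
# the site configuration land in the same string as the job id, and
# naive parsing of `out` breaks.
proc = subprocess.run(
    ["sbatch", "job.sh"],      # "job.sh" is a placeholder job script
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,  # merge stderr into stdout
    text=True,
)
out = proc.stdout  # may contain "sbatch: ..." lines before the job id
```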
It is not an elegant solution, but I have modified my `__init__.py` by adding a line that subsets the variable `out` to only its last line. This restores the proper behavior, and my pipelines run normally.
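A minimal sketch of that hack, assuming `out` holds the merged sbatch output as a string:

```python
# Keep only the last line of the merged output: the job id is printed
# last, after any "sbatch: ..." lines from the verbose site config.
out = out.strip().splitlines()[-1]
```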
The versions I am using are as follows:
snakemake version 8.18.2
slurm executor version 0.10.0
slurm version 20.11.8