Prevent ninja from starting new jobs when physical memory becomes low #1571

Open
adelplanque opened this issue May 12, 2019 · 8 comments
@adelplanque

I have a big project where

  • 80% of the jobs use a small amount of memory ==> these jobs should be limited by the CPU resource
  • 20% of the jobs use a large amount of memory ==> these jobs should be limited by the memory resource

So, like the -l flag, it would be nice to have a -m flag:

ninja -m 1G

==> don't start a new job if available physical memory is below 1G
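
A rough sketch of what such a check could look like on Linux (illustrative only, not an implementation proposal; the function names are made up and the /proc/meminfo probe is Linux-specific):

    // Illustrative only: how a "-m" / keep-free-memory gate could decide
    // whether to start another job on Linux. Not ninja's actual code.
    #include <cstdint>
    #include <fstream>
    #include <string>

    // Returns MemAvailable from /proc/meminfo in bytes, or -1 if unknown.
    int64_t AvailableMemoryBytes() {
      std::ifstream meminfo("/proc/meminfo");
      std::string line;
      while (std::getline(meminfo, line)) {
        // Line format: "MemAvailable:   12345678 kB"
        if (line.compare(0, 13, "MemAvailable:") == 0)
          return std::stoll(line.substr(13)) * 1024;
      }
      return -1;
    }

    // Hypothetical gate: only start another job while available physical
    // memory stays above the requested floor.
    bool CanStartNewJob(int64_t keep_free_bytes) {
      int64_t available = AvailableMemoryBytes();
      if (available < 0)
        return true;  // Memory state unknown: keep today's behaviour.
      return available > keep_free_bytes;
    }

The main loop would consult a check like this, in addition to the existing -j and -l limits, before launching the next command.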

@jhasse jhasse added the feature label May 12, 2019
@metti

metti commented May 12, 2019

See #660 and #1354 for previous attempts.

@kunaltyagi

Any chance of the PR getting merged? Or is the addition of another cmdline flag a blocking change? It'd be great to not have ninja || ninja -j1 as the build step 😄

@jhasse
Collaborator

jhasse commented Jun 23, 2019

> Any chance of the PR getting merged?

Not any time soon, sorry.

> Or is the addition of another cmdline flag a blocking change?

I would prefer something like --keep-free-memory instead of -m.

I also wonder if this could be implemented in a GNU make jobserver.

@cmorty

cmorty commented Nov 13, 2020

@jhasse: Any chance to move this forward with --keep-free-memory? I have some ugly template stuff that needs gigabytes of memory, while the rest needs far less than 100 MB. Of course it would be perfect if a requirement could be bound to the job/job pool, but not spawning new jobs below a certain limit should remove most of the pain.

@jhasse
Collaborator

jhasse commented Nov 13, 2020

@cmorty I'm not working on this.

Ideally I think this should be fixed by giving the kernel more information about what's happening: If the kernel knew that the processes that ninja spawns don't have to run in parallel, it could swap enough of them out if memory is low.

@chen3feng

chen3feng commented Jan 23, 2021

Slowing down the rate at which new jobs are started when the memory limit is near would also be a considerable help.
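
For example, something along these lines (purely illustrative names and numbers) could progressively delay the next launch as available memory approaches a configured floor, instead of cutting off abruptly:

    // Illustrative throttle: given the currently available physical memory,
    // delay the next job launch more as it approaches a configured floor.
    #include <algorithm>
    #include <chrono>
    #include <cstdint>
    #include <thread>

    void ThrottleBeforeNextJob(int64_t available_bytes, int64_t keep_free_bytes) {
      if (available_bytes > 2 * keep_free_bytes)
        return;  // Plenty of headroom: start the next job immediately.
      // pressure ramps from 0.0 (twice the floor available) to 1.0 (at the floor).
      double pressure =
          1.0 - static_cast<double>(available_bytes - keep_free_bytes) / keep_free_bytes;
      pressure = std::clamp(pressure, 0.0, 1.0);
      // Back off by up to half a second per launch; the numbers are arbitrary.
      std::this_thread::sleep_for(
          std::chrono::milliseconds(static_cast<int>(pressure * 500)));
    }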

@greg7mdp

This would be great to have! What's the holdup?

@rustyx

rustyx commented Mar 3, 2023

This is a must-have when doing LTO with clang on a modern CPU.
Running 32 c++ processes in parallel takes <4 GB RAM, but 32 ld processes at the end take 100 GB RAM, causing my system to lock up :(
And reducing parallelism to -j8 takes forever to compile.
At a minimum, we need separate arguments to specify compiler and linker parallelism independently (the two could then be balanced relative to each other).
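
For the link step specifically, ninja's existing pool feature can already cap link parallelism independently of compile parallelism (it does not watch memory, but it bounds the worst case). A minimal build.ninja sketch, with an illustrative pool name, depth, and command:

    # Illustrative build.ninja fragment: at most 4 link jobs run at once,
    # while compile jobs still use the full -j parallelism.
    pool link_pool
      depth = 4

    rule link
      command = clang++ -flto -o $out $in
      pool = link_pool

Generators such as CMake expose this via the JOB_POOLS property and CMAKE_JOB_POOL_LINK, so the depth can be set without hand-editing build.ninja.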
