[feat] Add llm args to tune python gc threshold #5141
Conversation
Force-pushed 7d347c7 to bebeb32 (Signed-off-by: Yilin Fan <[email protected]>)
Force-pushed bebeb32 to c82d869 (Signed-off-by: Yilin Fan <[email protected]>)
Force-pushed c82d869 to f5ace1a
@@ -595,6 +601,7 @@ def worker_main(
         is_llm_executor: Optional[
             bool] = True,  # whether it's the main executor instance
         lora_config: Optional[LoraConfig] = None,
+        garbage_collection_gen0_threshold: Optional[int] = None,
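A minimal sketch of the `Optional[int]` semantics discussed below, assuming the new argument is applied with `gc.set_threshold` (the helper name `maybe_tune_gc` is hypothetical, not from the PR): `None` leaves the interpreter's GC untouched, while an explicit value replaces only the gen0 threshold.

```python
import gc

def maybe_tune_gc(garbage_collection_gen0_threshold=None):
    """Apply the gen0 threshold only when explicitly set (None = no change)."""
    if garbage_collection_gen0_threshold is not None:
        # Preserve the gen1/gen2 thresholds; replace only gen0.
        _, gen1, gen2 = gc.get_threshold()
        gc.set_threshold(garbage_collection_gen0_threshold, gen1, gen2)

maybe_tune_gc()       # None: GC behaviour untouched
maybe_tune_gc(20000)  # raise the gen0 threshold to 20000
```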
How about changing the typing to int, since it gets a default value of 20000?
I think the default value is None (meaning no behavioural change for the GC), so we should keep the Optional?
@@ -384,7 +384,8 @@ def create_py_executor_instance(
         draft_model_engine,
         start_worker,
         sampler,
-        lora_config: Optional[LoraConfig] = None) -> PyExecutor:
+        lora_config: Optional[LoraConfig] = None,
+        garbage_collection_gen0_threshold: Optional[int] = None) -> PyExecutor:
I noticed that here the default value is None, while in TorchLlmArgs the default is 20000. Should we unify it to None? That is, the default behaviour would be to not modify the system's GC gen0 threshold.
I think the rationale is that we do want to change the default GC gen0 threshold, because otherwise large collections (gen2) will be triggered frequently at high concurrency (e.g., >2k). That's why we set it to 20000 in BaseLlmArgs. In the other occurrences, however, I think it's best to default to None to avoid accidentally changing GC behaviour. Does this make sense?
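The gen0/gen2 coupling described above can be seen directly with the standard `gc` module: CPython tracks allocations minus deallocations per generation, a gen0 collection fires when the gen0 count crosses its threshold, and gen0 collections in turn drive the more expensive gen1/gen2 collections. A small illustrative snippet:

```python
import gc

# Thresholds are (gen0, gen1, gen2); on stock CPython the default is (700, 10, 10).
print(gc.get_threshold())
# Live counters: (count0, count1, count2) of pending allocations per generation.
print(gc.get_count())

# Raising only the gen0 threshold therefore also spaces out the expensive
# older-generation collections, since they are triggered via gen0 collections.
gen0, gen1, gen2 = gc.get_threshold()
gc.set_threshold(20000, gen1, gen2)
```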
Signed-off-by: Yilin Fan <[email protected]>
/bot run --disable-fail-fast
PR_Github #8852 [ run ] triggered by Bot
Description
Our observation is that during the TRTLLM py_executor’s create_responses stage, the Python GC is invoked multiple times and at a certain point becomes extremely slow (~200ms). The mechanism behind this is that Python invokes the GC when it detects that num_of_allocations – num_of_deallocations exceeds a certain threshold (default value 700). In our case, we create (allocate) thousands of responses at the end of an iteration before sending them out in bulk; during this window the GC is triggered multiple times, because we perform tens of thousands of allocations with almost no deallocations (those are deferred until the responses go out of function scope). These collections are pointless, because none of the new response objects are unreachable, yet they cost anywhere from several to hundreds of milliseconds.
Our plan is to increase the GC threshold to a heuristic value, say 20k, which should be enough to handle 2k responses, and to expose it as a configurable llm-api argument in case we need to handle more responses in the future. To minimize the potential impact on other components (e.g., OOM issues), we will initially limit this change to the create_responses stage of py_executor.
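Scoping the change to one stage, as described above, could be sketched with a context manager that raises the gen0 threshold only around the hot path and restores it afterwards (a sketch, not the PR's implementation; the name `gen0_threshold` and the list comprehension stand-in are assumptions):

```python
import gc
from contextlib import contextmanager

@contextmanager
def gen0_threshold(threshold):
    # Raise the gen0 threshold for the duration of the block, then restore it.
    gen0, gen1, gen2 = gc.get_threshold()
    gc.set_threshold(threshold, gen1, gen2)
    try:
        yield
    finally:
        gc.set_threshold(gen0, gen1, gen2)

# Stand-in for the create_responses hot path: bulk allocation with almost no
# deallocations, which would otherwise trigger repeated gen0 collections.
with gen0_threshold(20000):
    responses = [object() for _ in range(2000)]
```

Restoring the original thresholds in `finally` keeps the rest of the process at its usual GC behaviour, which matches the goal of limiting any OOM risk to the response-creation window.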
Test Coverage
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user friendly way for developers to interact with a Jenkins server.
Run
/bot [-h|--help]
to print this help message. See details below for each supported subcommand.
run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.
--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.
--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run L0 pre-merge pipeline.
--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

kill

Kill all running builds associated with pull request.

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.