[feat] Add llm args to tune python gc threshold #5141


Open

wants to merge 3 commits into main from debug-python-gc

Conversation

nv-yilinf (Collaborator) commented Jun 12, 2025

Description

Our observation is that during the TRTLLM py_executor's create_responses stage, Python GC is invoked multiple times and at a certain point becomes extremely slow (~200 ms per collection). The mechanism behind this is that Python triggers a collection when it detects that the number of allocations minus the number of deallocations exceeds a threshold (700 by default). In our case, we create (allocate) thousands of responses at the end of an iteration before sending them out in bulk; during that window GC fires repeatedly because we perform tens of thousands of allocations with almost no deallocations (those are deferred until the responses leave the function scope). These collections are pointless, since none of the new response objects are dangling, yet each one costs anywhere from several to hundreds of milliseconds.
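For reference, CPython exposes these knobs and counters through the standard gc module. A quick, self-contained way to observe the default gen0 threshold of 700 and the allocation counter described above:

```python
import gc

# Default thresholds on stock CPython: a gen0 collection fires once
# allocations minus deallocations since the last collection exceed 700.
print(gc.get_threshold())  # (700, 10, 10)
print(gc.get_count())      # current (gen0, gen1, gen2) counters

# Creating thousands of container objects without freeing any keeps
# pushing the gen0 counter over the threshold, so the collector runs
# repeatedly in the middle of the loop, which is the pattern described
# in this PR.
responses = [[] for _ in range(10_000)]
print(gc.get_count())
```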

Our plan is to raise the GC threshold to a heuristic value, say 20k, which should be enough to handle 2k responses, and to expose it as a configurable llm-api argument in case we need to handle more responses in the future. To minimize the potential impact on other components (e.g., OOM issues), we initially limit this change to the create_responses stage of py_executor, as the sketch below illustrates.
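A minimal sketch of that scoping (the gen0_threshold helper and build_responses stub below are illustrative, not the PR's actual code):

```python
import gc
from contextlib import contextmanager


@contextmanager
def gen0_threshold(threshold: int):
    # Raise the gen0 threshold for the duration of the block, then
    # restore the original value even if an exception is raised.
    gen0, gen1, gen2 = gc.get_threshold()
    gc.set_threshold(threshold, gen1, gen2)
    try:
        yield
    finally:
        gc.set_threshold(gen0, gen1, gen2)


def build_responses(n: int = 2_000):
    # Stand-in for py_executor's create_responses work.
    return [[] for _ in range(n)]


# Scope the tuned threshold to the response-creation hot path only,
# so other components keep the default GC behaviour.
with gen0_threshold(20_000):
    responses = build_responses()
```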

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

@nv-yilinf requested a review from kaiyux June 12, 2025 01:54
@nv-yilinf force-pushed the debug-python-gc branch 4 times, most recently from 7d347c7 to bebeb32 June 13, 2025 03:48
@nv-yilinf marked this pull request as ready for review June 13, 2025 03:53
@nv-yilinf requested a review from a team as a code owner June 13, 2025 03:53
@@ -595,6 +601,7 @@ def worker_main(
is_llm_executor: Optional[
bool] = True, # whether it's the main executor instance
lora_config: Optional[LoraConfig] = None,
garbage_collection_gen0_threshold: Optional[int] = None,
Collaborator

How about changing the type annotation to int, since it gets 20000 as the default value?

Collaborator Author

I think the default value here is None (meaning no behavioural change for the GC), so we should keep the Optional?

@@ -384,7 +384,8 @@ def create_py_executor_instance(
draft_model_engine,
start_worker,
sampler,
lora_config: Optional[LoraConfig] = None) -> PyExecutor:
lora_config: Optional[LoraConfig] = None,
garbage_collection_gen0_threshold: Optional[int] = None) -> PyExecutor:
Collaborator

I noticed that the default value here is None, while in TorchLlmArgs the default is 20000. Should we unify them to None? That is, the default behaviour would be to not modify the interpreter's GC gen0 threshold.

Collaborator Author

I think the rationale is that we do want to change the default GC gen0 threshold, because otherwise heavy collections (up to the gen2 threshold) will be triggered frequently at high concurrency (e.g., >2k). That's why we set it to 20000 in BaseLlmArgs. In other occurrences, however, I think it's best to default to None to avoid accidentally changing GC behaviour. Does this make sense?
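A minimal sketch of the convention this thread converges on, assuming a hypothetical apply_gc_threshold helper (not the PR's actual code): None, the plumbed-through default, leaves the interpreter's GC untouched, and only an explicit value, such as the 20000 set in the llm args, changes the gen0 threshold.

```python
import gc
from typing import Optional


def apply_gc_threshold(
        garbage_collection_gen0_threshold: Optional[int] = None) -> None:
    # None (the default) means: do not touch the interpreter's GC.
    if garbage_collection_gen0_threshold is None:
        return
    # Only the gen0 threshold is tuned; gen1/gen2 keep their defaults.
    _, gen1, gen2 = gc.get_threshold()
    gc.set_threshold(garbage_collection_gen0_threshold, gen1, gen2)
```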

Signed-off-by: Yilin Fan <[email protected]>
@nv-yilinf
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #8852 [ run ] triggered by Bot
