
[SB] relax constraint on min number of new tokens #322


Open: wants to merge 2 commits into main

Conversation

yannicks1 (Collaborator)

[SB] relax constraint on min number of new tokens

This relaxes an old constraint that required the number of requested new tokens to be at least 3. It turns out the only real requirement is that the warmup performs at least one decode forward pass. Requesting 1 token runs prefill only during warmup, and the compiler crashes (presumably because it expects two graphs, prefill and decode). Requesting 2 or more tokens runs at least one decode during warmup and therefore also produces a decode graph, so things run smoothly for 2+ tokens.
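
A minimal sketch of the relaxed check (hypothetical function and variable names, not the actual vllm-spyre code), just to illustrate the reasoning above: the warmup only needs one decode forward pass, so two requested new tokens are enough.

def validate_warmup_new_tokens(warmup_new_tokens: list[int]) -> None:
    # Hypothetical sketch: reject warmup shapes that would skip the decode pass.
    for n in warmup_new_tokens:
        # n == 1 runs prefill only during warmup, so no decode graph
        # gets compiled and the compiler crashes later.
        if n < 2:
            raise ValueError(
                "Warmup must request at least 2 new tokens so that at "
                "least one decode forward pass is executed.")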


👋 Hi! Thank you for contributing to vLLM support on Spyre.
Just a reminder: make sure your code passes all the linting checks, otherwise your PR can't be merged. To do so, first install the linting requirements, then run format.sh and commit the changes. This can be done with uv directly:

uv sync --frozen --group lint --active --inexact

Or this can be done with pip:

uv pip compile --group lint > requirements-lint.txt
pip install -r requirements-lint.txt
bash format.sh

Now you are good to go 🚀

@yannicks1 (Collaborator, Author)

To my knowledge, we previously didn't have a test for the case num_decode_tokens=3, hence I didn't add one for num_decode_tokens=2. Do we want such a test?

yannicks1 requested a review from joerunde on July 17, 2025 at 11:59
@sducouedic (Collaborator) left a comment

LGTM, thanks for fixing this

Edit 1: check code change suggestion before merging
Edit 2: feel free to add such a test; it wouldn't hurt to have one. I guess it would belong in test_spyre_warmup_shapes.py?
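
A rough sketch of what such a test could look like, reusing the hypothetical validate_warmup_new_tokens helper sketched in the PR description above (the real test would instead exercise the actual warmup path, following the patterns already used in test_spyre_warmup_shapes.py):

import pytest

@pytest.mark.parametrize("new_tokens", [2, 3])
def test_warmup_accepts_two_or_more_new_tokens(new_tokens):
    # Two new tokens are now the minimum: one prefill plus one decode step.
    validate_warmup_new_tokens([new_tokens])

def test_warmup_rejects_single_new_token():
    # One new token would skip the decode pass entirely.
    with pytest.raises(ValueError):
        validate_warmup_new_tokens([1])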

Comment on lines +166 to +170
SamplingParams(max_tokens=max_new_tokens[i],
min_tokens=max_new_tokens[i],
temperature=0,
ignore_eos=True,
logprobs=0) for i in range(len(max_new_tokens))

Suggested change
SamplingParams(max_tokens=max_new_tokens[i],
min_tokens=max_new_tokens[i],
temperature=0,
ignore_eos=True,
logprobs=0) for i in range(len(max_new_tokens))
SamplingParams(max_tokens=max_tokens_i,
min_tokens=max_tokens_i,
temperature=0,
ignore_eos=True,
logprobs=0) for max_tokens_i in max_new_tokens

@joerunde (Collaborator) left a comment

lgtm!

This was already confusing to at least one user, who thought it meant you also had to request at least 3 tokens in each API call. But I don't think we should focus too much on this anyway, since continuous batching is almost ready.
