
Enable Quantize KV Cache for Mistral Model #35042

Merged
zucchini-nlp merged 1 commit into huggingface:main from Bojun-Feng:enh/enable_quant_kv_mistral
May 20, 2025
Conversation

@Bojun-Feng (Contributor) commented Dec 2, 2024

What does this PR do?

Fixes #35041

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

I followed the format of #30483 and don't think new documentation or tests are necessary for enabling KV cache quantization on a single model. Please let me know if I'm wrong.
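For context, "quantized KV cache" means the attention keys and values are stored as low-bit integers plus a scale factor and dequantized when attention reads them, trading a little precision for a much smaller cache. Below is a toy sketch of that quantize/dequantize round-trip in plain Python; it is illustrative only and is not the actual `QuantizedCache` implementation in transformers, which delegates to the quanto or HQQ backends.

```python
# Toy sketch of the round-trip a quantized KV cache performs:
# floats -> signed n-bit integers plus a scale -> floats again.
# (Illustrative only; transformers' QuantizedCache uses quanto/HQQ.)

def quantize(values, nbits=8):
    """Symmetric per-tensor quantization to signed nbits integers."""
    qmax = 2 ** (nbits - 1) - 1              # e.g. 127 for 8 bits
    scale = max(abs(v) for v in values) / qmax or 1.0
    q = [round(v / scale) for v in values]   # integer codes in [-qmax, qmax]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the integer codes."""
    return [x * scale for x in q]

# Round-trip a small slice of hypothetical key/value activations.
kv_slice = [0.12, -0.5, 0.33, 0.9]
q, scale = quantize(kv_slice)
restored = dequantize(q, scale)

# Rounding error is bounded by half a quantization step.
max_err = max(abs(a - b) for a, b in zip(kv_slice, restored))
```

With this PR merged, Mistral should accept the usual generation-time switches documented for the transformers cache utilities, along the lines of `model.generate(**inputs, cache_implementation="quantized", cache_config={"backend": "quanto", "nbits": 4})`; check the current KV cache docs for the exact argument names.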

Who can review?

@zucchini-nlp

@zucchini-nlp (Member) left a comment


Perfect, thanks!

@zucchini-nlp (Member) commented

Don't think we need to wait for a core maintainer's review for this tiny change, so maybe @Rocketknight1, and we'll merge.

@HuggingFaceDocBuilderDev commented

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@Bojun-Feng (Contributor, Author) commented

@Rocketknight1 Can we get this merged, please?

@zucchini-nlp (Member) commented

oops sorry, merging

@zucchini-nlp zucchini-nlp merged commit 9661896 into huggingface:main May 20, 2025
faaany pushed a commit to faaany/transformers that referenced this pull request May 21, 2025
xvyv99 pushed a commit to xvyv99/transformers that referenced this pull request May 21, 2025
RituAddepalli pushed a commit to RituAddepalli/transformers that referenced this pull request Dec 8, 2025