Added Phi and Phi-2 support #496

Draft · wants to merge 1 commit into base: main

Conversation

@vigarov commented Jun 10, 2024

This PR aims to add support for Phi and Phi-2 models (both based on the PhiModel architecture).
I see in #32:

> Phi 1.5 support has been attempted, but they have a very unusual model definition. Until it's been standardized, I am not sure I will support it.

Any particular reason for this? Phi's architecture is indeed unlike llama & co, but nothing seems too out of the ordinary imo.

@casper-hansen (Owner) commented

Hi @vigarov, it's honestly been too long for me to remember what the issue was back when I commented on that. However, it's good to see you come along to help implement the other Phi models.

Can you please run `pip install -e .[eval]` and benchmark the perplexity before and after quantization, using the script at `examples/eval.py`?
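
For context, the script measures wikitext perplexity; a generic sliding-window check looks roughly like the sketch below. This is not `examples/eval.py` itself, just an illustration of what the numbers measure (the model id, window size, and dataset split here are assumptions):

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"  # or the path to a quantized checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)

# Concatenate the wikitext-2 test split into one long string and tokenize it.
text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Non-overlapping 2048-token windows; labels == inputs so the model's own
# shifted cross-entropy loss gives the negative log-likelihood per window.
seq_len = 2048
nlls, n_tokens = [], 0
for start in range(0, input_ids.size(1) - 1, seq_len):
    chunk = input_ids[:, start : start + seq_len].to(model.device)
    if chunk.size(1) < 2:
        break
    with torch.no_grad():
        loss = model(chunk, labels=chunk).loss
    nlls.append(loss.float() * (chunk.size(1) - 1))
    n_tokens += chunk.size(1) - 1

ppl = torch.exp(torch.stack(nlls).sum() / n_tokens)
print(f"wikitext-2 perplexity: {ppl.item():.3f}")
```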

@vigarov (Author) commented Jun 11, 2024

Didn't realize the script was there, sorry! I would've run it in the first place.
Here are some results with phi-2:

| Quantization group size | Ppl (wikitext) | Size |
| --- | --- | --- |
| Phi-2 base (unquantized) | 9.705 | ~5.6 GB |
| G2 | 3761.297 | 4.6 GB |
| G8 | 3255.700 | 2.4 GB |
| G128 | 2893.224 | 1.8 GB |
| G512 | 5451.693 | 1.7 GB |

Perplexity looks pretty bad :/ @casper-hansen any tips/pointers on what this could usually be a symptom of?
I used GEMM, and `zero_point=True` for the rest of the config.
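
For reference, a minimal sketch of the quantization call with that config, assuming the standard `AutoAWQForCausalLM` flow (the output path and the group size shown are illustrative; the exact script behind the table above isn't in the thread):

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "microsoft/phi-2"
quant_path = "phi-2-awq"  # illustrative output path

# zero_point=True and GEMM as described above; group size was swept (2/8/128/512).
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Calibrate and quantize, then save the quantized weights.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```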

@casper-hansen (Owner) commented

@vigarov this is probably what I meant by it being hard to support. To effectively quantize a model with AWQ, we need to observe how the inputs change as they pass through the layers of the model. That can be hard when a model has a very unusual definition.
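
Concretely, "observing the inputs" amounts to hooking each quantizable linear layer and recording per-channel activation statistics while calibration data flows through the model; unusual model definitions make it harder to know which modules to hook and scale together. A rough, illustrative sketch (not AutoAWQ's internals):

```python
import torch
from torch import nn

def attach_input_observer(linear: nn.Linear):
    """Record per-channel mean absolute input magnitude for one linear layer.

    This is the kind of statistic AWQ-style calibration gathers: run calibration
    data through the model and watch what each quantizable layer actually receives.
    """
    stats = {"sum_abs": torch.zeros(linear.in_features), "count": 0}

    def hook(module, inputs, output):
        x = inputs[0].detach()
        x = x.reshape(-1, x.shape[-1])           # flatten batch/sequence dims
        stats["sum_abs"] += x.abs().sum(dim=0).cpu()
        stats["count"] += x.shape[0]

    handle = linear.register_forward_hook(hook)
    return stats, handle  # mean_abs = stats["sum_abs"] / stats["count"]
```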

@vigarov (Author) commented Jun 14, 2024

Debugged the quantization and everything seems to be running normally, following the paper's / merlin's zero_point optimization.
For reference, BnB on phi-2 leads to a PPL of 11.252 with 8-bit quantization and 11.692 with 4-bit.
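
The BnB baseline can be set up with something like the sketch below (the quant type and compute dtype are assumptions; the exact settings behind the 11.252/11.692 numbers aren't stated here):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "microsoft/phi-2"

# 4-bit with fp16 compute; for the 8-bit number, use load_in_8bit=True instead.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",          # assumed quant type
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
# Perplexity can then be measured with the same wikitext loop as above.
```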

Marking this as draft until I can find where the issue comes from.

@vigarov marked this pull request as draft on June 14, 2024 at 01:46