Proposing To Add Naming Convention For GGUF files in documents #820

Open
mofosyne opened this issue May 13, 2024 · 3 comments

Comments

@mofosyne
Contributor

This was merged into llama.cpp in ggerganov/llama.cpp#7165, which also includes changes to how default filenames are generated.

However, I wasn't too sure where to place the proposed "GGUF Naming Convention". I think it should go in https://github.com/ggerganov/ggml/blob/master/docs/gguf.md, but I'm happy to hear otherwise.

In short, the naming convention I want to document is `<Model>-<Version>-<ExpertsCount>x<Parameters>-<Quantization>.gguf` (details of the proposal are in ggerganov/llama.cpp#4858).
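
Below is a rough sketch of how a filename following that convention could be assembled and parsed. The helper name, the regex, and the example values are mine for illustration and are not part of llama.cpp.

```python
import re

# Assemble a filename following
# <Model>-<Version>-<ExpertsCount>x<Parameters>-<Quantization>.gguf
def build_gguf_filename(model: str, version: str, experts: int,
                        parameters: str, quantization: str) -> str:
    return f"{model}-{version}-{experts}x{parameters}-{quantization}.gguf"

# Parse the same convention back out. Assumes the model name itself does not
# contain a "-v<digits>" segment that could be mistaken for the version.
GGUF_NAME_RE = re.compile(
    r"^(?P<model>.+)-(?P<version>v[\d.]+)-"
    r"(?P<experts>\d+)x(?P<parameters>[\d.]+[KMB])-"
    r"(?P<quantization>\w+)\.gguf$"
)

print(build_gguf_filename("Mixtral", "v0.1", 8, "7B", "Q4_K_M"))
# Mixtral-v0.1-8x7B-Q4_K_M.gguf

match = GGUF_NAME_RE.match("Mixtral-v0.1-8x7B-Q4_K_M.gguf")
if match:
    print(match.groupdict())
# {'model': 'Mixtral', 'version': 'v0.1', 'experts': '8',
#  'parameters': '7B', 'quantization': 'Q4_K_M'}
```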

@mofosyne
Contributor Author

On a side note... does it also make sense to standardize the conversion between the internal KV form and JSON (and back)? It doesn't seem to be an issue for Hugging Face, but it's something to consider.
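
For a few simple keys a JSON round-trip is already trivial, as in the sketch below; the open question is really the typed values (unsigned integer widths, arrays, binary data) that plain JSON cannot express losslessly. The keys shown follow GGUF's `general.*` naming, but the JSON shape is only an illustration, not a defined mapping.

```python
import json

# Illustrative only: a possible JSON view of a few GGUF KV metadata entries.
# The exact KV <-> JSON mapping is the open question; this is not a defined format.
kv_example = {
    "general.architecture": "llama",
    "general.name": "Mixtral",
    "general.quantization_version": 2,
}

json_text = json.dumps(kv_example, indent=2)  # KV -> JSON
roundtrip = json.loads(json_text)             # JSON -> KV
assert roundtrip == kv_example                # lossless for these simple value types
```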

@ThiloteE

ThiloteE commented May 28, 2024

I am not sure whether `<BaseModel>-<Version>-<Model>-<Version>-<ExpertsCount>x<Parameters>-<Quantization>.gguf` or `<Model>-<Version>-<BaseModel>-<Version>-<ExpertsCount>x<Parameters>-<Quantization>.gguf`, or something like it, wouldn't be better. I am having a really hard time finding finetunes of mistral-7b-v0.3 on the Hugging Face Open LLM Leaderboard, because many model authors do not seem to adhere to a standardized naming scheme and fail to mention the name of the base model. Both mistral-7b-v0.1 and v0.3 have the same parameter count, so differentiating by that property doesn't help either. The leaderboard is simply too cluttered, the search feature is insufficient, and model authors fail to provide the relevant info.
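
To make the difference concrete, here is what one and the same (made-up) finetune of mistral-7b-v0.3 might be called under the current convention and under the two base-model-first orderings above. The finetune name, versions, and quantization are invented for illustration, and the experts field is written as `1x` for a dense model, which may or may not match the final convention.

```python
# Invented example filenames for a hypothetical "Hermes" finetune of mistral-7b-v0.3.

# Current convention: the base model is not recoverable from the filename.
current = "Hermes-v1.0-1x7B-Q4_K_M.gguf"

# <BaseModel>-<Version>-<Model>-<Version>-<ExpertsCount>x<Parameters>-<Quantization>.gguf
base_model_first = "Mistral-v0.3-Hermes-v1.0-1x7B-Q4_K_M.gguf"

# <Model>-<Version>-<BaseModel>-<Version>-<ExpertsCount>x<Parameters>-<Quantization>.gguf
finetune_first = "Hermes-v1.0-Mistral-v0.3-1x7B-Q4_K_M.gguf"
```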

Things need to change.

Edit: I also created a discussion at https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard/discussions/761

@mofosyne
Contributor Author

mofosyne commented May 28, 2024

@ThiloteE yeah, I'm currently doing some extra refactoring in ggerganov/llama.cpp#7499 and have some extra thoughts, so maybe have a look at my last comment there and add your 5c. (At the moment, for that refactoring, I'm trying to figure out how to auto-estimate the model size.)

If we need to, we can adjust the naming convention to encourage best practice, but we will need some guidance from the wider Hugging Face community on what they expect. E.g. do we need a variant field (e.g. -instruct), and should the version code actually go at the back of the filename? (See the sketch below this comment.)

(Also, do let us know if the internal KV store is missing any fields that would be handy for keeping track of model specs and naming.)
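
For illustration only, here is roughly what a variant field and a trailing version code could look like; this is a hypothetical layout for discussion, not an adopted convention.

```python
# Hypothetical filename layouts for discussion; neither is an adopted convention.

# Today a variant such as "Instruct" tends to be folded into the model name:
variant_in_model_name = "Mistral-Instruct-v0.3-7B-Q4_K_M.gguf"

# With an explicit variant field and the version code moved to the back:
explicit_variant = "Mistral-7B-Instruct-Q4_K_M-v0.3.gguf"
```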
