Issues: huggingface/peft
- #2348: Incorrect Magnitude Calculation for DoRA Linear Layers (Violates DoRA Paper Methodology) (opened Jan 26, 2025 by arcteryox)
- #2339: PEFT version upgrade from 0.4.0 to 0.14.0 results in "No module named 'peft.utils.config'" error (opened Jan 21, 2025 by incchar)
- #2336: Performance metrics decreased after using PEFT (opened Jan 20, 2025 by KQDtianxiaK)
- #2329: Request to integrate Structure Sparsity-based PEFT (S2FT) (opened Jan 14, 2025 by Hanyuezhuohua)
- #2321: [Warning] Merging a LoRA module into a 4-bit linear layer may produce different generations (opened Jan 11, 2025 by steveepreston)
- #2315: Prefix Tuning dimension error with Qwen2 and missing vocab_size for PaliGemma2 (opened Jan 8, 2025 by Florian-Dreyer)
- #2310: Comparison of Different Fine-Tuning Techniques for Conversational AI [contributions-welcome, good first issue, help wanted] (opened Jan 7, 2025 by ImamaDev)
- #2307: The provided peft_type 'PROMPT_TUNING' is not compatible with the PeftMixedModel (opened Jan 7, 2025 by Radu1999)
- #2304: A question about input_ids and attention_mask after prefix tuning (opened Jan 6, 2025 by MaTengSYSU)
- #2302: Bug in get_peft_model_state_dict when using VBLoRA (opened Dec 31, 2024 by KaiyangLi1992)
- #2301: How to pass an attention_mask that has one more dimension than input_ids (opened Dec 31, 2024 by Chinesehou97)
- #2296: load_adapter error "Target module is not supported" when using Qwen2-VL (opened Dec 24, 2024 by bigmouthbabyguo-530)
- #2292: Cannot import name 'EncoderDecoderCache' from 'transformers' (opened Dec 21, 2024 by Huang-jia-xuan)
- #2281: Incompatibility of X-LoRA and MistralForSequenceClassification (opened Dec 13, 2024 by cyx96)
- #2185: X-LoRA cannot reload model from the last checkpoint via trainer.train(resume_from_checkpoint="checkpp") (opened Oct 29, 2024 by SongHanKen)
- #2132: Key mismatch when trying to load a LoRA adapter into an X-LoRA model (opened Oct 5, 2024 by p4arth)
- #1960: Inference with different LoRA adapters in the same batch does not use the correct modules_to_save classifier [contributions-welcome, wip] (opened Jul 26, 2024 by saeid93)