Add prompt_to_lora_id_mapping adjustment in fix_prompts() (#242)
This addresses the issue reported in
[issue#251](#251).
The finite lorax feature failed to execute when the number of prompts
provided was less than the full batch size. The fix applies the same
adjustment strategy to `prompt_to_lora_id_mapping` that is already
used for `prompt` in the `fix_prompts()` function located in
`QEfficient/generation/text_generation_inference.py`.
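The adjustment can be sketched as a repeat-to-fill padding step applied to both lists. This is a minimal illustrative sketch, not the actual implementation; the helper name `pad_to_batch_size` and the literal values are hypothetical, and only the idea of padding `prompt_to_lora_id_mapping` the same way as `prompt` comes from the PR description:

```python
def pad_to_batch_size(items, batch_size):
    """Repeat entries cyclically until the list covers the full batch size.

    Hypothetical helper mirroring the strategy fix_prompts() uses for
    prompts, applied here to the LoRA id mapping as well.
    """
    if items and len(items) < batch_size:
        repeats = -(-batch_size // len(items))  # ceiling division
        items = (items * repeats)[:batch_size]
    return items


# Illustrative values only: two prompts but a full batch size of four.
prompts = ["summarize this", "translate this"]
prompt_to_lora_id_mapping = [0, 1]
full_batch_size = 4

prompts = pad_to_batch_size(prompts, full_batch_size)
prompt_to_lora_id_mapping = pad_to_batch_size(
    prompt_to_lora_id_mapping, full_batch_size
)
# Both lists now have length 4, with entries repeated cyclically,
# so each padded prompt keeps its matching LoRA adapter id.
```

Padding both lists with the same strategy keeps the prompt-to-adapter pairing aligned after expansion, which is what the original failure broke.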
Signed-off-by: Jou-An Chen <[email protected]>