Adapters v1.2.0 #811
calpt announced in Announcements
Replies: 1 comment
- Awesome work!
Blog post: https://adapterhub.ml/blog/2025/05/adapters-for-any-transformer
This version is built for Hugging Face Transformers v4.51.x.
New
Adapter Model Plugin Interface (@calpt via #738; @lenglaender via #797)
The new adapter model interface makes it easy to plug most adapter features into any new or custom Transformer model. Check out our release blog post for details. Also see https://docs.adapterhub.ml/plugin_interface.html.
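As a rough illustration only, registering a custom model with the plugin interface might look like the sketch below. The module paths are assumptions for a Qwen/Llama-style decoder; check the linked docs page for the exact attribute names your architecture needs.

```python
import adapters
from adapters import AdapterModelInterface
from transformers import AutoModelForCausalLM

# Describe where the custom architecture keeps its embeddings, layers and
# projections. The attribute paths below are assumptions for a Qwen/Llama-style
# decoder; adjust them to your model's actual module names.
plugin_interface = AdapterModelInterface(
    adapter_methods=["lora", "reft"],
    model_embeddings="embed_tokens",
    model_layers="layers",
    layer_self_attn="self_attn",
    layer_cross_attn=None,
    attn_k_proj="k_proj",
    attn_q_proj="q_proj",
    attn_v_proj="v_proj",
    attn_o_proj="o_proj",
    layer_intermediate_proj="mlp.up_proj",
    layer_output_proj="mlp.down_proj",
)

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")
adapters.init(model, interface=plugin_interface)

# After initialization, the usual Adapters API works on the custom model.
model.add_adapter("my_lora", config="lora")
model.train_adapter("my_lora")
```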
Multi-Task composition with MTL-LoRA (@FrLdy via #792)
MTL-LoRA (Yang et al., 2024) is a new adapter composition method leveraging LoRA for multi-task learning. See https://docs.adapterhub.ml/multi_task_methods.html#mtl-lora.
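For orientation, a multi-task setup along the lines of the linked docs page might look roughly as follows. Treat the `MTLLoRAConfig`, `share_parameters`, and `MultiTask` names as assumptions taken from that page rather than a verified API listing.

```python
import adapters
import adapters.composition as ac
from adapters import MTLLoRAConfig  # assumed config class, see docs page
from transformers import AutoModel

model = AutoModel.from_pretrained("roberta-base")
adapters.init(model)

# One MTL-LoRA adapter per task; the method shares a low-rank projection
# across tasks while keeping task-specific weightings.
config = MTLLoRAConfig()
for task in ["task_a", "task_b", "task_c"]:
    model.add_adapter(task, config=config)

# Tie the shared parameters across the task adapters (assumed helper) and
# activate them as a multi-task composition block.
model.share_parameters(adapter_names=["task_a", "task_b", "task_c"])
model.active_adapters = ac.MultiTask("task_a", "task_b", "task_c")
```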
VeRA - parameter-efficient LoRA variant (@julian-fong via #763)
VeRA (Kopiczko et al., 2024) is a LoRA adapter variant that requires even fewer trainable parameters. See https://docs.adapterhub.ml/methods.html#vera.
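Adding a VeRA adapter should follow the usual config-plus-add_adapter pattern. A minimal sketch, assuming the `VeraConfig` class name from the linked docs and default hyperparameters:

```python
import adapters
from adapters import VeraConfig  # assumed config class, see docs page
from transformers import AutoModel

model = AutoModel.from_pretrained("roberta-base")
adapters.init(model)

# VeRA keeps a pair of frozen, shared random low-rank matrices and trains only
# small per-layer scaling vectors, cutting the trainable parameter count well
# below a comparable LoRA setup.
model.add_adapter("vera_adapter", config=VeraConfig())
model.train_adapter("vera_adapter")
```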
New Models (via new interface)
A couple of new models are supported out-of-the-box via the new adapter model plugin interface:
More
- New `init_weights_seed` adapter config attribute to initialize adapters with identical weights (@TimoImhof via #786); see the sketch after this list
- Wrap `ForwardContext` around the full model forward pass (@calpt via #789)
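A minimal sketch of the seed option, assuming a standard LoRA config (the attribute should also be available on other adapter config classes):

```python
import adapters
from adapters import LoRAConfig
from transformers import AutoModel

model = AutoModel.from_pretrained("roberta-base")
adapters.init(model)

# With a fixed init_weights_seed, both adapters start from identical weights,
# which helps with controlled comparisons across runs or setups.
config = LoRAConfig(init_weights_seed=42)
model.add_adapter("adapter_a", config=config)
model.add_adapter("adapter_b", config=config)
```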
Changed