The detail about manual prompts #10
Comments
Dear @sinan106,
The only difference between WARP8 and WARPinit is that the original WARP8 prompt parameters are initialized randomly (or with a fixed vector), while the WARPinit parameters are initialized with the actual word embeddings of the manual prompt tokens. Thus, WARPinit is as good as a zero-shot classifier before any training steps.
(This paragraph is only about WARPinit.) I hope this was helpful!
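To make the distinction concrete, here is a minimal toy sketch of the two initialization schemes. The names, the tiny vocabulary, and the random embedding table below are all hypothetical illustrations; in the actual paper the prompt vectors would come from the pretrained model's (e.g. RoBERTa's) word-embedding matrix.

```python
import random

# Toy vocabulary and embedding table (hypothetical; WARP would use the
# pretrained language model's real embedding matrix instead).
vocab = {"[CLS]": 0, "[MASK]": 1, "[SEP]": 2, "?": 3, ".": 4, "!": 5}
dim = 4
random.seed(0)
embedding_table = [[random.random() for _ in range(dim)] for _ in vocab]

def warp_random_init(n_prompt_tokens, dim):
    """WARP8-style: prompt parameters start from random vectors."""
    return [[random.random() for _ in range(dim)] for _ in range(n_prompt_tokens)]

def warp_init(prompt_tokens):
    """WARPinit-style: prompt parameters start from the word embeddings
    of a manual prompt, so before training the model behaves like a
    zero-shot classifier using that prompt."""
    return [list(embedding_table[vocab[t]]) for t in prompt_tokens]

manual_prompt = ["[CLS]", "?", "[MASK]", ".", "!", "[SEP]"]
init_params = warp_init(manual_prompt)
# Before any training step, the parameters reproduce the manual prompt exactly.
assert init_params[0] == embedding_table[vocab["[CLS]"]]
```

Both variants then train the prompt vectors identically; only the starting point differs.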
Thanks a lot for your answer! In conclusion:
I hope my understanding is correct, thanks again for your answer!
Hi,
Thank you for your work and for releasing the code!
After reading your paper, I am confused about the manual prompts.
"""
In addition to the regular models where we initialize with [MASK] tokens,
we performed a run on the GLUE datasets with the same prompt
[CLS] "S1"? [MASK]. "S2"! [SEP] for all the tasks
"""
For the manual prompts, I want to know where the prompts are inserted. Wouldn't the original WARP also have the [CLS], [SEP], and [MASK] special tokens? What is the difference between WARPinit and WARP8 in the insertion position of the prompts?
I don't know much about this field. Thank you very much for answering my question!
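For concreteness, the template quoted above can be filled in for a sentence pair like this. The helper below is purely illustrative (not part of the released code), and in practice the tokenizer would handle [CLS]/[SEP] itself:

```python
def build_manual_prompt(s1: str, s2: str) -> str:
    """Fill the manual prompt template quoted from the paper:
    [CLS] "S1"? [MASK]. "S2"! [SEP]
    (Illustrative only; a real tokenizer adds [CLS]/[SEP] automatically.)
    """
    return f'[CLS] "{s1}"? [MASK]. "{s2}"! [SEP]'

print(build_manual_prompt("The movie was great", "I enjoyed it"))
# prints: [CLS] "The movie was great"? [MASK]. "I enjoyed it"! [SEP]
```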