Can you add our recent work to your survey? #25
We appreciate your suggestion. It will be incorporated into our survey in the next version.
Hi,
I have read your insightful paper and found it to be a valuable contribution to the field.
I would like to kindly suggest adding our recent work to your survey.
📄 Paper: Ask Again, Then Fail: Large Language Models' Vacillations in Judgement
This paper shows that the judgement consistency of LLMs decreases dramatically when they are confronted with disruptions such as questioning, negation, or misleading follow-ups, even when their previous judgements were correct. It also explores several prompting methods to mitigate this issue and demonstrates their effectiveness.
Thank you for your consideration! :)