
Can you add our recent work to your survey? #25

Open
grayground opened this issue Nov 4, 2023 · 1 comment

Comments

@grayground

Hi,

I have read your insightful paper and found it to be a valuable contribution to the field.

I would like to kindly suggest adding our recent work to your survey.

📄 Paper: Ask Again, Then Fail: Large Language Models' Vacillations in Judgement

This paper shows that the judgement consistency of LLMs decreases dramatically when they are confronted with disruptions such as questioning, negation, or misleading prompts, even when their previous judgements were correct. It also explores several prompting methods to mitigate this issue and demonstrates their effectiveness.

Thank you for your consideration! :)

@YuanWu3
Collaborator

YuanWu3 commented Dec 5, 2023

We appreciate your suggestion. We will incorporate your work into the next version of our survey.
