
Reproducing Alfworld Results #35

Open
ai-nikolai opened this issue Jan 15, 2024 · 9 comments

Comments

@ai-nikolai commented Jan 15, 2024

Hi,

Thanks for the great work. Unfortunately, we are unable to reproduce your results for ReAct / Reflexion on Alfworld.

For example, Env0 and Env1 are successful in your results, whereas we consistently get failures on our end. (Other envs do succeed, so it works some of the time.)

@noahshinn

@noahshinn (Owner)

Hi @ai-nikolai, what model are you using?

@ai-nikolai (Author) commented Jan 16, 2024

Thanks. The model used was gpt-3.5-turbo. @noahshinn

@ai-nikolai (Author)

@noahshinn, would it also be possible to upload the actual game logs for AlfWorld?

@noahshinn (Owner)

The model gpt-3.5-turbo is not the same model used at the time of the paper (Feb 2023); we used text-davinci-002. I'd expect that the mistakes you see result from the inferred action not matching any action in the action space. We followed ReAct's implementation for the AlfWorld results to stay consistent with their work.

To mitigate this, I would advise displaying the action space to the model to eliminate parsing errors. I can add a side implementation for this if it would be helpful. I will also dig to see if I can find the original log files from the text-davinci-002 runs.
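
For reference, a rough sketch of what such a side implementation could look like (hypothetical helper, not from this repo; assumes AlfWorld's batched TextWorld env, where infos["admissible_commands"] holds one list of valid commands per batch element):

```python
# Hypothetical helper: show the admissible actions in the prompt so the
# model can only choose valid commands, avoiding unparseable actions.
def build_prompt(task: str, observation: str, infos: dict) -> str:
    admissible = infos["admissible_commands"][0]  # batch_size == 1
    action_list = "\n".join(f"- {cmd}" for cmd in admissible)
    return (
        f"{task}\n"
        f"Observation: {observation}\n"
        f"Valid actions:\n{action_list}\n"
        "Respond with exactly one action from the list.\n"
        "Action:"
    )
```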

@ai-nikolai (Author)

Thank you @noahshinn.

Please let us know if you have any luck finding the original text-davinci-002 logs. It would be a really big help. Thank you.

@dong-river

I had the same issue with gpt-3.5-turbo; the success rate is much lower. My first-trial success rate on a subset of tasks is only around 17%, which is consistent with the numbers reported in the AgentBench paper. If you could provide the original logs, that would be really helpful.

@ai-nikolai (Author) commented Mar 8, 2024

Hi all,

A couple of comments to follow up on this:

  1. The results you report are very hard to reproduce. (The model you used, text-davinci-002, is deprecated; the two alternatives, davinci-002 and gpt-3.5-turbo, both reach an accuracy of about 0.3 on a subset, while your reported results are around 0.7.) Could you provide the traces, or tell us how to reproduce your results?
  2. Secondly, please see the attached screenshot from AgentBench. The relevant column is HH, where you can see that only GPT-4 achieves results comparable to your ReAct results, while text-davinci-002 (the model your code specifies) achieves only 16%, in line with our reproducibility experiments.
  3. Finally, the original ReAct paper implements the success condition as info["won"] == True, while you use done == True (see the sketch below). This is raised in the original AlfWorld repository as an issue: Success Condition(s): done[0] is not equal to info["won"][0] alfworld/alfworld#51
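
For concreteness, a minimal sketch of the two success checks being compared (assumes AlfWorld's batched step API with batch_size == 1; illustrative, not the repo's actual code):

```python
# One env step with AlfWorld's batched API (batch_size == 1).
obs, scores, dones, infos = env.step([action])

# Check used in this repo: the episode terminated.
success_done = dones[0]

# Check used by the original ReAct implementation: the task was actually won.
# Per alfworld/alfworld#51, these two flags can disagree.
success_won = infos["won"][0]
```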

Concrete Actions / Questions:

  1. Please clarify how you obtained the results you report. (Were the weaker models used, were stronger models used, or do you have traces?)
  2. Please clarify whether we misunderstand your results: are they actually 70+%, or closer to 30%?

@noahshinn @ysymyth @becklabs
[Screenshot: AgentBench results table (HH column)]

@ai-nikolai
Copy link
Author

@noahshinn - any updates on the above?

@CSUN1997 commented May 30, 2024

Hi @ai-nikolai,
I am also trying to reproduce the results. Performance was poor at first, but after adding a few lines to parse the action, it went back to normal:
[Screenshot of the action-parsing code]
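
Since the screenshot does not render here, a sketch of the kind of post-processing that comment likely shows (hypothetical; the function name and exact cleanup rules are guesses):

```python
import re

def parse_action(completion: str) -> str:
    """Hypothetical cleanup of a raw model completion into a bare
    AlfWorld command."""
    # Keep only the first line of the completion.
    action = completion.strip().split("\n")[0]
    # Drop a leading '>' that few-shot prompts often use to mark actions.
    action = re.sub(r"^>\s*", "", action)
    # A trailing period makes the command unmatchable in the env.
    return action.strip().rstrip(".")
```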
