
How to use the rehearsal buffer? #6

Open
zyuh opened this issue Feb 1, 2023 · 1 comment

Comments


zyuh commented Feb 1, 2023

Hi, amazing work!

I noticed in the paper that l2p can use a rehearsal buffer to further improve performance, but the repository doesn't seem to include an implementation of this part. I have a few questions about it:
(1) Is the sampling random or herding-based?
(2) Besides adding old samples to the dataloader, are any other operations used, such as distillation or balanced fine-tuning?
(3) Will an official implementation of this part be added to the code later?
(4) Can the rehearsal buffer also further improve the performance of "dualprompt"?

Looking forward to your reply, best wishes!

@JH-LEE-KR
Owner

Hi,
thanks for your comment.

Unfortunately, it has not been implemented yet.
It is difficult to verify that the PyTorch version is implemented correctly because the config for the rehearsal buffer has not been released in the official repo.

  • (1) It looks like random sampling (np.random.choice); see the sketch after this list.
  • (2) I don't understand the intent of your question.
  • (3) I've already implemented it.
    However, there is no official config, so I cannot verify whether it is implemented correctly (see this Issue).
    Once the config is released, the PyTorch implementation will be updated.
  • (4) In my experience, if you apply it naïvely, the performance decreases.
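
For reference, here is a minimal sketch of what random-sampling rehearsal could look like. The class name RehearsalBuffer, the per-class budget samples_per_class, and the torch-style dataset handling are all illustrative assumptions, not the official L2P implementation:

```python
# Minimal rehearsal-buffer sketch: keep a random subset of exemplars per seen
# class (np.random.choice) and mix them into the current task's training data.
# All names here are illustrative, not from the official L2P code.
import numpy as np
from torch.utils.data import ConcatDataset, DataLoader, Subset


class RehearsalBuffer:
    def __init__(self, samples_per_class: int):
        self.samples_per_class = samples_per_class
        self.exemplars = []  # one Subset of stored samples per seen class

    def update(self, dataset, targets: np.ndarray):
        # Call after finishing a task: randomly keep up to
        # `samples_per_class` indices for each class in that task.
        for cls in np.unique(targets):
            cls_indices = np.where(targets == cls)[0]
            chosen = np.random.choice(
                cls_indices,
                size=min(self.samples_per_class, len(cls_indices)),
                replace=False,
            )
            self.exemplars.append(Subset(dataset, chosen.tolist()))

    def merge_with(self, current_dataset):
        # Concatenate buffered exemplars with the current task's data,
        # so the dataloader naturally replays old samples.
        return ConcatDataset([current_dataset] + self.exemplars)


# Usage sketch (task_dataset / task_targets are placeholders):
# buffer = RehearsalBuffer(samples_per_class=20)
# loader = DataLoader(buffer.merge_with(task_dataset), batch_size=128, shuffle=True)
# ... train the current task with `loader` ...
# buffer.update(task_dataset, task_targets)
```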

Best,
Jaeho Lee.
