
[Discussion] GPT-3 #72

Open
leejason opened this issue May 29, 2020 · 0 comments
leejason commented May 29, 2020

Thank you for the great work. Appendix B of the GPT-3 paper mentions the following. I'm wondering whether this idea has been implemented in gpt2-ml. If not, what would you advise on how to implement it?

Appendix B.

....
During training we always train on sequences of the full n_ctx = 2048 token context window, packing multiple documents into a single sequence when documents are shorter than 2048, in order to increase computational efficiency. Sequences with multiple documents are not masked in any special way but instead documents within a sequence are delimited with a special end of text token, giving the language model the information necessary to infer that context separated by the end of text token is unrelated. This allows for efficient training without need for any special sequence-specific masking.
....
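
For reference, here is a minimal sketch of how I understand that packing step. It is not the gpt2-ml training code; it assumes a `tokenizer` object with an `encode()` method and an `eot_token_id` attribute, both of which are placeholder names. Documents are tokenized, joined into one stream with the end-of-text token between them, and the stream is cut into fixed 2048-token sequences with no special masking.

```python
# Sketch of packing multiple documents into fixed-length training sequences.
# `tokenizer.encode()` and `tokenizer.eot_token_id` are assumed placeholder APIs.

CONTEXT_LEN = 2048  # n_ctx from the GPT-3 paper


def pack_documents(documents, tokenizer, context_len=CONTEXT_LEN):
    """Concatenate tokenized documents, separated by the end-of-text token,
    then slice the stream into fixed-length training sequences."""
    stream = []
    for doc in documents:
        stream.extend(tokenizer.encode(doc))
        stream.append(tokenizer.eot_token_id)  # delimiter between documents

    # Keep only full-length sequences; the trailing remainder is dropped.
    n_full = len(stream) // context_len
    return [
        stream[i * context_len:(i + 1) * context_len]
        for i in range(n_full)
    ]
```

Per the quoted passage, attention across document boundaries is left untouched; the end-of-text token is the only signal that the contexts on either side are unrelated.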
