
Training speed is not as stated in README #23

Open
katie-cathy-hunt opened this issue Dec 14, 2019 · 1 comment

Comments

@katie-cathy-hunt

Hi! I ran the training script on 130 million training instances and measured the following training speeds:

1 V100 GPU, FP16 O2: ~14k tokens/sec, ~100 hours
8 V100 GPUs, FP16 O2: ~70k tokens/sec, ~20 hours
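For reference, the reported tokens/sec and wall-clock hours are roughly consistent with each other. A minimal sketch of that sanity check, assuming a hypothetical average of ~40 tokens per training instance (not stated in the issue):

```python
def estimated_hours(num_instances: int, avg_tokens_per_instance: float,
                    tokens_per_sec: float) -> float:
    """Estimate wall-clock training time for one pass over the data."""
    total_tokens = num_instances * avg_tokens_per_instance
    return total_tokens / tokens_per_sec / 3600.0

# 130M instances at an assumed ~40 tokens each
print(estimated_hours(130_000_000, 40, 14_000))  # 1 GPU: ~103 hours
print(estimated_hours(130_000_000, 40, 70_000))  # 8 GPUs: ~21 hours
```

Both estimates land close to the ~100 and ~20 hours observed, so the discrepancy with the README is in overall throughput, not in the arithmetic.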

However, the README reports much faster training speeds:

[Screenshot of the README training-speed table]

What am I missing? Please help!

@dreasysnail
Contributor

Thanks for the feedback! We'll double-check the epoch timings and get back to you on this.
