
What should I update if I want to do distributed training? #168

Open
xiuzhilu opened this issue Oct 10, 2022 · 1 comment

Comments

@xiuzhilu

Hi, thank you for sharing this work. With the multi-GPU training code you provide, training is equivalent to torch.nn.DataParallel. What changes do I need to make to get true distributed training with torch.distributed? @skurzhanskyi @komelianchuk

@skurzhanskyi
Collaborator

As the repository uses AllenNLP 0.8.4, we are limited to the functionality that version of the library provides.
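
For reference, the torch.distributed setup the question asks about usually follows the standard PyTorch pattern: wrap the model in DistributedDataParallel and shard the data with DistributedSampler, with one process per GPU. The sketch below is generic PyTorch, not this repository's code; the toy model, data, and hyperparameters are placeholders standing in for the real GEC model and dataset. It assumes a launch via `torchrun --nproc_per_node=<num_gpus> train_ddp.py`, which sets the LOCAL_RANK/RANK/WORLD_SIZE environment variables.

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def main():
    # torchrun spawns one process per GPU and sets these env vars
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(local_rank)

    # Toy model and data as placeholders for the real model and dataset
    model = torch.nn.Linear(16, 2).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    data = TensorDataset(torch.randn(1024, 16), torch.randint(0, 2, (1024,)))
    # DistributedSampler gives each process a disjoint shard of the data
    sampler = DistributedSampler(data)
    loader = DataLoader(data, batch_size=32, sampler=sampler)

    loss_fn = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for epoch in range(3):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()  # DDP all-reduces gradients here
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Unlike torch.nn.DataParallel, which runs a single process that scatters each batch across GPUs, DDP runs one process per GPU and synchronizes gradients with an all-reduce during backward, which generally scales better across devices and nodes.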
