The batch size with GradCache #13
Comments
You can define your own loss function and pass it to the `loss_fn` argument of `GradCache`.
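For illustration, a minimal sketch of what that could look like, assuming the constructor arguments shown in the repository README (`models`, `chunk_sizes`, `loss_fn`) and that calling the `GradCache` object runs one full cached step; the encoders, dimensions, and batches below are placeholder stand-ins:

```python
import torch
import torch.nn.functional as F
from grad_cache import GradCache  # the package from this repository

# Stand-in bi-encoder models; in practice these would be your real networks.
encoder_q = torch.nn.Linear(128, 64)
encoder_k = torch.nn.Linear(128, 64)

# A user-defined contrastive loss over two batches of representations.
# The batch size is never passed in; it is read off the tensors themselves.
def contrastive_loss(x, y):
    target = torch.arange(x.size(0), device=x.device)  # positives on the diagonal
    scores = torch.matmul(x, y.transpose(0, 1))        # [B, B] similarity matrix
    return F.cross_entropy(scores, target)

gc = GradCache(
    models=[encoder_q, encoder_k],
    chunk_sizes=2,               # sub-batch size for the chunked forward passes
    loss_fn=contrastive_loss,    # your own loss goes here
)

batch_q = torch.randn(16, 128)
batch_k = torch.randn(16, 128)
loss = gc(batch_q, batch_k)      # one cached step: computes loss, accumulates grads
```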
By the way, is the `chunk_sizes` in your code the same as in the paper?
Oh, I just realized that it is SimCLR you are talking about. It is a little different from what the example you put here shows. With SimCLR you will have only one encoder and a loss function defined over a single batch of encodings. As for …
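To make that concrete, a rough sketch of a single-encoder setup with an NT-Xent-style SimCLR loss defined over one batch of encodings. The encoder, chunk size, temperature, and view-pairing convention (rows 2i and 2i+1 are the two augmented views of one image) are all assumptions for illustration, not code from this repository:

```python
import torch
import torch.nn.functional as F
from grad_cache import GradCache  # the package from this repository

def simclr_loss(reps, temperature=0.1):
    """NT-Xent over a single batch of 2N encodings, where rows 2i and 2i+1
    are two augmented views of the same image."""
    reps = F.normalize(reps, dim=1)
    sim = reps @ reps.t() / temperature  # [2N, 2N] cosine similarities
    # exclude self-similarity from the softmax
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(mask, float('-inf'))
    # the positive for row 2i is row 2i+1 and vice versa (flip the last bit)
    target = torch.arange(reps.size(0), device=reps.device) ^ 1
    return F.cross_entropy(sim, target)

encoder = torch.nn.Linear(128, 64)  # stand-in for a real SimCLR encoder

gc = GradCache(
    models=[encoder],    # a single encoder, unlike the bi-encoder example
    chunk_sizes=32,      # chunked forward/backward caps peak memory
    loss_fn=simclr_loss,
)

views = torch.randn(256, 128)  # both views of each example, interleaved
loss = gc(views)               # one cached step at the full (large) batch size
```

The key point for the question in this issue: the batch size lives in `reps.size(0)`, so the loss can recover it from the encodings and GradCache never needs it as an explicit argument.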
OK, thanks. I will try to run it.
If it fails with a large batch, there must be something wrong. (Unless the batch is million-sized, in which case you would probably need to do some off-loading.)
Does "million-sized" refer to the size of the dataset or the batch size?
The size of the mini-batch for a gradient update. Very rarely will this be a problem.
Dear author,
Your work has been very helpful to me.
I want to combine it with SimCLR, but I don't know how: GradCache does not take a batch size, while computing the SimCLR loss requires one, so I am not sure how to deal with this problem.
Please give me some solutions or tips if you have time.
Thanks in advance! And thank you for your work; it solved a real difficulty for me!