Very bad training accuracy/Miscalculation for accuracy? #133
Replies: 2 comments
-
Hey @teohongwei898, I'd start by stepping back through your code and checking the outputs at each stage. This will give you more of an idea of what's happening. E.g. what's coming out of the final layer of your model? Then what's going into your loss function? Then what's your accuracy function producing based on the code you've written?
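In case it helps, here's a minimal sketch of that kind of step-through, using a made-up two-layer classifier. All the shapes and names here are illustrative, not taken from the original code:

```python
import torch
from torch import nn

# Hypothetical stand-in model: flatten -> linear -> ReLU -> linear.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 10),
    nn.ReLU(),
    nn.Linear(10, 2),  # 2 output logits, e.g. cat vs. dog
)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 64, 64)       # fake batch of 4 RGB 64x64 images
y = torch.randint(0, 2, (4,))       # fake integer labels

logits = model(x)
print(logits.shape)                  # 1. what comes out of the final layer?
print(loss_fn(logits, y))            # 2. what does the loss function produce?
preds = torch.argmax(logits, dim=1)
print((preds == y).float().mean())   # 3. what does the accuracy step produce?
```

Printing (or checking the shape of) each intermediate value like this usually pinpoints where the numbers stop making sense.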
-
I think the problem occurs on this line:
In this case
As you carry the summation forward, each epoch has its own accuracy, and the last epoch's accuracy defines the accuracy of your model, not the sum of all the accuracy results. This is just my own opinion, and if you disagree, please let me know. I am ready for battle! :)
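To illustrate the point with a hypothetical reproduction (the original training loop isn't shown, so the names here are mine): if a running accuracy sum is never reset, it keeps growing across epochs and can easily exceed 1.

```python
import torch

# Illustrative helper: fraction of correct predictions in one batch.
def accuracy(y_pred, y_true):
    preds = torch.argmax(y_pred, dim=1)
    return (preds == y_true).float().mean().item()

torch.manual_seed(0)
running_acc = 0.0                    # BUG: never reset, grows across epochs
for epoch in range(3):
    epoch_acc = 0.0                  # correct: reset at the start of each epoch
    for _ in range(4):               # 4 fake batches per epoch
        logits = torch.randn(8, 2)
        labels = torch.randint(0, 2, (8,))
        batch_acc = accuracy(logits, labels)
        running_acc += batch_acc     # accumulates forever -> can exceed 1
        epoch_acc += batch_acc
    epoch_acc /= 4                   # average over batches: stays in [0, 1]
    print(f"epoch {epoch}: epoch_acc={epoch_acc:.3f}, running_acc={running_acc:.3f}")
```

The per-epoch average stays between 0 and 1, while the un-reset running sum keeps climbing.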
-
Hello everyone!
Thanks to this course, I have begun working on individual projects, starting with the Cat vs. Dog one. I'm unsure if I can ask for help here since this wasn't covered in the course, but here goes anyway:
In my training/validation epoch, this was defined:
Where accuracy is a function as follows:
The idea is to see if the prediction matches the actual 'y' value. If so, it should return 1, else 0.
However, I noticed that my accuracy after training the model seemed off.
After dividing val_acc by the length of my train dataset, the value sometimes goes above 1, even reaching a ridiculous value like 18, which should be impossible.
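For reference, here's a minimal sketch of the behaviour I'm aiming for (the helper and variable names are made up, not from my actual notebook): count the number of correct predictions and divide by the number of samples actually evaluated, so the result stays in [0, 1].

```python
import torch

# Illustrative helper: NUMBER of correct predictions in a batch,
# assuming logits of shape (batch, 2) for a cat-vs-dog classifier.
def count_correct(y_pred, y_true):
    return (torch.argmax(y_pred, dim=1) == y_true).sum().item()

torch.manual_seed(0)
# Fake validation set: 5 batches of 16 samples each.
batches = [(torch.randn(16, 2), torch.randint(0, 2, (16,))) for _ in range(5)]

correct, total = 0, 0
for logits, labels in batches:
    correct += count_correct(logits, labels)
    total += labels.size(0)          # count the SAMPLES actually seen
val_acc = correct / total            # divide by the validation set size,
print(f"val_acc = {val_acc:.3f}")    # not the train set length
```

Dividing by the wrong denominator (e.g. the train set length while iterating the validation loader, or the batch count while summing per-sample correctness) is what produces values above 1.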
It is also worth noting that when using the model to predict a batch of data, my results all come out as the first index of the class list, i.e. cats, which is also wrong. But that is another problem I will tackle after solving the accuracy one.
Any help will be greatly appreciated. I'm new to PyTorch. Thank you!