Replies: 2 comments · 1 reply
- Here is the discussion where it is explained why softmax doesn't change the result: #314
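In short: softmax is a strictly monotonic transformation of the logits (it exponentiates and renormalizes them, preserving their order), so the index of the largest logit is always also the index of the largest probability. A minimal sketch demonstrating this (standalone example, not from the course notebook):

import torch

torch.manual_seed(42)
logits = torch.randn(4, 3)  # hypothetical batch of 4 samples, 3 classes

# Argmax over the raw logits
pred_from_logits = logits.argmax(dim=1)

# Argmax over the softmax probabilities
pred_from_probs = torch.argmax(torch.softmax(logits, dim=1), dim=1)

print(torch.equal(pred_from_logits, pred_from_probs))  # True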
- Explained here: #875
-
Hello everyone!
I hope my question hasn't already been answered. I'm wondering why, in part 04 of the course, when we create the train and test functions, we aren't consistent between the following steps:
Train fn:
# Calculate and accumulate accuracy metric across all batches
y_pred_class = torch.argmax(torch.softmax(y_pred, dim=1), dim=1)
train_acc += (y_pred_class == y).sum().item()/len(y_pred)
Test fn:
# Calculate and accumulate accuracy
test_pred_labels = test_pred_logits.argmax(dim=1)
test_acc += ((test_pred_labels == y).sum().item()/len(test_pred_labels))
I've seen online that softmax doesn't change the result, but why is there a difference here?
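As a quick sanity check, both formulations can be run on the same dummy data to confirm they produce identical accuracy (hypothetical tensors, for illustration only):

import torch

torch.manual_seed(0)
y_pred = torch.randn(8, 3)      # dummy logits: batch of 8, 3 classes
y = torch.randint(0, 3, (8,))   # dummy ground-truth labels

# Train-style: softmax first, then argmax
y_pred_class = torch.argmax(torch.softmax(y_pred, dim=1), dim=1)
train_acc = (y_pred_class == y).sum().item() / len(y_pred)

# Test-style: argmax directly on the logits
test_pred_labels = y_pred.argmax(dim=1)
test_acc = (test_pred_labels == y).sum().item() / len(test_pred_labels)

print(train_acc == test_acc)  # True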