Why do we need to add an extra dimension to the prediction probabilities when finding the maximum value in PyTorch? #1087
Unanswered · magnifiques asked this question in Q&A
- Course video/timestamp: Video number 294, "Creating a Function to Make and Time Predictions with Our Model" on Udemy, timestamp 12:26
Question:
Why do we use `unsqueeze(0)` before finding the maximum value of the prediction probabilities in PyTorch, and what might go wrong if we don't use it?
Context:
In the video, we convert the prediction logits to probabilities using softmax:

```python
pred_prob = torch.softmax(pred_logit, dim=1)
```
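For reference, here is roughly how I understand the shapes at this point (a minimal sketch; the `num_classes` value and the batch-of-1 forward pass are my assumptions, not taken verbatim from the video):

```python
import torch

# Assumed setup: a single image run through the model as a batch of 1,
# so the logits come out with shape [1, num_classes].
num_classes = 3
pred_logit = torch.randn(1, num_classes)

# Softmax over the class dimension (dim=1) turns the logits into
# probabilities that sum to 1 across the classes.
pred_prob = torch.softmax(pred_logit, dim=1)
print(pred_prob.shape)  # torch.Size([1, 3])
print(pred_prob.sum())  # sums to 1 (up to floating point)
```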
Then, to store the maximum prediction probability, we call `unsqueeze(0)` to add an extra dimension before taking the maximum value:

```python
pred_dict["pred_prob"] = round(pred_prob.unsqueeze(0).max().cpu().item(), 4)
```
I'm curious:
- Why is `unsqueeze(0)` needed here?
- What could happen if we don't use it and just call the `max` function on `pred_prob`?
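For what it's worth, when I compare the two calls on a dummy tensor (my own sketch, again assuming the `[1, num_classes]` shape), they seem to print the same value:

```python
import torch

pred_prob = torch.softmax(torch.randn(1, 3), dim=1)

# With no dim argument, .max() reduces over all elements,
# so the extra leading dimension does not change the result.
print(pred_prob.max())               # e.g. tensor(0.6278)
print(pred_prob.unsqueeze(0).max())  # same value
```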