I'm having a hard time understanding how the loss function contributes to the training loop, and at what point the loss value is actually used. I do understand what its purpose is and what the algorithm does, but looking at the code it seems like the loss has just been declared and called, and that's it; the function itself is never passed to the model or the optimizer. Can anyone explain this part to me?
Hi, so imagine a feedforward network. Initially the weights and biases are randomized, and an input is given. Based on these, an output is predicted. Now comes the loss function: it gives you a metric of how far the current prediction is from the actual output. Then the process of backpropagation starts, where you calculate the gradient of the loss with respect to the parameters, and the optimizer uses those gradients to give you the new parameters (weights and biases). This process is repeated until some stopping criterion is satisfied.
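Here is a minimal sketch of that loop, assuming a PyTorch-style setup (the model, data, and hyperparameters below are placeholders for illustration, not taken from the original code):

```python
import torch
import torch.nn as nn

# Hypothetical model, data, and hyperparameters for illustration.
model = nn.Linear(10, 1)                   # weights and biases start out randomized
criterion = nn.MSELoss()                   # the loss function
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # holds references to the parameter tensors

x = torch.randn(32, 10)                    # dummy input batch
y = torch.randn(32, 1)                     # dummy targets

for step in range(100):                    # repeat until some criterion is satisfied
    pred = model(x)                        # forward pass: prediction from current weights
    loss = criterion(pred, y)              # scalar measuring how far off the prediction is

    optimizer.zero_grad()                  # clear gradients left over from the previous step
    loss.backward()                        # backpropagation: writes d(loss)/d(param) into each param.grad
    optimizer.step()                       # reads those .grad values and updates the parameters
```

In this kind of setup the connection between the loss and the optimizer is indirect: `loss.backward()` stores a gradient on each parameter tensor's `.grad` attribute, and `optimizer.step()` reads those same tensors because the optimizer was constructed with `model.parameters()`. That is why the loss never appears as an argument to the model or the optimizer.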
Hi, thanks for the response. I understand the logic you're describing, but what I wasn't getting is how the loss function and the optimizer are connected to one another. Anyway, I found the answer to my question here. Also, that question might be clearer and closer to what I was trying to ask.