Leverage torchinfo to easily use targeted hooks?
#307
Replies: 2 comments
-
Also, this would mean being able to set up hooks at runtime, outside the net definition, which looks very attractive to me.
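For what it's worth, plain PyTorch already supports runtime hook management: `register_forward_hook` returns a handle that can detach the hook later, entirely outside the net definition. A minimal sketch (the net and sizes here are made up for illustration):

```python
# Sketch: attaching and detaching a forward hook at runtime,
# outside the network definition (plain PyTorch, no torchinfo needed).
import torch
from torch import nn

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

saved = {}

def save_output(module, inputs, output):
    # Store a detached copy of this module's output.
    saved["acts"] = output.detach()

# register_forward_hook returns a RemovableHandle.
handle = net[2].register_forward_hook(save_output)
net(torch.randn(1, 4))
handle.remove()  # the hook no longer fires after this
```

After `handle.remove()`, further forward passes leave `saved` untouched, so hooks can be switched on and off around specific calls.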
-
I think I got a solution, but it looks weak / not very robust, and I would appreciate any feedback.

```python
summ = torchinfo.summary(net, input_shape)
```

Then, I turn to:

```python
>>> summ.summary_list
[Silly: 0, Linear: 1, ReLU: 1, Linear: 1, ReLU: 1, Linear: 1, Softmax: 1]
```

To be clear (correct me if I'm wrong), every entry of `summary_list` corresponds to one layer of the summary, in order. From this, imagine I would like to save the activations of `Linear: 1-5`: I can pick the matching entry and register a hook on its module. But is this approach robust? Can I always assume, no matter the type of net, that this ordering holds? Also, is `summary_list` meant to be used this way? Thank you in advance for helping me out on this one.
-
Dear TylerYep, all,
First of all, thanks for your work, it looks amazing and is very easy to use :)
I have a question regarding hooks. Imagine I have the following silly network.
It is summarized as follows.
Now, imagine I would like to hook forward the output of `Linear: 1-5`. Is there a simple way to do so using your package? The reason why I'm thinking about leveraging your package is that directly using `torch.register_forward_hook` won't work. Specifically, if I hook `Silly().fc` directly, then it will execute 3 times for every call of `Silly()`, which is not wanted. On the other hand, it feels like your package does most of the work by understanding the true architecture of a forward call and differentiating between `Linear: 1-1`, `Linear: 1-3` and `Linear: 1-5`. This seems useful if I want to target a specific layer based on the summary output. Thanks in advance!
Cheers.
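The original network definition and summary output are not shown above, so here is a hypothetical reconstruction of a net matching the description (a single `fc` layer called three times), together with one possible workaround: a hook that counts invocations and keeps only the activation from a chosen call. Layer sizes and structure are assumptions, not the asker's actual code.

```python
# Hypothetical reconstruction of the "Silly" net described above:
# one Linear layer (`fc`) reused three times in forward, so a plain
# forward hook on `fc` fires three times per call of the net.
import torch
from torch import nn

class Silly(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)   # hypothetical sizes
        self.act = nn.ReLU()
        self.out = nn.Softmax(dim=-1)

    def forward(self, x):
        x = self.act(self.fc(x))
        x = self.act(self.fc(x))
        x = self.fc(x)              # the call listed as Linear: 1-5
        return self.out(x)

# Workaround: count invocations inside the hook and keep only the one we want.
saved = {}

def make_counting_hook(wanted_call):
    state = {"n": 0}
    def hook(module, inputs, output):
        state["n"] += 1
        if state["n"] == wanted_call:
            saved["acts"] = output.detach()
    return hook

net = Silly()
handle = net.fc.register_forward_hook(make_counting_hook(wanted_call=3))
net(torch.randn(1, 4))
handle.remove()
```

For repeated forward passes the counter would need resetting (e.g. from a forward pre-hook on the parent module); as written, this targets a single call of `Silly()`.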