
RuntimeError: Given input size: (1536x5x5). Calculated output size: (1536x0x0). Output size is too small #6

Open
JinhaSong opened this issue Nov 8, 2020 · 2 comments


@JinhaSong

I tried to run a test that trains on only 20 items from the VGGFace2 training data.
When I use Inception-ResNet-V2 as the network architecture, it fails with the following error:

RuntimeError: Given input size: (1536x5x5). Calculated output size: (1536x0x0). Output size is too small

Please let me know if you have any guess as to the cause, or whether I am doing something wrong.

@tamerthamoqa
Owner

Hello JinhaSong,

Did you specify the input image size to be 299x299? I think the minimum should be 150x150, but I am not sure about that, to be honest.

For reference, I have imported the Inception-ResNet-V2 implementation from Cadene's repository here:
https://github.com/Cadene/pretrained-models.pytorch/blob/master/pretrainedmodels/models/inceptionresnetv2.py
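
As a quick sanity check, a sketch like the following (assuming Cadene's `pretrainedmodels` package is installed and exposes `inceptionresnetv2` and its `features` method as in that repository) prints the feature-map size that reaches the final pooling layer for different input resolutions. A 224x224 input happens to reproduce the 1536x5x5 size from your error message:

```python
import torch
from pretrainedmodels import inceptionresnetv2  # Cadene's implementation (assumed installed)

# Build the model without pretrained weights, just to inspect shapes.
model = inceptionresnetv2(num_classes=1000, pretrained=None)
model.eval()

with torch.no_grad():
    for size in (299, 224):
        x = torch.randn(1, 3, size, size)
        # `features` runs everything up to (but not including) the final
        # AvgPool2d, so this is the tensor the pooling layer receives.
        feats = model.features(x)
        print(size, tuple(feats.shape))

# 299 -> (1, 1536, 8, 8): matches the AvgPool2d kernel_size of 8
# 224 -> (1, 1536, 5, 5): too small for an 8x8 pooling kernel
```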

@JoannisAretakis

Hi JinhaSong,
I had a similar issue and managed to fix it by setting the AvgPool2d kernel_size to 5. Since the
VGGFace2 images are smaller than 299x299, the network ends up with a 1536x5x5 tensor at that layer instead of the expected 1536x8x8.
Applying pooling with a kernel_size of 8, which is larger than the feature map the previous layer provides, causes this error. Hope that helps!
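
A minimal sketch of that workaround, assuming Cadene's implementation, where the final pooling layer is stored in the `avgpool_1a` attribute as `nn.AvgPool2d(8, count_include_pad=False)`:

```python
import torch.nn as nn
from pretrainedmodels import inceptionresnetv2  # Cadene's implementation (assumed installed)

model = inceptionresnetv2(num_classes=1000, pretrained=None)

# Replace the fixed 8x8 average pool with a 5x5 one so it matches the
# 1536x5x5 feature map produced by the smaller VGGFace2 crops.
model.avgpool_1a = nn.AvgPool2d(5, count_include_pad=False)

# Size-agnostic alternative: adaptive pooling always outputs 1x1 and
# therefore works for any input resolution that reaches this layer.
# model.avgpool_1a = nn.AdaptiveAvgPool2d(1)
```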
