
Size layer #5021

Open · wants to merge 19 commits into master
Conversation

brightening-eyes (Contributor)

This is the Size layer, equivalent to the Size operator in ONNX.

codecov-commenter commented Sep 10, 2023

Codecov Report

Merging #5021 (37bcae6) into master (fdf2c48) will decrease coverage by 5.26%.
Report is 3 commits behind head on master.
The diff coverage is 90.00%.

@@             Coverage Diff             @@
##           master    #5021       +/-   ##
===========================================
- Coverage   94.72%   89.47%    -5.26%     
===========================================
  Files         765      307      -458     
  Lines      229654    89796   -139858     
===========================================
- Hits       217551    80344   -137207     
+ Misses      12103     9452     -2651     
Files Changed Coverage Δ
src/layer/size.cpp 90.00% <90.00%> (ø)

... and 643 files with indirect coverage changes


nihui commented Sep 12, 2023

tools/onnx/onnx2ncnn.cpp.orig accidentally committed?

brightening-eyes (Contributor, Author)

Done. Should I add it to the list of ops?


nihui commented Sep 12, 2023

done. should I add it to list of ops?

Yeah, add it to the operators doc.

brightening-eyes (Contributor, Author)

Done as well.

Resolved (outdated) review threads on src/layer/size.cpp and tests/test_size.cpp.
Print the additional dimension in the test, which was forgotten.

Co-authored-by: nihui <[email protected]>
brightening-eyes (Contributor, Author)

Thanks. I forgot to add the additional dimension to the test when printing.

if (top_blob.empty())
    return -100;

top_blob[0] = bottom_blob.w * bottom_blob.h * bottom_blob.d * bottom_blob.c;
Review comment (Member):
ONNX Size outputs an integer tensor, while the code implemented here for ncnn Size returns a float tensor. For very large numbers there is a loss of precision when using float. Will this cause difficulty in subsequent layer processing, for example addition, subtraction, or modulo operations on the result of Size?

Reply (Contributor, Author):

For things like RMS norm it might have a little impact, but the issue is that batch sizes (which ncnn does not have by itself) would be affected. Other than divisions for normalization, I don't think it has other impacts. But if necessary (which is better), I might change the types from float to int. Should anything else be done to make it compatible with integers?

Reply (Member):

I was worried that your original intention was to return an integer but you accidentally returned a float tensor. Now it seems my worry was unnecessary. :D

Regarding batch: converting a model to ncnn requires that the batch is always 1. There may be special cases, but those will also be solved in the converter, for example by interpreting NCHW as CDHW in ncnn as a workaround. The layer itself doesn't care about it.

brightening-eyes (Contributor, Author)

How would I assign it as an integer? The types are all ints, and the multiplications will return an integer. ncnn::Mat has an operator T* which can accept ints as well, doesn't it? I also assume we should have integers instead of floats, because Size always returns an integer scalar. Your comment also brought to my attention that we need to consider an integer output type.

brightening-eyes (Contributor, Author)

@nihui What about doing a memcpy into the output tensor rather than assigning to its first element? Does that make it better?


nihui commented Sep 21, 2023

@nihui what about doing memcpy on output tensor rather than assigning it to its first element? does it make it better?

// access by channel / depth
int* p = mat.channel(i);
int* p = mat.depth(i);

// access by row
int* p = mat.row<int>(i);

// access by element
int* p = mat;

// set value at 1,2,3,4 to 567
mat.channel(1).depth(2).row<int>(3)[4] = 567;
