Doesn't seem to work if input is a single dimension #148

Open
Description

@Mikestriken

Batches seem to be constructed from input_res here:

def get_flops_pytorch(model, input_res,
                      print_per_layer_stat=True,
                      input_constructor=None, ost=sys.stdout,
                      verbose=False, ignore_modules=[],
                      custom_modules_hooks={},
                      output_precision=2,
                      flops_units: Optional[str] = 'GMac',
                      param_units: Optional[str] = 'M',
                      extra_config: Dict = {}) -> Tuple[Union[int, None],
                                                        Union[int, None]]:
    global CUSTOM_MODULES_MAPPING
    CUSTOM_MODULES_MAPPING = custom_modules_hooks
    flops_model = add_flops_counting_methods(model)
    flops_model.eval()
    flops_model.start_flops_count(ost=ost, verbose=verbose,
                                  ignore_list=ignore_modules)
    if input_constructor:
        batch = input_constructor(input_res)
    else:
        try:
            batch = torch.ones(()).new_empty((1, *input_res), # ← BATCH CONSTRUCTED HERE
                                             dtype=next(flops_model.parameters()).dtype,
                                             device=next(flops_model.parameters()).device)

My model is a very simple LSTM character predictor that uses nn.Embedding to encode each input character into an input vector.

Thus my input shape is (32, 10) (batch size 32, 10 characters per input sequence),

so ideally my input_res should be (10).

But (10) is not a tuple (it is just the int 10 in parentheses), and (10,) is a one-element tuple with nothing at its second index.

So this line, torch.ones(()).new_empty((1, *input_res)), becomes torch.ones(()).new_empty((1, *(10))), or torch.ones(()).new_empty((1, *(10, 1))), or torch.ones(()).new_empty((1, *(10,))).
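To make the three spellings concrete, here is a minimal demonstration of the tuple semantics involved (plain Python, independent of ptflops):

```python
# (10) is just the integer 10 in parentheses; only a trailing comma
# makes a one-element tuple.
assert (10) == 10
assert type((10,)) is tuple

# Star-unpacking an int fails at runtime:
try:
    (1, *(10))
except TypeError as exc:
    print(exc)  # Value after * must be an iterable, not int

# Star-unpacking the one-element tuple works and yields the intended shape:
shape = (1, *(10,))
assert shape == (1, 10)
```

So (10,) passes the unpacking step; the remaining failures below come from ptflops' own assertions and the dtype of the generated batch.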

Code: https://pastebin.com/5mxn1zxP

Test Cases:

from ptflops import get_model_complexity_info

macs, params = get_model_complexity_info(model, tuple(10), as_strings=True, backend='pytorch',
                                           print_per_layer_stat=True, verbose=True) # TypeError: 'int' object is not iterable
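For reference, this first failure happens before ptflops runs at all: tuple() expects an iterable, not a length, so tuple(10) raises immediately:

```python
# tuple() takes an iterable; passing an int raises TypeError.
try:
    tuple(10)
except TypeError as exc:
    print(exc)  # 'int' object is not iterable

# Working spellings of a one-element tuple:
assert tuple([10]) == (10,)
assert (10,) == tuple([10])
```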
from ptflops import get_model_complexity_info

macs, params = get_model_complexity_info(model, (10), as_strings=True, backend='pytorch',
                                           print_per_layer_stat=True, verbose=True)
"""
---> 88     assert type(input_res) is tuple
     89     assert len(input_res) >= 1
     90     assert isinstance(model, nn.Module)

AssertionError:
"""
from ptflops import get_model_complexity_info

macs, params = get_model_complexity_info(model, (10,1), as_strings=True, backend='pytorch',
                                           print_per_layer_stat=True, verbose=True)
"""
Warning: module Embedding is treated as a zero-op.
Warning: module LSTMNet is treated as a zero-op.
Flops estimation was not finished successfully because of the following exception:
<class 'RuntimeError'> : Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)
Computational complexity: None
Number of parameters: None
Total:0 Mac
Module:  Global

Flops estimation was not finished successfully because of the following exception:
<class 'RuntimeError'> : Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)
Computational complexity: None
Number of parameters: None
Total Num Params in loaded model: 143404
Traceback (most recent call last):
  File "f:\Desktop\School_Stuff\Programming\AI\.venv\Lib\site-packages\ptflops\pytorch_engine.py", line 68, in get_flops_pytorch
    _ = flops_model(batch)
        ^^^^^^^^^^^^^^^^^^
  File "f:\Desktop\School_Stuff\Programming\AI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "f:\Desktop\School_Stuff\Programming\AI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1844, in _call_impl
    return inner()
           ^^^^^^^
  File "f:\Desktop\School_Stuff\Programming\AI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1790, in inner
    result = forward_call(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<ipython-input-3-05186ac86b6c>", line 113, in forward
    self.charEmbeddingLayer(x)
  File "f:\Desktop\School_Stuff\Programming\AI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "f:\Desktop\School_Stuff\Programming\AI\.venv\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "f:\Desktop\School_Stuff\Programming\AI\.venv\Lib\site-packages\torch\nn\modules\sparse.py", line 190, in forward
    return F.embedding(
           ^^^^^^^^^^^^
  File "f:\Desktop\School_Stuff\Programming\AI\.venv\Lib\site-packages\torch\nn\functional.py", line 2551, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
"""
