Shape of input array in Conv1D operation #213

Open
matteodonati opened this issue Sep 22, 2020 · 25 comments
matteodonati commented Sep 22, 2020

Hello everyone,

I'm currently trying to import a 1D CNN from TensorFlow using Mbed. In particular, I have data from six different sensors stored as follows:

input_data[150][6] = { {x1, y1, z1, x2, y2, z2}, {x1, y1, z1, x2, y2, z2}, {x1, y1, z1, x2, y2, z2}, ... }

so basically, in each batch, I have 150 timesteps, and in each timestep I have six values (channels).

In my C++ code I created a new input Tensor as follows:

Tensor input = new RomTensor({1, 150, 6}, flt, input_data);
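(As a quick sanity check of that layout, in plain Python rather than uTensor: a C array declared `float input_data[150][6]` is row-major, so element (t, c) lives at flat index t * 6 + c, which is exactly the memory order a {1, 150, 6} tensor expects.)

```python
# Hypothetical helper, not part of uTensor: flat index of element (t, c)
# in a row-major [150][6] array viewed as a tensor of shape (1, 150, 6).
def flat_index(t, c, n_channels=6):
    return t * n_channels + c
```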

The problem is, when I try to compile the application I get the following errors:

Compile [ 5.0%]: main.cpp
[Warning] quantizationPrimitives.hpp@24,35: type qualifiers ignored on function return type [-Wignored-qualifiers]
[Warning] utensor_string.hpp@15,23: comparison between signed and unsigned integer expressions [-Wsign-compare]
[Warning] Convolution.hpp@29,33: type qualifiers ignored on function return type [-Wignored-qualifiers]
[Warning] Convolution.hpp@30,32: type qualifiers ignored on function return type [-Wignored-qualifiers]
[Warning] Convolution.hpp@31,38: type qualifiers ignored on function return type [-Wignored-qualifiers]
[Warning] Convolution.hpp@32,39: type qualifiers ignored on function return type [-Wignored-qualifiers]
[Warning] Convolution.hpp@49,33: type qualifiers ignored on function return type [-Wignored-qualifiers]
[Warning] Convolution.hpp@50,32: type qualifiers ignored on function return type [-Wignored-qualifiers]
[Warning] Convolution.hpp@51,38: type qualifiers ignored on function return type [-Wignored-qualifiers]
[Warning] Convolution.hpp@52,39: type qualifiers ignored on function return type [-Wignored-qualifiers]
[Warning] Convolution.hpp@69,33: type qualifiers ignored on function return type [-Wignored-qualifiers]
[Warning] Convolution.hpp@70,32: type qualifiers ignored on function return type [-Wignored-qualifiers]
[Warning] Convolution.hpp@71,38: type qualifiers ignored on function return type [-Wignored-qualifiers]
[Warning] Convolution.hpp@72,39: type qualifiers ignored on function return type [-Wignored-qualifiers]
[Warning] Convolution.hpp@91,33: type qualifiers ignored on function return type [-Wignored-qualifiers]
[Warning] Convolution.hpp@92,32: type qualifiers ignored on function return type [-Wignored-qualifiers]
[Warning] Convolution.hpp@93,38: type qualifiers ignored on function return type [-Wignored-qualifiers]
[Warning] Convolution.hpp@94,39: type qualifiers ignored on function return type [-Wignored-qualifiers]
[Warning] main.cpp@179,43: ISO C++ forbids converting a string constant to 'char*' [-Wwrite-strings]
[Warning] main.cpp@375,20: comparison between signed and unsigned integer expressions [-Wsign-compare]
[Warning] Matrix_kernels.hpp@44,22: unused variable 'input_shape' [-Wunused-variable]
[Warning] Matrix_kernels.hpp@44,22: unused variable 'input_shape' [-Wunused-variable]
[Warning] TensorMap.hpp@41,9: unused variable 'i' [-Wunused-variable]
[Warning] TensorMap.hpp@87,23: comparison between signed and unsigned integer expressions [-Wsign-compare]
[Warning] arenaAllocator.hpp@229,12: comparison between signed and unsigned integer expressions [-Wsign-compare]
[Warning] arenaAllocator.hpp@229,12: comparison between signed and unsigned integer expressions [-Wsign-compare]
[Error] Convolution_kernels.hpp@104,49: conversion from 'int' to 'IntegralValue' is ambiguous
[Warning] Convolution_kernels.hpp@23,17: unused variable 'out_depth' [-Wunused-variable]
[Error] Convolution_kernels.hpp@104,49: conversion from 'int' to 'IntegralValue' is ambiguous
[Warning] Convolution_kernels.hpp@23,17: unused variable 'out_depth' [-Wunused-variable]
[ERROR] In file included from ./uTensor/src/uTensor/core/tensorBase.hpp:3:0,
from ./uTensor/src/uTensor/core/tensor.hpp:5,
from ./uTensor/src/uTensor/core/TensorMap.hpp:7,
from ./uTensor/src/uTensor/core/modelBase.hpp:3,
from ./uTensor/src/uTensor.h:6,
from .\models/my_model/my_model.hpp:4,
from .\main.cpp:24:
./uTensor/src/uTensor/core/quantizationPrimitives.hpp:24:35: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
inline const int num_channels() const { return _num_channels; };
^~~~~
In file included from ./uTensor/src/uTensor/core/tensor.hpp:6:0,
from ./uTensor/src/uTensor/core/TensorMap.hpp:7,
from ./uTensor/src/uTensor/core/modelBase.hpp:3,
from ./uTensor/src/uTensor.h:6,
from .\models/my_model/my_model.hpp:4,
from .\main.cpp:24:
./uTensor/src/uTensor/core/utensor_string.hpp: In member function 'uint32_t uTensor::string::hash(const char*)':
./uTensor/src/uTensor/core/utensor_string.hpp:15:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for (int i = 0; i < strlen(c); i++) {
~~^~~~~~~~~~~
In file included from ./uTensor/src/uTensor.h:21:0,
from .\models/my_model/my_model.hpp:4,
from .\main.cpp:24:
./uTensor/src/uTensor/ops/Convolution.hpp: At global scope:
./uTensor/src/uTensor/ops/Convolution.hpp:29:33: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
inline const int16_t height() const { return filter->get_shape()[1]; }
^~~~~
./uTensor/src/uTensor/ops/Convolution.hpp:30:32: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
inline const int16_t width() const { return filter->get_shape()[2]; }
^~~~~
./uTensor/src/uTensor/ops/Convolution.hpp:31:38: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
inline const int16_t in_channels() const { return filter->get_shape()[3]; }
^~~~~
./uTensor/src/uTensor/ops/Convolution.hpp:32:39: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
inline const int16_t out_channels() const { return filter->get_shape()[0]; }
^~~~~
./uTensor/src/uTensor/ops/Convolution.hpp:49:33: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
inline const int16_t height() const { return h; }
^~~~~
./uTensor/src/uTensor/ops/Convolution.hpp:50:32: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
inline const int16_t width() const { return w; }
^~~~~
./uTensor/src/uTensor/ops/Convolution.hpp:51:38: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
inline const int16_t in_channels() const { return 1; }
^~~~~
./uTensor/src/uTensor/ops/Convolution.hpp:52:39: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
inline const int16_t out_channels() const { return c; }
^~~~~
./uTensor/src/uTensor/ops/Convolution.hpp:69:33: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
inline const int16_t height() const { return h; }
^~~~~
./uTensor/src/uTensor/ops/Convolution.hpp:70:32: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
inline const int16_t width() const { return w; }
^~~~~
./uTensor/src/uTensor/ops/Convolution.hpp:71:38: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
inline const int16_t in_channels() const { return 1; }
^~~~~
./uTensor/src/uTensor/ops/Convolution.hpp:72:39: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
inline const int16_t out_channels() const { return c; }
^~~~~
./uTensor/src/uTensor/ops/Convolution.hpp:91:33: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
inline const int16_t height() const { return h; }
^~~~~
./uTensor/src/uTensor/ops/Convolution.hpp:92:32: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
inline const int16_t width() const { return w; }
^~~~~
./uTensor/src/uTensor/ops/Convolution.hpp:93:38: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
inline const int16_t in_channels() const { return 1; }
^~~~~
./uTensor/src/uTensor/ops/Convolution.hpp:94:39: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
inline const int16_t out_channels() const { return c; }
^~~~~
.\main.cpp: In function 'void init()':
.\main.cpp:179:43: warning: ISO C++ forbids converting a string constant to 'char*' [-Wwrite-strings]
print_text("Initializing", 10, 59, 76, 15);
^
.\main.cpp: In function 'void set_activity(const uTensor::Tensor&)':
.\main.cpp:375:20: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for (int i = 0; i < num_elems; i++)
~~^~~~~~~~~~~
In file included from ./uTensor/src/uTensor/ops/Matrix.hpp:6:0,
from ./uTensor/src/uTensor.h:22,
from .\models/my_model/my_model.hpp:4,
from .\main.cpp:24:
./uTensor/src/uTensor/ops/Matrix_kernels.hpp: In instantiation of 'void uTensor::matrix_mult_kernel_v2(uTensor::Tensor&, const uTensor::Tensor&, const uTensor::Tensor&, Bias, uTensor::Fuseable::Activation) [with T = float; Bias = uTensor::ReferenceOperators::MatrixMultOperatorV2::wBias; uTensor::Fuseable::Activation = std::function<float(float)>]':
./uTensor/src/uTensor/ops/Matrix.hpp:75:55: required from here
./uTensor/src/uTensor/ops/Matrix_kernels.hpp:44:22: warning: unused variable 'input_shape' [-Wunused-variable]
const TensorShape& input_shape = input->get_shape();
^~~~~~~~~~~
./uTensor/src/uTensor/ops/Matrix_kernels.hpp: In instantiation of 'void uTensor::matrix_mult_kernel_v2(uTensor::Tensor&, const uTensor::Tensor&, const uTensor::Tensor&, Bias, uTensor::Fuseable::Activation) [with T = float; Bias = uTensor::ReferenceOperators::MatrixMultOperatorV2::NoBias; uTensor::Fuseable::Activation = std::function<float(float)>]':
./uTensor/src/uTensor/ops/Matrix.hpp:80:56: required from here
./uTensor/src/uTensor/ops/Matrix_kernels.hpp:44:22: warning: unused variable 'input_shape' [-Wunused-variable]
In file included from ./uTensor/src/uTensor/core/modelBase.hpp:3:0,
from ./uTensor/src/uTensor.h:6,
from .\models/my_model/my_model.hpp:4,
from .\main.cpp:24:
./uTensor/src/uTensor/core/TensorMap.hpp: In instantiation of 'uTensor::FixedTensorMap::FixedTensorMap(std::initializer_listuTensor::SimpleNamedTensor) [with unsigned int size = 1u]':
.\main.cpp:343:47: required from here
./uTensor/src/uTensor/core/TensorMap.hpp:41:9: warning: unused variable 'i' [-Wunused-variable]
int i = 0;
^
./uTensor/src/uTensor/core/TensorMap.hpp: In instantiation of 'uTensor::FixedTensorMap& uTensor::FixedTensorMap::operator=(const uTensor::FixedTensorMap&) [with unsigned int size = 1u]':
./uTensor/src/uTensor/core/modelBase.hpp:46:12: required from 'uTensor::ModelInterface<num_inputs, num_outputs>& uTensor::ModelInterface<num_inputs, num_outputs>::set_inputs(uTensor::FixedTensorMap<num_inputs>&&) [with unsigned int num_inputs = 1u; unsigned int num_outputs = 1u]'
.\main.cpp:343:47: required from here
./uTensor/src/uTensor/core/TensorMap.hpp:87:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
for (int i = 0; i < size; i++) _map[i] = that._map[i];
~~^~~~~~
In file included from ./uTensor/src/uTensor.h:12:0,
from .\models/my_model/my_model.hpp:4,
from .\main.cpp:24:
./uTensor/src/uTensor/allocators/arenaAllocator.hpp: In instantiation of 'void* uTensor::localCircularArenaAllocator<size, T>::_allocate(size_t) [with unsigned int size = 1920u; T = short unsigned int; size_t = unsigned int]':
.\main.cpp:543:1: required from here
./uTensor/src/uTensor/allocators/arenaAllocator.hpp:229:12: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
if (sz > (end() - reinterpret_cast<uint8_t*>(cursor))){
~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./uTensor/src/uTensor/allocators/arenaAllocator.hpp: In instantiation of 'void* uTensor::localCircularArenaAllocator<size, T>::_allocate(size_t) [with unsigned int size = 7736u; T = short unsigned int; size_t = unsigned int]':
.\main.cpp:543:1: required from here
./uTensor/src/uTensor/allocators/arenaAllocator.hpp:229:12: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
In file included from ./uTensor/src/uTensor/ops/Convolution.hpp:6:0,
from ./uTensor/src/uTensor.h:21,
from .\models/my_model/my_model.hpp:4,
from .\main.cpp:24:
./uTensor/src/uTensor/ops/Convolution_kernels.hpp: In instantiation of 'void uTensor::generic_convolution_kernel(uTensor::Tensor&, const uTensor::Tensor&, Filter, Bias, uTensor::Padding, const uint16_t (&)[4]) [with T = signed char; Filter = uTensor::ReferenceOperators::ConvFilter; Bias = uTensor::ReferenceOperators::wBias; uint16_t = short unsigned int]':
./uTensor/src/uTensor/ops/Convolution.hpp:132:51: required from 'void uTensor::ReferenceOperators::Conv2dOperator::compute() [with T = signed char]'
.\main.cpp:543:1: required from here
./uTensor/src/uTensor/ops/Convolution_kernels.hpp:104:49: error: conversion from 'int' to 'IntegralValue' is ambiguous
out(batch, out_y, out_x, out_channel) = filter.finalize() + bias(out_channel);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from ./uTensor/src/uTensor/core/quantizationPrimitives.hpp:3:0,
from ./uTensor/src/uTensor/core/tensorBase.hpp:3,
from ./uTensor/src/uTensor/core/tensor.hpp:5,
from ./uTensor/src/uTensor/core/TensorMap.hpp:7,
from ./uTensor/src/uTensor/core/modelBase.hpp:3,
from ./uTensor/src/uTensor.h:6,
from .\models/my_model/my_model.hpp:4,
from .\main.cpp:24:
./uTensor/src/uTensor/core/types.hpp:102:3: note: candidate: IntegralValue::IntegralValue(float&&)
IntegralValue(float&& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:101:3: note: candidate: IntegralValue::IntegralValue(int32_t&&)
IntegralValue(int32_t&& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:100:3: note: candidate: IntegralValue::IntegralValue(uint32_t&&)
IntegralValue(uint32_t&& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:99:3: note: candidate: IntegralValue::IntegralValue(int16_t&&)
IntegralValue(int16_t&& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:98:3: note: candidate: IntegralValue::IntegralValue(uint16_t&&)
IntegralValue(uint16_t&& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:97:3: note: candidate: IntegralValue::IntegralValue(int8_t&&)
IntegralValue(int8_t&& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:96:3: note: candidate: IntegralValue::IntegralValue(uint8_t&&)
IntegralValue(uint8_t&& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:94:3: note: candidate: IntegralValue::IntegralValue(const float&)
IntegralValue(const float& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:93:3: note: candidate: IntegralValue::IntegralValue(const int32_t&)
IntegralValue(const int32_t& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:92:3: note: candidate: IntegralValue::IntegralValue(const uint32_t&)
IntegralValue(const uint32_t& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:91:3: note: candidate: IntegralValue::IntegralValue(const int16_t&)
IntegralValue(const int16_t& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:90:3: note: candidate: IntegralValue::IntegralValue(const uint16_t&)
IntegralValue(const uint16_t& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:89:3: note: candidate: IntegralValue::IntegralValue(const int8_t&)
IntegralValue(const int8_t& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:88:3: note: candidate: IntegralValue::IntegralValue(const uint8_t&)
IntegralValue(const uint8_t& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:106:18: note: initializing argument 1 of 'IntegralValue& IntegralValue::operator=(IntegralValue&&)'
IntegralValue& operator=(IntegralValue&& that);
^~~~~~~~
In file included from ./uTensor/src/uTensor/ops/Convolution.hpp:6:0,
from ./uTensor/src/uTensor.h:21,
from .\models/my_model/my_model.hpp:4,
from .\main.cpp:24:
./uTensor/src/uTensor/ops/Convolution_kernels.hpp:23:17: warning: unused variable 'out_depth' [-Wunused-variable]
const int16_t out_depth = filter.out_channels();
^~~~~~~~~
./uTensor/src/uTensor/ops/Convolution_kernels.hpp: In instantiation of 'void uTensor::generic_convolution_kernel(uTensor::Tensor&, const uTensor::Tensor&, Filter, Bias, uTensor::Padding, const uint16_t (&)[4]) [with T = signed char; Filter = uTensor::ReferenceOperators::ConvFilter; Bias = uTensor::ReferenceOperators::NoBias; uint16_t = short unsigned int]':
./uTensor/src/uTensor/ops/Convolution.hpp:136:51: required from 'void uTensor::ReferenceOperators::Conv2dOperator::compute() [with T = signed char]'
.\main.cpp:543:1: required from here
./uTensor/src/uTensor/ops/Convolution_kernels.hpp:104:49: error: conversion from 'int' to 'IntegralValue' is ambiguous
out(batch, out_y, out_x, out_channel) = filter.finalize() + bias(out_channel);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from ./uTensor/src/uTensor/core/quantizationPrimitives.hpp:3:0,
from ./uTensor/src/uTensor/core/tensorBase.hpp:3,
from ./uTensor/src/uTensor/core/tensor.hpp:5,
from ./uTensor/src/uTensor/core/TensorMap.hpp:7,
from ./uTensor/src/uTensor/core/modelBase.hpp:3,
from ./uTensor/src/uTensor.h:6,
from .\models/my_model/my_model.hpp:4,
from .\main.cpp:24:
./uTensor/src/uTensor/core/types.hpp:102:3: note: candidate: IntegralValue::IntegralValue(float&&)
IntegralValue(float&& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:101:3: note: candidate: IntegralValue::IntegralValue(int32_t&&)
IntegralValue(int32_t&& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:100:3: note: candidate: IntegralValue::IntegralValue(uint32_t&&)
IntegralValue(uint32_t&& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:99:3: note: candidate: IntegralValue::IntegralValue(int16_t&&)
IntegralValue(int16_t&& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:98:3: note: candidate: IntegralValue::IntegralValue(uint16_t&&)
IntegralValue(uint16_t&& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:97:3: note: candidate: IntegralValue::IntegralValue(int8_t&&)
IntegralValue(int8_t&& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:96:3: note: candidate: IntegralValue::IntegralValue(uint8_t&&)
IntegralValue(uint8_t&& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:94:3: note: candidate: IntegralValue::IntegralValue(const float&)
IntegralValue(const float& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:93:3: note: candidate: IntegralValue::IntegralValue(const int32_t&)
IntegralValue(const int32_t& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:92:3: note: candidate: IntegralValue::IntegralValue(const uint32_t&)
IntegralValue(const uint32_t& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:91:3: note: candidate: IntegralValue::IntegralValue(const int16_t&)
IntegralValue(const int16_t& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:90:3: note: candidate: IntegralValue::IntegralValue(const uint16_t&)
IntegralValue(const uint16_t& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:89:3: note: candidate: IntegralValue::IntegralValue(const int8_t&)
IntegralValue(const int8_t& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:88:3: note: candidate: IntegralValue::IntegralValue(const uint8_t&)
IntegralValue(const uint8_t& u);
^~~~~~~~~~~~~
./uTensor/src/uTensor/core/types.hpp:106:18: note: initializing argument 1 of 'IntegralValue& IntegralValue::operator=(IntegralValue&&)'
IntegralValue& operator=(IntegralValue&& that);
^~~~~~~~
In file included from ./uTensor/src/uTensor/ops/Convolution.hpp:6:0,
from ./uTensor/src/uTensor.h:21,
from .\models/my_model/my_model.hpp:4,
from .\main.cpp:24:
./uTensor/src/uTensor/ops/Convolution_kernels.hpp:23:17: warning: unused variable 'out_depth' [-Wunused-variable]
const int16_t out_depth = filter.out_channels();
^~~~~~~~~

So the problem should be here:

[Error] Convolution_kernels.hpp@104,49: conversion from 'int' to 'IntegralValue' is ambiguous

Is the shape of input_data correct? How can I solve the issue?

Thank you.

@mbartling (Member) commented:

Hey @matteodonati, yes, the input sizes are correct here, but the expected types in the model are not. It looks like the convolutional layer is expecting a symmetrically quantized input (signed char), but you are passing it a float.

./uTensor/src/uTensor/ops/Convolution_kernels.hpp: In instantiation of 'void uTensor::generic_convolution_kernel(uTensor::Tensor&, const uTensor::Tensor&, Filter, Bias, uTensor::Padding, const uint16_t (&)[4]) 
[with T = signed char; // Operator type
Filter = uTensor::ReferenceOperators::ConvFilter; Bias = uTensor::ReferenceOperators::wBias; uint16_t = short unsigned int]':

Is your input data a float? If so, I recommend stitching in a symmetric quantize operator. This op's quantization params should be configured by the codegen (either directly in your model code or in the model params header file). If that doesn't work, you can get the zero point and scale directly from your tflite model via Netron.

https://github.com/uTensor/uTensor/blob/master/src/uTensor/ops/symmetric_quantization/QuantizeOps.hpp#L75-L86

  TflmSymQuantOps::QuantizeOperator<int8_t, float> quantOp;
  Tensor qInput = new RamTensor({1, 150, 6}, i8);
  int32_t qInput_zp = -128;
  float qInput_scale = 1.0;
  PerTensorQuantizationParams qInput_quant_params(qInput_zp, qInput_scale);
  qInput->set_quantization_params(qInput_quant_params);

  quantOp
      .set_inputs({
          {TflmSymQuantOps::QuantizeOperator<int8_t, float>::input, input},
      })
      .set_outputs({
          {TflmSymQuantOps::QuantizeOperator<int8_t, float>::output, qInput},
      })
      .eval();
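(For intuition, the affine quantization those params describe can be sketched in plain Python. This assumes the common TFLite convention q = round(x / scale) + zero_point, clamped to int8; it is an illustration, not uTensor's actual kernel.)

```python
def quantize_int8(x, scale, zero_point):
    # TFLite-style affine quantization: q = round(x / scale) + zero_point,
    # clamped to the int8 range [-128, 127].
    q = int(round(x / scale)) + zero_point
    return max(-128, min(127, q))
```

With zp = -128 and scale = 1.0 as in the snippet above, an input of 0.0 maps to -128 and values above 255.0 saturate at 127.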

One final note: your input is a RomTensor, which is meant for static read-only data. You should switch to a BufferTensor, which is meant for userspace-managed data. It has the same signature as RomTensor, so you just need to change the type name: https://github.com/uTensor/uTensor/blob/master/src/uTensor/tensors/BufferTensor.hpp#L15

Let me know if you have any questions

@matteodonati (Author) commented:

Thank you for the quick response @mbartling!

I can confirm that input_data is a float array, but looking at "model.hpp" and "model.cpp" I don't see any mention of the type signed char.

I used Keras to define, train and evaluate my model. Below is the code used to create it:

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(filters = 16, kernel_size = 3, activation = 'relu', input_shape = (N_TIMESTEPS, N_FEATURES)),
    tf.keras.layers.MaxPooling1D(pool_size = 3),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation = 'relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(32, activation = 'relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(N_CLASSES)
  ])

where N_TIMESTEPS = 150, N_FEATURES = 6, N_CLASSES = 4.
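(The intermediate shapes this model produces can be checked with quick arithmetic, assuming Keras defaults: 'valid' padding and stride 1 for Conv1D, and stride equal to pool_size for MaxPooling1D. The results match the [1, 148, 1, 16], [1, 49, 16], and [1, 784] shapes in the codegen warnings.)

```python
# Shape arithmetic for the model above (assumed Keras default strides/padding).
n_timesteps, n_features, n_filters = 150, 6, 16
conv_len = n_timesteps - 3 + 1   # Conv1D, kernel_size=3, 'valid' -> 148
pool_len = conv_len // 3         # MaxPooling1D, pool_size=3 -> 49
flat_len = pool_len * n_filters  # Flatten over 16 filters -> 784
```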

And this is the code used to generate the C++ model:

num_calibration_steps = 128

calibration_dtype = tf.float32

def representative_dataset_gen():
    for _ in range(num_calibration_steps):
        sample = test_features[best_index][np.random.randint(0, test_features[best_index].shape[0] - 1)]
        sample = sample[tf.newaxis, ...]
        sample = tf.cast(sample, dtype = calibration_dtype)
        yield [sample]

tflm_keras_export(models[best_index], representive_dataset = representative_dataset_gen, model_name = 'my_model', target = 'utensor')

where test_features[best_index] is an array of shape (1600, 150, 6) and models[best_index] is an object of type tensorflow.python.keras.engine.sequential.Sequential.

This is the output generated when I try to create the C++ model:

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1786: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
INFO:tensorflow:Assets written to: /tmp/utensor_6e4qc3u8/saved_model/assets
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 1, 150, 6](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 148, 1, 16](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 49, 16](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 784](<class 'list'>)
[WARNING quantize.py <module> @ 12] trying to import deprecated quantization transformer
[INFO transformer.py transform @ 23] Transforming graph: my_model
[INFO transformer.py transform @ 24] Transform pipeline: dropout(name_pattern=r'(dropout[_\w\d]*)/.*') -> linear_reorder -> inline -> biasAdd -> remove_id_op -> fake_gather_v2 -> refcnt
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 1, 150, 6](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 148, 1, 16](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 49, 16](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 784](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 1, 150, 6](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 148, 1, 16](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 49, 16](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 784](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 1, 150, 6](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 148, 1, 16](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 49, 16](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 784](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 1, 150, 6](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 148, 1, 16](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 49, 16](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 784](<class 'list'>)
[WARNING ns_transformer.py transform @ 243] enabling fake_gather_v2 will force replacing GatherV2 with Gather
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 1, 150, 6](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 148, 1, 16](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 49, 16](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 784](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 1, 150, 6](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 148, 1, 16](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 49, 16](<class 'list'>)
[WARNING base.py __attrs_post_init__ @ 300] cannot convert new_shape to generic value: [1, 784](<class 'list'>)
[INFO transformer.py transform @ 31] Graph transormation done
[INFO generic_graph_lower.py apply @ 56] topo ordered tensor life span analysis done
[INFO utils.py wrapped @ 469] collapsed time of calling apply: 0.0016 seconds
[INFO generic_graph_lower.py _solve_space_alloc @ 204] optimal tensor allocation plan solved, total memory required: 4736 bytes
[INFO generic_graph_lower.py _solve_space_alloc @ 205] number of tensors allocated: 12
[INFO utils.py wrapped @ 469] collapsed time of calling _solve_space_alloc: 0.0153 seconds
[INFO _code_generator.py _time_slot_generate_files @ 245] model parameters header file generated: constants/my_model/params_my_model.hpp
[INFO _code_generator.py _time_slot_generate_files @ 266] model header file generated: models/my_model/my_model.hpp
[INFO _code_generator.py _time_slot_generate_files @ 286] model cpp file generated: models/my_model/my_model.cpp

I also attached the three generated files: my_model.zip

Is the representative_dataset_gen function incorrect? Should those warnings appear during the generation of the .hpp and .cpp files?

Thank you very much.

@mbartling (Member) commented:

Ah, confirmed: the model looks correct! It turns out we recently added a bunch more quantized operators, including quantized Conv2D, and those are waiting for the next minor release. You can get them working now by switching the uTensor branch from master to develop. develop has been pretty stable lately, but let me know if you have any trouble with this.

FYI, the operators maintain a mostly common interface and bit-accurate computation across namespaces, so ops like Conv2D can be interchanged just by changing the namespace in the model header. For example, the default reference quantized Conv2D stores weights as floats and does accumulation with floats for clarity (slow on embedded systems), whereas the TflmSymQuantOps namespace uses integer accumulation (faster).
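(As a rough, hypothetical illustration of that difference, not uTensor's actual kernel code: an integer-accumulating kernel multiplies the quantized int8 values in a wide integer accumulator and applies the combined scale once at the end, instead of converting to float per element.)

```python
def int8_dot(xs, ws, x_scale, w_scale):
    # Products of int8 values accumulate in a wide integer accumulator
    # (int32 in a real kernel); the combined scale is applied once at
    # the end rather than per element.
    acc = 0
    for x, w in zip(xs, ws):
        acc += x * w
    return acc * (x_scale * w_scale)
```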

@matteodonati (Author) commented:

Thank you!

I tried to switch uTensor to develop and now I get a different error:

[Error] ReduceFunc.hpp@25,27: no match for 'operator[]' (operand types are 'uTensor::FixedTensorMap<2u>' and 'uTensor::Tensor')
[Error] ReduceFunc.hpp@26,29: no match for 'operator[]' (operand types are 'uTensor::FixedTensorMap<1u>' and 'uTensor::Tensor')
[Error] ReduceFunc.hpp@37,13: no match for 'operator[]' (operand types are 'uTensor::Tensor' and 'uint32_t {aka long unsigned int}')
[ERROR] In file included from ./uTensor/src/uTensor/core/tensorBase.hpp:3:0,
                 from ./uTensor/src/uTensor/core/tensor.hpp:5,
                 from ./uTensor/src/uTensor/core/TensorMap.hpp:7,
                 from ./uTensor/src/uTensor/core/operatorBase.hpp:3,
                 from .\uTensor\src\uTensor\ops\ReduceFunc.hpp:1,
                 from .\uTensor\src\uTensor\ops\ReduceFunc.cpp:1:
./uTensor/src/uTensor/core/quantizationPrimitives.hpp:24:35: warning: type qualifiers ignored on function return type [-Wignored-qualifiers]
   inline const int num_channels() const { return _num_channels; };
                                   ^~~~~
In file included from ./uTensor/src/uTensor/core/tensor.hpp:6:0,
                 from ./uTensor/src/uTensor/core/TensorMap.hpp:7,
                 from ./uTensor/src/uTensor/core/operatorBase.hpp:3,
                 from .\uTensor\src\uTensor\ops\ReduceFunc.hpp:1,
                 from .\uTensor\src\uTensor\ops\ReduceFunc.cpp:1:
./uTensor/src/uTensor/core/utensor_string.hpp: In member function 'uint32_t uTensor::string::hash(const char*)':
./uTensor/src/uTensor/core/utensor_string.hpp:15:23: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
     for (int i = 0; i < strlen(c); i++) {
                     ~~^~~~~~~~~~~
In file included from .\uTensor\src\uTensor\ops\ReduceFunc.cpp:1:0:
.\uTensor\src\uTensor\ops\ReduceFunc.hpp: In member function 'void uTensor::ReferenceOperators::ReduceMeanOperator<T>::compute()':
.\uTensor\src\uTensor\ops\ReduceFunc.hpp:25:27: error: no match for 'operator[]' (operand types are 'uTensor::FixedTensorMap<2u>' and 'uTensor::Tensor')
     Tensor& input = inputs[input].tensor();
                           ^
In file included from ./uTensor/src/uTensor/core/operatorBase.hpp:3:0,
                 from .\uTensor\src\uTensor\ops\ReduceFunc.hpp:1,
                 from .\uTensor\src\uTensor\ops\ReduceFunc.cpp:1:
./uTensor/src/uTensor/core/TensorMap.hpp:60:30: note: candidate: uTensor::SimpleNamedTensor& uTensor::FixedTensorMap<size>::operator[](const uTensor::string&) [with unsigned int size = 2u]
   virtual SimpleNamedTensor& operator[](const uTensor::string& name) override {
                              ^~~~~~~~
./uTensor/src/uTensor/core/TensorMap.hpp:60:30: note:   no known conversion for argument 1 from 'uTensor::Tensor' to 'const uTensor::string&'
./uTensor/src/uTensor/core/TensorMap.hpp:66:36: note: candidate: const uTensor::SimpleNamedTensor& uTensor::FixedTensorMap<size>::operator[](const uTensor::string&) const [with unsigned int size = 2u]
   virtual const SimpleNamedTensor& operator[](
                                    ^~~~~~~~
./uTensor/src/uTensor/core/TensorMap.hpp:66:36: note:   no known conversion for argument 1 from 'uTensor::Tensor' to 'const uTensor::string&'
In file included from .\uTensor\src\uTensor\ops\ReduceFunc.cpp:1:0:
.\uTensor\src\uTensor\ops\ReduceFunc.hpp:26:29: error: no match for 'operator[]' (operand types are 'uTensor::FixedTensorMap<1u>' and 'uTensor::Tensor')
     Tensor& output = outputs[output].tensor();
                             ^
In file included from ./uTensor/src/uTensor/core/operatorBase.hpp:3:0,
                 from .\uTensor\src\uTensor\ops\ReduceFunc.hpp:1,
                 from .\uTensor\src\uTensor\ops\ReduceFunc.cpp:1:
./uTensor/src/uTensor/core/TensorMap.hpp:60:30: note: candidate: uTensor::SimpleNamedTensor& uTensor::FixedTensorMap<size>::operator[](const uTensor::string&) [with unsigned int size = 1u]
   virtual SimpleNamedTensor& operator[](const uTensor::string& name) override {
                              ^~~~~~~~
./uTensor/src/uTensor/core/TensorMap.hpp:60:30: note:   no known conversion for argument 1 from 'uTensor::Tensor' to 'const uTensor::string&'
./uTensor/src/uTensor/core/TensorMap.hpp:66:36: note: candidate: const uTensor::SimpleNamedTensor& uTensor::FixedTensorMap<size>::operator[](const uTensor::string&) const [with unsigned int size = 1u]
   virtual const SimpleNamedTensor& operator[](
                                    ^~~~~~~~
./uTensor/src/uTensor/core/TensorMap.hpp:66:36: note:   no known conversion for argument 1 from 'uTensor::Tensor' to 'const uTensor::string&'
In file included from .\uTensor\src\uTensor\ops\ReduceFunc.cpp:1:0:
.\uTensor\src\uTensor\ops\ReduceFunc.hpp:37:13: error: no match for 'operator[]' (operand types are 'uTensor::Tensor' and 'uint32_t {aka long unsigned int}')
       output[new_offset] += value;
             ^

I'm sorry to bother you this much.

@mbartling
Member

Cool, I fixed a small issue with duplicate names in that operator (input was used as both a Tensor and an enum). If you pull develop it should be fixed.

@dboyliao can you add a test for this op?

@dboyliao
Member

You mean Conv1D?
Sure thing.
I can probably do it this weekend.

@dboyliao
Member

@matteodonati which branch are you compiling against?
From the log, I see output[new_offset], which is not correct syntax.
But I can't find that line of code on the develop branch.

FYI, output is a Tensor, not an ordinary array, so you should not use [] to pass an offset.
You should use the () operator instead.

@mbartling
Member

@dboyliao I updated the [] to () with the name collision bugfix above.

@dboyliao
Member

Oh, I see.

@mbartling
Member

Hey @matteodonati, sorry for the inconvenience, but ReduceMean is one of the few untested ops in uTensor. We will work on adding a test and let you know when it's done. For now I will go ahead and mark it with the correct tag.

@dboyliao
Member

FYI, this is the relevant PR:
#209

@matteodonati
Author

matteodonati commented Sep 23, 2020

Thank you everyone!

I was able to compile the application. The problem now is that for every different input_data I provide, I always get the same output value: output_log.txt.

This is the code used in main.cpp:

Tensor input = new BufferTensor({1, 150, 6}, flt, input_data);

Tensor output = new RamTensor({1, 4}, flt);

model.set_inputs({{My_model::input_0, input}}).set_outputs({{My_model::output_0, output}}).eval();

...

input.free();

output.free();

@mbartling
Member

Yeah my guess is this has to do with the reduce mean. Do you have access to a debugger? If so can you check that the inputs to reduce mean are always different?

@matteodonati
Author

Yeah my guess is this has to do with the reduce mean. Do you have access to a debugger? If so can you check that the inputs to reduce mean are always different?

Unfortunately I don't, I'm sorry.

@mbartling
Member

Hey @matteodonati, I think I know what the issue is with ReduceMean. I will work on a fix for it sometime this afternoon and let you know.

@matteodonati
Author

Thank you very much, I'll wait!

@mbartling
Member

Update: I am dumb and misread MaxPool as ReduceMean, so that's not the issue. @dboyliao @matteodonati do you know how weights are stored in Keras's Conv1D? I know TFLM does some shuffling of the weight tensor, and it may be that this only affects Conv2D.

@dboyliao
Member

@mbartling
The Keras Conv source code:
https://github.com/tensorflow/tensorflow/blob/v2.3.0/tensorflow/python/keras/layers/convolutional.py#L194-L204

It looks like the filter shape is (*kernel_size, in_channel, out_channel).
At least that's true for Keras; I'm not sure if it's the same for TFLM.

@dboyliao
Member

It basically creates a variable tensor and returns it.

@matteodonati
Author

If it can help, I just tried to use Conv2D instead of Conv1D in Keras:

  model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(filters = 32, kernel_size = (1, 3), activation = 'relu', input_shape = (1, N_TIMESTEPS, N_FEATURES), data_format = "channels_last"),
    tf.keras.layers.MaxPooling2D(pool_size = (1, 3), data_format = "channels_last"),
    tf.keras.layers.Conv2D(filters = 16, kernel_size = (1, 2), activation = 'relu'),
    tf.keras.layers.MaxPooling2D(pool_size = (1, 2)),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(448, activation = 'relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(N_CLASSES)
  ])

so now input_data has shape (1, 150, 6).

I tried to compile the application using the same version of uTensor used in utensor-helloworld, and this gave me the same error posted here.
The problem is that the utensor-helloworld example works just fine and uses exactly the same operators: Conv2D and MaxPooling2D.

@mbartling
Member

Just double checking, are you on the develop branch of uTensor? Also, can you post which Conv2D operator gets generated? Is it in the ReferenceOperators namespace, or in the TflmSymQuantOps namespace?

Sorry for any confusion/roundabout debugging

@matteodonati
Author

Just double checking, are you on the develop branch of uTensor? Also, can you post which Conv2D operator gets generated? Is it in the ReferenceOperators namespace, or in the TflmSymQuantOps namespace?

Sorry for any confusion/roundabout debugging

When I saw that Conv2D was not compiling even with the same version of uTensor used in utensor-helloworld, I switched to the develop branch, as I did before, but I still always get the same (wrong) output when I run inference.

The operators generated are:

// Operators
  ReferenceOperators::MaxPoolOperator<int8_t> op_MaxPoolOperator_000;

  TflmSymQuantOps::DequantizeOperator<float, int8_t> op_DequantizeOperator_001;

  TflmSymQuantOps::FullyConnectedOperator<int8_t> op_FullyConnectedOperator_002;

  TflmSymQuantOps::FullyConnectedOperator<int8_t> op_FullyConnectedOperator_003;

  ReferenceOperators::MaxPoolOperator<int8_t> op_MaxPoolOperator_004;

  ReferenceOperators::ReLUOperator<int8_t> op_ReLUOperator_005;

  ReferenceOperators::ReshapeOperator<int8_t> op_ReshapeOperator_006;

  TflmSymQuantOps::QuantizeOperator<int8_t, float> op_QuantizeOperator_007;

  ReferenceOperators::Conv2dOperator<int8_t> op_Conv2dOperator_008;

so the Conv2D is in ReferenceOperators.

@mbartling
Member

mbartling commented Sep 24, 2020

@matteodonati can you DM me on the uTensor Slack? I want to set up a virtual debug session; it seems really odd that you are always getting the same output, and I think it will help us narrow down the bug faster.

@matteodonati
Author

Just sent you a message, thank you.
