
Tflite Export Failure with INT8 Quantization and Dimension Mismatch #12689

Open
1 of 2 tasks
hamedgorji opened this issue May 14, 2024 · 3 comments
Labels
bug Something isn't working

Comments


hamedgorji commented May 14, 2024

Search before asking

  • I have searched the YOLOv8 issues and found no similar bug report.

YOLOv8 Component

Export

Bug

Hi,

I'm encountering an issue when attempting to convert a YOLOv8 segmentation model to TFLite format with INT8 quantization. The process fails with an error indicating a dimension mismatch during the permutation step. This issue did not occur with an older version of YOLOv8 and only appeared after upgrading to the latest version.
@glenn-jocher I would appreciate it if you could help me fix this issue.

Thanks

model = YOLO("Yolov8_Seg\YOLOv8n-Seg-SGD-1000epochs-batch16-imgsz480-data_23719\weights/best.pt")
model.export(format="tflite", data = 'Data/export_data.yaml',imgsz=480, int8=True)

Ultralytics YOLOv8.2.15 🚀 Python-3.10.13 torch-2.1.0 CPU (11th Gen Intel Core(TM) i9-11900KF 3.50GHz)
YOLOv8n-seg summary (fused): 195 layers, 3258454 parameters, 0 gradients, 12.0 GFLOPs
PyTorch: starting from 'Yolov8_Seg\YOLOv8n-Seg-SGD-1000epochs-batch16-imgsz480-data_23719\weights\best.pt' with input shape (1, 3, 480, 480) BCHW and output shape(s) ((1, 38, 4725), (1, 32, 120, 120)) (6.6 MB)
TensorFlow SavedModel: starting export with tensorflow 2.14.0...
WARNING ⚠️ tensorflow<=2.13.1 is required, but tensorflow==2.14.0 is currently installed #5161
ONNX: starting export with onnx 1.14.1 opset 17...
ONNX: simplifying with onnxsim 0.4.35...
ONNX: export success ✅ 1.3s, saved as 'Yolov8_Seg\YOLOv8n-Seg-SGD-1000epochs-batch16-imgsz480-data_23719\weights\best.onnx' (12.6 MB)
TensorFlow SavedModel: collecting INT8 calibration images from 'data=Data/export_data.yaml'
Scanning D:\H\Project\Code\yolov8\Data\calibration... 0 images, 340 backgrounds, 0 corrupt: 100%|██████████| 340/340 [00:00<00:00, 7435.55it/s]
WARNING ⚠️ No labels found in D:\H\Project\Code\yolov8\Data\calibration.cache. See https://docs.ultralytics.com/datasets/detect for dataset formatting guidance.
New cache created: D:\H\Project\Code\yolov8\Data\calibration.cache
WARNING ⚠️ No labels found in D:\H\Project\Code\yolov8\Data\calibration.cache, training may not work correctly. See https://docs.ultralytics.com/datasets/detect for dataset formatting guidance.
TensorFlow SavedModel: export failure ❌ 1.4s: permute(sparse_coo): number of dimensions in the tensor input does not match the length of the desired ordering of dimensions i.e. input.dim() = 4 is not equal to len(dims) = 3
Traceback (most recent call last):
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\IPython\core\interactiveshell.py", line 3526, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
model.export(format="tflite", data = 'Data/export_data.yaml',imgsz=480, int8=True)
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\ultralytics\engine\model.py", line 602, in export
return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\ultralytics\engine\exporter.py", line 305, in call
f[5], keras_model = self.export_saved_model()
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\ultralytics\engine\exporter.py", line 142, in outer_func
raise e
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\ultralytics\engine\exporter.py", line 137, in outer_func
f, model = inner_func(*args, **kwargs)
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\ultralytics\engine\exporter.py", line 866, in export_saved_model
im = batch["img"].permute(1, 2, 0)[None] # list to nparray, CHW to BHWC
RuntimeError: permute(sparse_coo): number of dimensions in the tensor input does not match the length of the desired ordering of dimensions i.e. input.dim() = 4 is not equal to len(dims) = 3
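For context, the error indicates that batch["img"] is already a 4-D NCHW batch, while the failing line permutes it as if it were a single 3-D CHW image. A minimal sketch of the mismatch (the batch size and shapes below are illustrative, not taken from the exporter):

    import torch

    batch_img = torch.zeros(16, 3, 480, 480)  # hypothetical NCHW calibration batch

    # A 3-element permutation on a 4-D tensor raises the dimension-mismatch RuntimeError above
    try:
        batch_img.permute(1, 2, 0)[None]
    except RuntimeError as err:
        print(err)

    # A 4-D batch needs a 4-element permutation instead, e.g. NCHW -> NHWC
    print(batch_img.permute(0, 2, 3, 1).shape)  # torch.Size([16, 480, 480, 3])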

Environment

No response

Minimal Reproducible Example

No response

Additional

No response

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!
@hamedgorji hamedgorji added the bug Something isn't working label May 14, 2024
@hamedgorji hamedgorji changed the title TensorFlow SavedModel Export Failure with INT8 Quantization and Dimension Mismatch Tflite Export Failure with INT8 Quantization and Dimension Mismatch May 14, 2024
@hamedgorji (Author)

@glenn-jocher I was able to resolve the dimension mismatch error by adding the following lines to the exporter (the dim() check in the middle of the snippet below):

    # Export to TF
    tmp_file = f / "tmp_tflite_int8_calibration_images.npy"  # int8 calibration images file
    np_data = None
    if self.args.int8:
        verbosity = "info"
        if self.args.data:
            # Generate calibration data for integer quantization
            dataloader = self.get_int8_calibration_dataloader(prefix)
            images = []
            for i, batch in enumerate(dataloader):
                if i >= 100:  # maximum number of calibration images
                    break
                    if batch["img"].dim() == 4:
                        im = batch["img"].permute(0, 2, 3, 1)  # From NCHW to NHWC
                    elif batch["img"].dim() == 3:
                        im = batch["img"].permute(1, 2, 0)[None]  # From CHW to NHWC
                images.append(im)
            f.mkdir()
            images = torch.cat(images, 0).float()
            # mean = images.view(-1, 3).mean(0)  # imagenet mean [123.675, 116.28, 103.53]
            # std = images.view(-1, 3).std(0)  # imagenet std [58.395, 57.12, 57.375]
            np.save(str(tmp_file), images.numpy())  # BHWC
            np_data = [["images", tmp_file, [[[[0, 0, 0]]]], [[[[255, 255, 255]]]]]]
    else:
        verbosity = "error"

Now I am facing a new error. The error message is: "Cannot set tensor: Got value of type FLOAT64 but expected type FLOAT32 for input 0, name: serving_default_images:0".

TensorFlow SavedModel: starting TFLite export with onnx2tf 1.17.5...
Automatic generation of each OP name started ========================================
Automatic generation of each OP name complete!
Model loaded ========================================================================
Model conversion started ============================================================
saved_model output started ==========================================================
saved_model output complete!
Float32 tflite output complete!
Float16 tflite output complete!
Input signature information for quantization
signature_name: serving_default
input_name.0: images shape: (1, 480, 480, 3) dtype: <dtype: 'float32'>
Dynamic Range Quantization tflite output complete!
TensorFlow SavedModel: export failure ❌ 29.8s: Cannot set tensor: Got value of type FLOAT64 but expected type FLOAT32 for input 0, name: serving_default_images:0
Traceback (most recent call last):
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\IPython\core\interactiveshell.py", line 3526, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
model.export(format="tflite", data = 'Data/export_data.yaml',imgsz=480, int8=True)
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\ultralytics\engine\model.py", line 602, in export
return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\ultralytics\engine\exporter.py", line 305, in call
f[5], keras_model = self.export_saved_model()
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\ultralytics\engine\exporter.py", line 142, in outer_func
raise e
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\ultralytics\engine\exporter.py", line 137, in outer_func
f, model = inner_func(*args, **kwargs)
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\ultralytics\engine\exporter.py", line 881, in export_saved_model
onnx2tf.convert(
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\onnx2tf\onnx2tf.py", line 1423, in convert
tflite_model = converter.convert()
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\tensorflow\lite\python\lite.py", line 1125, in wrapper
return self._convert_and_export_metrics(convert_func, *args, **kwargs)
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\tensorflow\lite\python\lite.py", line 1079, in _convert_and_export_metrics
result = convert_func(self, *args, **kwargs)
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\tensorflow\lite\python\lite.py", line 1451, in convert
return self._convert_from_saved_model(graph_def)
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\tensorflow\lite\python\lite.py", line 1318, in _convert_from_saved_model
return self._optimize_tflite_model(
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\tensorflow\lite\python\convert_phase.py", line 215, in wrapper
raise error from None # Re-throws the exception.
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\tensorflow\lite\python\convert_phase.py", line 205, in wrapper
return func(*args, **kwargs)
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\tensorflow\lite\python\lite.py", line 1023, in _optimize_tflite_model
model = self._quantize(
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\tensorflow\lite\python\lite.py", line 722, in _quantize
calibrated = calibrate_quantize.calibrate(
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\tensorflow\lite\python\convert_phase.py", line 215, in wrapper
raise error from None # Re-throws the exception.
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\tensorflow\lite\python\convert_phase.py", line 205, in wrapper
return func(*args, **kwargs)
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\tensorflow\lite\python\optimize\calibrator.py", line 254, in calibrate
self._feed_tensors(dataset_gen, resize_input=True)
File "C:\Users\Hamed\miniconda3\envs\yolov8\lib\site-packages\tensorflow\lite\python\optimize\calibrator.py", line 152, in _feed_tensors
self._calibrator.FeedTensor(input_array)
ValueError: Cannot set tensor: Got value of type FLOAT64 but expected type FLOAT32 for input 0, name: serving_default_images:0
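A hedged guess at a follow-up fix, assuming the FLOAT64 values come from the calibration data handed to onnx2tf rather than from the model itself (images, tmp_file, and np_data refer to the snippet earlier in this comment):

    import numpy as np

    # Force the saved calibration images to float32 so the calibrator's input
    # matches the model's float32 'images' tensor
    np.save(str(tmp_file), images.numpy().astype(np.float32))  # BHWC, float32

    # The scale constants in np_data might also need to be float32 arrays rather
    # than plain Python lists; whether onnx2tf accepts arrays here is an assumption,
    # not something verified against its API
    np_data = [["images", tmp_file,
                np.zeros((1, 1, 1, 3), dtype=np.float32),
                np.full((1, 1, 1, 3), 255, dtype=np.float32)]]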

@hamedgorji (Author)

Update: I uninstalled the latest version and installed Ultralytics YOLOv8.0.200, and this version works. I would appreciate it if you could resolve the issue in the latest version.
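For anyone else hitting this in the meantime, pinning the working release mentioned above should be enough as a temporary workaround:

    pip install ultralytics==8.0.200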

@glenn-jocher (Member)

Hello!

Thanks for the update and for identifying a workaround by reverting to an older version of YOLOv8. We appreciate your feedback as it helps us improve. We'll investigate the issue with the latest version and aim to resolve it in upcoming updates. Your patience and contributions to making YOLOv8 better are greatly valued! 🚀

If you encounter any other issues or have further insights, please feel free to share.
