diff --git a/README.md b/README.md index fbc753f..5ed09fe 100644 --- a/README.md +++ b/README.md @@ -1,59 +1,117 @@ -# AI Image Signal Processing and ISPs +# AI Image Signal Processing and Computational Photography +## Deep learning for low-level computer vision and imaging -[![arXiv](https://img.shields.io/badge/arXiv-Paper-.svg)](https://arxiv.org/abs/2201.03210) +[![isp](https://img.shields.io/badge/ISP-paper-lightgreen)](https://arxiv.org/abs/2201.03210) +[![lpienet](https://img.shields.io/badge/LPIENet-paper-lightpink)](https://arxiv.org/abs/2210.13552) +[![bokeh](https://img.shields.io/badge/Bokeh-paper-9cf)](https://openaccess.thecvf.com/content/CVPR2023W/NTIRE/papers/Seizinger_Efficient_Multi-Lens_Bokeh_Effect_Rendering_and_Transformation_CVPRW_2023_paper.pdf) +[![ntire23](https://img.shields.io/badge/NTIRE-CVPR23-lightcyan)](https://cvlai.net/ntire/2023/) ![visitors](https://visitor-badge.glitch.me/badge?page_id=mv-lab/AISP) -[Marcos V. Conde](https://scholar.google.com/citations?user=NtB1kjYAAAAJ&hl=en), [Radu Timofte](https://scholar.google.com/citations?user=u3MwH5kAAAAJ&hl=en) +**[Marcos V. Conde](https://scholar.google.com/citations?user=NtB1kjYAAAAJ&hl=en), [Radu Timofte](https://scholar.google.com/citations?user=u3MwH5kAAAAJ&hl=en)** -[Computer Vision Lab, CAIDAS, University of Würzburg](https://www.informatik.uni-wuerzburg.de/computervision/home/) +[Computer Vision Lab, CAIDAS, University of Würzburg](https://www.informatik.uni-wuerzburg.de/computervision/home/) --------------------------------------------------- +> **Topics** This repository contains material for RAW image processing, RAW image reconstruction and synthesis, learned Image Signal Processing (ISP), Image Enhancement and Restoration (denoising, deblurring), Multi-lens Bokeh effect rendering, and much more! 📷 + +
+ #### Official repository for the following works: +1. **[Efficient Multi-Lens Bokeh Effect Rendering and Transformation](https://openaccess.thecvf.com/content/CVPR2023W/NTIRE/papers/Seizinger_Efficient_Multi-Lens_Bokeh_Effect_Rendering_and_Transformation_CVPRW_2023_paper.pdf)** at **CVPR NTIRE 2023**. 1. **[Perceptual Image Enhancement for Smartphone Real-Time Applications](https://arxiv.org/abs/2210.13552) (LPIENet) at WACV 2023.** 1. **[Reversed Image Signal Processing and RAW Reconstruction. AIM 2022 Challenge Report](aim22-reverseisp/) ECCV, AIM 2022** -1. **[Model-Based Image Signal Processors via Learnable Dictionaries](https://ojs.aaai.org/index.php/AAAI/article/view/19926) AAAI 2022 Oral** +1. **[Model-Based Image Signal Processors via Learnable Dictionaries](https://arxiv.org/abs/2201.03210) AAAI 2022 Oral** 1. [MAI 2022 Learned ISP Challenge](#mai-2022-learned-isp-challenge) Complete Baseline solution -1. [Citation and Acknowledgement](#citation-and-acknowledgement) | [Contact](#contact) +1. [Citation and Acknowledgement](#citation-and-acknowledgement) | [Contact](#contact) for any inquiries. **News 🚀🚀** -- [11/2022] LPIENet release soon! +- We will try to keep the repo updated on a monthly basis ✏️ +- [06/2023] Lens-to-lens bokeh effect transformation and NTIRE 2023 material coming soon. +- [01/2023] LPIENet material is out - [10/2022] Reversed ISP and RAW Reconstruction material presented at AIM workshop ECCV 2022 is now available! [check here](aim22-reverseisp/) ---------------------------------------------------- +| ![bokeh](media/papers/bokeh-ntire23.png) | ![lpienet](media/papers/lpienet-wacv23.png) | ![isp](media/papers/isp-aaai22.png) | ![reisp](media/papers/reisp-aim22.png) | +|:--- |:--- |:--- |:---| -## [AIM 2022 Reversed ISP Challenge](aim22-reverseisp/) +------ -### [Track 1 - S7](https://codalab.lisn.upsaclay.fr/competitions/5079) | [Track 2 - P20](https://codalab.lisn.upsaclay.fr/competitions/5080) +## [Perceptual Image Enhancement for Smartphone Real-Time Applications](https://arxiv.org/abs/2210.13552) (WACV '23) -aim-challenge-teaser +*This work was presented at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023.* -In this challenge, we look for solutions to recover RAW readings from the camera using only the corresponding RGB images processed by the in-camera ISP. Successful solutions should generate plausible RAW images, and by doing this, other downstream tasks like Denoising, Super-resolution or Colour Constancy can benefit from such synthetic data generation. Click [here to read more information](aim22-reverseisp/README.md) about the challenge. +> Recent advances in camera designs and imaging pipelines allow us to capture high-quality images using smartphones. However, due to the small size and lens limitations of the smartphone cameras, we commonly find artifacts or degradation in the processed images, e.g., noise, diffraction artifacts, blur, and HDR overexposure. +We propose LPIENet, a lightweight network for perceptual image enhancement, with the focus on deploying it on smartphones. -### Starter guide and code 🔥 +The code is available at **[lpienet](lpienet/)** including versions in PyTorch and TensorFlow. We also include the model conversion to TFLite, so you can generate the corresponding `.tflite` file and run the model using the `AI Benchmark` app on Android devices. +In *[lpienet-tflite.ipynb](lpienet/lpienet-tflite.ipynb)* you can find a complete tutorial to convert the model to TFLite (a minimal conversion sketch is also included below). -- **[aim-starter-code.ipynb](aim22-reverseisp/official-starter-code.ipynb)** - Simple dataloading and visualization of RGB-RAW pairs + other utils. 
-- **[aim-baseline.ipynb](aim22-reverseisp/official-baseline.ipynb)** - End-to-end guide to load the data, train a simple UNet model and make your first submission! +**Contributions** +- The model can process 4K images in under 1s on commercial smartphones. +- We achieve competitive results compared to SOTA methods on relevant benchmarks for denoising, deblurring and HDR correction, for example the SIDD benchmark. +- We reduce the number of MACs (or FLOPs) of NAFNet by 50 times. + +
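The full conversion walkthrough lives in *[lpienet-tflite.ipynb](lpienet/lpienet-tflite.ipynb)*. As a quick reference, below is a minimal sketch of the Keras-to-TFLite step used there; the helper name `keras_to_tflite` and the tiny stand-in model are illustrative assumptions, not part of the released code.

```python
import tensorflow as tf

def keras_to_tflite(model: tf.keras.Model, path: str, fp16: bool = False) -> None:
    """Convert a Keras model to a .tflite file, optionally with float16 weights."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    if fp16:
        converter.optimizations = [tf.lite.Optimize.DEFAULT]
        converter.target_spec.supported_types = [tf.float16]
    with open(path, "wb") as f:
        f.write(converter.convert())

# Stand-in model for demonstration only; replace it with the LPIENet Keras model.
demo = tf.keras.Sequential(
    [tf.keras.layers.Conv2D(8, 3, padding="same", input_shape=(256, 256, 3))]
)
keras_to_tflite(demo, "lpienet_fp16.tflite", fp16=True)
```

The resulting `.tflite` file can then be benchmarked on-device, e.g. with the `AI Benchmark` app mentioned above.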
+<details>
+<summary>Click here to read the abstract</summary>
+
+Recent advances in camera designs and imaging pipelines allow us to capture high-quality images using smartphones. However, due to the small size and lens limitations of the smartphone cameras, we commonly find artifacts or degradation in the processed images. The most common unpleasant effects are noise artifacts, diffraction artifacts, blur, and HDR overexposure. Deep learning methods for image restoration can successfully remove these artifacts. However, most approaches are not suitable for real-time applications on mobile devices due to their heavy computation and memory requirements.
+
+In this paper, we propose LPIENet, a lightweight network for perceptual image enhancement, with the focus on deploying it on smartphones. Our experiments show that, with much fewer parameters and operations, our model can deal with the mentioned artifacts and achieve competitive performance compared with state-of-the-art methods on standard benchmarks. Moreover, to prove the efficiency and reliability of our approach, we deployed the model directly on commercial smartphones and evaluated its performance. Our model can process 2K resolution images under 1 second in mid-level commercial smartphones.
+</details>
+
+![lpienet](media/lpienet.png)
+
+| ![lpienet-plot](lpienet/lpienet-plot.png) | ![lpienet-app](lpienet/lpienet-app.png) |
+|:--- |:--- |
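For a quick start with the PyTorch version, the sketch below instantiates the `LPIENet` class defined in [lpienet/lpienet-pytorch.py](lpienet/lpienet-pytorch.py) and runs a forward pass. The channel configuration mirrors the one used in the TensorFlow notebook (`[16, 32, 64]` encoder, `[32, 16]` decoder); treat it as an illustrative default rather than the official training setting.

```python
import torch

# Assumes the LPIENet class from lpienet/lpienet-pytorch.py is available in scope
# (the file name contains a dash, so copy the class or load it via importlib).
model = LPIENet(input_channels=3, output_channels=3,
                encoder_dims=[16, 32, 64], decoder_dims=[32, 16])
model.eval()

# The encoder max-pools twice and the decoder upsamples twice (bilinear),
# so keep the height and width divisible by 4 for the skip concatenations.
x = torch.rand(1, 3, 256, 256)      # random RGB batch in [0, 1]
with torch.no_grad():
    y = model(x)                    # enhanced image, same shape as the input
print(y.shape)                      # torch.Size([1, 3, 256, 256])
```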
------ -## [Model-Based Image Signal Processors via Learnable Dictionaries](https://ojs.aaai.org/index.php/AAAI/article/view/19926) (AAAI '22 Oral) +## [Model-Based Image Signal Processors via Learnable Dictionaries](https://mv-lab.github.io/model-isp22/) (AAAI '22 Oral) + +*This work was presented at the 36th AAAI Conference on Artificial Intelligence, Spotlight (15%).* Visit the [project website](https://mv-lab.github.io/model-isp22/) to find the poster, presentation and more information. > Hybrid model-based and data-driven approach for modelling ISPs using learnable dictionaries. We explore RAW image reconstruction and improve downstream tasks like RAW Image Denoising via RAW data augmentation and synthesis. -mbdlisp +mbdlisp + + +If you have implementation questions or you need qualitative samples for comparison, please contact me. You can download the figure/illustration of our method from [mbispld](mbispld/mbispld.pdf). +
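The main downstream use of the reconstructed RAW data is augmentation, e.g. for RAW denoising. As a generic illustration (not the exact pipeline or noise formulation of the paper), the sketch below pairs any learned RGB-to-RAW model with a simple shot/read noise model to produce synthetic (noisy, clean) RAW training pairs; `rgb_to_raw` and the noise parameters are hypothetical placeholders.

```python
import torch

def synth_raw_denoising_pair(rgb_to_raw, rgb, shot=0.012, read=0.002):
    """Create a synthetic (noisy, clean) RAW pair from an sRGB image.

    rgb_to_raw: any learned reverse-ISP model mapping sRGB -> RAW in [0, 1]
    (a hypothetical stand-in; the dictionary-based ISP model or any other
    RGB-to-RAW network could play this role).
    """
    with torch.no_grad():
        clean = rgb_to_raw(rgb).clamp(0.0, 1.0)   # synthetic clean RAW
    var = shot * clean + read ** 2                # signal-dependent shot + read noise variance
    noisy = (clean + torch.randn_like(clean) * var.sqrt()).clamp(0.0, 1.0)
    return noisy, clean
```

A RAW denoiser can then be trained on such synthetic pairs to complement scarce real RAW data.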
+ +------ + +## [AIM 2022 Reversed ISP Challenge](aim22-reverseisp/) + +*This work was presented at the AIM workshop at the European Conference on Computer Vision (ECCV) 2022.* + +### [Track 1 - S7](https://codalab.lisn.upsaclay.fr/competitions/5079) | [Track 2 - P20](https://codalab.lisn.upsaclay.fr/competitions/5080) + +aim-challenge-teaser +In this challenge, we look for solutions to recover RAW readings from the camera using only the corresponding RGB images processed by the in-camera ISP. Successful solutions should generate plausible RAW images, and by doing this, other downstream tasks like Denoising, Super-resolution or Colour Constancy can benefit from such synthetic data generation. Click [here to read more information](aim22-reverseisp/README.md) about the challenge. -The code will be released soon. If you have implementation questions or you need qualitative samples for comparison, please contact me. +### Starter guide and code 🔥 -We provide the figure/illustration of our method in [mbispld](mbispld/mbispld.pdf). +- **[aim-starter-code.ipynb](aim22-reverseisp/official-starter-code.ipynb)** - Simple dataloading and visualization of RGB-RAW pairs + other utils. +- **[aim-baseline.ipynb](aim22-reverseisp/official-baseline.ipynb)** - End-to-end guide to load the data, train a simple UNet model and make your first submission! ------ @@ -94,4 +152,4 @@ We test the model on AI Benchmark. The model average latency is 60ms using a inp ## Contact -Marcos Conde (marcos.conde-osorio@uni-wuerzburg.de) and Radu Timofte (radu.timofte@uni-wuerzburg.de) are the contact persons and direct managers of the AIM challenge. Please add in the email subject "AIM22 Reverse ISP Challenge" or "AISP" +Marcos Conde (marcos.conde@uni-wuerzburg.de) is the contact person and a co-organizer of the NTIRE and AIM challenges. diff --git a/lpienet/lpienet-app.png b/lpienet/lpienet-app.png new file mode 100644 index 0000000..de096eb Binary files /dev/null and b/lpienet/lpienet-app.png differ diff --git a/lpienet/lpienet-plot.png b/lpienet/lpienet-plot.png new file mode 100644 index 0000000..33d07e6 Binary files /dev/null and b/lpienet/lpienet-plot.png differ diff --git a/lpienet/lpienet-pytorch.py b/lpienet/lpienet-pytorch.py new file mode 100644 index 0000000..1ac18c8 --- /dev/null +++ b/lpienet/lpienet-pytorch.py @@ -0,0 +1,175 @@ +""" +Experiment options: +- Clip input range?! +- Sequential or parallel attention, which order? +- Spatial attention options (see CBAM paper) +- Which down and up sampling method? Pool, Conv, Shuffle, Interpolation +- Add vs. 
concat skips +- Add FMEN-like Unshuffle/Shuffle +""" + +import torch +import torch.nn as nn +import torch.nn.functional as F +from typing import List + + +class AttentionBlock(nn.Module): + def __init__(self, dim: int): + super(AttentionBlock, self).__init__() + self._spatial_attention_conv = nn.Conv2d(2, dim, kernel_size=3, padding=1) + + # Channel attention MLP + self._channel_attention_conv0 = nn.Conv2d(1, dim, kernel_size=1, padding=0) + self._channel_attention_conv1 = nn.Conv2d(dim, dim, kernel_size=1, padding=0) + + self._out_conv = nn.Conv2d(2 * dim, dim, kernel_size=1, padding=0) + + def forward(self, x: torch.Tensor): + if len(x.shape) != 4: + raise ValueError(f"Expected [B, C, H, W] input, got {x.shape}.") + + # Spatial attention + mean = torch.mean(x, dim=1, keepdim=True) # Mean/Max on C axis + max, _ = torch.max(x, dim=1, keepdim=True) + spatial_attention = torch.cat([mean, max], dim=1) # [B, 2, H, W] + spatial_attention = self._spatial_attention_conv(spatial_attention) + spatial_attention = torch.sigmoid(spatial_attention) * x + + # Channel attention. TODO: Correct that it only uses average pool contrary to CBAM? + # NOTE/TODO: This differs from CBAM as it uses Channel pooling, not spatial pooling! + # In a way, this is 2x spatial attention + channel_attention = torch.relu(self._channel_attention_conv0(mean)) + channel_attention = self._channel_attention_conv1(channel_attention) + channel_attention = torch.sigmoid(channel_attention) * x + + attention = torch.cat([spatial_attention, channel_attention], dim=1) # [B, 2*dim, H, W] + attention = self._out_conv(attention) + return x + attention + + +# TODO: This is not named in the paper right? +# It is sort of the InverseResidualBlock but w/o the Channel and Spatial Attentions and without another Conv after ReLU +class InverseBlock(nn.Module): + def __init__(self, input_channels: int, channels: int): + super(InverseBlock, self).__init__() + + self._conv0 = nn.Conv2d(input_channels, channels, kernel_size=1) + self._dw_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels) + self._conv1 = nn.Conv2d(channels, channels, kernel_size=1) + self._conv2 = nn.Conv2d(input_channels, channels, kernel_size=1) + + def forward(self, x: torch.Tensor): + features = self._conv0(x) + features = F.elu(self._dw_conv(features)) # TODO: Paper is ReLU, authors do ELU + features = self._conv1(features) + + # TODO: The BaseBlock has residuals and one path of convolutions, not 2 separate paths - is this different on purpose? + x = torch.relu(self._conv2(x)) + return x + features + + +class BaseBlock(nn.Module): + def __init__(self, channels: int): + super(BaseBlock, self).__init__() + + self._conv0 = nn.Conv2d(channels, channels, kernel_size=1) + self._dw_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels) + self._conv1 = nn.Conv2d(channels, channels, kernel_size=1) + + self._conv2 = nn.Conv2d(channels, channels, kernel_size=1) + self._conv3 = nn.Conv2d(channels, channels, kernel_size=1) + + def forward(self, x: torch.Tensor): + features = self._conv0(x) + features = F.elu(self._dw_conv(features)) # TODO: ELU or ReLU? 
+ features = self._conv1(features) + x = x + features + + features = F.elu(self._conv2(x)) + features = self._conv3(features) + return x + features + + +class AttentionTail(nn.Module): + def __init__(self, channels: int): + super(AttentionTail, self).__init__() + + self._conv0 = nn.Conv2d(channels, channels, kernel_size=7, padding=3) + self._conv1 = nn.Conv2d(channels, channels, kernel_size=5, padding=2) + self._conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1) + + def forward(self, x: torch.Tensor): + attention = torch.relu(self._conv0(x)) + attention = torch.relu(self._conv1(attention)) + attention = torch.sigmoid(self._conv2(attention)) + return x * attention + + +class LPIENet(nn.Module): + def __init__(self, input_channels: int, output_channels: int, encoder_dims: List[int], decoder_dims: List[int]): + super(LPIENet, self).__init__() + + if len(encoder_dims) != len(decoder_dims) + 1 or len(decoder_dims) < 1: + raise ValueError(f"Unexpected encoder and decoder dims: {encoder_dims}, {decoder_dims}.") + + if input_channels != output_channels: + raise NotImplementedError() + + # TODO: We will need an explicit decoder head, consider Unshuffle & Shuffle + + encoders = [] + for i, encoder_dim in enumerate(encoder_dims): + input_dim = input_channels if i == 0 else encoder_dims[i - 1] + encoders.append( + nn.Sequential( + nn.Conv2d(input_dim, encoder_dim, kernel_size=3, padding=1), + BaseBlock(encoder_dim), # TODO: one or two base blocks? + BaseBlock(encoder_dim), + AttentionBlock(encoder_dim), + ) + ) + self._encoders = nn.ModuleList(encoders) + + decoders = [] + for i, decoder_dim in enumerate(decoder_dims): + input_dim = encoder_dims[-1] if i == 0 else decoder_dims[i - 1] + encoder_dims[-i - 1] + decoders.append( + nn.Sequential( + nn.Conv2d(input_dim, decoder_dim, kernel_size=3, padding=1), + BaseBlock(decoder_dim), + BaseBlock(decoder_dim), + AttentionBlock(decoder_dim), + ) + ) + self._decoders = nn.ModuleList(decoders) + + self._inverse_bock = InverseBlock(encoder_dims[0] + decoder_dims[-1], output_channels) + self._attention_tail = AttentionTail(output_channels) + + def forward(self, x: torch.Tensor): + if len(x.shape) != 4: + raise ValueError(f"Expected [B, C, H, W] input, got {x.shape}.") + global_residual = x + + encoder_outputs = [] + for i, encoder in enumerate(self._encoders): + x = encoder(x) + if i != len(self._encoders) - 1: + encoder_outputs.append(x) + x = F.max_pool2d(x, kernel_size=2) + + for i, decoder in enumerate(self._decoders): + x = decoder(x) + x = F.interpolate(x, scale_factor=2, mode="bilinear") + x = torch.cat([x, encoder_outputs.pop()], dim=1) + + x = self._inverse_bock(x) + x = self._attention_tail(x) + return x + global_residual + + +model = LPIENet(3, 3, [4, 8, 16], [8, 4]) +x = torch.rand(1, 3, 16, 16) +out = model(x) +print(out.shape) diff --git a/lpienet/lpienet-tflite.ipynb b/lpienet/lpienet-tflite.ipynb new file mode 100644 index 0000000..5ca7c5d --- /dev/null +++ b/lpienet/lpienet-tflite.ipynb @@ -0,0 +1 @@ +{"metadata":{"kernelspec":{"language":"python","display_name":"Python 3","name":"python3"},"language_info":{"name":"python","version":"3.7.12","mimetype":"text/x-python","codemirror_mode":{"name":"ipython","version":3},"pygments_lexer":"ipython3","nbconvert_exporter":"python","file_extension":".py"}},"nbformat_minor":4,"nbformat":4,"cells":[{"cell_type":"code","source":"!pip -q install 
keras-flops","metadata":{"execution":{"iopub.status.busy":"2023-04-05T18:54:04.173831Z","iopub.execute_input":"2023-04-05T18:54:04.174871Z","iopub.status.idle":"2023-04-05T18:54:15.044401Z","shell.execute_reply.started":"2023-04-05T18:54:04.174761Z","shell.execute_reply":"2023-04-05T18:54:15.043000Z"},"trusted":true},"execution_count":1,"outputs":[{"name":"stdout","text":"\u001b[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\u001b[0m\u001b[33m\n\u001b[0m","output_type":"stream"}]},{"cell_type":"code","source":"import os\nimport numpy as np\nimport gc\n\n#import keras\nimport tensorflow as tf\nfrom tensorflow.keras import backend as K\nfrom tensorflow.keras.callbacks import ModelCheckpoint,CSVLogger\nfrom tensorflow.keras import layers as L\nfrom tensorflow.keras.models import Sequential , Model\nfrom tensorflow.keras.layers import GlobalAveragePooling2D, GlobalMaxPooling2D, Reshape, Dense, multiply, Permute, Concatenate, Conv2D, Add, Activation, Lambda\nfrom tensorflow.keras.layers import *\nimport tensorflow_addons as tfa\nimport keras_flops\nfrom keras_flops import get_flops\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'","metadata":{"execution":{"iopub.status.busy":"2023-04-05T18:54:15.050542Z","iopub.execute_input":"2023-04-05T18:54:15.050859Z","iopub.status.idle":"2023-04-05T18:54:17.159720Z","shell.execute_reply.started":"2023-04-05T18:54:15.050822Z","shell.execute_reply":"2023-04-05T18:54:17.158587Z"},"trusted":true},"execution_count":2,"outputs":[]},{"cell_type":"markdown","source":"# Models","metadata":{}},{"cell_type":"code","source":"def keras2tflite (model, name, fp16=False):\n print ('converting...')\n converter = tf.lite.TFLiteConverter.from_keras_model(model)\n if fp16:\n converter.optimizations = [tf.lite.Optimize.DEFAULT]\n converter.target_spec.supported_types = [tf.float16, tf.int8]\n\n # Be very careful here:\n # \"experimental_new_converter\" is enabled by default in TensorFlow 2.2+. However, using the new MLIR TFLite\n # converter might result in corrupted / incorrect TFLite models for some particular architectures. 
Therefore, the\n # best option is to perform the conversion using both the new and old converter and check the results in each case:\n #converter.target_ops= [TFLITE_BUILTINS,SELECT_TF_OPS]\n #converter.target_spec.supported_ops = [\n # tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.\n # tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.\n #]\n \n converter.experimental_new_converter = False\n tflite_model = converter.convert()\n open(f\"{name}\", \"wb\").write(tflite_model)\n print ('saved!')","metadata":{"execution":{"iopub.status.busy":"2023-04-05T18:54:32.924600Z","iopub.execute_input":"2023-04-05T18:54:32.925431Z","iopub.status.idle":"2023-04-05T18:54:32.933173Z","shell.execute_reply.started":"2023-04-05T18:54:32.925386Z","shell.execute_reply":"2023-04-05T18:54:32.932054Z"},"trusted":true},"execution_count":3,"outputs":[]},{"cell_type":"code","source":"IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS = 400, 400, 3\nHR_HEIGHT, HR_WIDTH, HR_CHANNELS = 1080, 1920, 3\nLR_HEIGHT, LR_WIDTH, LR_CHANNELS = 256, 256, 3\n\nconv_activation = tf.keras.layers.LeakyReLU()\nconv_activation = tf.keras.activations.elu\n\ndef convolution_block(x, filters, size, strides=(1,1), padding='same', activation=True, bn=False, dilation=1):\n x = tf.keras.layers.Conv2D(filters, size, strides=strides, padding=padding, dilation_rate=dilation)(x) #name='{}_conv'.format(name)\n if bn:\n x = tf.keras.layers.BatchNormalization(axis=3)(x)\n if activation:\n x = conv_activation(x)\n return x\n\ndef residual_subblock(blockInput,num_filters):\n x = convolution_block(blockInput, num_filters, (3,3) ,activation=True)\n x = convolution_block(x, num_filters, (3,3), activation=True)\n x = tf.keras.layers.Add()([x, blockInput])\n #x = conv_activation(x)\n return x\n\ndef inverted_linear_residual_block(x, expand=64, squeeze=16):\n m = Conv2D(expand, (1,1), activation='elu', strides=(1,1), padding='same')(x)\n m = DepthwiseConv2D((3,3), activation='elu', strides=(1,1), padding='same')(m)\n m = Conv2D(squeeze, (1,1))(m)\n return Add()([m, x])\n\ndef inverted_proj_block(x, proj=16):\n m = DepthwiseConv2D((3,3), activation='elu', strides=(1,1), padding='same')(x)\n m = Conv2D(proj, (1,1))(m)\n return m\n\ndef CALayer(blockInput,num_filters):\n '''\n Dilated Attention Block (DAB)\n '''\n y = blockInput\n filtersCount = blockInput.shape[-1]\n x0 = convolution_block(y,num_filters,(3,3),activation=True,dilation=1)\n x1 = convolution_block(y,num_filters,(3,3),activation=True,dilation=2)\n x2 = convolution_block(y,num_filters,(3,3),activation=True,dilation=4)\n out = Concatenate(axis=3)([x0,x1,x2])\n\n sg = tf.keras.layers.Conv2D(filtersCount, (3,3), strides=1, padding=\"same\")(out)\n sg = Activation(\"sigmoid\")(sg)\n return tf.keras.layers.Multiply()([blockInput,sg])\n\n\ndef residual_dense_attention(blockInput, num_filters=16):\n '''\n RDB block with attention DAB\n '''\n count = 3\n li = [blockInput]\n pas= convolution_block(blockInput, num_filters,size=(3,3),strides=(1,1))\n for i in range(2 , count+1):\n li.append(pas)\n out = tf.keras.layers.Concatenate(axis = 3)(li) # conctenated out put\n pas = convolution_block(out,num_filters,size=(3,3),strides=(1,1))\n pas = residual_subblock(pas,num_filters)\n #pas = inverted_linear_residual_block(pas, expand=num_filters*2, squeeze=num_filters)\n \n li.append(pas)\n out = Concatenate(axis=3)(li)\n out = tf.keras.layers.Conv2D(num_filters, (3,3), strides=(1,1), padding=\"same\", activation='relu')(out)\n return out\n\ndef DAB (x, dim=64):\n \n inputs = x\n \n for i in range(2):\n x = 
tf.keras.layers.Conv2D(dim, (3,3), strides=(1,1), padding=\"same\", activation='relu')(x)\n \n shortcut = x\n \n gap = Lambda(lambda x: K.mean(x, axis=3, keepdims=True))(x)\n gmp = Lambda(lambda x: K.max(x, axis=3, keepdims=True))(x)\n \n ## spatial attention\n gap_gmp = Concatenate(axis=3)([gap, gmp])\n gap_gmp = tf.keras.layers.Conv2D(dim, (3,3), strides=(1,1), \n padding=\"same\", \n activation='sigmoid')(gap_gmp)\n \n spatial_attention = multiply([shortcut, gap_gmp])\n \n ## channel attention\n x1 = tf.keras.layers.Conv2D(dim, (1,1), strides=(1,1), \n padding=\"same\", \n activation='relu')(gap)\n x1 = tf.keras.layers.Conv2D(dim, (1,1), strides=(1,1), \n padding=\"same\", \n activation='sigmoid')(x1)\n \n channel_attention = multiply([shortcut, x1])\n \n \n attention = Concatenate(axis=3)([spatial_attention, channel_attention])\n x2 = tf.keras.layers.Conv2D(dim, (1,1), strides=(1,1), \n padding=\"same\", \n activation='relu')(attention)\n \n #input_project = tf.keras.layers.Conv2D(dim, (1,1), strides=(1,1), \n # padding=\"same\", \n # activation='relu')(inputs)\n \n out = Add()([inputs, x2])\n return out\n \n\ndef RRG(x, kernel_size, reduction, n_feats=64, num_dab=8):\n '''Recursive Residual Group\n source: https://github.com/swz30/CycleISP'''\n shortcut = x\n for _ in range(num_dab):\n x = DAB (x,dim=n_feats)\n \n x = tf.keras.layers.Conv2D(n_feats, (3,3), strides=(1,1), padding=\"same\", activation='relu')(x)\n out = out = Add()([shortcut, x])\n return out\n\ndef basic_encoder(blockInput,num_filters,activation=True):\n x = convolution_block(blockInput, num_filters, (3,3) ,activation=True)\n x = convolution_block(x, num_filters, (3,3), activation=True)\n x = tf.keras.layers.Add()([x, convolution_block(blockInput, num_filters, (3,3), activation=True)])\n if activation:\n x = tf.keras.layers.LeakyReLU()(x)\n return x","metadata":{"execution":{"iopub.status.busy":"2023-04-05T18:54:40.922938Z","iopub.execute_input":"2023-04-05T18:54:40.923395Z","iopub.status.idle":"2023-04-05T18:54:41.132283Z","shell.execute_reply.started":"2023-04-05T18:54:40.923355Z","shell.execute_reply":"2023-04-05T18:54:41.131127Z"},"trusted":true},"execution_count":5,"outputs":[]},{"cell_type":"markdown","source":"### Main Blocks","metadata":{}},{"cell_type":"code","source":"gelu = tf.keras.activations.gelu\nselu = tf.keras.activations.selu\nelu = tf.keras.activations.elu\n\ndef attention_block (x, dim=16):\n \n inputs = x\n \n #x = tf.keras.layers.Conv2D(dim, (3,3), strides=(1,1), \n # padding=\"same\", \n # activation='relu')(x)\n \n shortcut = x\n \n gap = Lambda(lambda x: K.mean(x, axis=3, keepdims=True))(x)\n gmp = Lambda(lambda x: K.max(x, axis=3, keepdims=True))(x)\n \n ## spatial attention\n gap_gmp = Concatenate(axis=3)([gap, gmp])\n gap_gmp = tf.keras.layers.Conv2D(dim, (3,3), strides=(1,1), \n padding=\"same\", \n activation='sigmoid')(gap_gmp)\n \n spatial_attention = multiply([shortcut, gap_gmp])\n \n ## channel attention\n x1 = tf.keras.layers.Conv2D(dim, (1,1), strides=(1,1), \n padding=\"same\", \n activation='relu')(gap)\n x1 = tf.keras.layers.Conv2D(dim, (1,1), strides=(1,1), \n padding=\"same\", \n activation='sigmoid')(x1)\n \n channel_attention = multiply([shortcut, x1])\n \n \n attention = Concatenate(axis=3)([spatial_attention, channel_attention])\n x2 = tf.keras.layers.Conv2D(dim, (1,1), strides=(1,1), \n padding=\"same\", \n activation=None)(attention)\n \n #input_project = tf.keras.layers.Conv2D(dim, (1,1), strides=(1,1), \n # padding=\"same\", \n # activation=None)(inputs)\n \n out = 
Add()([inputs, x2])\n return out\n \n\ndef RRG(x,dim=16):\n '''Recursive Residual Group\n source: https://github.com/swz30/CycleISP'''\n x = attention_block(x,dim)\n return out\n\ndef flatten(x) :\n return tf.layers.flatten(x)\n\ndef hw_flatten(x) :\n return tf.reshape(x, shape=[x.shape[0], -1, x.shape[-1]])\n\ndef sagan_block(x, channels):\n f = Conv2D(channels, (1,1), activation=None, strides=(1,1), padding='same')(x)\n g = Conv2D(channels, (1,1), activation=None, strides=(1,1), padding='same')(x)\n h = Conv2D(channels, (1,1), activation=None, strides=(1,1), padding='same')(x)\n\n f = tf.transpose(f)\n att_map = f*g\n att_map = tf.keras.activations.softmax (att_map)\n fe = att_map * h\n fe = Conv2D(channels, (1,1), activation='sigmoid', strides=(1,1), padding='same')(fe)\n return fe\n\ndef sat(x, channels=3):\n f = Conv2D(channels, (7,7), activation='relu', strides=(1,1), padding='same')(x)\n f = Conv2D(channels, (5,5), activation='relu', strides=(1,1), padding='same')(f)\n f = Conv2D(channels, (3,3), activation='sigmoid', strides=(1,1), padding='same')(f)\n return x * f\n\ndef inv_block(x, channels=3):\n m = x\n m = Conv2D(channels, (1,1), activation =None, strides=(1,1), padding='same')(m)\n m = DepthwiseConv2D((3,3), activation=None, strides=(1,1), padding='same')(m)\n m = elu(m)\n m = Conv2D(channels, (1,1))(m)\n \n x = Conv2D(channels, (1,1), activation ='relu', strides=(1,1), padding='same')(x)\n y = Add()([m, x])\n return y\n \ndef baseblock(x, channels=32):\n #m = LayerNormalization()(x)\n m = x\n m = Conv2D(channels, (1,1), activation =None, strides=(1,1), padding='same')(m)\n m = DepthwiseConv2D((3,3), activation=None, strides=(1,1), padding='same')(m)\n m = elu(m)\n m = Conv2D(channels, (1,1))(m)\n y = Add()([m, x])\n #m = LayerNormalization()(m)\n m = Conv2D(channels, (1,1), activation= None, strides=(1,1), padding='same')(m)\n m = elu(m)\n m = Conv2D(channels, (1,1), activation= None, strides=(1,1), padding='same')(m)\n m = Add()([m, y])\n return m","metadata":{"execution":{"iopub.status.busy":"2023-04-05T18:54:41.982449Z","iopub.execute_input":"2023-04-05T18:54:41.982863Z","iopub.status.idle":"2023-04-05T18:54:42.006944Z","shell.execute_reply.started":"2023-04-05T18:54:41.982827Z","shell.execute_reply":"2023-04-05T18:54:42.005898Z"},"trusted":true},"execution_count":6,"outputs":[]},{"cell_type":"code","source":"def build_ours(input_shape=(IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS),learning_rate=0.001):\n\n encoder_dim = [16, 32, 64]\n encoder_fes = []\n decoder_dim = [32, 16]\n enc_dec_cat = [1, 0]\n \n inputs = tf.keras.layers.Input(input_shape)\n x = inputs\n \n for e in range(len(encoder_dim)):\n x = tf.keras.layers.Conv2D(encoder_dim[e], (3, 3), activation=\"relu\", padding=\"same\")(x)\n x = baseblock(x,encoder_dim[e])\n x = baseblock(x,encoder_dim[e])\n x = attention_block(x,encoder_dim[e])\n encoder_fes.append(x)\n print ('e', e, x.shape)\n if e != (len(encoder_dim)-1):\n x = tf.keras.layers.MaxPooling2D((2, 2))(x)\n\n for d in range(len(decoder_dim)):\n x = tf.keras.layers.Conv2D(decoder_dim[d], (3, 3), activation=\"relu\", padding=\"same\")(x)\n x = baseblock(x,decoder_dim[d])\n x = baseblock(x,decoder_dim[d])\n x = attention_block(x,decoder_dim[d])\n print ('d', d, x.shape)\n x = tf.keras.layers.UpSampling2D(size=(2,2),interpolation='bilinear')(x)\n #x = Conv2D(x.shape[-1], (3,3), activation =None, strides=(1,1), padding='same')(x)\n x = tf.keras.layers.Concatenate()([x, encoder_fes[enc_dec_cat[d]]])\n \n x = inv_block(x,3)\n x = sat(x)\n x = x + inputs\n \n model = 
tf.keras.models.Model(inputs=[inputs], outputs=[x])\n model.compile(optimizer=tf.keras.optimizers.Adam(lr=learning_rate), loss='mse')\n return model","metadata":{"execution":{"iopub.status.busy":"2023-04-05T18:54:43.025473Z","iopub.execute_input":"2023-04-05T18:54:43.026418Z","iopub.status.idle":"2023-04-05T18:54:43.039313Z","shell.execute_reply.started":"2023-04-05T18:54:43.026374Z","shell.execute_reply":"2023-04-05T18:54:43.038006Z"},"trusted":true},"execution_count":7,"outputs":[]},{"cell_type":"code","source":"tf.keras.backend.clear_session()\n\nmodel = build_ours(input_shape=(3840 , 2160, 3 ))\nprint (model.count_params() / 1_000_000, 'M params.')\n#model.summary()\n\nflops = get_flops(model, batch_size=1)\nprint(f\"FLOPS: {flops / 1e9} G\")\n\nMODEL_NAME = 'wours_4k.tflite'\nkeras2tflite(model, name=MODEL_NAME, fp16=True)","metadata":{"execution":{"iopub.status.busy":"2023-04-05T18:55:02.787052Z","iopub.execute_input":"2023-04-05T18:55:02.787507Z","iopub.status.idle":"2023-04-05T18:55:25.559245Z","shell.execute_reply.started":"2023-04-05T18:55:02.787467Z","shell.execute_reply":"2023-04-05T18:55:25.557842Z"},"trusted":true},"execution_count":8,"outputs":[{"name":"stdout","text":"e 0 (None, 3840, 2160, 16)\ne 1 (None, 1920, 1080, 32)\ne 2 (None, 960, 540, 64)\nd 0 (None, 960, 540, 32)\nd 1 (None, 1920, 1080, 16)\n0.133652 M params.\n\n=========================Options=============================\n-max_depth 10000\n-min_bytes 0\n-min_peak_bytes 0\n-min_residual_bytes 0\n-min_output_bytes 0\n-min_micros 0\n-min_accelerator_micros 0\n-min_cpu_micros 0\n-min_params 0\n-min_float_ops 1\n-min_occurrence 0\n-step -1\n-order_by float_ops\n-account_type_regexes .*\n-start_name_regexes .*\n-trim_name_regexes \n-show_name_regexes .*\n-hide_name_regexes \n-account_displayed_op_only true\n-select float_ops\n-output stdout:\n\n==================Model Analysis Report======================\n\nDoc:\nscope: The nodes in the model graph are organized by their names, which is hierarchical like filesystem.\nflops: Number of float operations. 
Note: Please read the implementation for the math behind it.\n\nProfile:\nnode name | # float_ops\n_TFProfRoot (--/310.46b flops)\n model/conv2d_52/Conv2D (38.22b/38.22b flops)\n model/conv2d_39/Conv2D (19.11b/19.11b flops)\n model/conv2d_13/Conv2D (19.11b/19.11b flops)\n model/conv2d_26/Conv2D (19.11b/19.11b flops)\n model/conv2d_38/Conv2D (8.49b/8.49b flops)\n model/conv2d_25/Conv2D (8.49b/8.49b flops)\n model/conv2d_12/Conv2D (8.49b/8.49b flops)\n model/conv2d_68/Conv2D (7.32b/7.32b flops)\n model/conv2d/Conv2D (7.17b/7.17b flops)\n model/conv2d_9/Conv2D (4.78b/4.78b flops)\n model/conv2d_14/Conv2D (4.25b/4.25b flops)\n model/conv2d_1/Conv2D (4.25b/4.25b flops)\n model/conv2d_29/Conv2D (4.25b/4.25b flops)\n model/conv2d_6/Conv2D (4.25b/4.25b flops)\n model/conv2d_11/Conv2D (4.25b/4.25b flops)\n model/conv2d_3/Conv2D (4.25b/4.25b flops)\n model/conv2d_30/Conv2D (4.25b/4.25b flops)\n model/conv2d_5/Conv2D (4.25b/4.25b flops)\n model/conv2d_28/Conv2D (4.25b/4.25b flops)\n model/conv2d_15/Conv2D (4.25b/4.25b flops)\n model/conv2d_16/Conv2D (4.25b/4.25b flops)\n model/conv2d_27/Conv2D (4.25b/4.25b flops)\n model/conv2d_17/Conv2D (4.25b/4.25b flops)\n model/conv2d_18/Conv2D (4.25b/4.25b flops)\n model/conv2d_19/Conv2D (4.25b/4.25b flops)\n model/conv2d_2/Conv2D (4.25b/4.25b flops)\n model/conv2d_24/Conv2D (4.25b/4.25b flops)\n model/conv2d_20/Conv2D (4.25b/4.25b flops)\n model/conv2d_21/Conv2D (4.25b/4.25b flops)\n model/conv2d_37/Conv2D (4.25b/4.25b flops)\n model/conv2d_4/Conv2D (4.25b/4.25b flops)\n model/conv2d_31/Conv2D (4.25b/4.25b flops)\n model/conv2d_7/Conv2D (4.25b/4.25b flops)\n model/conv2d_34/Conv2D (4.25b/4.25b flops)\n model/conv2d_32/Conv2D (4.25b/4.25b flops)\n model/conv2d_8/Conv2D (4.25b/4.25b flops)\n model/conv2d_33/Conv2D (4.25b/4.25b flops)\n model/conv2d_69/Conv2D (3.73b/3.73b flops)\n model/conv2d_22/Conv2D (2.39b/2.39b flops)\n model/depthwise_conv2d_1/depthwise (2.39b/2.39b flops)\n model/depthwise_conv2d/depthwise (2.39b/2.39b flops)\n model/conv2d_64/Conv2D (2.12b/2.12b flops)\n model/conv2d_51/Conv2D (2.12b/2.12b flops)\n model/conv2d_65/Conv2D (1.59b/1.59b flops)\n model/conv2d_67/Conv2D (1.59b/1.59b flops)\n model/conv2d_70/Conv2D (1.34b/1.34b flops)\n model/depthwise_conv2d_2/depthwise (1.19b/1.19b flops)\n model/depthwise_conv2d_3/depthwise (1.19b/1.19b flops)\n model/conv2d_35/Conv2D (1.19b/1.19b flops)\n model/conv2d_61/Conv2D (1.19b/1.19b flops)\n model/conv2d_46/Conv2D (1.06b/1.06b flops)\n model/conv2d_42/Conv2D (1.06b/1.06b flops)\n model/conv2d_47/Conv2D (1.06b/1.06b flops)\n model/conv2d_43/Conv2D (1.06b/1.06b flops)\n model/conv2d_41/Conv2D (1.06b/1.06b flops)\n model/conv2d_40/Conv2D (1.06b/1.06b flops)\n model/conv2d_45/Conv2D (1.06b/1.06b flops)\n model/conv2d_44/Conv2D (1.06b/1.06b flops)\n model/conv2d_56/Conv2D (1.06b/1.06b flops)\n model/conv2d_50/Conv2D (1.06b/1.06b flops)\n model/conv2d_63/Conv2D (1.06b/1.06b flops)\n model/conv2d_60/Conv2D (1.06b/1.06b flops)\n model/conv2d_59/Conv2D (1.06b/1.06b flops)\n model/conv2d_58/Conv2D (1.06b/1.06b flops)\n model/conv2d_57/Conv2D (1.06b/1.06b flops)\n model/conv2d_55/Conv2D (1.06b/1.06b flops)\n model/conv2d_54/Conv2D (1.06b/1.06b flops)\n model/conv2d_53/Conv2D (1.06b/1.06b flops)\n model/depthwise_conv2d_8/depthwise (597.20m/597.20m flops)\n model/depthwise_conv2d_9/depthwise (597.20m/597.20m flops)\n model/conv2d_48/Conv2D (597.20m/597.20m flops)\n model/depthwise_conv2d_5/depthwise (597.20m/597.20m flops)\n model/depthwise_conv2d_4/depthwise (597.20m/597.20m flops)\n 
model/depthwise_conv2d_10/depthwise (447.90m/447.90m flops)\n model/depthwise_conv2d_7/depthwise (298.60m/298.60m flops)\n model/depthwise_conv2d_6/depthwise (298.60m/298.60m flops)\n model/conv2d_10/Conv2D (265.42m/265.42m flops)\n model/conv2d_66/Conv2D (149.30m/149.30m flops)\n model/lambda/Mean (132.71m/132.71m flops)\n model/depthwise_conv2d_1/BiasAdd (132.71m/132.71m flops)\n model/conv2d_4/BiasAdd (132.71m/132.71m flops)\n model/depthwise_conv2d/BiasAdd (132.71m/132.71m flops)\n model/conv2d_9/BiasAdd (132.71m/132.71m flops)\n model/conv2d_8/BiasAdd (132.71m/132.71m flops)\n model/multiply_1/mul (132.71m/132.71m flops)\n model/conv2d_7/BiasAdd (132.71m/132.71m flops)\n model/multiply/mul (132.71m/132.71m flops)\n model/conv2d_6/BiasAdd (132.71m/132.71m flops)\n model/add/add (132.71m/132.71m flops)\n model/add_1/add (132.71m/132.71m flops)\n model/max_pooling2d/MaxPool (132.71m/132.71m flops)\n model/conv2d_5/BiasAdd (132.71m/132.71m flops)\n model/conv2d_11/BiasAdd (132.71m/132.71m flops)\n model/conv2d_2/BiasAdd (132.71m/132.71m flops)\n model/conv2d_23/Conv2D (132.71m/132.71m flops)\n model/add_2/add (132.71m/132.71m flops)\n model/add_3/add (132.71m/132.71m flops)\n model/conv2d_10/BiasAdd (132.71m/132.71m flops)\n model/add_4/add (132.71m/132.71m flops)\n model/conv2d_1/BiasAdd (132.71m/132.71m flops)\n model/conv2d_12/BiasAdd (132.71m/132.71m flops)\n model/conv2d/BiasAdd (132.71m/132.71m flops)\n model/conv2d_3/BiasAdd (132.71m/132.71m flops)\n model/lambda_1/Max (124.42m/124.42m flops)\n model/conv2d_14/BiasAdd (66.36m/66.36m flops)\n model/conv2d_24/BiasAdd (66.36m/66.36m flops)\n model/conv2d_23/BiasAdd (66.36m/66.36m flops)\n model/conv2d_13/BiasAdd (66.36m/66.36m flops)\n model/conv2d_22/BiasAdd (66.36m/66.36m flops)\n model/conv2d_18/BiasAdd (66.36m/66.36m flops)\n model/conv2d_21/BiasAdd (66.36m/66.36m flops)\n model/conv2d_15/BiasAdd (66.36m/66.36m flops)\n model/conv2d_20/BiasAdd (66.36m/66.36m flops)\n model/conv2d_16/BiasAdd (66.36m/66.36m flops)\n model/conv2d_17/BiasAdd (66.36m/66.36m flops)\n model/conv2d_19/BiasAdd (66.36m/66.36m flops)\n model/conv2d_62/Conv2D (66.36m/66.36m flops)\n model/conv2d_25/BiasAdd (66.36m/66.36m flops)\n model/max_pooling2d_1/MaxPool (66.36m/66.36m flops)\n model/add_9/add (66.36m/66.36m flops)\n model/add_8/add (66.36m/66.36m flops)\n model/add_7/add (66.36m/66.36m flops)\n model/add_6/add (66.36m/66.36m flops)\n model/add_5/add (66.36m/66.36m flops)\n model/multiply_2/mul (66.36m/66.36m flops)\n model/multiply_3/mul (66.36m/66.36m flops)\n model/depthwise_conv2d_2/BiasAdd (66.36m/66.36m flops)\n model/depthwise_conv2d_3/BiasAdd (66.36m/66.36m flops)\n model/conv2d_36/Conv2D (66.36m/66.36m flops)\n model/lambda_2/Mean (66.36m/66.36m flops)\n model/lambda_3/Max (64.28m/64.28m flops)\n model/add_21/add (33.18m/33.18m flops)\n model/conv2d_31/BiasAdd (33.18m/33.18m flops)\n model/depthwise_conv2d_8/BiasAdd (33.18m/33.18m flops)\n model/conv2d_32/BiasAdd (33.18m/33.18m flops)\n model/conv2d_33/BiasAdd (33.18m/33.18m flops)\n model/conv2d_34/BiasAdd (33.18m/33.18m flops)\n model/conv2d_56/BiasAdd (33.18m/33.18m flops)\n model/conv2d_35/BiasAdd (33.18m/33.18m flops)\n model/add_12/add (33.18m/33.18m flops)\n model/add_24/add (33.18m/33.18m flops)\n model/add_23/add (33.18m/33.18m flops)\n model/add_22/add (33.18m/33.18m flops)\n model/add_11/add (33.18m/33.18m flops)\n model/add_20/add (33.18m/33.18m flops)\n model/multiply_4/mul (33.18m/33.18m flops)\n model/lambda_4/Mean (33.18m/33.18m flops)\n model/add_13/add (33.18m/33.18m flops)\n 
model/multiply_8/mul (33.18m/33.18m flops)\n model/conv2d_36/BiasAdd (33.18m/33.18m flops)\n model/multiply_9/mul (33.18m/33.18m flops)\n model/multiply_5/mul (33.18m/33.18m flops)\n model/depthwise_conv2d_4/BiasAdd (33.18m/33.18m flops)\n model/add_14/add (33.18m/33.18m flops)\n model/depthwise_conv2d_5/BiasAdd (33.18m/33.18m flops)\n model/conv2d_49/Conv2D (33.18m/33.18m flops)\n model/conv2d_57/BiasAdd (33.18m/33.18m flops)\n model/conv2d_55/BiasAdd (33.18m/33.18m flops)\n model/conv2d_58/BiasAdd (33.18m/33.18m flops)\n model/conv2d_54/BiasAdd (33.18m/33.18m flops)\n model/conv2d_59/BiasAdd (33.18m/33.18m flops)\n model/conv2d_53/BiasAdd (33.18m/33.18m flops)\n model/conv2d_52/BiasAdd (33.18m/33.18m flops)\n model/conv2d_38/BiasAdd (33.18m/33.18m flops)\n model/conv2d_60/BiasAdd (33.18m/33.18m flops)\n model/lambda_8/Mean (33.18m/33.18m flops)\n model/conv2d_61/BiasAdd (33.18m/33.18m flops)\n model/conv2d_30/BiasAdd (33.18m/33.18m flops)\n model/conv2d_62/BiasAdd (33.18m/33.18m flops)\n model/conv2d_63/BiasAdd (33.18m/33.18m flops)\n model/conv2d_26/BiasAdd (33.18m/33.18m flops)\n model/conv2d_64/BiasAdd (33.18m/33.18m flops)\n model/conv2d_27/BiasAdd (33.18m/33.18m flops)\n model/conv2d_37/BiasAdd (33.18m/33.18m flops)\n model/conv2d_28/BiasAdd (33.18m/33.18m flops)\n model/add_10/add (33.18m/33.18m flops)\n model/conv2d_29/BiasAdd (33.18m/33.18m flops)\n model/depthwise_conv2d_9/BiasAdd (33.18m/33.18m flops)\n model/lambda_5/Max (32.66m/32.66m flops)\n model/lambda_9/Max (31.10m/31.10m flops)\n model/conv2d_70/BiasAdd (24.88m/24.88m flops)\n model/tf.math.multiply/Mul (24.88m/24.88m flops)\n model/tf.__operators__.add/AddV2 (24.88m/24.88m flops)\n model/depthwise_conv2d_10/BiasAdd (24.88m/24.88m flops)\n model/add_25/add (24.88m/24.88m flops)\n model/conv2d_69/BiasAdd (24.88m/24.88m flops)\n model/conv2d_68/BiasAdd (24.88m/24.88m flops)\n model/conv2d_67/BiasAdd (24.88m/24.88m flops)\n model/conv2d_66/BiasAdd (24.88m/24.88m flops)\n model/conv2d_65/BiasAdd (24.88m/24.88m flops)\n model/multiply_7/mul (16.59m/16.59m flops)\n model/conv2d_46/BiasAdd (16.59m/16.59m flops)\n model/lambda_6/Mean (16.59m/16.59m flops)\n model/multiply_6/mul (16.59m/16.59m flops)\n model/depthwise_conv2d_6/BiasAdd (16.59m/16.59m flops)\n model/add_15/add (16.59m/16.59m flops)\n model/add_16/add (16.59m/16.59m flops)\n model/add_17/add (16.59m/16.59m flops)\n model/add_18/add (16.59m/16.59m flops)\n model/add_19/add (16.59m/16.59m flops)\n model/conv2d_39/BiasAdd (16.59m/16.59m flops)\n model/conv2d_40/BiasAdd (16.59m/16.59m flops)\n model/conv2d_41/BiasAdd (16.59m/16.59m flops)\n model/conv2d_42/BiasAdd (16.59m/16.59m flops)\n model/conv2d_43/BiasAdd (16.59m/16.59m flops)\n model/conv2d_44/BiasAdd (16.59m/16.59m flops)\n model/conv2d_45/BiasAdd (16.59m/16.59m flops)\n model/conv2d_47/BiasAdd (16.59m/16.59m flops)\n model/conv2d_48/BiasAdd (16.59m/16.59m flops)\n model/conv2d_49/BiasAdd (16.59m/16.59m flops)\n model/conv2d_50/BiasAdd (16.59m/16.59m flops)\n model/conv2d_51/BiasAdd (16.59m/16.59m flops)\n model/depthwise_conv2d_7/BiasAdd (16.59m/16.59m flops)\n model/lambda_7/Max (16.07m/16.07m flops)\n model/up_sampling2d/mul (2/2 flops)\n model/up_sampling2d_1/mul (2/2 flops)\n\n======================End of Report==========================\nFLOPS: 310.462502404 G\nconverting...\nsaved!\n","output_type":"stream"}]},{"cell_type":"code","source":"","metadata":{},"execution_count":null,"outputs":[]}]} \ No newline at end of file diff --git a/media/lpienet.png b/media/lpienet.png new file mode 100644 index 
0000000..cd83c19 Binary files /dev/null and b/media/lpienet.png differ diff --git a/media/papers/bokeh-ntire23.png b/media/papers/bokeh-ntire23.png new file mode 100644 index 0000000..ae16975 Binary files /dev/null and b/media/papers/bokeh-ntire23.png differ diff --git a/media/papers/isp-aaai22.png b/media/papers/isp-aaai22.png new file mode 100644 index 0000000..6e2ed66 Binary files /dev/null and b/media/papers/isp-aaai22.png differ diff --git a/media/papers/lpienet-wacv23.png b/media/papers/lpienet-wacv23.png new file mode 100644 index 0000000..b1ba7c0 Binary files /dev/null and b/media/papers/lpienet-wacv23.png differ diff --git a/media/papers/reisp-aim22.png b/media/papers/reisp-aim22.png new file mode 100644 index 0000000..55c31e8 Binary files /dev/null and b/media/papers/reisp-aim22.png differ