Jianyi Wang, Kelvin C.K. Chan, Chen Change Loy
S-Lab, Nanyang Technological University
- Colab demo
- MMEditing update
- Code release
The setup is the same as MMEditing, and the latest version 0.16.1 is supported.
```shell
# Create a conda environment and activate it
conda create -n clipiqa python=3.8 -y
conda activate clipiqa

# Install PyTorch following the official instructions, e.g.
conda install pytorch=1.10 torchvision cudatoolkit=11.3 -c pytorch

# Install pre-built MMCV using MIM
pip3 install openmim
mim install mmcv-full==1.5.0

# Install CLIP-IQA from the source code
git clone https://github.com/IceClear/CLIP-IQA.git
cd CLIP-IQA
pip install -r requirements.txt
pip install -e .
```
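After installation, a quick import check can confirm the environment is set up correctly. This is a minimal sketch, not part of the official repo:

```python
# Minimal environment check: verifies that PyTorch, MMCV, and MMEditing
# import correctly and reports CUDA availability.
import torch
import mmcv
import mmedit

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("MMCV:", mmcv.__version__)
print("MMEditing:", mmedit.__version__)
```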
Test CLIP-IQA on KonIQ-10k:
```shell
python demo/clipiqa_koniq_demo.py
```

Test CLIP-IQA on LIVE-itW:
```shell
python demo/clipiqa_liveiwt_demo.py
```
Train CLIP-IQA+ on KonIQ-10k:
```shell
# Distributed training is supported, the same as in MMEditing
python tools/train.py configs/clipiqa/clipiqa_coop_koniq.py
```
Test CLIP-IQA+ on KonIQ-10k (Checkpoint):
```shell
python demo/clipiqa_koniq_demo.py --config configs/clipiqa/clipiqa_coop_koniq.py --checkpoint ./iter_80000.pth
```
[Note] You may need to change the prompts for different datasets; please refer to the config files for details.
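For reference, the paper scores each attribute with an antonym prompt pair. A config entry could look like the sketch below; the name `attribute_prompts` and the exact structure are assumptions for illustration only, so check the actual config files for the real keys:

```python
# Hypothetical example of antonym prompt pairs in the style of the paper;
# the actual structure in configs/clipiqa/*.py may differ.
attribute_prompts = [
    ("Good photo.", "Bad photo."),      # overall quality
    ("Bright photo.", "Dark photo."),   # brightness
    ("Sharp photo.", "Blurry photo."),  # sharpness
]
```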
[Note] For testing on a single image, please refer here for details; a rough sketch of the underlying idea follows below.
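To convey the core idea on a single image, the sketch below uses the openai CLIP package directly (`pip install git+https://github.com/openai/CLIP.git`): the score is the softmax probability of the positive prompt in an antonym pair, as described in the paper. This is a minimal approximation, not the official demo script; among other things, the official implementation also adapts CLIP's positional embedding for arbitrary input resolutions, which is omitted here. The image path is a placeholder.

```python
# CLIP-IQA-style quality score for one image: softmax over similarities
# to an antonym prompt pair ("Good photo." vs "Bad photo.").
# Sketch only -- "example.jpg" is a placeholder path.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["Good photo.", "Bad photo."]).to(device)

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(text)
    # Normalize, then compute cosine similarity scaled as in CLIP,
    # and softmax over the antonym pair
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)

print(f"Quality score (prob. of 'Good photo.'): {probs[0, 0].item():.4f}")
```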
For more evaluation results, please refer to our paper.
If our work is useful for your research, please consider citing:
```bibtex
@inproceedings{wang2022exploring,
    author = {Wang, Jianyi and Chan, Kelvin C.K. and Loy, Chen Change},
    title = {Exploring CLIP for Assessing the Look and Feel of Images},
    booktitle = {AAAI},
    year = {2023}
}
```
This project is licensed under the NTU S-Lab License 1.0. Redistribution and use should follow this license.

This project is based on MMEditing and CLIP. Thanks for their awesome work.
If you have any questions, please feel free to reach out to me at [email protected].