[Readers in China: a more detailed Chinese README is available.] This repo contains the official implementation and the new IAA dataset TAD66K of the IJCAI 2022 paper. Our new work at ICCV 2023: Link
- We build a large-scale dataset, the Theme and Aesthetics Dataset with 66K images (TAD66K), designed specifically for IAA. (1) It is a theme-oriented dataset containing 66K images covering 47 popular themes; all images were carefully selected by hand according to their theme. (2) In addition to common aesthetic criteria, we provide dedicated criteria for each of the 47 themes. Images of each theme are annotated independently, and each image has at least 1,200 effective annotations (the richest annotation set to date). These high-quality annotations provide deeper insight into model performance.
- Download from here: google. The archive contains images with the longest side scaled to 800 pixels and labels organized by theme (see the loading sketch after these links).
- or here: baidu (extraction code: 8888)
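A minimal loading sketch for one theme of the downloaded data. The file layout is assumed here (one CSV of labels per theme, with hypothetical columns `image` and `score`); adjust the paths and column names to the actual files in the archive:

```python
import os
import pandas as pd
from PIL import Image
from torch.utils.data import Dataset

class TAD66KTheme(Dataset):
    """Illustrative loader for one theme of TAD66K (file layout is assumed, not official)."""
    def __init__(self, csv_path, image_dir, transform=None):
        # hypothetical CSV columns: 'image' (file name) and 'score' (aesthetic score)
        self.labels = pd.read_csv(csv_path)
        self.image_dir = image_dir
        self.transform = transform

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        row = self.labels.iloc[idx]
        img = Image.open(os.path.join(self.image_dir, row['image'])).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        return img, float(row['score'])
```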
We propose a baseline model, the Theme and Aesthetics Network (TANet), which maintains a constant perception of aesthetics to counter attention dispersion and adaptively learns the rules for predicting aesthetics according to the recognized theme. Compared against 17 methods on three representative datasets (AVA, FLICKR-AES, and the proposed TAD66K), TANet achieves state-of-the-art performance on all three.
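For intuition only, below is a minimal sketch of theme-conditioned scoring: per-theme scoring heads weighted by a theme classifier's probabilities. This illustrates the idea of adapting the aesthetic prediction to the recognized theme; it is not the official TANet architecture, and all dimensions are placeholders.

```python
import torch
import torch.nn as nn

class ThemeConditionedHead(nn.Module):
    """Illustrative only: score a pooled image feature with per-theme heads
    weighted by predicted theme probabilities (not the official TANet)."""
    def __init__(self, feat_dim=2048, num_themes=47):
        super().__init__()
        self.theme_classifier = nn.Linear(feat_dim, num_themes)  # recognize the theme
        self.theme_heads = nn.Linear(feat_dim, num_themes)       # one aesthetic score per theme

    def forward(self, feat):
        theme_prob = torch.softmax(self.theme_classifier(feat), dim=-1)  # (B, 47)
        per_theme_score = self.theme_heads(feat)                         # (B, 47)
        # the final score adapts to whichever theme the classifier recognizes
        score = (theme_prob * per_theme_score).sum(dim=-1)
        return score, theme_prob

# usage with a pooled backbone feature vector (e.g., from a ResNet)
feat = torch.randn(4, 2048)
score, theme_prob = ThemeConditionedHead()(feat)
```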
- pandas==0.22.0
- nni==1.8
- requests==2.18.4
- torchvision==0.8.2+cu101
- numpy==1.13.3
- scipy==0.19.1
- tqdm==4.43.0
- torch==1.7.1+cu101
- scikit_learn==1.0.2
- tensorboardX==2.5
- We use the hyperparameter tuning tool nni for both training and testing, so you may want to familiarize yourself with it first (it only takes a few minutes).
- To train or test, run: nnictl create --config config.yml -p 8999
- The Web UI URLs are: http://127.0.0.1:8999 or http://172.17.0.3:8999
- Note: nni is not required. If you don't want to use this tool, just make small modifications to our code, such as changing param_group['lr'] to param_group.lr; a sketch of running with or without nni follows these notes.
- PS: The training code for the FLICKR-AES dataset may not be made public: we are currently cooperating with a company, the relevant model has been embedded into their system, and there are confidentiality requirements.
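A rough sketch of how an nni trial typically consumes tuned parameters and reports metrics, and how the same script can fall back to fixed defaults when run without nnictl. The parameter names and the training routine below are placeholders, not the repo's actual code, and standalone behavior of `nni.get_next_parameter()` can vary slightly across nni versions:

```python
import nni

def train_and_validate(params):
    """Placeholder for the real training/validation loop; returns a metric such as SRCC."""
    return 0.0

# fixed defaults used when the script is run directly (placeholder values)
params = {'init_lr': 1e-4, 'batch_size': 48, 'epochs': 50}

# when launched via `nnictl create ...`, the tuner supplies the next parameter set;
# when run standalone this is typically empty, so the defaults above are kept
tuned = nni.get_next_parameter() or {}
params.update(tuned)

for epoch in range(params['epochs']):
    val_metric = train_and_validate(params)
    nni.report_intermediate_result(val_metric)  # shows up as the trial curve in the Web UI

nni.report_final_result(val_metric)             # final score used by the tuner
```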
@article{herethinking,
title={Rethinking Image Aesthetics Assessment: Models, Datasets and Benchmarks},
author={He, Shuai and Zhang, Yongchang and Xie, Rui and Jiang, Dongxiang and Ming, Anlong},
journal={IJCAI},
year={2022},
}