Type-R: Automatically Retouching Typos for Text-to-Image Generation


Wataru Shimoda¹  Naoto Inoue¹  Daichi Haraguchi¹
Hayato Mitani²  Seiichi Uchida²  Kota Yamaguchi¹

¹CyberAgent, ²Kyushu University

Accepted to CVPR 2025 as a highlight paper

arXiv paper

[teaser figure]

This repository is the official implementation of the paper Type-R: Automatically Retouching Typos for Text-to-Image Generation.

Pipeline

The implementation of Type-R in this repository consists of a three-step pipeline:

  • Text-to-image generation
    • Generates images from prompts.
  • Layout correction
    • Refines the layout by detecting errors, erasing surplus text, and regenerating the layout.
  • Typo correction
    • Renders corrected raster text using a text editing model with OCR-based verification.

The pipeline is designed to be plug-and-play, with each module configured using Hydra.
All configuration files are located in src/type_r_app/config.
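Because each module is selected through a Hydra config group, modules can be swapped from the command line without editing files. A minimal sketch of the override syntax, assuming the group name t2i mirrors the config directory src/type_r_app/config/t2i (verify the exact group and option names in your checkout):

uv run python -m type_r_app --config-name demo t2i=flux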

[pipeline figure]

Requirements

πŸ“˜ Environment

We have verified reproducibility in the following environment:

  • Ubuntu 24.04
  • Python 3.12
  • CUDA 12.6
  • PyTorch 2.7.0
  • uv 0.7.6

πŸ“˜ Install

This project manages the Python runtime via uv.
It depends on several packages that require heavy compilation, such as Apex, MaskTextSpotterv3, DeepSolo, and Detectron2.

This project assumes that the environment includes a GPU and CUDA support. If your system does not have CUDA installed, first register NVIDIA's package repository using the following commands:

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb

Then, install the required build tools and CUDA components using the command below:

apt-mark unhold $(apt-mark showhold)
apt update
apt -y install \
  libfontconfig1 \
  libglib2.0-0 \
  cuda-nvcc-12-6 \
  cuda-profiler-api-12-6 \
  libcusparse-dev-12-6 \
  libcublas-dev-12-6 \
  libcusolver-dev-12-6 \
  python3-dev \
  libgl1 

For more details, see the Dockerfile.

⚠️ The above command assumes that CUDA 12.6 is already installed.
If you're using a different CUDA version, replace 12-6 with the appropriate version number, as shown below.
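For example, on a system with CUDA 12.8, the same command would become the following (the package names follow NVIDIA's usual versioned-suffix pattern; confirm that they exist in your configured repository):

apt -y install \
  libfontconfig1 \
  libglib2.0-0 \
  cuda-nvcc-12-8 \
  cuda-profiler-api-12-8 \
  libcusparse-dev-12-8 \
  libcublas-dev-12-8 \
  libcusolver-dev-12-8 \
  python3-dev \
  libgl1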

Once the build dependencies are installed, run the following command:

git clone --recursive https://github.com/CyberAgentAILab/Type-R
cd Type-R
./script/apply_patch.sh
uv sync --extra full

⚠️ uv sync may take up to 30 minutes due to building some dependencies. If it completes instantly, your environment might be misconfigured; in that case, refer to the Dockerfile, or try building within a Docker container. You may omit the --extra full option to reduce dependencies if you do not run the evaluation pipeline.

⚠️ This project uses a namespace package, which is currently incompatible with editable installs. Be sure to pass the --no-editable option to uv when syncing dependencies, as shown below.
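Putting the two warnings together, a typical full installation is:

uv sync --no-editable --extra full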

To reset the applied patch:

./script/clean_patch.sh

πŸ“˜ Data resources

We provide the data resources via Hugging Face Datasets. You can download them using the following command:

uv run python tools/dl_resources.py

Or, add the --full option to download all resources:

uv run python tools/dl_resources.py --full

These resources include pretrained model weights and font files; with --full, they also include the MarioEval benchmark dataset.

⚠️ Some resources with stricter licenses must be downloaded manually. Please refer to the link for details.

πŸ“˜ GPU resources

Type-R requires different machine specs for each step:

  • Text-to-image generation
    • The text-to-image generation step requires a large amount of VRAM, more than that of an A100 40GB GPU, especially when using Flux.
    • ⚠️ The run_on_low_vram_gpus option in src/type_r_app/config/t2i/flux.yaml allows the model to run on an L4 machine, but inference may take a few minutes; see the sketch after this list.
  • Layout correction
    • Layout correction is relatively lightweight in terms of computational cost compared to the other steps.
  • Typo correction
    • Typo correction requires a GPU with L4-level specifications when using AnyText.
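As referenced above, a minimal sketch for enabling the low-VRAM mode from the command line, assuming the Hydra group is named t2i after its config directory (verify the exact key against your checkout):

uv run python -m type_r_app --config-name demo t2i.run_on_low_vram_gpus=true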

πŸ“˜ Permissions of text-to-image models

Flux requires authentication with your Hugging Face account in order to download model files; please see their model card for more information. Authenticate your account before running the text-to-image generation step by executing:

uv run huggingface-cli login

Usage

πŸ“˜ Type-R

πŸ”Ή Demo

Type-R is designed to be plug-and-play, and module selection is managed via Hydra configuration.
We provide a convenient script to try Type-R using a sample prompt.

To run the demo (configured via src/type_r_app/config/demo.yaml):

bash script/demo.sh

  • Default output directory: results/demo
  • Input prompts are read from resources/prompt/example.txt
  • Prompts should be separated by line breaks, with renderable text enclosed in double quotes ("); an illustrative file is shown below
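For instance, a prompt file in the expected format might look like this (illustrative contents, not the actual example.txt):

A movie poster with the title "Type-R" in bold letters
A storefront sign that reads "OPEN 24 HOURS"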

πŸ”Ή Mario-Eval Benchmark (Trial version)

A script is also provided for running Type-R on the Mario-Eval benchmark using only components with permissive licenses and no paid APIs.

bash script/marioevalbench_trial.sh  

This script is configured to process a 10-image subset used for the ablation study in the MarioEval benchmark.
See src/type_r_app/config/dataset/marioeval_trial.yaml.

πŸ”Ή Mario-Eval Benchmark (Best configuration)

This configuration achieves the best results reported in the paper. It uses an external model with a non-commercial license and accesses a paid API.

bash script/marioevalbench_best.sh  

⚠️ Layout correction assumes that the OpenAI API is used; see the OpenAI API configuration section below.
To use Azure OpenAI instead, set use_azure: true in src/type_r_app/config/marioevalbench_best.yaml, as sketched below.
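For example, one way to flip that switch non-interactively (this assumes the key is present with a false default; otherwise, edit the file by hand):

sed -i 's/use_azure: false/use_azure: true/' src/type_r_app/config/marioevalbench_best.yaml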

This script is configured to process all 500 images of the ablation-study subset of the MarioEval benchmark.
See src/type_r_app/config/dataset/marioeval.yaml.

To run the test set of the MarioEval benchmark, set sub_set: test in src/type_r_app/config/dataset/marioeval.yaml.
Please note that this will process 5,000 images.

πŸ“˜ Evaluation

We provide evaluation scripts in this repository. To run the evaluation scripts on images generated with the best setting:

uv run python -m type_r_app --config-name marioevalbench_best command=evaluation

  • You can change the evaluation target by editing the YAML config.
  • By default, evaluation includes VLM evaluation, OCR accuracy, FID score, and CLIPScore.

VLM evaluation options:

  • VLM evaluation requires a paid API.
  • By default, the system evaluates graphic design quality using rating_design_quality.
  • To evaluate other criteria, modify the evaluation field in src/type_r_app/config/evaluation.yaml.

⚠️ The VLM evaluation assumes that the OpenAI API is used; see the OpenAI API configuration section below.
To use Azure OpenAI instead, set use_azure: true in src/type_r_app/config/evaluation.yaml.

πŸ“˜ Prompt augmentation

We provide both the data and the code for prompt augmentation. This process requires a paid API.

uv run python -m type_r_app --config-name demo command=prompt-augmentation

⚠️ Prompt augmentation assumes that the OpenAI API is used; see the OpenAI API configuration section below.

To use Azure OpenAI instead, set use_azure: true in src/type_r_app/config/prompt_augmentation.yaml.

πŸ“˜ OpenAI API configuration

This repository manages the configuration of the OpenAI API via environment variables. Please set the following variable:

  • OPENAI_API_KEY

To use the Azure OpenAI API instead, please configure the following environment variables accordingly:

  • OPENAI_API_VERSION
  • AZURE_OPENAI_DEPLOYMENT_NAME
  • AZURE_OPENAI_GPT4_DEPLOYMENT_NAME
  • AZURE_OPENAI_ENDPOINT
  • AZURE_OPENAI_API_KEY
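For example, in your shell (placeholder values shown; substitute your own credentials and deployment names):

export OPENAI_API_KEY="<your-openai-api-key>"

Or, for Azure OpenAI:

export OPENAI_API_VERSION="<api-version>"
export AZURE_OPENAI_DEPLOYMENT_NAME="<deployment-name>"
export AZURE_OPENAI_GPT4_DEPLOYMENT_NAME="<gpt4-deployment-name>"
export AZURE_OPENAI_ENDPOINT="https://<resource-name>.openai.azure.com/"
export AZURE_OPENAI_API_KEY="<your-azure-api-key>"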

Note that we have only verified the basic functionality of the Azure OpenAI API.

πŸ“˜ Result

The output directory is structured as follows:

results/
β”œβ”€β”€ ref_img               # T2I-generated images
β”œβ”€β”€ layout_corrected_img  # Images with surplus text removed
β”œβ”€β”€ typo_corrected_img    # Final output
β”œβ”€β”€ word_mapping          # JSON files with OT-based mapping
└── evaluation            # Evaluation results

To convert the results into an Excel file for easier viewing:

uv run python tools/result2xlsx.py

πŸ“˜ Test

To run the tests, execute the following command:

uv run pytest tests --gpufunc

License

This project is licensed under the Apache License.
See LICENSE for details.

Third-party licenses

This project depends on the following third-party libraries/components, each of which has its own license:

OCR-related projects

Text editor

Text remover

Evaluation metrics

Data

Projects without a license

Our repository does not include code from the following repositories due to the absence of a license.
Please obtain the code and weights from the links below.

Citation

If you find this code useful for your research, please cite our paper:

@inproceedings{shimoda2025typer,
  title={{Type-R: Automatically Retouching Typos for Text-to-Image Generation}},
  author={Wataru Shimoda and Naoto Inoue and Daichi Haraguchi and Hayato Mitani and Seiichi Uchida and Kota Yamaguchi},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2025},
}
