Wataru Shimoda¹, Naoto Inoue¹, Daichi Haraguchi¹, Hayato Mitani², Seiichi Uchida², Kota Yamaguchi¹
¹CyberAgent, ²Kyushu University
This repository is the official implementation of the paper Type-R: Automatically Retouching Typos for Text-to-Image Generation.
The implementation of Type-R in this repository consists of a three-step pipeline:
- Text-to-image generation: generates images from prompts.
- Layout correction: refines the layout by detecting errors, erasing surplus text, and regenerating the layout.
- Typo correction: renders corrected raster text using a text editing model with OCR-based verification.
The pipeline is designed to be plug-and-play, with each module configured using Hydra.
All configuration files are located in src/type_r_app/config.
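For example, Hydra allows a module to be swapped from the command line by overriding its config group. The group and option names below are only inferred from the layout of src/type_r_app/config (e.g., t2i/flux.yaml) and may differ from the actual defaults:
uv run python -m type_r_app --config-name demo t2i=flux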
We have verified reproducibility under the following environment.
This project manages the Python runtime via uv.
This project depends on several packages that require heavy compilation, such as Apex, MaskTextSpotterV3, DeepSolo, and Detectron2.
This project assumes that the environment includes a GPU and CUDA support. If your system does not have CUDA installed, you can install the required CUDA components using the following commands:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
Then, install the required build tools using the command below:
apt-mark unhold $(apt-mark showhold)
apt update
apt -y install \
libfontconfig1 \
libglib2.0-0 \
cuda-nvcc-12-6 \
cuda-profiler-api-12-6 \
libcusparse-dev-12-6 \
libcublas-dev-12-6 \
libcusolver-dev-12-6 \
python3-dev \
libgl1
For more details, see the Dockerfile.
⚠️ The above command assumes that CUDA 12.6 is already installed. If you are using a different CUDA version, replacing 12-6 with the appropriate version number should work.
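For example, if CUDA 12.4 were installed instead, the CUDA-related packages in the command above would presumably become (an untested sketch, following the usual NVIDIA package naming):
apt -y install \
  cuda-nvcc-12-4 \
  cuda-profiler-api-12-4 \
  libcusparse-dev-12-4 \
  libcublas-dev-12-4 \
  libcusolver-dev-12-4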
Once the build dependencies are installed, run the following command:
git clone --recursive https://github.com/CyberAgentAILab/Type-R
cd Type-R
./script/apply_patch.sh
uv sync --extra full
⚠️ uv sync may take up to 30 minutes because some dependencies are built from source. If it completes instantly, your environment might be misconfigured; in that case, refer to the Dockerfile or try building within a Docker container. You may omit the --extra full option to reduce dependencies if you do not run the evaluation pipeline.
⚠️ This project uses a namespace package, which is currently incompatible with editable installs. Be sure to pass the --no-editable option to uv when syncing dependencies, as shown below.
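For example, a full sync combining the flags mentioned above:
uv sync --extra full --no-editable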
To reset the applied patch:
./script/clean_patch.sh
We provide the data resources via Hugging Face Datasets. You can download them using the following command:
uv run python tools/dl_resources.py
Or, add the --full option to download all resources:
uv run python tools/dl_resources.py --full
These resources include pretrained model weights, font files, and, when --full is specified, the MarioEval benchmark dataset.
⚠️ Some resources with stricter licenses must be downloaded manually. Please refer to the link for details.
Type-R requires different machine specs for each step:
- Text-to-image generation
  - The text-to-image generation step requires a large amount of VRAM (more than an A100 40GB GPU), especially when using Flux.
  - ⚠️ The run_on_low_vram_gpus option in src/type_r_app/config/t2i/flux.yaml allows the model to run on an L4 machine, but inference may take a few minutes (see the snippet after this list).
- Layout correction
  - Layout correction is relatively lightweight in terms of computational cost compared to the other steps.
- Typo correction
  - Typo correction requires a GPU with L4-level specifications when using AnyText.
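A minimal sketch of that option in src/type_r_app/config/t2i/flux.yaml (other keys in the file are omitted):
run_on_low_vram_gpus: true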
Flux requires Hugging Face authentication to download its model files; please see the model card for more information. Before running the text-to-image generation step, authenticate your Hugging Face account by executing:
uv run huggingface-cli login
Type-R is designed to be plug-and-play, and module selection is managed via Hydra configuration.
We provide a convenient script to try Type-R using a sample prompt.
To run the demo (configured via src/type_r_app/config/demo.yaml):
bash script/demo.sh
- Default output directory: results/demo
- Input prompts are read from resources/prompt/example.txt
- Prompts should be separated by line breaks, with renderable text enclosed in double quotes (")
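For example, a prompt line might look like the following (a hypothetical prompt, not one taken from the bundled example file):
A poster for a coffee shop with the text "Open Daily" and "Fresh Brew"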
A script is also provided for running Type-R on the Mario-Eval benchmark using only components with permissive licenses and no paid APIs.
bash script/marioevalbench_trial.sh
- Config file: src/type_r_app/config/marioevalbench_trial.yaml
- Output directory: results/marioevalbench_trial
- Prompt data (including GPT-4o augmented versions) is provided in: resources/data/marioevalbench/hfds
This script is configured to process a subset of 10 images for the ablation study in the MarioEval benchmark.
See src/type_r_app/config/dataset/marioeval_trial.yaml for details.
This configuration achieves the best results reported in the paper. It uses an external model with a non-commercial license and accesses a paid API.
bash script/marioevalbench_best.sh
- Config file: src/type_r_app/config/marioevalbench_best.yaml
- Output directory: results/marioevalbench_best
⚠️ Layout correction assumes that the OpenAI API is used; see the OpenAI API config section below for setup details. To use Azure OpenAI instead, set use_azure: true in src/type_r_app/config/marioevalbench_best.yaml:
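A minimal sketch of that change (the exact position of the key within the file may differ):
use_azure: true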
This script is configured to process the 500-image subset used for the ablation study in the MarioEval benchmark.
See src/type_r_app/config/dataset/marioeval.yaml.
To run the test set of the MarioEval benchmark, set sub_set: test in src/type_r_app/config/dataset/marioeval.yaml.
Please note that this will process 5,000 images.
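A minimal sketch of that change in src/type_r_app/config/dataset/marioeval.yaml (other keys omitted):
sub_set: test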
We provide evaluation scripts in this repository. To run the evaluation scripts on images generated with the best setting:
uv run python -m type_r_app --config-name marioevalbench_best command=evaluation
- You can change the evaluation target by editing the YAML config.
- By default, evaluation includes: VLM evaluation, OCR accuracy, FID score, and CLIPScore.
- VLM evaluation requires a paid API.
- By default, the system evaluates graphic design quality using rating_design_quality.
- To evaluate other criteria, modify the evaluation field in src/type_r_app/config/evaluation.yaml (see the sketch after this list).
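As a rough sketch, assuming the field takes a single criterion name (the actual structure of evaluation.yaml may differ), the default would correspond to:
evaluation: rating_design_quality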
⚠️ The VLM evaluation assumes that the OpenAI API is used; see the OpenAI API config section below for setup details. To use Azure OpenAI instead, set use_azure: true in src/type_r_app/config/evaluation.yaml, in the same way as shown above.
We provide both the data and the code for prompt augmentation. This process requires a paid API.
uv run python -m type_r_app --config-name demo command=prompt-augmentation
- Input: resources/prompt/example.txt
- Output: prompt/augmented.txt under the configured results directory
- Optionally, HFDS-format output is also supported (see src/type_r_app/launcher/prompt_augmentation.py)
⚠️ Prompt augmentation assumes that the OpenAI API is used; see the OpenAI API config section below for setup details. To use Azure OpenAI instead, set use_azure: true in src/type_r_app/config/prompt_augmentation.yaml, in the same way as shown above.
This repository manages the configuration of the OpenAI API via environment variables. Please set the following variable:
OPENAI_API_KEY
To use the Azure OpenAI API instead, please configure the following environment variables accordingly:
OPENAI_API_VERSION
AZURE_OPENAI_DEPLOYMENT_NAME
AZURE_OPENAI_GPT4_DEPLOYMENT_NAME
AZURE_OPENAI_ENDPOINT
AZURE_OPENAI_API_KEY
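For example, in a shell (placeholder values, not real credentials; the Azure variables are only needed when use_azure is enabled):
export OPENAI_API_KEY="your-openai-key"
# Only when using Azure OpenAI:
export OPENAI_API_VERSION="2024-02-01"
export AZURE_OPENAI_DEPLOYMENT_NAME="your-deployment"
export AZURE_OPENAI_GPT4_DEPLOYMENT_NAME="your-gpt4-deployment"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_API_KEY="your-azure-key"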
Note that we only verified the basic functionality of the Azure OpenAI API.
We assume the output directory is as follows:
results/
├── ref_img               # T2I-generated images
├── layout_corrected_img  # Images with surplus text removed
├── typo_corrected_img    # Final output
├── word_mapping          # JSON files with OT-based mapping
└── evaluation            # Evaluation results
To convert the results into an Excel file for easier viewing:
uv run python tools/result2xlsx.py
To run the tests, run the following command:
uv run pytest tests --gpufunc
This project is licensed under the Apache License.
See LICENSE for details.
This project depends on the following third-party libraries/components, each of which has its own license:
- DeepSolo – Licensed under the AdelaiDet License
- MaskTextSpotterV3 – Licensed under CC BY-NC 4.0
- Apex – Licensed under BSD 3-Clause
- CRAFT – Licensed under the MIT License
- MaskRCNN Benchmark – Licensed under the MIT License
- Clova Recognition – Licensed under Apache 2.0
- Detectron2 – Licensed under Apache 2.0
- Hi-SAM – Licensed under Apache 2.0
- Paddle – Licensed under Apache 2.0
- AnyText – Licensed under Apache 2.0
- UDiffText – Licensed under the MIT License
- LaMa – Licensed under Apache 2.0
- Garnet – Licensed under Apache 2.0
- CLIP score – Licensed under the MIT License
- PyTorch FID – Licensed under Apache 2.0
- VLMEval – Licensed under Apache 2.0
- Mario-Eval Benchmark – Licensed under the MIT License
Our repository does not contain code from the following repositories because they lack a license.
Please obtain the code and weights from the following links.
If you find this code useful for your research, please cite our paper:
@inproceedings{shimoda2025typer,
  title={{Type-R: Automatically Retouching Typos for Text-to-Image Generation}},
  author={Wataru Shimoda and Naoto Inoue and Daichi Haraguchi and Hayato Mitani and Seiichi Uchida and Kota Yamaguchi},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2025},
}