
ArcaneAnimeGAN

Colab
AnimeGAN2 trained on Arcane, following bryandlee's animegan2 training methodology.

Result

(result image)

Training Workflow

  • Get video data
  • Split the video into frames
  • Align the frame images using face-alignment
  • Filter out blurry images using the OpenCV Laplacian (see the blur-filter sketch after this list)
  • Zip the image dataset so it can be fed into StyleGAN
  • Finetune the FFHQ-pretrained StyleGAN on the zipped dataset
  • Blend the finetuned StyleGAN weights with the pretrained StyleGAN weights (see the blending sketch below)
  • Create data pairs using the blended StyleGAN model and the pretrained model
  • Train AnimeGAN on the paired data
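
A minimal sketch of the blur-filtering step, assuming the extracted frames sit in a frames/ directory and using an assumed variance-of-Laplacian threshold (neither is taken from this repo's code):

import cv2
import glob
import os

# Threshold and directory are assumptions; tune the threshold for your footage.
BLUR_THRESHOLD = 100.0

for path in glob.glob("frames/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Variance of the Laplacian: low values mean few sharp edges, i.e. a blurry frame.
    if cv2.Laplacian(gray, cv2.CV_64F).var() < BLUR_THRESHOLD:
        os.remove(path)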

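A rough sketch of the weight-blending step: keep the pretrained weights in the coarse layers and the finetuned weights in the fine layers. The file paths, layer-name pattern, and coarse/fine split below are assumptions for illustration, not this repo's actual code:

import torch

base = torch.load("ffhq_generator.pt")      # FFHQ-pretrained generator state dict (assumed path)
tuned = torch.load("arcane_generator.pt")   # Arcane-finetuned generator state dict (assumed path)

blended = {}
for name, weight in base.items():
    # Keep pretrained weights in the coarse (low-resolution) blocks so face
    # structure is preserved, and take finetuned weights in the fine blocks
    # so the Arcane texture and color come through.
    is_coarse = any(f".b{res}." in name for res in (4, 8, 16, 32))
    blended[name] = weight if is_coarse else tuned[name]

torch.save(blended, "blended_generator.pt")
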
Change log

  • 0.2
    • use anime-face-detector
    • add color correction to the data preprocessing
    • add aug_transforms() to the batch transforms
    • use l1_loss(vgg(g(x)), vgg(y)) and mse_loss(g(x), y) instead of the VGG feature loss and Gram matrix loss (sketched below)
  • 0.1
    • first release
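
A sketch of the 0.2 loss in PyTorch; the VGG layer cut and the equal weighting of the two terms are assumptions, not the repo's exact configuration:

import torch.nn.functional as F
import torchvision

# Frozen VGG feature extractor used for the perceptual term.
vgg = torchvision.models.vgg19_bn(pretrained=True).features[:36].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def generator_loss(g_x, y):
    # L1 between VGG features of the generated and target images (perceptual),
    # plus MSE between the raw images (pixel-level content).
    perceptual = F.l1_loss(vgg(g_x), vgg(y))
    pixel = F.mse_loss(g_x, y)
    return perceptual + pixel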

To Do

  • Use the AnimeGAN vgg19[0,255] instead of vgg19_bn[0,1]
  • Add a Canny edge step to the Gaussian blur (edge smoothing; see the sketch below)
  • Background segmentation
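
For the Canny edge item above, AnimeGAN-style edge smoothing typically detects edges with Canny, dilates them into a band, and Gaussian-blurs only that band. A minimal sketch; the Canny thresholds and kernel size are assumptions:

import cv2
import numpy as np

# Kernel size and Canny thresholds are assumptions; adjust for the dataset.
def edge_smooth(img, kernel_size=5):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    # Dilate the edge map to get a band around each edge, then blur only that band.
    band = cv2.dilate(edges, np.ones((kernel_size, kernel_size), np.uint8))
    blurred = cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
    out = img.copy()
    out[band > 0] = blurred[band > 0]
    return out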

Required environment to run

!conda install pytorch torchvision cudatoolkit=11.1 -c pytorch -c nvidia -y
!sudo apt install ffmpeg
!pip install face-alignment
!pip install --upgrade psutil
!pip install kornia
!pip install fastai==2.5.3
!pip install opencv-python
!git clone https://github.com/NVlabs/stylegan3.git

!pip install openmim
!mim install mmcv-full mmdet mmpose -y
!pip install anime-face-detector --no-dependencies

Acknowledgement and References