Spot the Fake: Large Multimodal Model-Based Synthetic Image Detection with Artifact Explanation

Siwei Wen1,3*, Junyan Ye2,1*, Peilin Feng1,3, Hengrui Kang4,1,
Zichen Wen4,1, Yize Chen5, Jiang Wu1, Wenjun Wu3, Conghui He1, Weijia Li2,1†

1Shanghai Artificial Intelligence Laboratory, 2Sun Yat-sen University
3Beihang University, 4Shanghai Jiao Tong University, 5The Chinese University of Hong Kong, Shenzhen


📰 News

  • [2025.3.20]: 🔥 We have released Spot the Fake: Large Multimodal Model-Based Synthetic Image Detection with Artifact Explanation. Check out the paper. We present the FakeClue dataset and the FakeVLM model.

FakeVLM Overview

With the rapid advancement of Artificial Intelligence Generated Content (AIGC) technologies, synthetic images have become increasingly prevalent in everyday life, posing new challenges for authenticity assessment and detection. Despite the effectiveness of existing methods in evaluating image authenticity and locating forgeries, these approaches often lack human interpretability and do not fully address the growing complexity of synthetic data. To tackle these challenges, we introduce FakeVLM, a specialized large multimodal model designed for both general synthetic image and DeepFake detection tasks. FakeVLM not only excels in distinguishing real from fake images but also provides clear, natural language explanations for image artifacts, enhancing interpretability. Additionally, we present FakeClue, a comprehensive dataset containing over 100,000 images across seven categories, annotated with fine-grained artifact clues in natural language. FakeVLM demonstrates performance comparable to expert models while eliminating the need for additional classifiers, making it a robust solution for synthetic data detection. Extensive evaluations across multiple datasets confirm the superiority of FakeVLM in both authenticity classification and artifact explanation tasks, setting a new benchmark for synthetic image detection.

[FakeVLM framework figure]

Contributions

  • We propose FakeVLM, a large multimodal model designed for both general synthetic and deepfake image detection tasks. It excels at distinguishing real from fake images while also providing excellent interpretability for artifact details in synthetic images.
  • We introduce the FakeClue dataset, which includes a rich variety of image categories and fine-grained artifact annotations in natural language.
  • Our method has been extensively evaluated on multiple datasets, achieving outstanding performance in both synthetic detection and abnormal artifact explanation tasks.

🛠️ Installation

Please clone our repository and change into the project folder:

git clone git@github.com:opendatalab/FakeVLM.git
cd FakeVLM

Our model is built on the LLaVA environment. Please follow the steps below to set it up.

conda create -n fakevlm python=3.9 -y
conda activate fakevlm
pip install --upgrade pip  
pip install -e .
pip install -e ".[train]"
pip install flash-attn --no-build-isolation

📦 Dataset

The directory containing the images should have the following structure:

playground
└── data
    ├── train
    │   ├── doc
    │   │   ├── fake
    │   │   └── real
    │   ├── ...
    │   └── satellite
    └── test
        └── ...
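
As an illustration of how this layout maps to labeled samples, the sketch below walks the directory tree and derives a real/fake label from the sub-folder name. The function name and the image-extension filter are illustrative assumptions, not part of the released code.

import os
from pathlib import Path

def collect_samples(split_dir: str):
    """Walk playground/data/<split> and return (image_path, category, label) tuples.

    Assumes each category folder (doc, satellite, ...) contains `fake/` and
    `real/` sub-folders of images, matching the structure shown above.
    """
    samples = []
    for category in sorted(os.listdir(split_dir)):
        cat_dir = Path(split_dir) / category
        if not cat_dir.is_dir():
            continue
        for label in ("fake", "real"):
            label_dir = cat_dir / label
            if not label_dir.is_dir():
                continue
            for img in sorted(label_dir.glob("*")):
                if img.suffix.lower() in {".jpg", ".jpeg", ".png", ".webp"}:
                    samples.append((str(img), category, label))
    return samples

if __name__ == "__main__":
    train_samples = collect_samples("playground/data/train")
    print(f"{len(train_samples)} training images found")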

📌 Usage
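
Since FakeVLM is built on the LLaVA codebase, inference can be sketched with LLaVA's standard Python entry points. The checkpoint path, image path, and prompt below are placeholders for illustration; substitute the released FakeVLM weights and your own inputs.

from llava.model.builder import load_pretrained_model  # installed via `pip install -e .`
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model

# Hypothetical checkpoint path; replace with the released FakeVLM weights.
model_path = "checkpoints/fakevlm-7b"

args = type("Args", (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": "Is this image real or fake? Explain any artifacts you find.",
    "conv_mode": None,
    "image_file": "playground/data/test/doc/fake/example.png",  # placeholder image
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

eval_model(args)  # prints the authenticity verdict and artifact explanation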

📊 Results

Performance of 7 leading LMMs and FakeVLM on DD-VQA, FakeClue, and LOKI.

  • FakeClue
    Our proposed dataset: over 100,000 images across seven categories, annotated with fine-grained artifact clues in natural language.
  • LOKI
    A new benchmark for evaluating multimodal models in synthetic detection tasks. It includes human-annotated fine-grained image artifacts, enabling deeper analysis of artifact explanations. We used its image modality, covering categories like Animals, Humans, Scenery, and Documents.


  • DD-VQA
    A dataset for explaining facial artifacts, using manual annotations in a VQA format. Artifacts include blurred hairlines, mismatched eyebrows, rigid pupils, and unnatural shadows. It builds on FF++ data and emphasizes common-sense reasoning.

To provide a comprehensive comparison of the model performance across the three datasets—FakeClue, LOKI, and DD-VQA—we present the following radar chart. This chart visually highlights the strengths and weaknesses of the 7 leading LMMs and FakeVLM, offering a clear depiction of their results in synthetic detection and artifact explanation tasks.

[Results radar chart]
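
For reference, authenticity classification on these benchmarks is typically scored with accuracy and F1 over the real/fake decision. The snippet below is an illustrative sketch of that computation (it parses the label by keyword and uses scikit-learn, which is not a stated dependency of this repository).

from sklearn.metrics import accuracy_score, f1_score

def score_predictions(gold_labels, model_outputs):
    """Score real/fake predictions parsed from free-form model answers.

    `gold_labels` are "real"/"fake" strings; `model_outputs` are the model's
    natural-language answers. Keyword parsing here is a simplification.
    """
    preds = ["fake" if "fake" in out.lower() else "real" for out in model_outputs]
    acc = accuracy_score(gold_labels, preds)
    f1 = f1_score(gold_labels, preds, pos_label="fake")
    return acc, f1

# Example:
# acc, f1 = score_predictions(["fake", "real"], ["This image is fake ...", "The photo looks real."])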

😄 Acknowledgement

This repository is built upon the work of LLaVA. We appreciate their contributions and insights that have provided a strong foundation for our research.

📨 Contact

If you have any questions or suggestions, please feel free to contact us at [email protected].

📝 Citation

If you find our work interesting and helpful, please consider giving our repo a star. Additionally, if you would like to cite our work, please use the following format:

@misc{wen2025spotfakelargemultimodal,
      title={Spot the Fake: Large Multimodal Model-Based Synthetic Image Detection with Artifact Explanation}, 
      author={Siwei Wen and Junyan Ye and Peilin Feng and Hengrui Kang and Zichen Wen and Yize Chen and Jiang Wu and Wenjun Wu and Conghui He and Weijia Li},
      year={2025},
      eprint={2503.14905},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.14905}, 
}
