A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
🥶Vilio: State-of-the-art VL models in PyTorch & PaddlePaddle
Detecting Hate Speech in Memes Using Multimodal Deep Learning Approaches: Prize-winning solution to Hateful Memes Challenge. https://arxiv.org/abs/2012.12975
The Hateful Memes Challenge example code using MMF
Dataset and code implementation for the paper "Decoding the Underlying Meaning of Multimodal Hateful Memes" (IJCAI'23).
Racist or Sexist Meme? Classifying Memes beyond Hateful
Submission for Precog Recruitment Task 2: Analyzing Hateful Memes
🃏 Extracting multiple types of annotations from the hateful-memes dataset and feeding them into multimodal transformers to achieve high accuracy.
[EACL'24] Multimodal Hate Speech Detection in Bengali
A knowledge-graph-based approach to the Hateful Memes Challenge