Source code of a sample iOS app for the paper by Alfreds Lapkovskis, Natalia Nefedova & Ali Beikmohammadi (2024): Automatic Fused Multimodal Deep Learning for Plant Identification
Source code for the paper by Alfreds Lapkovskis, Natalia Nefedova & Ali Beikmohammadi (2024): Automatic Fused Multimodal Deep Learning for Plant Identification
Web scraper for Wildberries + simple vectorization/multimodal embedding workflow
Official implementation of "Multi-scale Bottleneck Transformer for Weakly Supervised Multimodal Violence Detection"
The code and data for the Paper 'Inferring Climate Change Stances from Multimodal Tweets' accepted by the Short Paper track of SIGIR 2024
We propose Multi-Modal Segmentation TransFormer (MMSFormer) that incorporates a novel fusion strategy to perform multimodal material segmentation.
A Transferability-guided Protein-Ligand Interaction Prediction Method
[FR|EN - Trio] 2023 - 2024 Centrale Méditerranée AI Master | Multimodal transcription with text, audio and video
Repo for "Centaur: Robust Multimodal Fusion for Human Activity Recognition"
MIntRec: A New Dataset for Multimodal Intent Recognition (ACM MM 2022)
The codebase for our paper on Multi-modal Medical Dialogue Summarization
Repository for context based emotion recognition
[CVAMD 2021] "End-to-End Learning of Fused Image and Non-Image Feature for Improved Breast Cancer Classification from MRI"
Multimodal sentiment analysis
A generalized self-supervised training paradigm for unimodal and multimodal alignment and fusion.
Source code for "Bi-modal Transformer for Dense Video Captioning" (BMVC 2020)
This repository contains the dataset and baselines explained in the paper: M2H2: A Multimodal Multiparty Hindi Dataset For Humor Recognition in Conversations
Multimodal sentiment analysis using hierarchical fusion with context modeling
This repository contains the official implementation code of the paper Improving Multimodal Fusion with Hierarchical Mutual Information Maximization for Multimodal Sentiment Analysis, accepted at EMNLP 2021.
FusionBrain Challenge 2.0: creating multimodal multitask model