Website for Cross Modal Learning and Application workshop - ACM ICMR 2019
cDCGAN model for audio-to-image generation: a cross-modal analysis using deep-learning techniques
Search for target pedestrians using text queries.
An intentionally simple Image to Food cross-modal search. Created by Prithiviraj Damodaran.
Implementation of the "Objects that Sound" and "Look, Listen and Learn" papers by Relja Arandjelović and Andrew Zisserman
MMAct: A Large-Scale Dataset for Cross Modal Learning on Human Action Understanding
Create Disco Diffusion artworks in one line
Implementation of "VSE++: Improving Visual-Semantic Embeddings with Hard Negatives" in Tensorflow.
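The core idea of VSE++ is to replace the sum over all negatives in the triplet ranking loss with only the hardest negative in the mini-batch. A minimal NumPy sketch of that max-of-hinges loss (function name and shapes are illustrative, not taken from the listed repo):

```python
import numpy as np

def vsepp_hard_negative_loss(sim, margin=0.2):
    """Max-of-hinges triplet loss from VSE++ (hardest in-batch negative).

    sim: (n, n) similarity matrix between n images and n captions,
         where sim[i, j] = s(image_i, caption_j); the diagonal holds
         the matched (positive) pairs.
    """
    n = sim.shape[0]
    pos = np.diag(sim)  # s(i, c) for matched pairs
    # hinge cost against every candidate negative
    cost_c = np.maximum(0, margin + sim - pos[:, None])  # swap the caption
    cost_i = np.maximum(0, margin + sim - pos[None, :])  # swap the image
    # mask out the positives on the diagonal
    mask = np.eye(n, dtype=bool)
    cost_c[mask] = 0
    cost_i[mask] = 0
    # keep only the hardest negative per image row / caption column
    return (cost_c.max(axis=1) + cost_i.max(axis=0)).mean()
```

When every matched pair outscores all negatives by more than the margin, the loss is zero; otherwise only the single most-violating negative in each direction contributes to the gradient.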
[IEEE T-IP 2020] Deep Image-to-Video Adaptation and Fusion Networks for Action Recognition
A hub hosting essential remote sensing datasets.
Code release of "Collective Deep Quantization for Efficient Cross-Modal Retrieval" (AAAI 2017)
Preserving Semantic Neighborhoods for Robust Cross-modal Retrieval [ECCV 2020]
Cross-modal convolutional neural networks
Code for COBRA: Contrastive Bi-Modal Representation Algorithm (https://arxiv.org/abs/2005.03687)
Official implementation of the paper "ALADIN: Distilling Fine-grained Alignment Scores for Efficient Image-Text Matching and Retrieval"
DSCNet Visible-Infrared Person ReID (TIFS 2022)
Implementation of Fast ml-CCA from the ICCV-2015 work "Multi-Label Cross-Modal Retrieval"
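Plain linear CCA underlies ml-CCA: whiten each view's covariance and take the SVD of the whitened cross-covariance. A minimal sketch of that baseline (not the Fast ml-CCA algorithm itself; function names and the `eps` regularizer are illustrative):

```python
import numpy as np

def linear_cca(X, Y, k=2, eps=1e-8):
    """Minimal linear CCA via whitening + SVD.

    X: (n, dx) and Y: (n, dy) paired samples. Returns projections
    Wx (dx, k), Wy (dy, k) and the top-k canonical correlations, so
    that X @ Wx and Y @ Wy are maximally correlated.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / (n - 1) + eps * np.eye(X.shape[1])
    Cyy = Y.T @ Y / (n - 1) + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / (n - 1)

    def inv_sqrt(C):
        # inverse matrix square root via eigendecomposition
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Kx @ Cxy @ Ky)
    return Kx @ U[:, :k], Ky @ Vt.T[:, :k], s[:k]
```

With a latent variable shared by both views, the first canonical correlation approaches 1; independent noise dimensions contribute little.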
[IEEE T-IP 2021] Semantics-aware Adaptive Knowledge Distillation for Cross-modal Action Recognition