This repo collects recent papers on speech model compression. Please feel free to suggest other papers!
- [INTERSPEECH 2023] [arXiv] Task-Agnostic Structured Pruning of Speech Representation Models
- [INTERSPEECH 2023] [arXiv] [code] DPHuBERT: Joint Distillation and Pruning of Self-Supervised Speech Models
- [ICASSP 2023] [arXiv] Learning ASR Pathways: A Sparse Multilingual ASR Model
- [ICASSP 2023] [arXiv] I3D: Transformer Architectures with Input-Dependent Dynamic Depth for Speech Recognition
- [ICASSP 2023] [arXiv] Structured Pruning of Self-Supervised Pre-Trained Models for Speech Recognition and Understanding
- [ICASSP 2023] [arXiv] RobustDistiller: Compressing Universal Speech Representations for Enhanced Environment Robustness
- [ICASSP 2023] [arXiv] Ensemble Knowledge Distillation of Self-Supervised Speech Models
- [SLT 2022] [arXiv] Learning a Dual-Mode Speech Recognition Model via Self-Pruning
- [INTERSPEECH 2022] [arXiv] [code] LightHuBERT: Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT
- [INTERSPEECH 2022] [arXiv] Deep versus Wide: An Analysis of Student Architectures for Task-Agnostic Knowledge Distillation of Self-Supervised Speech Models
- [INTERSPEECH 2022] [arXiv] [code] FitHuBERT: Going Thinner and Deeper for Knowledge Distillation of Speech Self-Supervised Learning
- [ICASSP 2022] [arXiv] [code] DistilHuBERT: Speech Representation Learning by Layer-wise Distillation of Hidden-unit BERT
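
Several of the [code] entries above come with released checkpoints. As a quick-start illustration, here is a minimal sketch (not any paper's official recipe) that loads the publicly released DistilHuBERT checkpoint (`ntu-spml/distilhubert` on the Hugging Face Hub) and extracts frame-level speech representations; it assumes `torch` and `transformers` are installed.

```python
# Minimal sketch: extract speech representations with a compressed model.
# Assumes `pip install torch transformers` and internet access to the Hub.
import torch
from transformers import AutoFeatureExtractor, AutoModel

# DistilHuBERT checkpoint released by the paper's authors on the Hub.
extractor = AutoFeatureExtractor.from_pretrained("ntu-spml/distilhubert")
model = AutoModel.from_pretrained("ntu-spml/distilhubert")
model.eval()

# One second of dummy 16 kHz audio; replace with a real waveform.
waveform = torch.randn(16000)
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    # (batch, frames, hidden) frame-level features for downstream tasks.
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)
```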