This repository contains the PyTorch implementation of Stochastic Multiple Target Sampling Gradient Descent.
If you find our code useful in your research, please cite:
@article{phan2022stochastic,
title={Stochastic Multiple Target Sampling Gradient Descent},
author={Phan, Hoang and Tran, Ngoc and Le, Trung and Tran, Toan and Ho, Nhat and Phung, Dinh},
journal={Advances in Neural Information Processing Systems},
year={2022}
}
Please refer to the bash scripts (*.sh) in each experiment directory to reproduce the results reported in the paper.
For the running time of our approach, please see the accompanying notebook on Google Colab; we used the cloud computing service so that the comparison is fair.
If you have any questions about the paper or the codebase, please feel free to contact [email protected].