Linear Attention - Autoregressive CUDA kernel (wip)

CUDA implementation of autoregressive linear attention, incorporating the latest research findings (cited below).
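The kernel targets the causal (autoregressive) form of linear attention from Katharopoulos et al. (2020), cited below: attention is rewritten as a recurrence over running sums, so each position is computed from constant-size state instead of attending over all previous positions. As a point of reference, here is a minimal NumPy sketch of that recurrence, assuming the elu + 1 feature map from the paper; the function and variable names are illustrative, not this repository's API.

```python
# Reference sketch of causal linear attention (Katharopoulos et al. 2020).
# Illustrative only; not this repository's CUDA kernel or its API.
import numpy as np

def elu_plus_one(x):
    # feature map phi(x) = elu(x) + 1, keeping features positive
    return np.where(x > 0, x + 1.0, np.exp(x))

def causal_linear_attention(q, k, v, eps = 1e-6):
    # q, k: (seq_len, dim_k), v: (seq_len, dim_v)
    q, k = elu_plus_one(q), elu_plus_one(k)
    seq_len, dim_v = v.shape
    dim_k = k.shape[-1]
    S = np.zeros((dim_k, dim_v))   # running sum of outer(phi(k_j), v_j)
    z = np.zeros(dim_k)            # running sum of phi(k_j), for the normalizer
    out = np.zeros((seq_len, dim_v))
    for i in range(seq_len):       # strictly left-to-right: the autoregressive part
        S += np.outer(k[i], v[i])
        z += k[i]
        out[i] = (q[i] @ S) / (q[i] @ z + eps)
    return out

q, k, v = (np.random.randn(16, 8) for _ in range(3))
print(causal_linear_attention(q, k, v).shape)  # (16, 8)
```

The sequential state update above is what a fused kernel is for: each step is cheap, but a step-by-step loop is slow in Python and maps poorly onto many small GPU ops, so a single CUDA kernel that carries S and z across the sequence is the natural target.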

Citations

@inproceedings{katharopoulos-et-al-2020,
    author    = {Katharopoulos, A. and Vyas, A. and Pappas, N. and Fleuret, F.},
    title     = {Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention},
    booktitle = {Proceedings of the International Conference on Machine Learning (ICML)},
    year      = {2020},
    url       = {https://arxiv.org/abs/2006.16236}
}

@article{Nguyen2022MomentumTC,
    title   = {Momentum Transformer: Closing the Performance Gap Between Self-attention and Its Linearization},
    author  = {Tan Minh Nguyen and Richard Baraniuk and Robert M. Kirby and Stanley J. Osher and Bao Wang},
    journal = {ArXiv},
    year    = {2022},
    volume  = {abs/2208.00579}
}

@article{Zhai2021AnAF,
    title   = {An Attention Free Transformer},
    author  = {Shuangfei Zhai and Walter A. Talbott and Nitish Srivastava and Chen Huang and Hanlin Goh and Ruixiang Zhang and Joshua M. Susskind},
    journal = {ArXiv},
    year    = {2021},
    volume  = {abs/2105.14103}
}

@inproceedings{Peng2023RWKVRR,
    title   = {RWKV: Reinventing RNNs for the Transformer Era},
    author  = {Bo Peng and Eric Alcaide and Quentin Anthony and Alon Albalak and Samuel Arcadinho and Huanqi Cao and Xin Cheng and Michael Chung and Matteo Grella and GV KranthiKiran and Xuzheng He and Haowen Hou and Przemyslaw Kazienko and Jan Kocon and Jiaming Kong and Bartlomiej Koptyra and Hayden Lau and Krishna Sri Ipsit Mantri and Ferdinand Mom and Atsushi Saito and Xiangru Tang and Bolun Wang and Johan S. Wind and Stanislaw Wozniak and Ruichong Zhang and Zhenyuan Zhang and Qihang Zhao and Peng Zhou and Jian Zhu and Rui-Jie Zhu},
    year    = {2023}
}
