labmlai/annotated_deep_learning_paper_implementations

🧑‍🏫 60 Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans (cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠



This is a collection of simple PyTorch implementations of neural networks and related algorithms. These implementations are documented with explanations, and the website renders them as side-by-side formatted notes. We believe these notes will help you understand these algorithms better.

[Screenshot: side-by-side formatted notes on the website]

We are actively maintaining this repo and adding new implementations almost weekly. Follow us on Twitter for updates.

Paper Implementations

✨ Transformers

✨ LSTM

✨ ResNet

✨ ConvMixer

✨ U-Net

✨ Sketch RNN

✨ Graph Neural Networks

✨ Counterfactual Regret Minimization (CFR): solving games with incomplete information, such as poker (a minimal regret-matching sketch follows this list).

✨ Optimizers

✨ Distillation

✨ Uncertainty

✨ Activations
Highlighted Research Paper PDFs

Installation

pip install labml-nn
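
After installing, individual implementations can be imported as ordinary Python modules. A minimal sketch is shown below; the module path and constructor arguments are assumptions based on the package layout, so check the documentation at https://nn.labml.ai/ for the actual API.

import torch
from labml_nn.transformers.mha import MultiHeadAttention  # assumed module path

# Multi-head attention over a sequence of length 10, batch size 2, d_model 512
mha = MultiHeadAttention(heads=8, d_model=512)
x = torch.randn(10, 2, 512)  # (seq_len, batch_size, d_model)
out = mha(query=x, key=x, value=x)
print(out.shape)  # expected: torch.Size([10, 2, 512])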

Citing

If you use this for academic research, please cite it using the following BibTeX entry.

@misc{labml,
 author = {Varuna Jayasiri and Nipun Wijerathne},
 title = {labml.ai Annotated Paper Implementations},
 year = {2020},
 url = {https://nn.labml.ai/},
}

Other Projects

This shows the most popular research papers on social media. It also aggregates links to useful resources like paper explanation videos and discussions.

This is a library that lets you monitor deep learning model training and hardware usage from your mobile phone. It also comes with a bunch of other tools to help you write deep learning code efficiently.
