General Stochastic Automatic Differentiation for PyTorch

Storchastic is a PyTorch library for stochastic gradient estimation in Deep Learning [1]. Many state-of-the-art deep learning models use gradient estimation, in particular within the fields of Variational Inference and Reinforcement Learning. While PyTorch computes gradients of deterministic computation graphs automatically, it cannot estimate gradients on stochastic computation graphs [2].

With Storchastic, you can easily define any stochastic deep learning model and let it estimate the gradients for you. It provides a wide range of gradient estimation methods that you can plug and play to find the one that works best for your problem, and it automatically broadcasts sampled batch dimensions, which increases code readability and makes it easy to implement complex models.
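
The following is a minimal sketch of that workflow, based on the pattern in the Storchastic documentation: wrap the sampling step in a gradient estimation method, register a cost, and call storch.backward(). The names Reparameterization, n_samples, add_cost and backward are taken from the documented API, but treat the snippet as an illustration to check against the docs rather than a verbatim recipe.

```python
import torch
import storch
from torch.distributions import Normal
from storch.method import Reparameterization

mu = torch.tensor(0.5, requires_grad=True)

# Pick a gradient estimation method; swapping in another estimator
# (for example ScoreFunction) is a one-line change.
method = Reparameterization("z", n_samples=16)

# Sample z ~ N(mu, 1); Storchastic adds and broadcasts the sample dimension.
z = method(Normal(mu, 1.0))

# Register the cost to differentiate and let Storchastic estimate gradients.
cost = (z - 2.0) ** 2
storch.add_cost(cost, "squared_error")
storch.backward()

print(mu.grad)  # estimated gradient of E[(z - 2)^2] with respect to mu
```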

For continuous random variables and differentiable functions, the popular reparameterization method [3] is usually very effective. However, it does not apply to discrete random variables or non-differentiable functions. Storchastic therefore focuses on gradient estimators for discrete random variables, non-differentiable functions, and sequence models.
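
Below is a hedged sketch of the discrete case, using the score function (REINFORCE) estimator from the Algorithms list in place of reparameterization; the class name ScoreFunction and its arguments follow the documented API, but verify them against the docs for your version.

```python
import torch
import storch
from torch.distributions import Bernoulli
from storch.method import ScoreFunction

logits = torch.tensor(0.0, requires_grad=True)

# REINFORCE-style estimator; other discrete estimators from the list below
# can be dropped in without changing the rest of the model.
method = ScoreFunction("b", n_samples=32)

# Sample b ~ Bernoulli(sigmoid(logits)); the cost need not be
# differentiable in b, only log p(b) needs to be differentiable in logits.
b = method(Bernoulli(logits=logits))

cost = (b - 1.0) ** 2
storch.add_cost(cost, "cost")
storch.backward()

print(logits.grad)  # estimated gradient despite the discrete sample
```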

Documentation on Read the Docs.

Example: Discrete Variational Auto-Encoder

Installation

In your virtual Python environment, run `pip install storchastic`.

Requires PyTorch 1.8 and Pyro. The code is built using Python 3.8.

Algorithms

Feel free to create an issue if an estimator is missing here.

  • Reparameterization [1, 3]
  • REINFORCE with Moving Average baseline [1, 4]
  • REINFORCE with Leave-One-Out baseline (RLOO) [5, 6]
  • Expected value for enumerable distributions (see the sketch after this list)
  • (Straight through) Gumbel Softmax [7, 8]
  • LAX, RELAX [9]
  • REBAR [10]
  • REINFORCE Without Replacement [6]
  • Unordered Set Estimator [13]
  • ARM [15]
  • Rao-Blackwellized REINFORCE [12]
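
As a rough illustration of swapping estimators from this list, the sketch below replaces the sampling-based estimator from the earlier Bernoulli example with the expected-value method, which enumerates both outcomes and therefore gives the exact gradient, a useful ground truth when comparing estimators. The class name Expect is an assumption based on the documented storch.method module; check the docs for the exact name in your version.

```python
import torch
import storch
from torch.distributions import Bernoulli
from storch.method import Expect

logits = torch.tensor(0.0, requires_grad=True)

# Enumerates both Bernoulli outcomes instead of sampling, so the
# resulting gradient is exact rather than estimated.
method = Expect("b")
b = method(Bernoulli(logits=logits))

cost = (b - 1.0) ** 2
storch.add_cost(cost, "cost")
storch.backward()

print(logits.grad)  # exact gradient, a ground truth for the estimators above
```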

In development

  • Memory Augmented Policy Optimization [11]

Planned

  • Measure valued derivatives [1, 14]
  • Automatic Credit Assignment [16]
  • ...

References

Cite

To cite Storchastic, please cite the NeurIPS 2021 paper:

@inproceedings{van2021storchastic,
  title     = {Storchastic: A Framework for General Stochastic Automatic Differentiation},
  author    = {van Krieken, Emile and Tomczak, Jakub M and Teije, Annette ten},
  booktitle = {Advances in Neural Information Processing Systems},
  editor    = {M. Ranzato and A. Beygelzimer and Y. Dauphin and P.S. Liang and J. Wortman Vaughan},
  volume    = {34},
  pages     = {7574--7587},
  year      = {2021},
  url       = {https://proceedings.neurips.cc/paper_files/paper/2021/file/3dfe2f633108d604df160cd1b01710db-Paper.pdf}
}