🚀 Feature Request
Part of #42. Depends on #44. Once an environment is set up, it will be easy to train several of the RL algorithms provided by pytorch. All of these algorithms should be benchmarked, and a team discussion should take place on which one to use for production training. The computation library for these tasks will be caffe2, as it is easy to deploy on production cloud services. The focus is on 2019: creating a generic tool for this is not essential, but it would be very beneficial for future years and is the task of #51.
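Since the environment from #44 and the eventual pytorch-based agents aren't pinned down yet, here is a minimal sketch of the shape the benchmark could take: run each candidate agent for a fixed number of episodes on the same environment and compare mean returns. The `CorridorEnv`, `RandomAgent`, and `QLearningAgent` names are hypothetical stand-ins, not part of any existing codebase.

```python
import random

# Hypothetical stand-in for the environment from #44: a 1-D corridor where
# the agent starts at position 0 and earns +1 for reaching `goal`.
class CorridorEnv:
    def __init__(self, goal=3, max_steps=20):
        self.goal = goal
        self.max_steps = max_steps

    def reset(self):
        self.pos = 0
        self.steps = 0
        return self.pos

    def step(self, action):  # action: 0 = left, 1 = right
        self.pos = max(0, self.pos + (1 if action == 1 else -1))
        self.steps += 1
        done = self.pos == self.goal or self.steps >= self.max_steps
        reward = 1.0 if self.pos == self.goal else 0.0
        return self.pos, reward, done

class RandomAgent:
    """Baseline: uniformly random actions, no learning."""
    def act(self, state):
        return random.choice([0, 1])
    def learn(self, s, a, r, s2):
        pass

class QLearningAgent:
    """Tabular epsilon-greedy Q-learning, standing in for a real RL algorithm."""
    def __init__(self, eps=0.2, alpha=0.5, gamma=0.9):
        self.q = {}
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, state):
        if random.random() < self.eps:
            return random.choice([0, 1])
        q0 = self.q.get((state, 0), 0.0)
        q1 = self.q.get((state, 1), 0.0)
        if q0 == q1:
            return random.choice([0, 1])  # break ties randomly
        return 0 if q0 > q1 else 1

    def learn(self, s, a, r, s2):
        best_next = max(self.q.get((s2, b), 0.0) for b in (0, 1))
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (r + self.gamma * best_next - old)

def benchmark(agent, env, episodes=300):
    """Mean episode return over `episodes` rollouts (learning included)."""
    total = 0.0
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = agent.act(s)
            s2, r, done = env.step(a)
            agent.learn(s, a, r, s2)
            total += r
            s = s2
    return total / episodes

random.seed(0)
env = CorridorEnv()
results = {name: benchmark(agent, env)
           for name, agent in [("random", RandomAgent()),
                               ("q-learning", QLearningAgent())]}
print(results)
```

The same harness would apply unchanged to real agents once the environment exists; swapping in caffe2- or pytorch-backed implementations only requires that they expose `act` and `learn`.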
After talking to industry experts in machine learning, we are now going to use pytorch. The main reasons are:

- Faster development speed. The number one reason highlighted by every expert, regardless of whether they preferred pytorch or not, is that pytorch has by far the fastest development cycle of any machine learning library.
- Brighter future: pytorch is growing faster than tensorflow/keras and is expected to keep doing so. Its development community is also growing faster, and with it come excellent libraries and support.