Description
🚀 Feature
I want to add support for a new autocast policy for the NEURON backend.
Motivation
In my use case, the device is an XLA device but the backend is different. The autocast policy currently in use is the one defined in https://github.com/pytorch/xla/blob/master/torch_xla/csrc/autocast_mode.cpp, but if I want a different policy for TPU and NEURON, I am not sure that is currently possible. It would be nice to support this feature: even though both are XLA devices, each backend may need to maintain its own set of operations in a different precision.
Pitch
I want to be able to create a new autocast policy for the backend of interest. I would also like the option to inherit the policy from the XLA device while overriding individual entries as needed.
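To make the "inherit and override" idea concrete, here is a minimal Python sketch. It is not the torch_xla API; it only illustrates the proposed semantics by modeling a policy as a mapping from op names to dtypes. All names (`XLA_POLICY`, `derive_policy`, the op entries and dtypes) are hypothetical.

```python
# Hypothetical model: an autocast policy maps op names to the precision
# they should run in under autocast.
XLA_POLICY = {
    "conv2d": "float16",   # illustrative entries only
    "matmul": "float16",
    "softmax": "float32",
}

def derive_policy(base, overrides):
    """Return a new policy inheriting `base`, with `overrides` applied."""
    policy = dict(base)
    policy.update(overrides)
    return policy

# A NEURON policy that keeps matmul in bfloat16 but otherwise
# follows the inherited XLA policy.
NEURON_POLICY = derive_policy(XLA_POLICY, {"matmul": "bfloat16"})

print(NEURON_POLICY["matmul"])   # overridden entry -> bfloat16
print(NEURON_POLICY["softmax"])  # inherited entry -> float32
```

In torch_xla itself the policy is a set of C++ kernel registrations rather than a dict, but the same layering (a shared base plus per-backend overrides) is what this request is asking for.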