RNN / GRU / LSTM implementation for torch_xla #8655

Closed
@qihqi

Description

🚀 Feature

Given the experimental launch of the scan operator, which lowers to XLA's WhileOp, we should leverage it to implement performant RNN layers.

It would be great to use it in place of the Python for loop over the time dimension, which can be large; a minimal sketch of what this could look like follows.
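
Here is a rough sketch of a single-layer Elman RNN forward pass built on scan. The `torch_xla.experimental.scan` module path and the `scan(fn, init, xs)` carry-in/carry-out signature (following the `jax.lax.scan` convention) are assumptions based on the experimental launch referenced above, not a confirmed API:

```python
# Minimal sketch, not a definitive implementation: assumes an experimental
# scan(fn, init, xs) API in torch_xla.experimental.scan that follows the
# jax.lax.scan carry-in/carry-out convention.
import torch
from torch_xla.experimental.scan import scan  # assumed experimental module path

def rnn_forward(inputs, h0, w_ih, w_hh, b_ih, b_hh):
    """Single-layer Elman (RNN_TANH) forward pass.

    inputs: (seq_len, batch, input_size), scanned over dim 0.
    h0:     (batch, hidden_size) initial hidden state.
    """

    def step(h, x):
        # One cell step: h is the carry, x is one time slice of the input.
        h_new = torch.tanh(x @ w_ih.T + b_ih + h @ w_hh.T + b_hh)
        return h_new, h_new  # (next carry, per-step output)

    # scan lowers the time loop to a single XLA While op, so the traced
    # graph size stays constant regardless of seq_len.
    h_final, outputs = scan(step, h0, inputs)
    return outputs, h_final
```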

Motivation

For loops written in Python are traced through by the lazy tensor machinery and produce a huge unrolled XLA graph, as in the sketch below. The runtime of such a graph might even be better than that of the rolled loop, but the compile time would be far too long.
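
For concreteness, a hand-rolled loop like the following (same cell math as the sketch above, with hypothetical sizes chosen only for illustration) is fully unrolled during tracing, so the graph grows linearly with the sequence length:

```python
import torch

# Hypothetical sizes for illustration only.
seq_len, batch, input_size, hidden_size = 512, 8, 32, 64
inputs = torch.randn(seq_len, batch, input_size)
w_ih = torch.randn(hidden_size, input_size)
w_hh = torch.randn(hidden_size, hidden_size)
b_ih = torch.zeros(hidden_size)
b_hh = torch.zeros(hidden_size)

h = torch.zeros(batch, hidden_size)
outputs = []
for t in range(seq_len):
    # Each iteration adds another copy of the cell math to the lazily
    # traced graph: seq_len copies in total, hence long compile times.
    h = torch.tanh(inputs[t] @ w_ih.T + b_ih + h @ w_hh.T + b_hh)
    outputs.append(h)
outputs = torch.stack(outputs)
```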

A scan operator is the standard technique for keeping compile time short; the same approach is used in flax's RNN implementation (illustrated below).
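
For reference, the pattern in JAX looks roughly like this. Flax's RNN layers are built on a lifted variant of `jax.lax.scan`, so this standalone snippet only illustrates the underlying primitive, not flax's actual code:

```python
import jax
import jax.numpy as jnp

# Hypothetical sizes, matching the torch sketch above.
seq_len, batch, input_size, hidden_size = 512, 8, 32, 64
key = jax.random.PRNGKey(0)
inputs = jax.random.normal(key, (seq_len, batch, input_size))
w_ih = jax.random.normal(key, (hidden_size, input_size))
w_hh = jax.random.normal(key, (hidden_size, hidden_size))
b_ih = jnp.zeros(hidden_size)
b_hh = jnp.zeros(hidden_size)
h0 = jnp.zeros((batch, hidden_size))

def step(h, x):
    # Same Elman cell as above, expressed as a scan body.
    h_new = jnp.tanh(x @ w_ih.T + b_ih + h @ w_hh.T + b_hh)
    return h_new, h_new

# jax.lax.scan compiles the time loop to a single XLA While op,
# the same lowering the torch_xla scan operator targets.
h_final, outputs = jax.lax.scan(step, h0, inputs)
```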

Pitch

Alternatives

Additional context

Metadata

Labels

enhancement (New feature or request)
