Issues: pytorch/xla

Build PyTorch/XLA with the same version of CUDA as PyTorch
Labels: CI, xla:gpu
#8700 opened Feb 11, 2025 by tengyifei

Enable 2D sharding with minibatch=True for SPMD
Labels: enhancement, SPMD / Distributed
#8696 opened Feb 10, 2025 by miladm

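For context, a minimal sketch of the input-sharding path this request touches, assuming the current torch_xla.distributed.spmd API (xs.ShardingSpec's minibatch flag and MpDeviceLoader's input_sharding argument); host_loader stands in for an ordinary DataLoader, and the mesh shape is illustrative:

```python
import numpy as np
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
import torch_xla.distributed.spmd as xs
import torch_xla.distributed.parallel_loader as pl

xr.use_spmd()
num_devices = xr.global_runtime_device_count()
# A 2D (data, model) mesh; the issue asks for minibatch=True to work here.
mesh = xs.Mesh(np.arange(num_devices), (num_devices, 1), ('data', 'model'))

# minibatch=True tells the loader that each host yields only its own slice
# of the global batch rather than the full batch.
sharding = xs.ShardingSpec(mesh, ('data', None), minibatch=True)
# loader = pl.MpDeviceLoader(host_loader, xm.xla_device(), input_sharding=sharding)
```
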
Turning on gradient checkpointing under XLA AMP causes OOM
Labels: gradient_checkpointing, xla:gpu
#8695 opened Feb 10, 2025 by mars1248

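A hypothetical minimal repro of the reported combination, assuming torch_xla.amp.autocast and the XLA-aware torch_xla.utils.checkpoint.checkpoint wrapper; the layer sizes are illustrative:

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm
from torch_xla.amp import autocast
from torch_xla.utils.checkpoint import checkpoint

device = xm.xla_device()
layer = nn.Linear(1024, 1024).to(device)
x = torch.randn(8, 1024, device=device)

# Gradient checkpointing under AMP: `layer` is recomputed in backward.
with autocast(device):
    y = checkpoint(layer, x)
y.sum().backward()
xm.mark_step()  # materialize the graph; the issue reports OOM on xla:gpu
```
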
[torch_xla] scan only captures aten operations
Labels: dynamo, enhancement, SPMD / Distributed
#8691 opened Feb 9, 2025 by tengyifei

Check if the current Pallas kernels have enough test coverage.
Labels: pallas, testing
#8687 opened Feb 8, 2025 by vanbasten23

Segmentation fault on Llama3 and Mixtral models using PyTorch/XLA nightly
Labels: libtpu-update
#8683 opened Feb 6, 2025 by zpcore

Introduce a mark_sharding that also shards the backward
Labels: enhancement, SPMD / Distributed
#8678 opened Feb 4, 2025 by tengyifei

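For context, a sketch of today's forward-only annotation, assuming the current torch_xla.distributed.spmd API; the request is a variant whose sharding also constrains the corresponding backward computation:

```python
import numpy as np
import torch
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
import torch_xla.distributed.spmd as xs

xr.use_spmd()
num_devices = xr.global_runtime_device_count()
mesh = xs.Mesh(np.arange(num_devices), (num_devices, 1), ('data', 'model'))

t = torch.randn(8, 4, device=xm.xla_device(), requires_grad=True)
# Annotates the forward tensor only; gradients flowing back through `t`
# carry no such constraint, which is what the issue asks to add.
xs.mark_sharding(t, mesh, ('data', 'model'))
```
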
cummax: returned indices are not consistent with PyTorch.
Labels: bug
#8675 opened Feb 4, 2025 by ysiraichi

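A hypothetical repro sketch: when the running maximum repeats, eager PyTorch and XLA can return different index positions, which is the inconsistency being reported; the values are illustrative:

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
t = torch.tensor([1.0, 3.0, 3.0, 2.0, 3.0])  # ties in the running maximum

cpu_vals, cpu_idx = torch.cummax(t, dim=0)
xla_vals, xla_idx = torch.cummax(t.to(device), dim=0)

# Values agree; per the issue, the index tensors may not.
print(cpu_idx, xla_idx.cpu())
```
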
torch.nan_to_num doesn't work with -inf/inf
Labels: bug, pytorch api
#8674 opened Feb 4, 2025 by Akshat-Tripathi

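A hypothetical minimal repro comparing eager CPU with XLA; per the issue, the posinf/neginf replacements are where the two diverge:

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
t = torch.tensor([float('nan'), float('inf'), float('-inf')])

cpu_out = torch.nan_to_num(t, nan=0.0, posinf=1e6, neginf=-1e6)
xla_out = torch.nan_to_num(t.to(device), nan=0.0, posinf=1e6, neginf=-1e6)

print(cpu_out)        # tensor([ 0.0000e+00,  1.0000e+06, -1.0000e+06])
print(xla_out.cpu())  # per the issue, may not match the eager result
```
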
Autocast policy for the new XLA backend
Labels: enhancement
#8672 opened Feb 4, 2025 by avizon-aws

Improve test coverage across all device hardware types
Labels: enhancement
#8669 opened Feb 3, 2025 by rpsilva-aws

Missing sharding specs when annotating sharding over views
Labels: bug, SPMD / Distributed
#8662 opened Feb 1, 2025 by rpsilva-aws

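A hypothetical repro sketch, annotating a view rather than its base tensor; _get_xla_sharding_spec is the internal helper the repo's tests use to inspect a tensor's annotation:

```python
import numpy as np
import torch
import torch_xla
import torch_xla.core.xla_model as xm
import torch_xla.runtime as xr
import torch_xla.distributed.spmd as xs

xr.use_spmd()
num_devices = xr.global_runtime_device_count()
mesh = xs.Mesh(np.arange(num_devices), (num_devices,), ('data',))

base = torch.randn(8, 4, device=xm.xla_device())
view = base.view(4, 8)                    # annotate the view, not `base`
xs.mark_sharding(view, mesh, ('data', None))
# Per the issue, the spec annotated on a view can go missing downstream.
print(torch_xla._XLAC._get_xla_sharding_spec(view))
```
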
Torch XLA Model all_gather does not work with tensors of different sizes along dimension 0
Labels: enhancement, SPMD / Distributed, usability
#8660 opened Jan 31, 2025 by ajayvohra2005

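For context, a sketch of the supported equal-shape case, assuming the xm.all_gather API; the issue is that there is no working path when ranks contribute different dim-0 sizes:

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp

def _mp_fn(index):
    device = xm.xla_device()
    # Equal shapes on every rank: this works. The failing case in the
    # issue gives each rank a different dim-0 size.
    local = torch.full((4, 3), float(xm.get_ordinal()), device=device)
    gathered = xm.all_gather(local, dim=0)  # shape: (4 * world_size, 3)
    print(index, gathered.shape)

if __name__ == '__main__':
    xmp.spawn(_mp_fn)
```
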
RNN / GRU / LSTM implementation for torch_xla
Labels: enhancement
#8655 opened Jan 30, 2025 by qihqi

How to use device_map in diffusers.StableDiffusionPipeline.from_pretrained
Labels: usability, xla:tpu
#8646 opened Jan 29, 2025 by chaowenguo

Make Mixtral Pallas kernels Dynamo/AOTAutograd traceable
Labels: pallas
#8642 opened Jan 28, 2025 by tengyifei

split on second dimension of 2D array not working
Labels: functionalization-disabled, pytorch api
#8640 opened Jan 28, 2025 by jeffhataws

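A hypothetical minimal repro, splitting a 2D XLA tensor along dim=1; per the labels, the failure is specific to running with functionalization disabled (e.g. XLA_DISABLE_FUNCTIONALIZATION=1 in the environment):

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
t = torch.arange(12, dtype=torch.float32, device=device).reshape(3, 4)

# Two (3, 2) chunks along the second dimension.
chunks = torch.split(t, 2, dim=1)
for c in chunks:
    print(c.cpu())
```
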
[torchax][RFC] Anchor on the device API everywhere
Labels: RFC, torchxla2
#8638 opened Jan 28, 2025 by tengyifei

[torchax] RNG handling in a jitted graph is unsound
Labels: bug, torchxla2
#8636 opened Jan 28, 2025 by tengyifei

[torchax] JIT-compile the model constructor
Labels: enhancement, torchxla2
#8635 opened Jan 28, 2025 by tengyifei

[scan] Avoid re-tracing the combine function on every call
Labels: enhancement
#8632 opened Jan 27, 2025 by tengyifei

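For context, a sketch assuming the torch_xla.experimental.scan API, where the combine function takes (carry, x) and returns (new_carry, y); the request is to cache fn's trace instead of re-tracing it on every scan call:

```python
import torch
import torch_xla.core.xla_model as xm
from torch_xla.experimental.scan import scan

def fn(carry, x):
    new_carry = carry + x
    return new_carry, new_carry  # (next carry, per-step output)

device = xm.xla_device()
init = torch.zeros(4, device=device)
xs = torch.randn(10, 4, device=device)

# Today, each call to scan re-traces `fn`; the issue asks to avoid that.
carry, ys = scan(fn, init, xs)
```
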
Add type annotations to PyTorch/XLA code and tests & expand tests for various types as needed
Labels: TECHNICAL_DEBT, usability
#8627 opened Jan 26, 2025 by miladm