export fails with latest executorch #9714

Closed

gpchowdari opened this issue Mar 27, 2025 · 2 comments
Labels
module: exir (Issues related to Export IR and the code under exir/)
need-user-input (The issue needs more information from the reporter before moving forward)

Comments

gpchowdari commented Mar 27, 2025

🐛 Describe the bug

export_for_training fails with the following error after updating ExecuTorch. Before the update, I was able to export the model to .pte format.

Dynamic shapes used:

from torch.export import Dim

x_seq_len = Dim('x_seq', min=3, max=1024)
y_seq_len = Dim('y_seq', min=1, max=255)
dynamic_shapes_1 = {"x": {1: x_seq_len}, "y": {1: y_seq_len}, "ilens": {0: 1}}
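
For context, the export call is presumably of the following shape (a sketch only: model, x, y, and ilens are placeholders for the actual module and example inputs, which are not included in this report):

# Hypothetical reconstruction of the failing call; names are illustrative, not from the original report.
import torch
from torch.export import export_for_training

ep = export_for_training(
    model,              # the nn.Module being exported (not shown in this issue)
    (x, y, ilens),      # example inputs matching the "x", "y", "ilens" keys above
    dynamic_shapes=dynamic_shapes_1,
)
# The resulting ExportedProgram would then be lowered to a .pte file via the usual
# to_edge / to_executorch path; the failure below happens before that point.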

V0327 18:06:03.859000 1958868 torch/fx/experimental/symbolic_shapes.py:6769] eval size_oblivious(Eq(151936*s1 + 151936*((s0//32)) + 151936, 0)) == False [statically known]
V0327 18:06:03.864000 1958868 torch/fx/experimental/symbolic_shapes.py:6769] eval size_oblivious(((s0//32)) > 9223372036854775807) == False [statically known]
V0327 18:06:03.881000 1958868 torch/fx/experimental/symbolic_shapes.py:6769] eval size_oblivious(s1 > 9223372036854775807) == False [statically known]
python3.10/site-packages/torch/backends/mkldnn/__init__.py:78: UserWarning: TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support. (Triggered internally at /pytorch/aten/src/ATen/Context.cpp:148.)
torch._C._set_onednn_allow_tf32(_allow_tf32)

File "python3.10/site-packages/torch/export/init.py", line 360, in export
return _export(
File "python3.10/site-packages/torch/export/_trace.py", line 1092, in wrapper
raise e
File "python3.10/site-packages/torch/export/_trace.py", line 1065, in wrapper
ep = fn(*args, **kwargs)
File "python3.10/site-packages/torch/export/exported_program.py", line 121, in wrapper
return fn(*args, **kwargs)
File "python3.10/site-packages/torch/export/_trace.py", line 2112, in _export
ep = _export_for_training(
File "python3.10/site-packages/torch/export/_trace.py", line 1092, in wrapper
raise e
File "python3.10/site-packages/torch/export/_trace.py", line 1065, in wrapper
ep = fn(*args, **kwargs)
File "python3.10/site-packages/torch/export/exported_program.py", line 121, in wrapper
return fn(*args, **kwargs)
File "python3.10/site-packages/torch/export/_trace.py", line 1975, in _export_for_training
export_artifact = export_func( # type: ignore[operator]
File "python3.10/site-packages/torch/export/_trace.py", line 1344, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
File "python3.10/site-packages/torch/export/_trace.py", line 739, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
File "python3.10/site-packages/torch/_dynamo/eval_frame.py", line 1691, in inner
dim_constraints.solve()
File "python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 2641, in solve
assert isinstance(
AssertionError: Expected an equality constraint for s0, got (s0 >= 4) & (s0 >= 8) & (64 <= s0) & (s0 <= 1024)

Versions

PyTorch version: 2.7.0.dev20250310+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.31.6
Libc version: glibc-2.35

Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-45-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] executorch==0.6.0a0+c048ea2
[pip3] numpy==2.1.3
[pip3] torch==2.7.0.dev20250310+cpu
[pip3] torchao==0.10.0+git923242e2
[pip3] torchaudio==2.6.0.dev20250310+cpu
[pip3] torchsr==1.0.4
[pip3] torchvision==0.22.0.dev20250310+cpu

cc @JacobSzwejbka @angelayi

JacobSzwejbka (Contributor) commented

I suspect the latest ET version bumped the version of pytorch/pytorch we depend on. Can you share the relevant lines of your model that this guard is firing on? Since it's export-related, you will probably get a faster response in pytorch/pytorch issues, but I'll tag some folks from compiler. cc @angelayi @tugsbayasgalan
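
As a quick sanity check on the dependency-bump theory (a small sketch, not part of the original exchange), the installed package versions can be printed and compared against the pins for the ExecuTorch version in use:

# Print the currently installed executorch / torch versions (standard library only).
import importlib.metadata as md

for pkg in ("executorch", "torch"):
    print(pkg, md.version(pkg))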

JacobSzwejbka added the module: exir label (Issues related to Export IR and the code under exir/) on Mar 27, 2025
github-project-automation bot moved this to To triage in ExecuTorch Core on Mar 27, 2025
angelayi added the need-user-input label (The issue needs more information from the reporter before moving forward) on Mar 27, 2025
gpchowdari (Author) commented

@JacobSzwejbka Tested with the latest ExecuTorch; it is working fine now. Thanks.

github-project-automation bot moved this from To triage to Done in ExecuTorch Core on Apr 10, 2025