How to convert the pytorch model to the onnx model? #52
Comments
@DidaDidaDidaD or like this [attachment not preserved in this copy]; after that use pnnx.exe, or [second attachment not preserved].
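A minimal sketch of what that route might look like, assuming the suggestion is to trace the model to TorchScript and hand the resulting .pt file to pnnx (pnnx converts TorchScript models). The module path, checkpoint path, input shape, and the value 2 for the second forward() argument are taken from the script in the question below; whether tracing E2FGVI-HQ succeeds end-to-end is not verified here.

import importlib
import torch

device = torch.device("cpu")

# Load the network exactly as in the script in the question below.
net = importlib.import_module('model.e2fgvi_hq')
model = net.InpaintGenerator().to(device)
model.load_state_dict(torch.load('release_model/E2FGVI-HQ-CVPR22.pth', map_location=device))
model.eval()

class Wrapper(torch.nn.Module):
    """Bakes the integer second argument (number of local frames) into the graph,
    since torch.jit.trace only accepts tensor example inputs."""
    def __init__(self, m, num_local_frames=2):
        super().__init__()
        self.m = m
        self.num_local_frames = num_local_frames

    def forward(self, frames):
        return self.m(frames, self.num_local_frames)

# Dummy video input: (batch, num_frames, channels, height, width), as in the question.
x = torch.randn(1, 1, 3, 240, 864)

with torch.no_grad():
    traced = torch.jit.trace(Wrapper(model), x)
traced.save('e2fgvi_hq.pt')

# The saved .pt file can then be passed to pnnx, roughly along the lines of
#   pnnx.exe e2fgvi_hq.pt inputshape=[1,1,3,240,864]
# (check the pnnx documentation for the exact options and shape syntax).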
@DidaDidaDidaD have you tried @magicse's method? Does it work?
How to convert the PyTorch model to the ONNX model? I tried the conversion process, but I got an error and I don't know what the problem is. I'm a beginner, so thank you for your advice. My script is as follows:
import torch
import importlib

device = torch.device("cpu")
model = "e2fgvi_hq"
ckpt = 'release_model/E2FGVI-HQ-CVPR22.pth'

net = importlib.import_module('model.' + model)
model = net.InpaintGenerator().to(device)
data = torch.load(ckpt, map_location=device)
model.load_state_dict(data)
print(f'Loading model from: {ckpt}')
model.eval()

x = torch.randn(1, 1, 3, 240, 864, requires_grad=True)

torch.onnx.export(model,                    # model being run
                  (x, 2),                   # model input (or a tuple for multiple inputs)
                  "E2FGVI-HQ-CVPR22.onnx",  # where to save the model (can be a file or file-like object)
                  export_params=True,       # store the trained parameter weights inside the model file
                  opset_version=16,         # the ONNX version to export the model to
                  do_constant_folding=True, # whether to execute constant folding for optimization
                  input_names=['input'],    # the model's input names
                  output_names=['output'],  # the model's output names
                  dynamic_axes={'input': {1: 'batch_size'}})
The error is as follows:
torch.onnx.symbolic_registry.UnsupportedOperatorError: Exporting the operator ::col2im to ONNX opset version 16 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.
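One possible direction, offered as an assumption rather than something stated in this thread: ONNX added a Col2Im operator in opset 18, so on a recent enough PyTorch build the same export may get past this particular error when opset_version is raised to 18. A minimal sketch, reusing model and x from the script above; whether torch.onnx actually maps aten::col2im for this model and PyTorch version is not verified here.

torch.onnx.export(model,
                  (x, 2),
                  "E2FGVI-HQ-CVPR22.onnx",
                  export_params=True,
                  opset_version=18,         # Col2Im exists in ONNX from opset 18 onwards
                  do_constant_folding=True,
                  input_names=['input'],
                  output_names=['output'])  # dynamic_axes omitted for simplicity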