Flexible shape input and output doesn't work at runtime #276
Comments
Are you using the Vision API, or the Core ML API directly?
Sorry for the late reply. @aseemw I found the cause of the problem: when I convert the mlmodel to have a flexible input shape, an intermediate reshape layer crashes since it has a static target shape. I tried deleting the reshape layer.
Core ML's reshape layer is a static layer, so the target shape must be fully specified.
OK, thank you.
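For context, here is a sketch (plain NumPy, not the Core ML API) of why a reshape with a hard-coded target shape breaks once the input becomes flexible: the baked-in target only matches the element count of the size the model was converted with. The target shape used here is hypothetical.

```python
import numpy as np

# Hypothetical target shape baked into the converted model for a 100x100 input.
STATIC_TARGET = (1, 4, 100, 200)

def static_reshape(x):
    # Mirrors a static reshape layer: the target is fixed at conversion time.
    return x.reshape(STATIC_TARGET)

ok = static_reshape(np.zeros((1, 16, 50, 100)))   # 80,000 elements: matches
print(ok.shape)                                    # (1, 4, 100, 200)

try:
    static_reshape(np.zeros((1, 16, 150, 300)))    # 720,000 elements: mismatch
except ValueError as err:
    print("reshape failed:", err)
```

Any input size other than the one the target was derived from trips the same kind of shape-mismatch failure at runtime.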
@h-shib did you fix that problem yet?
@lgyStoic well, actually not yet. I think Core ML hasn't implemented a pixel shuffle layer yet, so you still need to implement it as a custom layer.
@h-shib I also tried MPSCNNSubPixelConvolutionDescriptor using MPSNNGraph, but got an incorrect image with a slight difference from the PyTorch result. Can you have a look at this? https://github.com/lgyStoic/super_resolution_MPSNNGraph
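For reference, the rearrangement a custom layer (or an MPS graph) needs to reproduce is PyTorch's `nn.PixelShuffle`. A minimal NumPy sketch, assuming CHW layout:

```python
import numpy as np

def pixel_shuffle(x, r):
    """PixelShuffle on a CHW array: (C*r*r, H, W) -> (C, H*r, W*r).

    Matches PyTorch's nn.PixelShuffle channel ordering:
    out[c, h*r + i, w*r + j] == x[c*r*r + i*r + j, h, w]
    """
    c, h, w = x.shape
    assert c % (r * r) == 0, "channel count must be divisible by r*r"
    out_c = c // (r * r)
    x = x.reshape(out_c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)       # -> (out_c, h, r, w, r)
    return x.reshape(out_c, h * r, w * r)

# A 2x upscale of a single 1x1 "pixel" spreads 4 channels into a 2x2 block.
x = np.arange(4).reshape(4, 1, 1)
print(pixel_shuffle(x, 2)[0])            # [[0 1]
                                         #  [2 3]]
```

If a port produces an image that is only slightly off, comparing it against this reference on a random input is a quick way to catch a channel-ordering mismatch in the sub-pixel convolution.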
@lgyStoic, @h-shib and @aseemw I am having the same error in the issue linked above, but I do not appear to be using a pixel shuffle layer. Is there a way to diagnose which part of the model might be causing this? The specific runtime error looks exactly like yours:

```
Finalizing CVPixelBuffer 0x282f4c5a0 while lock count is 1.
[espresso] [Espresso::handle_ex_plan] exception=Invalid X-dimension 1/480 status=-7
[coreml] Error binding image input buffer input_image: -7
[coreml] Failure in bindInputsAndOutputs.
```
Has anyone found a solution to this? I also have the same error as @h-shib. I also have a Pixel Shuffle layer. I'm also getting a similar error to @BorisKourt |
I'm trying to run a super resolution Core ML model which takes a `50...300 x 50...300` image for input and a `100...600 x 100...600` image for output. I could convert a .mlmodel like the above from a model which takes `100x100` for input and `200x200` for output with `coremltools==2.0`. However, I couldn't run the model on any image size other than `100x100` in Xcode (and on an iPhone 6s device). This is the error output. The Xcode console shows the model has flexible input. Also, `spec.description.input` and `spec.description.output` return the results below. What's wrong?