proxy: Add GetLayerInfoPiped #2554
Conversation
Depends on containers/skopeo#2554. This ensures we support e.g. images with > 100 layers. Signed-off-by: Colin Walters <walters@verbum.org>
Consumer in containers/containers-image-proxy-rs#76
Thanks!
The code is fair enough, but, just to be clear, this changes ~nothing Skopeo-side: the manifest and the layer info array all exist in memory. AFAICS this makes a difference only because the caller enforces a reply-size limit: https://github.com/containers/containers-image-proxy-rs/blob/63f5f96d4196ffb31a9167f6421ceb0df07ec111/src/imageproxy.rs#L382
That’s a reasonable design — a caller looking for a success/failure indicator has a fair need to get that value without having to read an unbounded value into memory first — I just want to make sure there is no misunderstanding.
No, that issue is inherently because we use SOCK_SEQPACKET and the kernel doesn't support queuing arbitrary message sizes there. There's an equivalent constant here: Line 98 in dd71592
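For context, here is a minimal sketch of what such a bound typically looks like on the Go side; the constant name and value are illustrative, not necessarily what skopeo actually uses. A SOCK_SEQPACKET reply is read as a single datagram into a fixed-size buffer, so a reply larger than the bound cannot be delivered intact:

```go
package proxy

import "net"

// maxMsgSize is an illustrative bound on a single metadata reply; the real
// constant lives in skopeo's proxy code and may differ in name and value.
const maxMsgSize = 32 * 1024

// readReply reads one SOCK_SEQPACKET datagram from the metadata channel.
// The whole reply has to fit in a single datagram, so embedding a large
// layer-info array here would exceed the bound for images with many layers.
func readReply(conn *net.UnixConn) ([]byte, error) {
	buf := make([]byte, maxMsgSize)
	n, _, _, _, err := conn.ReadMsgUnix(buf, nil)
	if err != nil {
		return nil, err
	}
	return buf[:n], nil
}
```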
Oh. Good thing I asked :)
Force-pushed from ca37b8e to 2103c89 (compare):
I was experimenting with images with lots of layers (> 200) and this invocation was incorrectly adding the entire response into what was intended as the metadata plane. `GetManifest` and `GetConfig` (even though those are relatively small) still always return their data over a pipe, same as blobs. Add a new `GetLayerInfoPiped` that does the same so we can easily get this information for images with a lot of layers. Signed-off-by: Colin Walters <walters@verbum.org>
Force-pushed from 2103c89 to a31470d (compare).
Just to recap briefly here: the rationale for the protocol as designed is that it clearly splits metadata and data channels, allowing us to pipeline operations. We previously discussed alternative IPC protocols in e.g. #1844 (comment)
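To illustrate the split with a sketch (illustrative names only, not the actual skopeo proxy code): a piped method like `GetLayerInfoPiped` serializes the potentially large payload and streams it over a pipe, so only a small handle crosses the metadata channel regardless of how many layers the image has.

```go
package proxy

import (
	"encoding/json"
	"os"
)

// layerInfo stands in for the per-layer metadata the proxy returns
// (illustrative fields only).
type layerInfo struct {
	Digest string `json:"digest"`
	Size   int64  `json:"size"`
}

// replyWithPipe marshals a potentially unbounded payload and writes it to a
// pipe in the background, handing the read end back to the caller. The
// metadata reply stays small; the data itself travels on the data channel,
// same as blobs, manifests, and configs.
func replyWithPipe(layers []layerInfo) (*os.File, error) {
	data, err := json.Marshal(layers)
	if err != nil {
		return nil, err
	}
	r, w, err := os.Pipe()
	if err != nil {
		return nil, err
	}
	go func() {
		defer w.Close()
		_, _ = w.Write(data)
	}()
	return r, nil
}
```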
Thanks! LGTM.
Hmm... it looks like the Testing Farm jobs succeeded, but didn't report that success to GH?
/packit test
You’re all good, the “release” jobs don’t run on the
@lsm5 this is packit/packit-service#2678, isn’t it?
Hmm, but all I can do here is "Enable auto-merge", and that, I think, is waiting for those jobs?
Not on those. I’m confused about that button as well; my current theory is that it is waiting on the PR to be rebased (in some other PR, “Update Branch / Update with merge commit” didn’t help IIRC), but I didn’t properly test that theory yet.
Hmm, does this repo have a config enabled that requires each PR to be against current main? That seems like an unnecessary hit to ergonomics.
I’m not at all sure about the full history. ISTR requiring PRs to be fresh is intentional, but admins also had the option to override it and force-merge; maybe that has been disabled, I don’t know when or by whom.
LGTM
Yes. Actually I think it's best to remove those additional jobs until I have the right GitHub Actions figured out. Will submit a PR.