Add optional highAvailability feature for registry cache #298
base: main
Conversation
The Gardener project currently lacks enough active contributors to adequately respond to all PRs.
You can:
- /lifecycle stale
- /remove-lifecycle stale
Force-pushed from 3fe41e1 to 2c2264c
The Gardener project currently lacks enough active contributors to adequately respond to all PRs.
You can:
- /lifecycle stale
- /remove-lifecycle stale

/test pull-gardener-extension-registry-cache-unit
Per default the registry cache runs with a single replica. This fact may lead to concerns for the high availability such as "What happens when the registry cache is down? Does containerd fail to pull the image?". As outlined in the [How does it work? section](#how-does-it-work), containerd is configured to fall back to the upstream registry if it fails to pull the image from the registry cache. Hence, when the registry cache is unavailable, the containerd's image pull operations are not affected because containerd falls back to image pull from the upstream registry.

In special cases where this is not enough (for example when using the registry cache with a proxy) it is possible to set `providerConfig.caches[].highAvailability` to `true`, this will add the label `high-availability-config.resources.gardener.cloud/type=server` and scale the statefulset to 2 replicas. The `topologySpreadConstraints` is added according to the cluster configuration. See also [High Availability of Deployed Components](https://gardener.cloud/docs/gardener/high-availability/#system-components).
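For illustration, here is a minimal Go sketch of the behaviour described above. The helper name and wiring are assumptions and not the PR's actual code; the label value and replica count come from the quoted documentation, and the `topologySpreadConstraints` themselves are then derived from the cluster configuration via the linked High Availability of Deployed Components mechanism.

```go
// Package registrycaches is a hypothetical package name used only for this sketch.
package registrycaches

import (
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/utils/ptr"
)

// applyHighAvailability sketches the described behaviour: when high availability
// is enabled for a cache, the StatefulSet gets the well-known HA label and is
// scaled to 2 replicas; otherwise it keeps a single replica. The
// topologySpreadConstraints are then added according to the cluster
// configuration by the linked High Availability of Deployed Components mechanism.
func applyHighAvailability(sts *appsv1.StatefulSet, enabled bool) {
	if !enabled {
		sts.Spec.Replicas = ptr.To(int32(1))
		return
	}
	if sts.Labels == nil {
		sts.Labels = map[string]string{}
	}
	sts.Labels["high-availability-config.resources.gardener.cloud/type"] = "server"
	sts.Spec.Replicas = ptr.To(int32(2))
}
```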
Can you elaborate more on the use case that requires running the registry cache in HA mode? I am not able to understand how a registry cache running against an upstream behind a proxy requires the registry cache to run in HA mode.
In my example I mean that if the upstream registry is only accessible via a proxy, no image can be pulled if the cache is not reachable.
Is my understanding correct that:
- When using a registry behind a proxy it is not easy/possible to set http proxy settings for containerd, and with this feature you would like to run the registry cache with 2 replicas to reduce the probability of the registry cache being down and thus prevent containerd from falling back to the upstream?
If that is the case, I am not sure if it is a good solution to the problem.
How does the initial Shoot creation with the registry-cache extension enabled work in case containerd is not able to fall back to the upstream? As described in https://github.com/gardener/gardener-extension-registry-cache/blob/main/docs/usage/registry-cache/configuration.md#limitations, we cannot cache all images from Shoot system components (because of the design decision that the registry-cache runs in the cluster and requires the Pod and Service network to be set up). If you create a Shoot with enabled registry-cache extension for such an upstream, and if indeed containerd cannot fall back to the upstream due to missing http proxy config, then Shoot creation wouldn't work.
The registry caches also rely on volumes. Having attach/detach issues with the volume would again lead to not having a running registry cache Pod and would again lead to image pull failures from containerd.
Let me know if my assumptions are correct.
If yes, IMO it would make much more sense to set http proxy settings to containerd instead of enabling HA for the registry cache StatefulSet.
The e2e tests job also failed due to network issues.
/test pull-gardener-extension-registry-cache-e2e-kind
The e2e test failure is an occurrence of #283.
We could also add an e2e test to cover this scenario. @dimitar-kostadinov raised the point that the helper funcs that perform the checks have to be adapted to now consider the 2 replicas of the StatefulSet.
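As a hedged sketch of what such an adapted check might look like (the function name, polling intervals, and wiring are assumptions, not the repository's actual e2e utilities):

```go
// Package e2e_test is a hypothetical package name used only for this sketch.
package e2e_test

import (
	"context"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// verifyRegistryCacheReplicas polls the registry cache StatefulSet until it
// reports the expected number of ready replicas (2 when highAvailability is
// enabled, 1 otherwise), or the timeout expires.
func verifyRegistryCacheReplicas(ctx context.Context, c client.Client, namespace, name string, expected int32) error {
	return wait.PollUntilContextTimeout(ctx, 10*time.Second, 5*time.Minute, true, func(ctx context.Context) (bool, error) {
		sts := &appsv1.StatefulSet{}
		if err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, sts); err != nil {
			// Keep polling while the StatefulSet is not created yet; abort on other errors.
			return false, client.IgnoreNotFound(err)
		}
		return sts.Status.ReadyReplicas == expected, nil
	})
}
```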
Force-pushed from 2c2264c to 5c1c22a
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Force-pushed from 5c1c22a to 853b68b
@ialidzhikov Thanks for your feedback. I addressed your feedback from the PR review, except for the e2e test. Do you mean an e2e test with highAvailability enabled that checks whether 2 Pods are running?
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
One unused func to be cleaned up, otherwise
/lgtm
@dergeberl can you rebase the PR?
_ = func(name, upstream, remoteURL string) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      name,
			Namespace: "kube-system",
			Labels: map[string]string{
				"app":           name,
				"upstream-host": upstream,
			},
			Annotations: map[string]string{
				"upstream":   upstream,
				"remote-url": remoteURL,
			},
		},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{
				"app":           name,
				"upstream-host": upstream,
			},
			Ports: []corev1.ServicePort{{
				Name:       "registry-cache",
				Port:       5000,
				Protocol:   corev1.ProtocolTCP,
				TargetPort: intstr.FromString("registry-cache"),
			}},
			Type: corev1.ServiceTypeClusterIP,
		},
	}
}
These lines can be removed as registry cache services are being moved to a separate component and this function is now here.
LGTM label has been added. Git tree hash: d8822ea739cd78018294c09e13efde39e485236d
Per default the registry cache runs with a single replica. This fact may lead to concerns for the high availability such as "What happens when the registry cache is down? Does containerd fail to pull the image?". As outlined in the [How does it work? section](#how-does-it-work), containerd is configured to fall back to the upstream registry if it fails to pull the image from the registry cache. Hence, when the registry cache is unavailable, the containerd's image pull operations are not affected because containerd falls back to image pull from the upstream registry.

In special cases where this is not enough (for example when using an upstream which is only accessible with a proxy) it is possible to set `providerConfig.caches[].highAvailability` to `true`, this will add the label `high-availability-config.resources.gardener.cloud/type=server` and scale the statefulset to 2 replicas. The `topologySpreadConstraints` is added according to the cluster configuration. See also [High Availability of Deployed Components](https://gardener.cloud/docs/gardener/high-availability/#system-components). Each registry cache replica uses an own volume, so each registry cache needs to pull the image from upstream.
Suggested change:

In special cases where this is not enough (for example when using an upstream which is only accessible with a proxy) it is possible to set `providerConfig.caches[].highAvailability.enabled` to `true`. This will add the label `high-availability-config.resources.gardener.cloud/type=server` to the StatefulSet and it will be scaled to 2 replicas. Appropriate [Pod Topology Spread Constraints](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/) will be added to the registry cache Pods according to the Shoot cluster configuration. See also [High Availability of Deployed Components](https://github.com/gardener/gardener/blob/master/docs/development/high-availability-of-components.md#system-components). Each registry cache replica uses an own volume, so each registry cache needs to pull the image from the upstream.
A few nits:
- The field name is now `providerConfig.caches[].highAvailability.enabled` (a sketch of the corresponding API types follows below).
- We link gardener/gardener docs via github, not website (gardener.cloud) - see Fix links to gardener/gardener docs #158
- Other wording suggestions (see the suggested change above).
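To make the first nit concrete, the renamed field could be modelled in the provider config API roughly like this. This is a hypothetical sketch: the package name, type names, and json tags are assumptions, not the PR's exact code.

```go
// Package registryapi is a hypothetical package name used only for this sketch.
package registryapi

// HighAvailability contains the high availability settings for a registry cache.
type HighAvailability struct {
	// Enabled, when true, adds the high-availability-config.resources.gardener.cloud/type=server
	// label to the registry cache StatefulSet and scales it to 2 replicas.
	Enabled bool `json:"enabled"`
}

// RegistryCache represents a single entry under providerConfig.caches[].
type RegistryCache struct {
	// Upstream is the remote registry host to cache.
	Upstream string `json:"upstream"`
	// HighAvailability holds the optional high availability settings,
	// addressed as providerConfig.caches[].highAvailability.enabled.
	// +optional
	HighAvailability *HighAvailability `json:"highAvailability,omitempty"`
}
```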
How to categorize this PR?
/area high-availability
/kind enhancement

What this PR does / why we need it:
I reopened this PR due to a special character (`A`) in the branch name of the other PR (#247). This PR adds the optional `highAvailability` setting for the registry cache. As mentioned in #246, in some cases the registry cache becomes more critical (like the proxy use case). In such cases it makes sense to scale the registry cache to more replicas. This PR uses the High Availability of Deployed Components feature to also set a working `topologySpreadConstraints`.

Which issue(s) this PR fixes:
N/A

Special notes for your reviewer:

Release note: