Add Helm charts proposal #224
Hi @atanasdinov, great proposal. Thanks a lot for bringing it up here.
I had an initial look and here are my thoughts on the topic.
From the end-user perspective, I personally like approach #1. It is simple and less error-prone (no chance for the user to uninstall the resources in the wrong order or something like that). However, the simplicity comes at the cost of additional logic within the hooks that needs to be maintained.

Approach #3 looks like a good alternative which basically replicates the existing flow with the manifests, so it is already well known to the users. From the implementation and maintenance point of view, it does not seem to add any complexity either. In fact, I think that by using the hooks it might be possible to e.g. prevent uninstallation of the operator while the CR is still present in the cluster, thus enforcing the proper uninstallation order. The only downside I see for now with this approach is the potential inconsistency between the operator/CR chart versions. When updating KubeVirt/CDI you usually do not need to update the CR, only the operator (and CRD). Therefore, after an update, the CR chart will stay at version N while the operator moves to N+1. Maybe this is in fact not a big issue in the end, but it is still worth thinking about whether it may cause problems or confusion for the users.

Regarding approach #2: I think there might be an issue if it always installs/updates the CR resource. Users may have some settings defined in the CR, and I assume those will be overwritten by an update.

Just to summarize: I am not sure about #2, but #1 and #3 look fine.
design-proposals/helm-chart.md (outdated)

> # Implementation Phases
> - Create a KubeVirt Helm chart
> - Configure a Helm repository (using GitHub pages is probably the best option)
I do not see it mentioned in the proposal, but Helm also supports chart deployment from OCI registries: https://helm.sh/docs/topics/registries/. I think it's also an interesting option to consider. Though, I do not know if there are limitations or potential compatibility issues from the perspective of quay.io. Quick googling suggests that it should work: https://cloud.redhat.com/blog/quay-oci-artifact-support-for-helm-charts
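For illustration, the OCI flow mentioned here could look roughly like the following sketch (the quay.io repository path and chart version are hypothetical, not an actual published location):

```shell
# Package the chart directory into a versioned archive
helm package ./charts/kubevirt              # produces kubevirt-1.0.0.tgz

# Push the archive to an OCI registry (hypothetical quay.io path)
helm push kubevirt-1.0.0.tgz oci://quay.io/kubevirt/charts

# Install directly from the registry; no `helm repo add` step is needed
helm install kubevirt oci://quay.io/kubevirt/charts/kubevirt --version 1.0.0
```

Native OCI support has been generally available since Helm 3.8, so this would not require any experimental flags.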
This is indeed a viable option. I'll include it.
In my Helm chart, I use approach #1. But when upgrading, the CRD needs to be updated manually, and when deleting, a pre-delete hook is used to delete the CRD.
I agree that using #1 reduces the number of steps and the number of Helm repos.
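A pre-delete hook of the kind described above could be sketched as a Helm-annotated Job; note that the Job name, image, and service account here are hypothetical and the service account would need RBAC permissions to delete CRDs:

```yaml
# Hypothetical sketch of a pre-delete hook that removes the CRD on `helm uninstall`
apiVersion: batch/v1
kind: Job
metadata:
  name: kubevirt-crd-cleanup          # hypothetical name
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: crd-cleanup  # hypothetical; needs RBAC to delete CRDs
      restartPolicy: Never
      containers:
        - name: cleanup
          image: bitnami/kubectl       # any image that ships kubectl
          command:
            - kubectl
            - delete
            - crd
            - kubevirts.kubevirt.io
            - --ignore-not-found
```

The `pre-delete` annotation makes Helm run the Job before deleting the release's resources, which is what allows the hook to clean up the CRD that Helm itself would otherwise leave behind.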
> We should be able to plug in the functionality to generate a `Chart.yaml`, `values.yaml` and possibly hooks and organize those in the necessary file structure.
> The charts can then be generated, packaged and pushed to the configured Helm repository at build time.
>
> ### Approach 2: Manually curate a Helm chart
Helm charts in fact use the content of the manifests. Since this content is generated for every build, it should be relatively easy to also structure it properly for Helm and produce the related artifacts. Those can be released as *.tgz archives or via quay.io. IMHO, that would be a better way compared to manual maintenance in a separate repository.
The "Generate Helm chart(s)" approach definitely has the better end result. However, most charts are highly customisable, and even if we don't take that approach, I imagine we'd still template some values, so the manifests will not be completely the same.
The users would be able to work with their preferred configuration without relying on `kubectl patch` and/or manually modifying manifests before applying them, e.g. configuring custom options for the CR would be possible via something like `$ helm install kubevirt-cr <repo-url> --set configuration.developerConfiguration.cpuAllocationRatio=5`.
This is the only thing we'd need to keep in mind.
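As an illustration of the templating discussed here, the CR template could expose such options through chart values; this is a hypothetical sketch, not the proposal's actual chart layout, and the default of 10 mirrors KubeVirt's documented default CPU allocation ratio:

```yaml
# templates/kubevirt-cr.yaml (hypothetical sketch)
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: {{ .Release.Namespace }}
spec:
  configuration:
    developerConfiguration:
      # overridable via --set configuration.developerConfiguration.cpuAllocationRatio=5
      cpuAllocationRatio: {{ .Values.configuration.developerConfiguration.cpuAllocationRatio | default 10 }}
```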
I am not a kubevirt developer, but I had a thought.
I understand the motivation for generating one from the other. It's DRY, right? But generating a Helm chart from the manifests seems backwards to me. Couldn't you generate the manifests from the Helm chart instead, using `helm template` and default values?
Patching templates into manifests sounds like it will become tricky to keep up with changes to the manifests in future releases. Working in the other direction seems trivial to me (just apply the template).
That way, you'd get:
- a single source of truth (the Helm chart)
- an easy, low-maintenance release process (just run `helm template`)
- guaranteed consistency between the two installation methods
- consistency of template parameters and values across versions
- no scramble to update patches for new versions
- ability to add as much customization as you want, without exponentially increasing complexity
I may be wrong, maybe it's a terrible idea. But I was surprised to see no one mention it yet.
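The release process described here could be sketched with a single `helm template` invocation at build time; the chart path, release name, and output location are hypothetical:

```shell
# Render static install manifests from the chart using its default values;
# --include-crds also emits anything under the chart's crds/ directory
helm template kubevirt ./charts/kubevirt \
  --namespace kubevirt \
  --include-crds \
  > _out/release/kubevirt-operator.yaml
```

Users who want plain manifests would then get exactly what the chart produces with default values, which is what guarantees the consistency between the two installation methods.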
All of this makes perfect sense. The initial goal of the proposal was not to be invasive with regard to how the manifests are currently generated, which is why I didn't include it.
Thank you for your input, @vasiliy-ul!
Custom options will not be overwritten on upgrade.
I had missed some of the RBAC changes from v0.58.1 which were causing the upgrade to fail. The single chart approach did work properly once I added those.
Just wanted to double-check this. So, assuming there is a valid KubeVirt installation running on my cluster and I have enabled some feature gates by editing the CR resource...

```yaml
spec:
  certificateRotateStrategy: {}
  configuration:
    developerConfiguration:
      featureGates:
        - WorkloadEncryptionSEV
```

...The generated CR manifest is:

```yaml
---
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  certificateRotateStrategy: {}
  configuration:
    developerConfiguration:
      featureGates: []
  customizeComponents: {}
  imagePullPolicy: IfNotPresent
  workloadUpdateStrategy: {}
```

I assume it will look similar when generated for the Helm chart. But what will happen on update with Helm? Does Helm perform the same merge?
Helm 3 will consider the previous manifest, the current live state and the new manifest. The flow you described will keep the manually edited feature gate.
We can introduce a relevant functional test covering such scenarios. More info on the topic is available in the official Helm documentation.
Regarding the versioning concerns: upgrading the operator using the manifests will actually also upgrade the CR. I've confirmed that this also works for Helm upgrades. This means that the CR chart can stay at version N while the operator/CRD chart is at version N + M. We'd only need to upgrade it if there was a change to the CRD and/or there's a particular feature which we'd want to enable/disable in the standard CR manifest.

Regenerating the same CR or CRD charts (depending on the approach) is definitely not necessary, and we can opt for doing so only when something has changed. However, I agree that it may bring confusion to the users.

There is also the option of creating subcharts (of either the CR or the CRD) as dependencies of the main chart and installing a single chart only; however, we'd need multiple hooks taking care of the ordering, which would make it so complex that it didn't even make the exotic options list.

Please let me know if any of the comments have not been fully addressed, thanks!
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /lifecycle stale
/remove-lifecycle stale
Hello, @aburdenthehand! We're still interested in contributing a Helm chart and would definitely appreciate it if we are able to get more people to review the proposal and decide on the design and implementation options. In the meantime, we plan on using a manually curated one which we'll bump to the GA release soon. Please keep in mind that (depending on the implementation approach) we might not be able to start working on it right away due to limited capacity.
As an outsider, but a very heavy user of this project, is there any assistance that I can offer to help speed this through review?
@vasiliy-ul do you have any updates on this one? Not exactly sure what the state is here?
@dhiller, please refer to the previous comment from @atanasdinov #224 (comment)
/cc @vladikr @davidvossel
@atanasdinov It's a nice proposal. My only concern is how it can be maintained and supported as part of core KubeVirt. I doubt we have any expertise around Helm...
The main issue with not having a chart or Kustomize is CD: Argo and other CD systems are built around using charts, but KubeVirt/CDI requires some funny dancing around to shoehorn it into the workflow.
@vladikr I believe it's easier to maintain the chart if it's in the same repository as the source code, but I've also seen examples of the opposite. It's pretty much up to the preferences of the maintainers.
Regarding the dependency between CRDs and CRs: there are other very popular charts which deploy CRDs as templates in the same chart as the CRs, e.g. https://github.com/akuity/kargo/tree/main/charts/kargo/templates (see also https://github.com/akuity/kargo/releases#chart-improvements). So it should work to have the CRDs in the same chart as templates, so that CRD upgrades also work.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /lifecycle stale
/remove-lifecycle stale
My preference would be to keep this chart outside of the kubevirt/kubevirt repo but in the kubevirt org.
@davidvossel @mhenriks @awels can you please take a look at this?
As an end user, I wanted to say I really like this proposal; I was searching for the Helm chart today only to discover one did not exist yet (managed by the project). The proposal is very well written and I can tell a good amount of work has gone into it and this ongoing discussion. I really appreciate everyone involved and look forward to when this is accepted.
> * [kubevirt/kubevirt](https://github.com/kubevirt/kubevirt)
> * [kubevirt/containerized-data-importer](https://github.com/kubevirt/containerized-data-importer)
@atanasdinov @vladikr
As I understand it, this might be the only thing actually blocking this moving forward - and I see in comments/have heard quite a bit of support for this proposal :)
Is it reasonable for the helm charts to be in their own repo, which can then be maintained outside of kubevirt and CDI repos by folks with particular experience in helm charts?
Yes, that's perfectly fine. I can update the proposal to mark that a manually curated Helm chart in a separate Git repository is the selected approach.
We still have to decide on the chart layout, though. I assume the safest route is the second option. Does the community agree?
Why not in the same chart, as mentioned in #224 (comment)?
Both CRDs and CRs that use the CRD can't reliably be in the same chart. Long-standing Helm issue. :/
As most folks want to deploy the operator and then use it to immediately launch KubeVirt, which objects do you separate? The CRDs or the CRs?
I generally like to manually apply static CRDs via kubectl, as they can be rather fiddly, then launch everything else with Helm.
Okay, I see. Then I would also prefer the second option, as mentioned above.
> Both CRDs and CRs that use the CRD can't reliably be in the same chart. Long-standing Helm issue. :/

I know that Helm says it's not reliable, but kube-prometheus-stack does it, and that is a rather popular project which heavily uses CRDs. But they can't be upgraded directly.
One can use a dedicated chart that only contains the CRDs, also like what kube-prometheus does.
There are also other, less standard options, like putting all the CRDs in the template dir for Helm instead of in the CRD directory, like what the fluxcd-community charts do.
It would be really nice to be able to use an only-Helm-chart way of managing clusters: automations with GitOps and Helm are very nice, and having to drop to a command line (or write an automation that does) for kubectl apply isn't great at scale. Flux CD has ways of handling this already; I assume Argo CD has the same.
(But I'm also a completely new person to KubeVirt, so I have no idea how KubeVirt handles/changes CRDs over time; this might be a bigger issue to handle than I know. I am subscribed to this topic and eagerly await a Helm-chart-y way to install this project, as the current implementation comes with a bigger initial cost for me/my company, since we are so heavily invested in Helm and Flux. I wanted to chime in because the approach of "just kubectl apply the CRD upgrades" isn't a great scaling option, and other options exist.)
A CRDs chart separate from the main chart is a common solution. It still works with GitOps.
Sticking them in the templates directory does not work around the issue of having CRDs and CRs in the same chart.
This is usually solved with an "install-crds: true/false" flag or with two charts, one for the app, one for the CRDs. Mayastor, for instance, uses the second method.
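The flag-based pattern mentioned here could be sketched with a conditional CRDs subchart declared as a dependency of the main chart; the chart names, versions, and repository path below are hypothetical:

```yaml
# Chart.yaml of the main chart (hypothetical sketch)
apiVersion: v2
name: kubevirt
version: 1.0.0
dependencies:
  - name: kubevirt-crds
    version: 1.0.0
    repository: "file://../kubevirt-crds"
    # disabled via `--set installCrds=false` when CRDs are managed separately
    condition: installCrds
```

The alternative is two independently released charts, where users install the CRDs chart first and the application chart second, which is the kube-prometheus/Mayastor approach discussed above.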
This design proposal addresses the Helm chart installation method alternative. For reference: kubevirt/kubevirt#8347