
[kube-prometheus-stack] Grafana operator integration #5216

Closed
wants to merge 8 commits into from

Conversation


@dzirg44 dzirg44 commented Jan 19, 2025

Grafana Operator Integration

This PR brings integration with the Grafana Operator; in addition to ConfigMaps, it creates custom resources for Grafana (the operator has to be installed independently).
It creates:

  • GrafanaDatasource resources from the datasources (unfortunately we can't reuse ConfigMaps)
  • GrafanaDashboard resources from the ConfigMaps
  • It also fixes sync_grafana_dashboards.py to generate GrafanaDashboard resources dynamically

Tested on my local Kubernetes cluster.
This is a POC pull request, so if this has already been discussed - sorry in advance.
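For illustration, a minimal sketch of the kind of GrafanaDashboard resource the chart could render (assuming the grafana-operator v5 API group grafana.integreatly.org/v1beta1; the name, labels, and instanceSelector are illustrative, not the PR's actual output):

apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
  name: kube-prometheus-stack-nodes        # illustrative name
  labels:
    app: kube-prometheus-stack-grafana
spec:
  instanceSelector:                        # selects the Grafana instance managed by grafana-operator
    matchLabels:
      dashboards: grafana
  json: |
    {"title": "Nodes", "panels": []}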

Special notes for your reviewer

@andrewgkew @gianrubio @gkarthiks @GMartinez-Sisti @jkroepke @scottrigby @Xtigyro @QuentinBisson

Checklist

  • DCO signed
  • Chart Version bumped
  • Title of the PR starts with chart name (e.g. [prometheus-couchdb-exporter])

@dzirg44 dzirg44 changed the title Grafana operator integration [kube-prometheus-stack] Grafana operator integration Jan 20, 2025
Member

Why do we have GrafanaDashboard twice? One here and one in each file of dashboards-1.14.

Author
@dzirg44 dzirg44 Jan 21, 2025

Thank you for mentioning it! I had the same question. In essence I suppose we should have had charts/kube-prometheus-stack/files/dashboards, but we already pull all JSON dashboards via sync_grafana_dashboards.py. Because I couldn't explain it, I assumed somebody made it intentional (for example, if we put any *.json dashboards into the charts/kube-prometheus-stack/templates/grafana/dashboards-1.14 folder, they will be picked up by charts/kube-prometheus-stack/templates/grafana/crd-dashboards.yaml).
So even though I can somewhat understand it, I don't know whether it was legacy behavior or an intended improvement.
@jkroepke

Member
@jkroepke jkroepke Jan 23, 2025

https://github.com/dzirg44/prometheus-community-helm-charts/blob/4de336ab548730f8af342b487140734c8a88c86b/charts/kube-prometheus-stack/hack/sync_grafana_dashboards.py#L41-L47

In general, the sync Python script is aware of extra JSON files in that directory.

However, it's not the intention that users put additional JSON files into this directory. Once the chart is packaged, the user is not able to put additional files into that directory. I know it's possible by cloning this chart, but I would not like to support this.

I would like to keep the same behavior we already have with ConfigMaps (no automatic generation of YAML objects based on the JSON files).

Member

ref: #5233

app.kubernetes.io/instance: grafana
app.kubernetes.io/name: grafana-operator
{{- end }}
datasource:
Member

The config below datasource looks similar to https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/templates/grafana/configmaps-datasources.yaml

Can we move it into a distinct template so that it can be shared between the CRD and non-CRD approaches?

Author

@jkroepke Sure, do you prefer a single file like
datasource.yaml containing both the ConfigMap (datasource) and the GrafanaDatasource? Initially I made one file, but I found that it adds an additional level of complexity, and people may find it more complex than it should be. But if you think that is fine, I would be glad to make the change.

Member

Two distinct files are fine, just move the shared part into a named template inside _helpers.tpl.

You may create a new _helpers.tpl inside the grafana directory.

Author
@dzirg44 dzirg44 Jan 23, 2025

@jkroepke when I use a helper and create a list of dictionaries, I call toYaml at the end, like

  datasource.yaml: |
    apiVersion: 1
    datasources:
    {{- (include "kube-prometheus-stack.datasources" . | fromYaml).datasources | toYaml | nindent 4 }}

but it changes the field order to

  datasource.yaml: |
    apiVersion: 1
    datasources:
    - isDefault: true
      jsonData:
        httpMethod: POST
        timeInterval: 30s
        timeout: null
      name: Prometheus
      type: prometheus
      uid: prometheus
      url: http://prometheus-stack-kube-prom-prometheus.default:9090/
    - isDefault: false
      jsonData:
        handleGrafanaManagedAlerts: false
        implementation: prometheus
      name: Alertmanager
      type: alertmanager
      uid: alertmanager
      url: http://prometheus-stack-kube-prom-alertmanager.default:9093/

Is that okay, or should I create a dict instead of a list?
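(For reference, a minimal sketch of why the order changes, assuming nothing beyond standard Helm behavior: toYaml serializes map keys alphabetically, so the key order inside each entry is not preserved, while the order of the list entries themselves is kept.)

{{- /* keys come back sorted alphabetically, regardless of insertion order */ -}}
{{- $ds := dict "name" "Prometheus" "type" "prometheus" "uid" "prometheus" "isDefault" true }}
{{ $ds | toYaml }}
{{- /* renders as:
isDefault: true
name: Prometheus
type: prometheus
uid: prometheus
*/}}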

POC helper

{{- define "kube-prometheus-stack.datasources" -}}
{{- $scrapeInterval := .Values.grafana.sidecar.datasources.defaultDatasourceScrapeInterval | default .Values.prometheus.prometheusSpec.scrapeInterval | default "30s" }}
{{- $datasources := list }}
{{- if .Values.grafana.sidecar.datasources.defaultDatasourceEnabled }}
{{/* Create jsonData dictionary first */}}
{{- $jsonData := dict 
 "httpMethod" .Values.grafana.sidecar.datasources.httpMethod
 "timeInterval" $scrapeInterval
 "timeout" .Values.grafana.sidecar.datasources.timeout
}}
{{/* Conditionally add exemplarTraceIdDestinations */}}
{{- if .Values.grafana.sidecar.datasources.exemplarTraceIdDestinations }}
{{- $_ := set $jsonData "exemplarTraceIdDestinations" (list (dict 
 "datasourceUid" .Values.grafana.sidecar.datasources.exemplarTraceIdDestinations.datasourceUid
 "name" .Values.grafana.sidecar.datasources.exemplarTraceIdDestinations.traceIdLabelName)) }}
{{- end }}
{{/* Create defaultDS with ordered fields */}}
{{- $defaultDS := dict }}
{{- $_ := set $defaultDS "name" .Values.grafana.sidecar.datasources.name }}
{{- $_ := set $defaultDS "type" "prometheus" }}
{{- $_ := set $defaultDS "uid" .Values.grafana.sidecar.datasources.uid }}
{{- $_ := set $defaultDS "url" (default (printf "http://%s-prometheus.%s:%v/%s"
   (include "kube-prometheus-stack.fullname" .)
   (include "kube-prometheus-stack.namespace" .)
   .Values.prometheus.service.port
   (trimPrefix "/" .Values.prometheus.prometheusSpec.routePrefix)
 ) .Values.grafana.sidecar.datasources.url) }}
{{- $_ := set $defaultDS "isDefault" .Values.grafana.sidecar.datasources.isDefaultDatasource }}
{{- $_ := set $defaultDS "jsonData" $jsonData }}
{{- $datasources = append $datasources $defaultDS }}
{{- end }}

{{/* Same pattern for replica datasources */}}
{{- if .Values.grafana.sidecar.datasources.createPrometheusReplicasDatasources }}
{{- range until (int .Values.prometheus.prometheusSpec.replicas) }}
{{- $replicaDS := dict }}
{{- $_ := set $replicaDS "name" (printf "%s-%d" $.Values.grafana.sidecar.datasources.name .) }}
{{- $_ := set $replicaDS "type" "prometheus" }}
{{- $_ := set $replicaDS "uid" (printf "%s-replica-%d" $.Values.grafana.sidecar.datasources.uid .) }}
{{- $_ := set $replicaDS "url" (printf "http://prometheus-%s-%d.prometheus-operated:9090/%s"
   (include "kube-prometheus-stack.prometheus.crname" $)
   .
   (trimPrefix "/" $.Values.prometheus.prometheusSpec.routePrefix)) }}
{{- $_ := set $replicaDS "isDefault" false }}
{{- $_ := set $replicaDS "jsonData" (dict "timeInterval" $scrapeInterval) }}
{{- if $.Values.grafana.sidecar.datasources.exemplarTraceIdDestinations }}
{{- $_ := set $replicaDS "jsonData" (dict 
   "timeInterval" $scrapeInterval
   "exemplarTraceIdDestinations" (list (dict 
     "datasourceUid" $.Values.grafana.sidecar.datasources.exemplarTraceIdDestinations.datasourceUid
     "name" $.Values.grafana.sidecar.datasources.exemplarTraceIdDestinations.traceIdLabelName))) }}
{{- end }}
{{- $datasources = append $datasources $replicaDS }}
{{- end }}
{{- end }}

{{/* And for alertmanager datasource */}}
{{- if .Values.grafana.sidecar.datasources.alertmanager.enabled }}
{{- $alertmanagerDS := dict }}
{{- $_ := set $alertmanagerDS "name" .Values.grafana.sidecar.datasources.alertmanager.name }}
{{- $_ := set $alertmanagerDS "type" "alertmanager" }}
{{- $_ := set $alertmanagerDS "uid" .Values.grafana.sidecar.datasources.alertmanager.uid }}
{{- $_ := set $alertmanagerDS "url" (default (printf "http://%s-alertmanager.%s:%v/%s"
   (include "kube-prometheus-stack.fullname" .)
   (include "kube-prometheus-stack.namespace" .)
   .Values.alertmanager.service.port
   (trimPrefix "/" .Values.alertmanager.alertmanagerSpec.routePrefix)
 ) .Values.grafana.sidecar.datasources.alertmanager.url) }}
{{- $_ := set $alertmanagerDS "isDefault" false }}
{{- $_ := set $alertmanagerDS "jsonData" (dict
   "handleGrafanaManagedAlerts" .Values.grafana.sidecar.datasources.alertmanager.handleGrafanaManagedAlerts
   "implementation" .Values.grafana.sidecar.datasources.alertmanager.implementation) }}
{{- $datasources = append $datasources $alertmanagerDS }}
{{- end }}

{{- if .Values.grafana.additionalDataSources }}
{{- $datasources = concat $datasources .Values.grafana.additionalDataSources }}
{{- end }}
{{- $result := dict "datasources" $datasources -}}
{{- $result | toYaml -}}
{{- end }}
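To share this helper between the CRD and non-CRD approaches, the GrafanaDatasource template could consume the same named template; a rough sketch (the resource name, the instanceSelector labels, and taking only the first datasource are assumptions for illustration, not the PR's actual template):

apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDatasource
metadata:
  name: {{ include "kube-prometheus-stack.fullname" . }}-prometheus-datasource
  namespace: {{ include "kube-prometheus-stack.namespace" . }}
spec:
  instanceSelector:
    matchLabels:
      dashboards: grafana            # illustrative selector for the operator-managed Grafana
  datasource:
    {{- (include "kube-prometheus-stack.datasources" . | fromYaml).datasources | first | toYaml | nindent 4 }}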

@QuentinBisson
Member

I quite like the general approach here.

@jkroepke do you think it might be time to move the dashboards into their own subchart?

I'm a bit concerned about the possible implications for the Helm release secret size if everything is enabled.

@jkroepke
Member

jkroepke commented Jan 21, 2025

If yes, it should be a distinct PR before merging this.

Here is a list of template sizes, generated via

kubectl get secrets sh.helm.release.v1.kube-prometheus-stack.v1 -o jsonpath='{.data.release}' | base64 -d | base64 -d | zcat - | jq -r '.chart.templates[] | (.data | length | tostring) + ": " + .name' | sort -n
Details
96: templates/extra-objects.yaml
468: templates/NOTES.txt
508: templates/prometheus-operator/_prometheus-operator.tpl
688: templates/prometheus/csi-secret.yaml
844: templates/prometheus-operator/clusterrolebinding.yaml
880: templates/exporters/kube-etcd/endpoints.yaml
888: templates/exporters/kube-proxy/endpoints.yaml
904: templates/prometheus/clusterrolebinding.yaml
928: templates/prometheus-operator/admission-webhooks/deployment/pdb.yaml
1020: templates/prometheus-operator/psp-clusterrolebinding.yaml
1024: templates/prometheus-operator/serviceaccount.yaml
1032: templates/prometheus/extrasecret.yaml
1040: templates/thanos-ruler/extrasecret.yaml
1056: templates/alertmanager/extrasecret.yaml
1060: templates/prometheus/additionalAlertmanagerConfigs.yaml
1068: templates/prometheus/psp-clusterrolebinding.yaml
1072: templates/prometheus/additionalAlertRelabelConfigs.yaml
1084: templates/prometheus/secret.yaml
1096: templates/prometheus-operator/psp-clusterrole.yaml
1116: templates/thanos-ruler/serviceaccount.yaml
1128: templates/prometheus-operator/admission-webhooks/_prometheus-operator-webhook.tpl
1140: templates/alertmanager/psp-rolebinding.yaml
1144: templates/prometheus/psp-clusterrole.yaml
1188: templates/prometheus-operator/admission-webhooks/job-patch/role.yaml
1212: templates/prometheus/serviceaccount.yaml
1228: templates/alertmanager/psp-role.yaml
1240: templates/prometheus-operator/admission-webhooks/deployment/serviceaccount.yaml
1252: templates/thanos-ruler/podDisruptionBudget.yaml
1264: templates/exporters/kube-scheduler/endpoints.yaml
1272: templates/alertmanager/podDisruptionBudget.yaml
1280: templates/alertmanager/serviceaccount.yaml
1296: templates/thanos-ruler/secret.yaml
1356: templates/exporters/core-dns/service.yaml
1360: templates/prometheus/additionalScrapeConfigs.yaml
1392: templates/prometheus-operator/admission-webhooks/job-patch/clusterrolebinding.yaml
1404: templates/exporters/kube-controller-manager/endpoints.yaml
1416: templates/prometheus-operator/networkpolicy.yaml
1436: templates/prometheus/podDisruptionBudget.yaml
1460: templates/prometheus-operator/admission-webhooks/job-patch/rolebinding.yaml
1464: templates/grafana/configmap-dashboards.yaml
1480: templates/exporters/kube-etcd/service.yaml
1504: templates/exporters/kube-proxy/service.yaml
1540: templates/exporters/kube-dns/service.yaml
1636: templates/alertmanager/secret.yaml
1656: templates/prometheus-operator/admission-webhooks/job-patch/serviceaccount.yaml
1716: templates/prometheus/ciliumnetworkpolicy.yaml
1716: templates/prometheus/podmonitors.yaml
1732: templates/prometheus/servicemonitors.yaml
1756: templates/alertmanager/psp.yaml
1756: templates/prometheus-operator/psp.yaml
1756: templates/prometheus/networkpolicy.yaml
1912: templates/prometheus-operator/admission-webhooks/job-patch/clusterrole.yaml
1980: templates/prometheus-operator/ciliumnetworkpolicy.yaml
1988: templates/thanos-ruler/route.yaml
2008: templates/prometheus/_rules.tpl
2024: templates/prometheus-operator/admission-webhooks/job-patch/networkpolicy-createSecret.yaml
2032: templates/prometheus-operator/admission-webhooks/job-patch/networkpolicy-patchWebhook.yaml
2048: templates/prometheus-operator/aggregate-clusterroles.yaml
2080: templates/prometheus/additionalPrometheusRules.yaml
2080: templates/prometheus/route.yaml
2096: templates/alertmanager/route.yaml
2096: templates/exporters/kube-scheduler/service.yaml
2208: templates/prometheus-operator/admission-webhooks/job-patch/psp.yaml
2300: templates/prometheus-operator/verticalpodautoscaler.yaml
2304: templates/prometheus/clusterrole.yaml
2344: templates/exporters/kube-controller-manager/service.yaml
2344: templates/prometheus-operator/admission-webhooks/job-patch/ciliumnetworkpolicy-createSecret.yaml
2352: templates/prometheus-operator/admission-webhooks/job-patch/ciliumnetworkpolicy-patchWebhook.yaml
2368: templates/prometheus-operator/clusterrole.yaml
2404: templates/prometheus/psp.yaml
2524: templates/prometheus-operator/servicemonitor.yaml
2684: templates/prometheus/servicemonitorThanosSidecar.yaml
2708: templates/prometheus/serviceThanosSidecar.yaml
2716: templates/alertmanager/serviceperreplica.yaml
2728: templates/exporters/core-dns/servicemonitor.yaml
2764: templates/exporters/kube-api-server/servicemonitor.yaml
2904: templates/prometheus/serviceThanosSidecarExternal.yaml
3008: templates/exporters/kube-proxy/servicemonitor.yaml
3072: templates/prometheus/rules-1.14/k8s.rules.container_memory_rss.yaml
3076: templates/prometheus/rules-1.14/kube-prometheus-general.rules.yaml
3084: templates/prometheus/rules-1.14/k8s.rules.container_memory_swap.yaml
3092: templates/prometheus/rules-1.14/k8s.rules.container_memory_cache.yaml
3096: templates/thanos-ruler/service.yaml
3176: templates/prometheus-operator/certmanager.yaml
3200: templates/prometheus-operator/service.yaml
3212: templates/prometheus/rules-1.14/k8s.rules.container_memory_working_set_bytes.yaml
3340: templates/prometheus/serviceperreplica.yaml
3412: templates/prometheus/rules-1.14/k8s.rules.container_cpu_usage_seconds_total.yaml
3428: templates/exporters/kube-dns/servicemonitor.yaml
3584: templates/prometheus/rules-1.14/kubernetes-system-kube-proxy.yaml
3732: templates/exporters/kube-scheduler/servicemonitor.yaml
3744: templates/prometheus/rules-1.14/node-network.yaml
3748: templates/exporters/kube-etcd/servicemonitor.yaml
3768: templates/prometheus/rules-1.14/kubernetes-system-scheduler.yaml
3796: templates/thanos-ruler/ingress.yaml
3892: templates/prometheus/ingress.yaml
3904: templates/prometheus/rules-1.14/kubernetes-system-controller-manager.yaml
3948: templates/prometheus/ingressThanosSidecar.yaml
3976: templates/alertmanager/service.yaml
3980: templates/prometheus/ingressperreplica.yaml
4000: templates/alertmanager/ingressperreplica.yaml
4016: templates/exporters/kube-controller-manager/servicemonitor.yaml
4016: templates/prometheus/rules-1.14/config-reloaders.yaml
4064: templates/prometheus/rules-1.14/kube-apiserver-histogram.rules.yaml
4168: templates/alertmanager/ingress.yaml
4348: templates/prometheus-operator/admission-webhooks/deployment/service.yaml
4676: templates/prometheus/service.yaml
4904: templates/prometheus/rules-1.14/k8s.rules.container_cpu_limits.yaml
4936: templates/prometheus/rules-1.14/k8s.rules.container_cpu_requests.yaml
4936: templates/thanos-ruler/servicemonitor.yaml
4952: templates/prometheus/rules-1.14/k8s.rules.container_memory_limits.yaml
4984: templates/prometheus/rules-1.14/k8s.rules.container_memory_requests.yaml
5020: templates/prometheus-operator/admission-webhooks/job-patch/job-createSecret.yaml
5120: templates/prometheus-operator/admission-webhooks/job-patch/job-patchWebhook.yaml
5356: templates/prometheus-operator/admission-webhooks/mutatingWebhookConfiguration.yaml
5376: templates/prometheus/rules-1.14/kubelet.rules.yaml
5396: templates/prometheus-operator/admission-webhooks/validatingWebhookConfiguration.yaml
5728: templates/prometheus/servicemonitor.yaml
6032: templates/grafana/configmaps-datasources.yaml
6320: templates/alertmanager/servicemonitor.yaml
6592: templates/prometheus/rules-1.14/kubernetes-system.yaml
6980: templates/prometheus/rules-1.14/k8s.rules.pod_owner.yaml
7088: templates/prometheus/rules-1.14/node.rules.yaml
7236: templates/prometheus/rules-1.14/kube-prometheus-node-recording.rules.yaml
8420: templates/prometheus/rules-1.14/general.rules.yaml
8820: templates/exporters/kubelet/servicemonitor.yaml
9232: templates/grafana/dashboards-1.14/k8s-windows-cluster-rsrc-use.yaml
10012: templates/prometheus/rules-1.14/kube-scheduler.rules.yaml
10040: templates/grafana/dashboards-1.14/alertmanager-overview.yaml
10200: templates/grafana/dashboards-1.14/persistentvolumesusage.yaml
11188: templates/grafana/dashboards-1.14/pod-total.yaml
11352: templates/prometheus-operator/admission-webhooks/deployment/deployment.yaml
11380: templates/prometheus/rules-1.14/node-exporter.rules.yaml
11624: templates/prometheus/rules-1.14/kube-apiserver-slos.yaml
11720: templates/grafana/dashboards-1.14/k8s-resources-windows-namespace.yaml
11928: templates/grafana/dashboards-1.14/grafana-overview.yaml
12052: templates/prometheus/rules-1.14/kube-state-metrics.yaml
12684: templates/grafana/dashboards-1.14/node-rsrc-use.yaml
13168: templates/thanos-ruler/ruler.yaml
13184: templates/grafana/dashboards-1.14/etcd.yaml
13404: templates/grafana/dashboards-1.14/k8s-resources-windows-pod.yaml
13412: templates/alertmanager/alertmanager.yaml
13740: templates/grafana/dashboards-1.14/controller-manager.yaml
13784: templates/grafana/dashboards-1.14/k8s-windows-node-rsrc-use.yaml
14088: templates/prometheus/rules-1.14/k8s.rules.container_resource.yaml
14264: templates/prometheus/rules-1.14/windows.pod.rules.yaml
14468: templates/grafana/dashboards-1.14/proxy.yaml
14756: templates/grafana/dashboards-1.14/k8s-resources-multicluster.yaml
15164: templates/grafana/dashboards-1.14/scheduler.yaml
15300: templates/grafana/dashboards-1.14/nodes-aix.yaml
15468: templates/grafana/dashboards-1.14/k8s-resources-windows-cluster.yaml
15964: templates/grafana/dashboards-1.14/k8s-resources-node.yaml
16056: templates/prometheus/rules-1.14/kubernetes-system-apiserver.yaml
16072: templates/grafana/dashboards-1.14/nodes.yaml
16696: templates/grafana/dashboards-1.14/workload-total.yaml
16964: templates/grafana/dashboards-1.14/nodes-darwin.yaml
17056: templates/_helpers.tpl
17872: templates/grafana/dashboards-1.14/namespace-by-pod.yaml
18356: templates/prometheus-operator/deployment.yaml
18380: templates/prometheus/rules-1.14/kubernetes-storage.yaml
18716: templates/grafana/dashboards-1.14/apiserver.yaml
21136: templates/prometheus/rules-1.14/windows.node.rules.yaml
21896: templates/prometheus/rules-1.14/prometheus-operator.yaml
22084: templates/prometheus/rules-1.14/kube-apiserver-availability.rules.yaml
22792: templates/prometheus/rules-1.14/kubernetes-resources.yaml
23004: templates/grafana/dashboards-1.14/cluster-total.yaml
23516: templates/prometheus/rules-1.14/alertmanager.rules.yaml
23988: templates/grafana/dashboards-1.14/prometheus.yaml
26832: templates/grafana/dashboards-1.14/namespace-by-workload.yaml
27556: templates/grafana/dashboards-1.14/kubelet.yaml
29856: templates/prometheus/rules-1.14/kubernetes-system-kubelet.yaml
30156: templates/grafana/dashboards-1.14/node-cluster-rsrc-use.yaml
31048: templates/prometheus/prometheus.yaml
31356: templates/grafana/dashboards-1.14/prometheus-remote-write.yaml
32912: templates/grafana/dashboards-1.14/k8s-coredns.yaml
33400: templates/prometheus/rules-1.14/etcd.yaml
33732: templates/grafana/dashboards-1.14/k8s-resources-workload.yaml
35584: templates/grafana/dashboards-1.14/k8s-resources-pod.yaml
36772: templates/prometheus/rules-1.14/kube-apiserver-burnrate.rules.yaml
37268: templates/grafana/dashboards-1.14/k8s-resources-namespace.yaml
37532: templates/grafana/dashboards-1.14/k8s-resources-cluster.yaml
37916: templates/grafana/dashboards-1.14/k8s-resources-workloads-namespace.yaml
43212: templates/prometheus/rules-1.14/kubernetes-apps.yaml
57888: templates/prometheus/rules-1.14/prometheus.yaml
60960: templates/prometheus/rules-1.14/node-exporter.yaml

@asherf
Member

asherf commented Jan 22, 2025

IMO this should be a major version bump (68.x->69.x)
also updating the docs (README) is probably a good idea

Member
@jkroepke jkroepke left a comment

@dzirg44 in general, I'm open to including Grafana Operator CRDs in kube-prometheus-stack. However, as maintainers, our time is a bit limited and there are a lot of diffs and parallel discussions here.

I would kindly ask you to split this PR into two distinct PRs: one covering the dashboard topic, one covering the datasource topic.

From my point of view, it would really help, since the diffs would be smaller, and we could merge one PR once we all think it's ready, without waiting for the other topic.

WDYT?

@jkroepke
Member

IMO this should be a major version bump (68.x->69.x) also updating the docs (README) is probably a good idea

Could you please explain why this is a major bump? Which functionality is going to break?

@dzirg44
Author

dzirg44 commented Jan 23, 2025

@dzirg44 in general, I'm open to including Grafana Operator CRDs in kube-prometheus-stack. However, as maintainers, our time is a bit limited and there are a lot of diffs and parallel discussions here.

I would kindly ask you to split this PR into two distinct PRs: one covering the dashboard topic, one covering the datasource topic.

From my point of view, it would really help, since the diffs would be smaller, and we could merge one PR once we all think it's ready, without waiting for the other topic.

WDYT?

No problem! I will create them. You can close this one. @jkroepke

@jkroepke jkroepke closed this Jan 23, 2025
@asherf
Member

asherf commented Jan 23, 2025

IMO this should be a major version bump (68.x->69.x) also updating the docs (README) is probably a good idea

Could you please explain why this is a major bump? Which functionality is going to break?

I think it is mostly about requiring the user to install the Grafana-specific CRDs. Unless I am missing something.

@jkroepke
Member

IMO this should be a major version bump (68.x->69.x) also updating the docs (README) is probably a good idea

Could you please explain why this is a major bump? Which functionality is going to break?

I think it is mostly about requiring the user to install the Grafana-specific CRDs. Unless I am missing something.

The options are not enabled by default. To use the custom resources, users need https://github.com/grafana/grafana-operator anyway, and the CRDs are part of the grafana-operator installation.

kube-prometheus-stack will not deliver the CRDs.
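A hypothetical values excerpt illustrating the opt-in nature (the key names are illustrative and may differ from the PR's actual values schema):

grafana:
  # CRD-based objects would only be rendered when explicitly enabled;
  # the GrafanaDashboard/GrafanaDatasource CRDs themselves come from the
  # grafana-operator installation, not from kube-prometheus-stack.
  grafanaOperator:
    dashboards:
      enabled: false
    datasources:
      enabled: false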
