[kube-prometheus-stack] Grafana operator integration #5216
Conversation
Signed-off-by: Oleg Tsymbal <[email protected]>
Why do we have GrafanaDashboard twice? Once here and once in each file of dashboards-1.14?
Thank you for mentioning it! I had the same question. In essence, I suppose we should have had charts/kube-prometheus-stack/files/dashboards, but we already pull in all JSON dashboards via sync_grafana_dashboards.py.
Because I couldn't explain it, and (probably) somebody made it intentional, I kept the behavior: if we put any *.json dashboards into the charts/kube-prometheus-stack/templates/grafana/dashboards-1.14 folder, a GrafanaDashboard will be created for them by charts/kube-prometheus-stack/templates/grafana/crd-dashboards.yaml.
So, even if I can somehow understand it, I don't know whether it is a piece of legacy or an intentional improvement.
@jkroepke
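For context, the mechanism discussed here — a template that turns bundled *.json files into GrafanaDashboard objects — could look roughly like the sketch below. This is illustrative only, not the PR's actual crd-dashboards.yaml: the files/dashboards path, the label values and the absence of an enable flag are assumptions, and the apiVersion follows grafana-operator v5.
{{- /* Hypothetical sketch, not the PR's crd-dashboards.yaml: the path and labels are assumptions. */}}
{{- range $path, $_ := .Files.Glob "files/dashboards/*.json" }}
---
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
  name: {{ base $path | trimSuffix ".json" }}
spec:
  instanceSelector:
    matchLabels:
      dashboards: grafana
  json: |
{{ $.Files.Get $path | indent 4 }}
{{- end }}
Note that Helm's .Files helpers cannot read anything under templates/, which is one reason a files/ directory (as mentioned above) would be the natural place for user-supplied JSON.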
In general, the sync Python script is aware of the extra JSON files in that directory.
However, it's not the intention to put additional JSON into this directory. Once the chart is packaged, the user is not able to put additional files into that directory. I know it's possible by cloning this chart, but I would not like to support that.
I would like to keep the same behavior as we already have with ConfigMaps (no automatic generation of YAML objects based on the JSON files).
ref: #5233
app.kubernetes.io/instance: grafana
app.kubernetes.io/name: grafana-operator
{{- end }}
datasource: |
The config below datasource looks similar to https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/templates/grafana/configmaps-datasources.yaml.
Can we move it into a distinct template, so that the template can be shared between the CRD and non-CRD approaches?
@jkroepke Sure. Would you prefer a single file like datasource.yaml that contains both the ConfigMap (datasource) and the GrafanaDatasource? Initially I made one file, but I found that it adds an extra level of complexity, and people might find it more complex than it should be. But if you think that it is fine, I would be glad to make the change.
2 distinct files are fine; just move the shared part into a named template inside _helper.tpl.
You may create a new _helper.tpl inside the grafana directory.
@jkroepke When I use a helper and build a list of dictionaries, I apply toYaml at the end, like:
datasource.yaml: |
apiVersion: 1
datasources:
{{- (include "kube-prometheus-stack.datasources" . | fromYaml).datasources | toYaml | nindent 4 }}
but it breaks the field order, producing:
datasource.yaml: |
apiVersion: 1
datasources:
- isDefault: true
jsonData:
httpMethod: POST
timeInterval: 30s
timeout: null
name: Prometheus
type: prometheus
uid: prometheus
url: http://prometheus-stack-kube-prom-prometheus.default:9090/
- isDefault: false
jsonData:
handleGrafanaManagedAlerts: false
implementation: prometheus
name: Alertmanager
type: alertmanager
uid: alertmanager
url: http://prometheus-stack-kube-prom-alertmanager.default:9093/
Is that okay, or should I create not a list but only a dict? (The order changes because toYaml emits map keys alphabetically, so the insertion order set in the helper isn't preserved.)
POC helper
{{- define "kube-prometheus-stack.datasources" -}}
{{- $scrapeInterval := .Values.grafana.sidecar.datasources.defaultDatasourceScrapeInterval | default .Values.prometheus.prometheusSpec.scrapeInterval | default "30s" }}
{{- $datasources := list }}
{{- if .Values.grafana.sidecar.datasources.defaultDatasourceEnabled }}
{{/* Create jsonData dictionary first */}}
{{- $jsonData := dict
"httpMethod" .Values.grafana.sidecar.datasources.httpMethod
"timeInterval" $scrapeInterval
"timeout" .Values.grafana.sidecar.datasources.timeout
}}
{{/* Conditionally add exemplarTraceIdDestinations */}}
{{- if .Values.grafana.sidecar.datasources.exemplarTraceIdDestinations }}
{{- $_ := set $jsonData "exemplarTraceIdDestinations" (list (dict
"datasourceUid" .Values.grafana.sidecar.datasources.exemplarTraceIdDestinations.datasourceUid
"name" .Values.grafana.sidecar.datasources.exemplarTraceIdDestinations.traceIdLabelName)) }}
{{- end }}
{{/* Create defaultDS with ordered fields */}}
{{- $defaultDS := dict }}
{{- $_ := set $defaultDS "name" .Values.grafana.sidecar.datasources.name }}
{{- $_ := set $defaultDS "type" "prometheus" }}
{{- $_ := set $defaultDS "uid" .Values.grafana.sidecar.datasources.uid }}
{{- $_ := set $defaultDS "url" (default (printf "http://%s-prometheus.%s:%v/%s"
(include "kube-prometheus-stack.fullname" .)
(include "kube-prometheus-stack.namespace" .)
.Values.prometheus.service.port
(trimPrefix "/" .Values.prometheus.prometheusSpec.routePrefix)
) .Values.grafana.sidecar.datasources.url) }}
{{- $_ := set $defaultDS "isDefault" .Values.grafana.sidecar.datasources.isDefaultDatasource }}
{{- $_ := set $defaultDS "jsonData" $jsonData }}
{{- $datasources = append $datasources $defaultDS }}
{{- end }}
{{/* Same pattern for replica datasources */}}
{{- if .Values.grafana.sidecar.datasources.createPrometheusReplicasDatasources }}
{{- range until (int .Values.prometheus.prometheusSpec.replicas) }}
{{- $replicaDS := dict }}
{{- $_ := set $replicaDS "name" (printf "%s-%d" $.Values.grafana.sidecar.datasources.name .) }}
{{- $_ := set $replicaDS "type" "prometheus" }}
{{- $_ := set $replicaDS "uid" (printf "%s-replica-%d" $.Values.grafana.sidecar.datasources.uid .) }}
{{- $_ := set $replicaDS "url" (printf "http://prometheus-%s-%d.prometheus-operated:9090/%s"
(include "kube-prometheus-stack.prometheus.crname" $)
.
(trimPrefix "/" $.Values.prometheus.prometheusSpec.routePrefix)) }}
{{- $_ := set $replicaDS "isDefault" false }}
{{- $_ := set $replicaDS "jsonData" (dict "timeInterval" $scrapeInterval) }}
{{- if $.Values.grafana.sidecar.datasources.exemplarTraceIdDestinations }}
{{- $_ := set $replicaDS "jsonData" (dict
"timeInterval" $scrapeInterval
"exemplarTraceIdDestinations" (list (dict
"datasourceUid" $.Values.grafana.sidecar.datasources.exemplarTraceIdDestinations.datasourceUid
"name" $.Values.grafana.sidecar.datasources.exemplarTraceIdDestinations.traceIdLabelName))) }}
{{- end }}
{{- $datasources = append $datasources $replicaDS }}
{{- end }}
{{- end }}
{{/* And for alertmanager datasource */}}
{{- if .Values.grafana.sidecar.datasources.alertmanager.enabled }}
{{- $alertmanagerDS := dict }}
{{- $_ := set $alertmanagerDS "name" .Values.grafana.sidecar.datasources.alertmanager.name }}
{{- $_ := set $alertmanagerDS "type" "alertmanager" }}
{{- $_ := set $alertmanagerDS "uid" .Values.grafana.sidecar.datasources.alertmanager.uid }}
{{- $_ := set $alertmanagerDS "url" (default (printf "http://%s-alertmanager.%s:%v/%s"
(include "kube-prometheus-stack.fullname" .)
(include "kube-prometheus-stack.namespace" .)
.Values.alertmanager.service.port
(trimPrefix "/" .Values.alertmanager.alertmanagerSpec.routePrefix)
) .Values.grafana.sidecar.datasources.alertmanager.url) }}
{{- $_ := set $alertmanagerDS "isDefault" false }}
{{- $_ := set $alertmanagerDS "jsonData" (dict
"handleGrafanaManagedAlerts" .Values.grafana.sidecar.datasources.alertmanager.handleGrafanaManagedAlerts
"implementation" .Values.grafana.sidecar.datasources.alertmanager.implementation) }}
{{- $datasources = append $datasources $alertmanagerDS }}
{{- end }}
{{- if .Values.grafana.additionalDataSources }}
{{- $datasources = concat $datasources .Values.grafana.additionalDataSources }}
{{- end }}
{{- $result := dict "datasources" $datasources -}}
{{- $result | toYaml -}}
{{- end }}
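For illustration only (an assumption, not the PR's actual template): a CRD-based consumer of this helper could render one GrafanaDatasource object per entry, while the ConfigMap variant keeps using the fromYaml/toYaml pattern shown above. The grafana-operator v5 apiVersion, the naming and the instanceSelector labels below are assumptions.
{{- /* Hypothetical consumer of the shared helper; naming and labels are illustrative only. */}}
{{- range (include "kube-prometheus-stack.datasources" . | fromYaml).datasources }}
---
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDatasource
metadata:
  name: {{ .name | lower | replace " " "-" }}
spec:
  instanceSelector:
    matchLabels:
      dashboards: grafana
  datasource:
{{ toYaml . | indent 4 }}
{{- end }}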
charts/kube-prometheus-stack/templates/grafana/configmap-dashboards.yaml
I quite like the general approach here. @jkroepke, do you think it might be time to move the dashboards into their own subchart? I'm a bit afraid of the possible implications for the Helm release secret size if everything is enabled.
If yes, it should be a distinct PR before merging this one. Here is a list of template sizes, obtained via:
kubectl get secrets sh.helm.release.v1.kube-prometheus-stack.v1 -o jsonpath='{.data.release}' | base64 -d | base64 -d | zcat - | jq -r '.chart.templates[] | (.data | length | tostring) + ": " + .name' | sort -n
Signed-off-by: Oleg Tsymbal <[email protected]>
IMO this should be a major version bump (68.x -> 69.x).
@dzirg44 In general, I'm open to including Grafana Operator CRDs in kube-prometheus-stack. However, as maintainers our time is a bit limited, and there are a lot of different, parallel discussions here.
I would kindly ask you to split this PR into 2 distinct PRs: one covering the dashboard topic, one covering the datasource topic.
From my point of view, that would really help, since the diffs are smaller. Plus, we can merge one PR if we all think it's ready, without waiting for the other topic.
WDYT?
Could you please explain why this is a major bump? Which functionality is going to break?
No problem! I will create them. You can close this one. @jkroepke
I think it is mostly about requiring the user to install the Grafana-specific CRDs, unless I am missing something.
The options are not enabled by default. To use the custom resources, users need https://github.com/grafana/grafana-operator anyway, and the CRDs are part of the grafana-operator installation. kube-prometheus-stack will not deliver the CRDs.
Grafana Operator Integration
This PR brings the integration with Grafana Operator and, in addition to the ConfigMaps, creates custom resources for Grafana (the Grafana Operator itself should be installed independently).
It creates:
- GrafanaDatasource resources from the datasources (unfortunately we can't reuse the ConfigMaps)
- GrafanaDashboard resources from the ConfigMaps
- sync_grafana_dashboards.py is extended to generate GrafanaDashboard dynamically
Tested on my local Kubernetes cluster.
This is a POC pull request, so if this has already been discussed, sorry in advance.
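For a rough idea of what gets created (illustrative only — the name, labels and selector values below are assumptions, not the PR's exact output), one generated GrafanaDashboard object might look like:
apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
  name: kube-prometheus-stack-nodes
  labels:
    app.kubernetes.io/instance: grafana
spec:
  instanceSelector:
    matchLabels:
      dashboards: grafana
  json: |
    {"title": "Node Exporter / Nodes", "panels": []}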
Special notes for your reviewer
@andrewgkew @gianrubio @gkarthiks @GMartinez-Sisti @jkroepke @scottrigby @Xtigyro @QuentinBisson
Checklist
- [ ] Title of the PR starts with chart name (e.g. [prometheus-couchdb-exporter])