pipelines-github-app bot commented Aug 9, 2025

This PR contains the following updates:

| Package | Update | Change |
| --- | --- | --- |
| kube-prometheus-stack (source) | major | `75.18.1` -> `76.5.1` |
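
For context, a rough sketch of what this bump corresponds to when the chart is consumed as a Helm dependency; the wrapper chart name and file layout below are hypothetical and may not match this repo's actual GitOps structure:

```yaml
# Hypothetical Chart.yaml of a wrapper chart; names are illustrative, not this repo's layout.
apiVersion: v2
name: prometheus
version: 0.1.0
dependencies:
  - name: kube-prometheus-stack
    version: 76.5.1  # bumped from 75.18.1 by this PR
    repository: https://prometheus-community.github.io/helm-charts
```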

Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

prometheus-community/helm-charts (kube-prometheus-stack)

v76.5.1

Compare Source

kube-prometheus-stack collects Kubernetes manifests, Grafana dashboards, and Prometheus rules combined with documentation and scripts to provide easy to operate end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.
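
As an aside, because the stack is operated via the Prometheus Operator, additional workloads are typically scraped by adding a ServiceMonitor that the operator reconciles. A minimal sketch follows; the application name, namespace, and port are hypothetical, and it assumes the chart's default behaviour of selecting ServiceMonitors labelled release: kube-prometheus-stack.

```yaml
# Hypothetical ServiceMonitor; app name, namespace and port name are illustrative.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: default
  labels:
    release: kube-prometheus-stack  # matches the selector used by this chart's Prometheus defaults
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app
  endpoints:
    - port: http-metrics
      interval: 30s
```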

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6080

Full Changelog: prometheus-community/helm-charts@prometheus-ipmi-exporter-0.6.3...kube-prometheus-stack-76.5.1

v76.5.0

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@prometheus-27.32.0...kube-prometheus-stack-76.5.0

v76.4.1

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6070

Full Changelog: prometheus-community/helm-charts@alertmanager-1.25.0...kube-prometheus-stack-76.4.1

v76.4.0

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6059

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-76.3.1...kube-prometheus-stack-76.4.0

v76.3.1

Compare Source

What's Changed

New Contributors

Full Changelog: prometheus-community/helm-charts@alertmanager-snmp-notifier-2.1.0...kube-prometheus-stack-76.3.1

v76.3.0

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-76.2.2...kube-prometheus-stack-76.3.0

v76.2.2

Compare Source

What's Changed

  • [CI] Update actions/create-github-app-token action to v2.1.1 by @renovate[bot] in #6043
  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6046

Full Changelog: prometheus-community/helm-charts@prometheus-adapter-5.1.0...kube-prometheus-stack-76.2.2

v76.2.1

Compare Source

What's Changed

New Contributors

Full Changelog: prometheus-community/helm-charts@prometheus-27.30.0...kube-prometheus-stack-76.2.1

v76.2.0

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@prometheus-adapter-5.0.0...kube-prometheus-stack-76.2.0

v76.1.0

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6032

Full Changelog: prometheus-community/helm-charts@prometheus-operator-admission-webhook-0.29.3...kube-prometheus-stack-76.1.0

v76.0.0

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@kube-state-metrics-6.1.4...kube-prometheus-stack-76.0.0


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

pipelines-github-app bot added labels app/prometheus (Changes made to Prometheus application), env/genmachine (Changes made in the Talos cluster), renovate/helm (Changes related to Helm Chart update), and type/major on Aug 9, 2025

pipelines-github-app bot commented Aug 9, 2025

--- main/kube-prometheus-stack_gitops_manifests_prometheus_genmachine_manifest_main.yaml	2025-10-16 03:33:17.495689716 +0000
+++ pr/kube-prometheus-stack_gitops_manifests_prometheus_genmachine_manifest_pr.yaml	2025-10-16 03:33:10.018657981 +0000
@@ -1,31 +1,31 @@
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 automountServiceAccountToken: true
 metadata:
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-9.3.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
   name: kube-prometheus-stack-grafana
   namespace: default
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 automountServiceAccountToken: true
 metadata:
   labels:    
-    helm.sh/chart: kube-state-metrics-6.1.0
+    helm.sh/chart: kube-state-metrics-6.1.4
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "2.16.0"
     release: kube-prometheus-stack
   name: kube-prometheus-stack-kube-state-metrics
   namespace: default
 ---
@@ -52,63 +52,63 @@
 metadata:
   name: kube-prometheus-stack-alertmanager
   namespace: default
   labels:
     app: kube-prometheus-stack-alertmanager
     app.kubernetes.io/name: kube-prometheus-stack-alertmanager
     app.kubernetes.io/component: alertmanager
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 automountServiceAccountToken: true
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: kube-prometheus-stack-operator
   namespace: default
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 automountServiceAccountToken: true
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: kube-prometheus-stack-prometheus
   namespace: default
   labels:
     app: kube-prometheus-stack-prometheus
     app.kubernetes.io/name: kube-prometheus-stack-prometheus
     app.kubernetes.io/component: prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 automountServiceAccountToken: true
 ---
 # Source: kube-prometheus-stack/charts/prometheus-blackbox-exporter/templates/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: kube-prometheus-stack-prometheus-blackbox-exporter
   namespace: default
@@ -133,21 +133,21 @@
   namespace: default
 automountServiceAccountToken: true
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/secret.yaml
 apiVersion: v1
 kind: Secret
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-9.3.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 type: Opaque
 data:
   
   admin-user: "YWRtaW4="
   admin-password: "cGFzc3dvcmQ="
   ldap-toml: ""
 ---
@@ -155,34 +155,34 @@
 apiVersion: v1
 kind: Secret
 metadata:
   name: alertmanager-kube-prometheus-stack-alertmanager
   namespace: default
   labels:
     app: kube-prometheus-stack-alertmanager
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 data:
   alertmanager.yaml: "Z2xvYmFsOgogIHJlc29sdmVfdGltZW91dDogNW0KaW5oaWJpdF9ydWxlczoKLSBlcXVhbDoKICAtIG5hbWVzcGFjZQogIC0gYWxlcnRuYW1lCiAgc291cmNlX21hdGNoZXJzOgogIC0gc2V2ZXJpdHkgPSBjcml0aWNhbAogIHRhcmdldF9tYXRjaGVyczoKICAtIHNldmVyaXR5ID1+IHdhcm5pbmd8aW5mbwotIGVxdWFsOgogIC0gbmFtZXNwYWNlCiAgLSBhbGVydG5hbWUKICBzb3VyY2VfbWF0Y2hlcnM6CiAgLSBzZXZlcml0eSA9IHdhcm5pbmcKICB0YXJnZXRfbWF0Y2hlcnM6CiAgLSBzZXZlcml0eSA9IGluZm8KLSBlcXVhbDoKICAtIG5hbWVzcGFjZQogIHNvdXJjZV9tYXRjaGVyczoKICAtIGFsZXJ0bmFtZSA9IEluZm9JbmhpYml0b3IKICB0YXJnZXRfbWF0Y2hlcnM6CiAgLSBzZXZlcml0eSA9IGluZm8KLSB0YXJnZXRfbWF0Y2hlcnM6CiAgLSBhbGVydG5hbWUgPSBJbmZvSW5oaWJpdG9yCnJlY2VpdmVyczoKLSBuYW1lOiAibnVsbCIKcm91dGU6CiAgZ3JvdXBfYnk6CiAgLSBuYW1lc3BhY2UKICBncm91cF9pbnRlcnZhbDogNW0KICBncm91cF93YWl0OiAzMHMKICByZWNlaXZlcjogIm51bGwiCiAgcmVwZWF0X2ludGVydmFsOiAxMmgKICByb3V0ZXM6CiAgLSBtYXRjaGVyczoKICAgIC0gYWxlcnRuYW1lID0gIldhdGNoZG9nIgogICAgcmVjZWl2ZXI6ICJudWxsIgp0ZW1wbGF0ZXM6Ci0gL2V0Yy9hbGVydG1hbmFnZXIvY29uZmlnLyoudG1wbA=="
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/configmap-dashboard-provider.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-9.3.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
   name: kube-prometheus-stack-grafana-config-dashboards
   namespace: default
 data:
   provider.yaml: |-
     apiVersion: 1
     providers:
       - name: 'sidecarProvider'
@@ -195,21 +195,21 @@
           foldersFromFilesStructure: true
           path: /tmp/dashboards
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-9.3.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 data:
   
   plugins: grafana-piechart-panel,grafana-polystat-panel,grafana-clock-panel
   grafana.ini: |
     [analytics]
     check_for_updates = true
     [grafana_net]
@@ -418,103 +418,103 @@
       "https://raw.githubusercontent.com/spegel-org/spegel/refs/heads/main/charts/spegel/monitoring/grafana-dashboard.json" \
     > "/var/lib/grafana/dashboards/grafana-dashboards-system/spegel.json"
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-argocd
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-9.3.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
     dashboard-provider: grafana-dashboards-argocd
 data:
   {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-kubernetes
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-9.3.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
     dashboard-provider: grafana-dashboards-kubernetes
 data:
   {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-network
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-9.3.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
     dashboard-provider: grafana-dashboards-network
 data:
   {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-storage
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-9.3.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
     dashboard-provider: grafana-dashboards-storage
 data:
   {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-system
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-9.3.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
     dashboard-provider: grafana-dashboards-system
 data:
   {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/grafana/configmaps-datasources.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-datasource
   namespace: default
   labels:
     grafana_datasource: "1"
     app: kube-prometheus-stack-grafana
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 data:
   datasource.yaml: |-
     apiVersion: 1
     datasources:
     - access: proxy
       isDefault: true
       name: Prometheus
       type: prometheus
@@ -550,36 +550,36 @@
           - HTTP/1.1
           - HTTP/2.0
         prober: http
         timeout: 5s
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/clusterrole.yaml
 kind: ClusterRole
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-9.3.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
   name: kube-prometheus-stack-grafana-clusterrole
 rules:
   - apiGroups: [""] # "" indicates the core API group
     resources: ["configmaps", "secrets"]
     verbs: ["get", "watch", "list"]
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/role.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
   labels:    
-    helm.sh/chart: kube-state-metrics-6.1.0
+    helm.sh/chart: kube-state-metrics-6.1.4
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "2.16.0"
     release: kube-prometheus-stack
   name: kube-prometheus-stack-kube-state-metrics
 rules:
 
@@ -725,23 +725,23 @@
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/clusterrole.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
   name: kube-prometheus-stack-operator
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 rules:
 - apiGroups:
   - monitoring.coreos.com
   resources:
   - alertmanagers
@@ -835,23 +835,23 @@
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus/clusterrole.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
   name: kube-prometheus-stack-prometheus
   labels:
     app: kube-prometheus-stack-prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 rules:
 # These permissions (to examine all namespaces) are not in the kube-prometheus repo.
 # They're grabbed from https://github.com/prometheus/prometheus/blob/master/documentation/examples/rbac-setup.yml
 # kube-prometheus deliberately defaults to a more restrictive setup that is not appropriate for our general audience.
 - apiGroups: [""]
   resources:
   - nodes
   - nodes/metrics
@@ -870,39 +870,39 @@
   verbs: ["get", "list", "watch"]
 - nonResourceURLs: ["/metrics", "/metrics/cadvisor"]
   verbs: ["get"]
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/clusterrolebinding.yaml
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: kube-prometheus-stack-grafana-clusterrolebinding
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-9.3.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 subjects:
   - kind: ServiceAccount
     name: kube-prometheus-stack-grafana
     namespace: default
 roleRef:
   kind: ClusterRole
   name: kube-prometheus-stack-grafana-clusterrole
   apiGroup: rbac.authorization.k8s.io
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/clusterrolebinding.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   labels:    
-    helm.sh/chart: kube-state-metrics-6.1.0
+    helm.sh/chart: kube-state-metrics-6.1.4
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "2.16.0"
     release: kube-prometheus-stack
   name: kube-prometheus-stack-kube-state-metrics
 roleRef:
   apiGroup: rbac.authorization.k8s.io
@@ -915,23 +915,23 @@
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/clusterrolebinding.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: kube-prometheus-stack-operator
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: kube-prometheus-stack-operator
 subjects:
@@ -942,97 +942,97 @@
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus/clusterrolebinding.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: kube-prometheus-stack-prometheus
   labels:
     app: kube-prometheus-stack-prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: kube-prometheus-stack-prometheus
 subjects:
   - kind: ServiceAccount
     name: kube-prometheus-stack-prometheus
     namespace: default
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/role.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-9.3.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 rules: []
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/rolebinding.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: RoleBinding
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-9.3.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: Role
   name: kube-prometheus-stack-grafana
 subjects:
 - kind: ServiceAccount
   name: kube-prometheus-stack-grafana
   namespace: default
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/service.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-9.3.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 spec:
   type: ClusterIP
   ports:
     - name: http-web
       port: 80
       protocol: TCP
-      targetPort: 3000
+      targetPort: grafana
   selector:
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/service.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-kube-state-metrics
   namespace: default
   labels:    
-    helm.sh/chart: kube-state-metrics-6.1.0
+    helm.sh/chart: kube-state-metrics-6.1.4
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "2.16.0"
     release: kube-prometheus-stack
   annotations:
 spec:
   type: "ClusterIP"
@@ -1080,23 +1080,23 @@
 kind: Service
 metadata:
   name: kube-prometheus-stack-alertmanager
   namespace: default
   labels:
     app: kube-prometheus-stack-alertmanager
     self-monitor: "true"
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   ports:
   - name: http-web
     port: 9093
     targetPort: 9093
     protocol: TCP
   - name: reloader-web
     appProtocol: http
@@ -1112,23 +1112,23 @@
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-coredns
   labels:
     app: kube-prometheus-stack-coredns
     jobLabel: coredns
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
   namespace: kube-system
 spec:
   clusterIP: None
   ports:
     - name: http-metrics
       port: 9153
       protocol: TCP
       targetPort: 9153
@@ -1139,23 +1139,23 @@
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-kube-controller-manager
   labels:
     app: kube-prometheus-stack-kube-controller-manager
     jobLabel: kube-controller-manager
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
   namespace: kube-system
 spec:
   clusterIP: None
   ports:
     - name: http-metrics
       port: 10257
       protocol: TCP
       targetPort: 10257
@@ -1167,23 +1167,23 @@
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-kube-proxy
   labels:
     app: kube-prometheus-stack-kube-proxy
     jobLabel: kube-proxy
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
   namespace: kube-system
 spec:
   clusterIP: None
   ports:
     - name: http-metrics
       port: 10249
       protocol: TCP
       targetPort: 10249
@@ -1195,23 +1195,23 @@
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-kube-scheduler
   labels:
     app: kube-prometheus-stack-kube-scheduler
     jobLabel: kube-scheduler
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
   namespace: kube-system
 spec:
   clusterIP: None
   ports:
     - name: http-metrics
       port: 10259
       protocol: TCP
       targetPort: 10259
@@ -1222,23 +1222,23 @@
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/service.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-operator
   namespace: default
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 spec:
   ports:
   - name: https
     port: 443
     targetPort: https
@@ -1252,23 +1252,23 @@
 kind: Service
 metadata:
   name: kube-prometheus-stack-prometheus
   namespace: default
   labels:
     app: kube-prometheus-stack-prometheus
     self-monitor: "true"
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   ports:
   - name: http-web
     port: 9090
     targetPort: 9090
   - name: reloader-web
     appProtocol: http
     port: 8080
@@ -1460,43 +1460,43 @@
           hostPath:
             path: /
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/deployment.yaml
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-9.3.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 spec:
   replicas: 1
   revisionHistoryLimit: 10
   selector:
     matchLabels:
       app.kubernetes.io/name: grafana
       app.kubernetes.io/instance: kube-prometheus-stack
   strategy:
     type: RollingUpdate
   template:
     metadata:
       labels:
-        helm.sh/chart: grafana-9.3.1
+        helm.sh/chart: grafana-9.3.4
         app.kubernetes.io/name: grafana
         app.kubernetes.io/instance: kube-prometheus-stack
         app.kubernetes.io/version: "12.2.0"
       annotations:
         checksum/config: 897ab1f752c697c1ab43eee339bf6f8dc6322024f28d9b1fdb5358180b60b4ea
-        checksum/dashboards-json-config: 23eeea2cb683331d3da8550d49072918736ba33525738d8ded28f40ff01c7ea9
+        checksum/dashboards-json-config: b2990b3e4b28549511dc8dd8a6ef24e6ebefe8dcea6447c807ba7d7616c418e4
         checksum/sc-dashboard-provider-config: e3aca4961a8923a0814f12363c5e5e10511bb1deb6cd4e0cbe138aeee493354f
         checksum/secret: 7590fe10cbd3ae3e92a60625ff270e3e7d404731e1c73aaa2df1a78dab2c7768
         kubectl.kubernetes.io/default-container: grafana
     spec:
       
       serviceAccountName: kube-prometheus-stack-grafana
       automountServiceAccountToken: true
       shareProcessNamespace: false
       securityContext:
         fsGroup: 472
@@ -1713,41 +1713,41 @@
         - name: sc-datasources-volume
           emptyDir: {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/deployment.yaml
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: kube-prometheus-stack-kube-state-metrics
   namespace: default
   labels:    
-    helm.sh/chart: kube-state-metrics-6.1.0
+    helm.sh/chart: kube-state-metrics-6.1.4
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "2.16.0"
     release: kube-prometheus-stack
 spec:
   selector:
     matchLabels:      
       app.kubernetes.io/name: kube-state-metrics
       app.kubernetes.io/instance: kube-prometheus-stack
   replicas: 1
   strategy:
     type: RollingUpdate
   revisionHistoryLimit: 10
   template:
     metadata:
       labels:        
-        helm.sh/chart: kube-state-metrics-6.1.0
+        helm.sh/chart: kube-state-metrics-6.1.4
         app.kubernetes.io/managed-by: Helm
         app.kubernetes.io/component: metrics
         app.kubernetes.io/part-of: kube-state-metrics
         app.kubernetes.io/name: kube-state-metrics
         app.kubernetes.io/instance: kube-prometheus-stack
         app.kubernetes.io/version: "2.16.0"
         release: kube-prometheus-stack
     spec:
       automountServiceAccountToken: true
       hostNetwork: false
@@ -1804,60 +1804,60 @@
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/deployment.yaml
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: kube-prometheus-stack-operator
   namespace: default
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 spec:
   replicas: 1
   revisionHistoryLimit: 10
   selector:
     matchLabels:
       app: kube-prometheus-stack-operator
       release: "kube-prometheus-stack"
   template:
     metadata:
       labels:
         
         app.kubernetes.io/managed-by: Helm
         app.kubernetes.io/instance: kube-prometheus-stack
-        app.kubernetes.io/version: "75.18.1"
+        app.kubernetes.io/version: "76.5.1"
         app.kubernetes.io/part-of: kube-prometheus-stack
-        chart: kube-prometheus-stack-75.18.1
+        chart: kube-prometheus-stack-76.5.1
         release: "kube-prometheus-stack"
         heritage: "Helm"
         app: kube-prometheus-stack-operator
         app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
         app.kubernetes.io/component: prometheus-operator
     spec:
       containers:
         - name: kube-prometheus-stack
           image: "quay.io/prometheus-operator/prometheus-operator:v0.86.1"
           imagePullPolicy: "IfNotPresent"
           args:
             - --kubelet-service=kube-system/kube-prometheus-stack-kubelet
             - --kubelet-endpoints=true
             - --kubelet-endpointslice=false
             - --localhost=127.0.0.1
-            - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.83.0
+            - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.84.1
             - --config-reloader-cpu-request=0
             - --config-reloader-cpu-limit=0
             - --config-reloader-memory-request=0
             - --config-reloader-memory-limit=0
             - --thanos-default-base-image=quay.io/thanos/thanos:v0.39.2
             - --secret-field-selector=type!=kubernetes.io/dockercfg,type!=kubernetes.io/service-account-token,type!=helm.sh/release.v1
             - --web.enable-tls=true
             - --web.cert-file=/cert/cert
             - --web.key-file=/cert/key
             - --web.listen-address=:10250
@@ -2063,21 +2063,21 @@
         - name: storage-volume
           emptyDir: {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/ingress.yaml
 apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-9.3.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
   annotations:
     cert-manager.io/cluster-issuer: "fredcorp-ca"
     cert-manager.io/common-name: "grafana.talos-genmachine.fredcorp.com"
     traefik.ingress.kubernetes.io/router.entrypoints: "websecure"
     traefik.ingress.kubernetes.io/service.scheme: "https"
 spec:
   ingressClassName: traefik
@@ -2106,23 +2106,23 @@
     cert-manager.io/common-name: prometheus.talos-genmachine.fredcorp.com
     traefik.ingress.kubernetes.io/router.entrypoints: websecure
     traefik.ingress.kubernetes.io/service.scheme: https
   name: kube-prometheus-stack-prometheus
   namespace: default
   labels:
     app: kube-prometheus-stack-prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   ingressClassName: traefik
   rules:
     - host: "prometheus.talos-genmachine.fredcorp.com"
       http:
         paths:
           - path: /
             pathType: Prefix
@@ -2210,23 +2210,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: Alertmanager
 metadata:
   name: kube-prometheus-stack-alertmanager
   namespace: default
   labels:
     app: kube-prometheus-stack-alertmanager
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   image: "quay.io/prometheus/alertmanager:v0.28.1"
   imagePullPolicy: "IfNotPresent"
   version: v0.28.1
   replicas: 1
   listenLocal: false
   serviceAccountName: kube-prometheus-stack-alertmanager
   automountServiceAccountToken: true
@@ -2263,23 +2263,23 @@
 kind: MutatingWebhookConfiguration
 metadata:
   name:  kube-prometheus-stack-admission
   annotations:
     
   labels:
     app: kube-prometheus-stack-admission
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator-webhook
 webhooks:
   - name: prometheusrulemutate.monitoring.coreos.com
     failurePolicy: Ignore
     rules:
       - apiGroups:
           - monitoring.coreos.com
@@ -2303,23 +2303,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: Prometheus
 metadata:
   name: kube-prometheus-stack-prometheus
   namespace: default
   labels:
     app: kube-prometheus-stack-prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   automountServiceAccountToken: true
   alerting:
     alertmanagers:
       - namespace: default
         name: kube-prometheus-stack-alertmanager
         port: http-web
         pathPrefix: "/"
@@ -2392,23 +2392,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-alertmanager.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: alertmanager.rules
     rules:
     - alert: AlertmanagerFailedReload
       annotations:
         description: Configuration has failed to load for {{ $labels.namespace }}/{{ $labels.pod}}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerfailedreload
@@ -2535,23 +2535,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-config-reloaders
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: config-reloaders
     rules:
     - alert: ConfigReloaderSidecarErrors
       annotations:
         description: 'Errors encountered while the {{$labels.pod}} config-reloader sidecar attempts to sync config in {{$labels.namespace}} namespace.
 
@@ -2567,23 +2567,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-general.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: general.rules
     rules:
     - alert: TargetDown
       annotations:
         description: '{{ printf "%.4g" $value }}% of the {{ $labels.job }}/{{ $labels.service }} targets in {{ $labels.namespace }} namespace are down.'
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/general/targetdown
@@ -2635,23 +2635,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-cpu-usage-seconds-tot
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_cpu_usage_seconds_total
     rules:
     - expr: |-
         sum by (cluster, namespace, pod, container) (
           rate(container_cpu_usage_seconds_total{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}[5m])
         ) * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (
@@ -2670,23 +2670,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-memory-cache
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_memory_cache
     rules:
     - expr: |-
         container_memory_cache{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
         * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
           max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2697,23 +2697,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-memory-rss
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_memory_rss
     rules:
     - expr: |-
         container_memory_rss{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
         * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
           max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2724,23 +2724,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-memory-swap
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_memory_swap
     rules:
     - expr: |-
         container_memory_swap{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
         * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
           max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2751,23 +2751,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-memory-working-set-by
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_memory_working_set_bytes
     rules:
     - expr: |-
         container_memory_working_set_bytes{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
         * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
           max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2778,23 +2778,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-resource
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_resource
     rules:
     - expr: |-
         kube_pod_container_resource_requests{resource="memory",job="kube-state-metrics"}  * on (namespace, pod, cluster)
         group_left() max by (namespace, pod, cluster) (
           (kube_pod_status_phase{phase=~"Pending|Running"} == 1)
@@ -2867,23 +2867,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.pod-owner
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.pod_owner
     rules:
     - expr: |-
         max by (cluster, namespace, workload, pod) (
           label_replace(
             label_replace(
@@ -3015,23 +3015,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-apiserver-availability.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - interval: 3m
     name: kube-apiserver-availability.rules
     rules:
     - expr: avg_over_time(code_verb:apiserver_request_total:increase1h[30d]) * 24 * 30
       record: code_verb:apiserver_request_total:increase30d
     - expr: sum by (cluster, code) (code_verb:apiserver_request_total:increase30d{verb=~"LIST|GET"})
@@ -3137,23 +3137,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-apiserver-burnrate.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-apiserver-burnrate.rules
     rules:
     - expr: |-
         (
           (
             # too slow
@@ -3459,23 +3459,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-apiserver-histogram.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-apiserver-histogram.rules
     rules:
     - expr: histogram_quantile(0.99, sum by (cluster, le, resource) (rate(apiserver_request_sli_duration_seconds_bucket{job="apiserver",verb=~"LIST|GET",subresource!~"proxy|attach|log|exec|portforward"}[5m]))) > 0
       labels:
         quantile: '0.99'
         verb: read
@@ -3490,23 +3490,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-apiserver-slos
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-apiserver-slos
     rules:
     - alert: KubeAPIErrorBudgetBurn
       annotations:
         description: The API server is burning too much error budget on cluster {{ $labels.cluster }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubeapierrorbudgetburn
@@ -3567,23 +3567,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-prometheus-general.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-prometheus-general.rules
     rules:
     - expr: count without(instance, pod, node) (up == 1)
       record: count:up1
     - expr: count without(instance, pod, node) (up == 0)
       record: count:up0
@@ -3592,23 +3592,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-prometheus-node-recording.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-prometheus-node-recording.rules
     rules:
     - expr: sum(rate(node_cpu_seconds_total{mode!="idle",mode!="iowait",mode!="steal"}[3m])) BY (instance)
       record: instance:node_cpu:rate:sum
     - expr: sum(rate(node_network_receive_bytes_total[3m])) BY (instance)
       record: instance:node_network_receive_bytes:rate:sum
@@ -3625,23 +3625,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-scheduler.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-scheduler.rules
     rules:
     - expr: histogram_quantile(0.99, sum(rate(scheduler_e2e_scheduling_duration_seconds_bucket{job="kube-scheduler"}[5m])) without(instance, pod))
       labels:
         quantile: '0.99'
       record: cluster_quantile:scheduler_e2e_scheduling_duration_seconds:histogram_quantile
@@ -3682,23 +3682,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-state-metrics
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-state-metrics
     rules:
     - alert: KubeStateMetricsListErrors
       annotations:
         description: kube-state-metrics is experiencing errors at an elevated rate in list operations. This is likely causing it to not be able to expose metrics about Kubernetes objects correctly or at all.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kube-state-metrics/kubestatemetricslisterrors
@@ -3751,23 +3751,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kubelet.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kubelet.rules
     rules:
     - expr: |-
         histogram_quantile(
           0.99,
           sum(rate(kubelet_pleg_relist_duration_seconds_bucket{job="kubelet", metrics_path="/metrics"}[5m])) by (cluster, instance, le)
@@ -3802,23 +3802,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kubernetes-apps
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kubernetes-apps
     rules:
     - alert: KubePodCrashLooping
       annotations:
         description: 'Pod {{ $labels.namespace }}/{{ $labels.pod }} ({{ $labels.container }}) is in waiting state (reason: "CrashLoopBackOff") on cluster {{ $labels.cluster }}.'
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubepodcrashlooping
@@ -4076,92 +4076,124 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kubernetes-resources
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kubernetes-resources
     rules:
     - alert: KubeCPUOvercommit
       annotations:
         description: Cluster {{ $labels.cluster }} has overcommitted CPU resource requests for Pods by {{ printf "%.2f" $value }} CPU shares and cannot tolerate node failure.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubecpuovercommit
         summary: Cluster has overcommitted CPU resource requests.
       expr: |-
-        (sum(namespace_cpu:kube_pod_container_resource_requests:sum{}) by (cluster) -
-        sum(kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"}) by (cluster) > 0
-        and
-        count by (cluster) (max by (cluster, node) (kube_node_role{job="kube-state-metrics", role="control-plane"})) < 3)
+        # Non-HA clusters.
+        (
+          (
+            sum by (cluster) (namespace_cpu:kube_pod_container_resource_requests:sum{})
+            -
+            sum by (cluster) (kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"}) > 0
+          )
+          and
+          count by (cluster) (max by (cluster, node) (kube_node_role{job="kube-state-metrics", role="control-plane"})) < 3
+        )
         or
-        (sum(namespace_cpu:kube_pod_container_resource_requests:sum{}) by (cluster) -
-        (sum(kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"}) by (cluster) -
-        max(kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"}) by (cluster)) > 0
-        and
-        (sum(kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"}) by (cluster) -
-        max(kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"}) by (cluster)) > 0)
+        # HA clusters.
+        (
+          sum by (cluster) (namespace_cpu:kube_pod_container_resource_requests:sum{})
+          -
+          (
+            # Skip clusters with only one allocatable node.
+            (
+              sum by (cluster) (kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"})
+              -
+              max by (cluster) (kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"})
+            ) > 0
+          ) > 0
+        )
       for: 10m
       labels:
         severity: warning
     - alert: KubeMemoryOvercommit
       annotations:
         description: Cluster {{ $labels.cluster }} has overcommitted memory resource requests for Pods by {{ $value | humanize }} bytes and cannot tolerate node failure.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubememoryovercommit
         summary: Cluster has overcommitted memory resource requests.
       expr: |-
-        (sum(namespace_memory:kube_pod_container_resource_requests:sum{}) by (cluster) -
-        sum(kube_node_status_allocatable{resource="memory", job="kube-state-metrics"}) by (cluster) > 0
-        and
-        count by (cluster) (max by (cluster, node) (kube_node_role{job="kube-state-metrics", role="control-plane"})) < 3)
+        # Non-HA clusters.
+        (
+          (
+            sum by (cluster) (namespace_memory:kube_pod_container_resource_requests:sum{})
+            -
+            sum by (cluster) (kube_node_status_allocatable{job="kube-state-metrics",resource="memory"}) > 0
+          )
+          and
+          count by (cluster) (max by (cluster, node) (kube_node_role{job="kube-state-metrics", role="control-plane"})) < 3
+        )
         or
-        (sum(namespace_memory:kube_pod_container_resource_requests:sum{}) by (cluster) -
-        (sum(kube_node_status_allocatable{resource="memory", job="kube-state-metrics"}) by (cluster) -
-        max(kube_node_status_allocatable{resource="memory", job="kube-state-metrics"}) by (cluster)) > 0
-        and
-        (sum(kube_node_status_allocatable{resource="memory", job="kube-state-metrics"}) by (cluster) -
-        max(kube_node_status_allocatable{resource="memory", job="kube-state-metrics"}) by (cluster)) > 0)
+        # HA clusters.
+        (
+          sum by (cluster) (namespace_memory:kube_pod_container_resource_requests:sum{})
+          -
+          (
+            # Skip clusters with only one allocatable node.
+            (
+              sum by (cluster) (kube_node_status_allocatable{job="kube-state-metrics",resource="memory"})
+              -
+              max by (cluster) (kube_node_status_allocatable{job="kube-state-metrics",resource="memory"})
+            ) > 0
+          ) > 0
+        )
       for: 10m
       labels:
         severity: warning
     - alert: KubeCPUQuotaOvercommit
       annotations:
-        description: Cluster {{ $labels.cluster }}  has overcommitted CPU resource requests for Namespaces.
+        description: Cluster {{ $labels.cluster }} has overcommitted CPU resource requests for Namespaces.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubecpuquotaovercommit
         summary: Cluster has overcommitted CPU resource requests.
       expr: |-
-        sum(min without(resource) (kube_resourcequota{job="kube-state-metrics", type="hard", resource=~"(cpu|requests.cpu)"})) by (cluster)
-          /
-        sum(kube_node_status_allocatable{resource="cpu", job="kube-state-metrics"}) by (cluster)
-          > 1.5
+        sum by (cluster) (
+          min without(resource) (kube_resourcequota{job="kube-state-metrics", type="hard", resource=~"(cpu|requests.cpu)"})
+        )
+        /
+        sum by (cluster) (
+          kube_node_status_allocatable{resource="cpu", job="kube-state-metrics"}
+        ) > 1.5
       for: 5m
       labels:
         severity: warning
     - alert: KubeMemoryQuotaOvercommit
       annotations:
         description: Cluster {{ $labels.cluster }} has overcommitted memory resource requests for Namespaces.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubememoryquotaovercommit
         summary: Cluster has overcommitted memory resource requests.
       expr: |-
-        sum(min without(resource) (kube_resourcequota{job="kube-state-metrics", type="hard", resource=~"(memory|requests.memory)"})) by (cluster)
-          /
-        sum(kube_node_status_allocatable{resource="memory", job="kube-state-metrics"}) by (cluster)
-          > 1.5
+        sum by (cluster) (
+          min without(resource) (kube_resourcequota{job="kube-state-metrics", type="hard", resource=~"(memory|requests.memory)"})
+        )
+        /
+        sum by (cluster) (
+          kube_node_status_allocatable{resource="memory", job="kube-state-metrics"}
+        ) > 1.5
       for: 5m
       labels:
         severity: warning
     - alert: KubeQuotaAlmostFull
       annotations:
         description: Namespace {{ $labels.namespace }} is using {{ $value | humanizePercentage }} of its {{ $labels.resource }} quota on cluster {{ $labels.cluster }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubequotaalmostfull
         summary: Namespace quota is going to be full.
       expr: |-
         kube_resourcequota{job="kube-state-metrics", type="used"}
@@ -4215,23 +4247,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kubernetes-storage
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kubernetes-storage
     rules:
     - alert: KubePersistentVolumeFillingUp
       annotations:
         description: The PersistentVolume claimed by {{ $labels.persistentvolumeclaim }} in Namespace {{ $labels.namespace }} {{ with $labels.cluster -}} on Cluster {{ . }} {{- end }} is only {{ $value | humanizePercentage }} free.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubepersistentvolumefillingup
@@ -4329,23 +4361,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kubernetes-system-apiserver
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "76.5.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-76.5.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kubernetes-system-apiserver
     rules:
     - alert: KubeClientCertificateExpiration
       annotations:
         description: A client certificate used to authenticate to kubernetes apiserver is expiring in less than 7.0 days on cluster {{ $labels.cluster }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubeclientcertificateexpiration
@@ -4410,23 +4442,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kubernetes-system-controller-manager
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kuberne
[Truncated: Diff output was too large]
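
Since the rendered diff is truncated here, the remaining changes can be reviewed locally by templating both chart versions and comparing the output. A minimal sketch, assuming the upstream prometheus-community repo and the release name and namespace used in the manifests above (add any custom values file your deployment applies with `-f`):

```bash
# Sketch: render both chart versions and diff the generated manifests.
# Assumes the prometheus-community repo alias and the release name/namespace
# shown in the manifests above; append -f values.yaml if you override defaults.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

helm template kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --version 75.18.1 --namespace default > /tmp/kps-75.18.1.yaml
helm template kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --version 76.5.1 --namespace default > /tmp/kps-76.5.1.yaml

diff -u /tmp/kps-75.18.1.yaml /tmp/kps-76.5.1.yaml | less
```

If promtool is available, the rule groups extracted from the rendered PrometheusRule manifests can additionally be syntax-checked with `promtool check rules`, though that step is optional.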
 

@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-76-prometheus-genmachine branch 3 times, most recently from b5470c7 to a5977a1 Compare August 11, 2025 04:01
@pipelines-github-app pipelines-github-app bot changed the title from feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 76.2.0) to feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 76.2.1) Aug 12, 2025
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-76-prometheus-genmachine branch 3 times, most recently from c5a3d1d to f0fdf3b Compare August 13, 2025 03:23
@pipelines-github-app pipelines-github-app bot changed the title from feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 76.2.1) to feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 76.3.0) Aug 13, 2025
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-76-prometheus-genmachine branch 4 times, most recently from fc36a4d to e25b423 Compare August 16, 2025 03:18
@pipelines-github-app pipelines-github-app bot changed the title from feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 76.3.0) to feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 76.4.0) Aug 16, 2025
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-76-prometheus-genmachine branch 6 times, most recently from 2c63fb7 to efe65b9 Compare August 22, 2025 03:12
@pipelines-github-app pipelines-github-app bot changed the title from feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 76.4.0) to feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 76.4.1) Aug 22, 2025
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-76-prometheus-genmachine branch 2 times, most recently from 1303ba1 to 8e4301d Compare August 23, 2025 03:06
@pipelines-github-app pipelines-github-app bot changed the title from feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 76.4.1) to feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 76.5.1) Aug 23, 2025
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-76-prometheus-genmachine branch 2 times, most recently from b512b3f to b1cc1b8 Compare August 26, 2025 03:12
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-76-prometheus-genmachine branch 8 times, most recently from 4399b9d to 6b492ed Compare September 22, 2025 03:34
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-76-prometheus-genmachine branch 6 times, most recently from ce42ba1 to 6f43bf8 Compare September 29, 2025 03:33
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-76-prometheus-genmachine branch 7 times, most recently from a7a21da to 4b6f03e Compare October 7, 2025 03:25
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-76-prometheus-genmachine branch 6 times, most recently from 2c8a83b to c55e77f Compare October 14, 2025 03:29
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-76-prometheus-genmachine branch from c55e77f to 25bc018 Compare October 15, 2025 03:33
| datasource | package               | from    | to     |
| ---------- | --------------------- | ------- | ------ |
| helm       | kube-prometheus-stack | 75.18.1 | 76.5.1 |


Co-authored-by: renovate[bot] <[email protected]>
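
Before merging, it may also be worth confirming that the target version in the table above is actually published in the chart index. A minimal sketch, assuming the standard prometheus-community repo alias:

```bash
# List published kube-prometheus-stack versions and confirm 76.5.1 is present.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm search repo prometheus-community/kube-prometheus-stack --versions | head -n 20
```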
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-76-prometheus-genmachine branch from 25bc018 to 02447a8 Compare October 16, 2025 03:32

Labels

- `app/prometheus`: Changes made to Prometheus application
- `env/genmachine`: Changes made in the Talos cluster
- `renovate/helm`: Changes related to Helm Chart update
- `type/major`
