
pipelines-github-app bot commented Aug 24, 2025

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| kube-prometheus-stack (source) | major | 75.18.1 -> 77.14.0 |
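
In a GitOps layout like this one, a major bump of this kind usually lands as a change to the umbrella chart's dependency pin. A minimal sketch, assuming a conventional `Chart.yaml` (the chart name and version number here are this PR's; the file layout is an assumption):

```yaml
# Chart.yaml of a hypothetical umbrella chart wrapping kube-prometheus-stack
apiVersion: v2
name: prometheus
version: 1.0.0
dependencies:
  - name: kube-prometheus-stack
    repository: https://prometheus-community.github.io/helm-charts
    version: 77.14.0   # bumped from 75.18.1 by this PR
```

Note that Helm does not upgrade CRDs on `helm upgrade`, so a major kube-prometheus-stack bump typically also requires applying the matching Prometheus Operator CRDs separately.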

Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

prometheus-community/helm-charts (kube-prometheus-stack)

v77.14.0

Compare Source

kube-prometheus-stack collects Kubernetes manifests, Grafana dashboards, and Prometheus rules, combined with documentation and scripts, to provide easy-to-operate, end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @​renovate[bot] in #​6193

Full Changelog: prometheus-community/helm-charts@alertmanager-1.27.0...kube-prometheus-stack-77.14.0

v77.13.0

Compare Source


What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @​renovate[bot] in #​6186

Full Changelog: prometheus-community/helm-charts@prometheus-smartctl-exporter-0.16.0...kube-prometheus-stack-77.13.0

v77.12.1

Compare Source


What's Changed

  • [kube-prometheus-stack] Support supplying jsonData on the default datasource by @​ba-work in #​6179
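
The jsonData change above can be sketched in chart values; a hypothetical excerpt, assuming the option is exposed under the existing `grafana.sidecar.datasources` block (the key path is inferred from the PR title, and the jsonData fields shown are standard Grafana datasource options used here only as illustration):

```yaml
# Hypothetical values.yaml excerpt — key placement assumed, not confirmed
grafana:
  sidecar:
    datasources:
      jsonData:
        httpMethod: POST    # illustrative Grafana datasource setting
        timeInterval: 30s   # illustrative scrape-interval hint
```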


Full Changelog: prometheus-community/helm-charts@prometheus-postgres-exporter-7.3.0...kube-prometheus-stack-77.12.1

v77.12.0

Compare Source


Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-77.11.1...kube-prometheus-stack-77.12.0

v77.11.1

Compare Source


What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @​renovate[bot] in #​6165

Full Changelog: prometheus-community/helm-charts@prometheus-postgres-exporter-7.2.0...kube-prometheus-stack-77.11.1

v77.11.0

Compare Source


What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @​renovate[bot] in #​6158

Full Changelog: prometheus-community/helm-charts@prometheus-stackdriver-exporter-4.11.0...kube-prometheus-stack-77.11.0

v77.10.0

Compare Source


What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @​renovate[bot] in #​6145

Full Changelog: prometheus-community/helm-charts@prometheus-27.38.0...kube-prometheus-stack-77.10.0

v77.9.1

Compare Source


Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-77.9.0...kube-prometheus-stack-77.9.1

v77.9.0

Compare Source


Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-77.8.0...kube-prometheus-stack-77.9.0

v77.8.0

Compare Source


Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-77.7.0...kube-prometheus-stack-77.8.0

v77.7.0

Compare Source


What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @​renovate[bot] in #​6137

Full Changelog: prometheus-community/helm-charts@prometheus-kafka-exporter-2.17.0...kube-prometheus-stack-77.7.0

v77.6.2

Compare Source


Full Changelog: prometheus-community/helm-charts@prom-label-proxy-0.15.1...kube-prometheus-stack-77.6.2

v77.6.1

Compare Source


What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @​renovate[bot] in #​6123

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-77.6.0...kube-prometheus-stack-77.6.1

v77.6.0

Compare Source


What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @​renovate[bot] in #​6122

Full Changelog: prometheus-community/helm-charts@prometheus-conntrack-stats-exporter-0.5.27...kube-prometheus-stack-77.6.0

v77.5.0

Compare Source


What's Changed

  • [kube-prometheus-stack] Added attachMetadata option to additionalServiceMonitors and additionalPodMonitors by @​christophemorio in #​6106
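
The attachMetadata option above can be illustrated with a values excerpt. `prometheus.additionalServiceMonitors` is an existing kube-prometheus-stack values key, and `attachMetadata.node` is a standard ServiceMonitor field; the monitor name, selector, and port below are purely illustrative:

```yaml
# Hypothetical values.yaml excerpt showing attachMetadata on an additional monitor
prometheus:
  additionalServiceMonitors:
    - name: my-app
      attachMetadata:
        node: true   # attach node metadata to discovered targets
      selector:
        matchLabels:
          app: my-app
      endpoints:
        - port: http-metrics
```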


Full Changelog: prometheus-community/helm-charts@prometheus-27.34.0...kube-prometheus-stack-77.5.0

v77.4.0

Compare Source


What's Changed

  • [kube-prometheus-stack] Update Helm release kube-state-metrics to v6.3.0 by @​renovate[bot] in #​6111

Full Changelog: prometheus-community/helm-charts@kube-state-metrics-6.3.0...kube-prometheus-stack-77.4.0

v77.3.0

Compare Source


Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-77.2.1...kube-prometheus-stack-77.3.0

v77.2.1

Compare Source


What's Changed

  • [kube-prometheus-stack] set expected jobLabel for node-exporter PodMonitor by @​z0rc in #​6101
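
The effect of the jobLabel fix above, sketched as a standalone PodMonitor. `jobLabel` is a real field of the PodMonitor CRD (it selects which pod label populates Prometheus's `job` label); the metadata, selector, and label key below are illustrative, not the chart's exact rendered output:

```yaml
# Illustrative PodMonitor — jobLabel controls the resulting `job` label
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: node-exporter
spec:
  jobLabel: app.kubernetes.io/name
  selector:
    matchLabels:
      app.kubernetes.io/name: prometheus-node-exporter
  podMetricsEndpoints:
    - port: metrics
```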


Full Changelog: prometheus-community/helm-charts@prometheus-27.33.0...kube-prometheus-stack-77.2.1

v77.2.0

Compare Source


What's Changed

  • [kube-prometheus-stack] Update Helm release kube-state-metrics to v6.2.0 by @​renovate[bot] in #​6104

Full Changelog: prometheus-community/helm-charts@kube-state-metrics-6.2.0...kube-prometheus-stack-77.2.0

v77.1.3

Compare Source


What's Changed

  • [kube-prometheus-stack] support encoded string for thanos sidecar secret by @​trouaux in #​5999
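
A rough sketch of what the thanos sidecar secret change enables, under the chart's existing `prometheus.prometheusSpec.thanos.objectStorageConfig` values block — the string-valued `secret` shape below is assumed from the PR title, and the S3 details are placeholders:

```yaml
# Hypothetical values.yaml excerpt — secret-as-string shape assumed, not confirmed
prometheus:
  prometheusSpec:
    thanos:
      objectStorageConfig:
        secret: |
          type: S3
          config:
            bucket: thanos          # placeholder bucket
            endpoint: s3.example.com # placeholder endpoint
```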

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-77.1.2...kube-prometheus-stack-77.1.3

v77.1.2

Compare Source


What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @​renovate[bot] in #​6100

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-77.1.1...kube-prometheus-stack-77.1.2

v77.1.1

Compare Source


What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @​renovate[bot] in #​6099

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-77.1.0...kube-prometheus-stack-77.1.1

v77.1.0

Compare Source


What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @​renovate[bot] in #​6094

Full Changelog: prometheus-community/helm-charts@prometheus-mongodb-exporter-3.13.0...kube-prometheus-stack-77.1.0

v77.0.2

Compare Source


What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @​renovate[bot] in #​6088

Full Changelog: prometheus-community/helm-charts@kube-state-metrics-6.1.5...kube-prometheus-stack-77.0.2

v77.0.1

Compare Source


What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @​renovate[bot] in #​6085

Full Changelog: prometheus-community/helm-charts@prometheus-blackbox-exporter-11.3.1...kube-prometheus-stack-77.0.1

v77.0.0

Compare Source


Full Changelog: prometheus-community/helm-charts@prometheus-redis-exporter-6.16.0...kube-prometheus-stack-77.0.0

v76.5.1

Compare Source


What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @​renovate[bot] in #​6080

Full Changelog: prometheus-community/helm-charts@prometheus-ipmi-exporter-0.6.3...kube-prometheus-stack-76.5.1

v76.5.0

Compare Source


Full Changelog: prometheus-community/helm-charts@prometheus-27.32.0...kube-prometheus-stack-76.5.0

v76.4.1

Compare Source


What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @​renovate[bot] in #​6070

Full Changelog: prometheus-community/helm-charts@alertmanager-1.25.0...kube-prometheus-stack-76.4.1

v76.4.0

Compare Source


What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @​renovate[bot] in #​6059

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-76.3.1...kube-prometheus-stack-76.4.0

v76.3.1

Compare Source


Full Changelog: prometheus-community/helm-charts@alertmanager-snmp-notifier-2.1.0...kube-prometheus-stack-76.3.1

v76.3.0

Compare Source


Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-76.2.2...kube-prometheus-stack-76.3.0

v76.2.2

Compare Source


What's Changed

  • [CI] Update actions/create-github-app-token action to v2.1.1 by @​renovate[bot] in #​6043
  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @​renovate[bot] in #​6046

Full Changelog: prometheus-community/helm-charts@prometheus-adapter-5.1.0...kube-prometheus-stack-76.2.2

v76.2.1

Compare Source


Full Changelog: prometheus-community/helm-charts@prometheus-27.30.0...kube-prometheus-stack-76.2.1

v76.2.0

Compare Source


Full Changelog: prometheus-community/helm-charts@prometheus-adapter-5.0.0...kube-prometheus-stack-76.2.0

v76.1.0

Compare Source


What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @​renovate[bot] in #​6032

Full Changelog: prometheus-community/helm-charts@prometheus-operator-admission-webhook-0.29.3...kube-prometheus-stack-76.1.0

v76.0.0

Compare Source


Full Changelog: prometheus-community/helm-charts@kube-state-metrics-6.1.4...kube-prometheus-stack-76.0.0


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

pipelines-github-app bot added the following labels on Aug 24, 2025: app/prometheus (Changes made to Prometheus application), env/genmachine (Changes made in the Talos cluster), renovate/helm (Changes related to Helm Chart update), type/major.

pipelines-github-app bot commented Aug 24, 2025

--- main/kube-prometheus-stack_gitops_manifests_prometheus_genmachine_manifest_main.yaml	2025-10-16 03:33:48.645820927 +0000
+++ pr/kube-prometheus-stack_gitops_manifests_prometheus_genmachine_manifest_pr.yaml	2025-10-16 03:33:40.888759351 +0000
@@ -1,49 +1,49 @@
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 automountServiceAccountToken: true
 metadata:
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.0.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
   name: kube-prometheus-stack-grafana
   namespace: default
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 automountServiceAccountToken: true
 metadata:
   labels:    
-    helm.sh/chart: kube-state-metrics-6.1.0
+    helm.sh/chart: kube-state-metrics-6.3.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "2.16.0"
+    app.kubernetes.io/version: "2.17.0"
     release: kube-prometheus-stack
   name: kube-prometheus-stack-kube-state-metrics
   namespace: default
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/prometheus-node-exporter/templates/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: kube-prometheus-stack-prometheus-node-exporter
   namespace: default
   labels:
-    helm.sh/chart: prometheus-node-exporter-4.47.3
+    helm.sh/chart: prometheus-node-exporter-4.48.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: prometheus-node-exporter
     app.kubernetes.io/name: prometheus-node-exporter
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "1.9.1"
     release: kube-prometheus-stack
 automountServiceAccountToken: false
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/alertmanager/serviceaccount.yaml
@@ -52,63 +52,63 @@
 metadata:
   name: kube-prometheus-stack-alertmanager
   namespace: default
   labels:
     app: kube-prometheus-stack-alertmanager
     app.kubernetes.io/name: kube-prometheus-stack-alertmanager
     app.kubernetes.io/component: alertmanager
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 automountServiceAccountToken: true
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: kube-prometheus-stack-operator
   namespace: default
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 automountServiceAccountToken: true
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: kube-prometheus-stack-prometheus
   namespace: default
   labels:
     app: kube-prometheus-stack-prometheus
     app.kubernetes.io/name: kube-prometheus-stack-prometheus
     app.kubernetes.io/component: prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 automountServiceAccountToken: true
 ---
 # Source: kube-prometheus-stack/charts/prometheus-blackbox-exporter/templates/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: kube-prometheus-stack-prometheus-blackbox-exporter
   namespace: default
@@ -133,21 +133,21 @@
   namespace: default
 automountServiceAccountToken: true
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/secret.yaml
 apiVersion: v1
 kind: Secret
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.0.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 type: Opaque
 data:
   
   admin-user: "YWRtaW4="
   admin-password: "cGFzc3dvcmQ="
   ldap-toml: ""
 ---
@@ -155,34 +155,34 @@
 apiVersion: v1
 kind: Secret
 metadata:
   name: alertmanager-kube-prometheus-stack-alertmanager
   namespace: default
   labels:
     app: kube-prometheus-stack-alertmanager
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 data:
   alertmanager.yaml: "Z2xvYmFsOgogIHJlc29sdmVfdGltZW91dDogNW0KaW5oaWJpdF9ydWxlczoKLSBlcXVhbDoKICAtIG5hbWVzcGFjZQogIC0gYWxlcnRuYW1lCiAgc291cmNlX21hdGNoZXJzOgogIC0gc2V2ZXJpdHkgPSBjcml0aWNhbAogIHRhcmdldF9tYXRjaGVyczoKICAtIHNldmVyaXR5ID1+IHdhcm5pbmd8aW5mbwotIGVxdWFsOgogIC0gbmFtZXNwYWNlCiAgLSBhbGVydG5hbWUKICBzb3VyY2VfbWF0Y2hlcnM6CiAgLSBzZXZlcml0eSA9IHdhcm5pbmcKICB0YXJnZXRfbWF0Y2hlcnM6CiAgLSBzZXZlcml0eSA9IGluZm8KLSBlcXVhbDoKICAtIG5hbWVzcGFjZQogIHNvdXJjZV9tYXRjaGVyczoKICAtIGFsZXJ0bmFtZSA9IEluZm9JbmhpYml0b3IKICB0YXJnZXRfbWF0Y2hlcnM6CiAgLSBzZXZlcml0eSA9IGluZm8KLSB0YXJnZXRfbWF0Y2hlcnM6CiAgLSBhbGVydG5hbWUgPSBJbmZvSW5oaWJpdG9yCnJlY2VpdmVyczoKLSBuYW1lOiAibnVsbCIKcm91dGU6CiAgZ3JvdXBfYnk6CiAgLSBuYW1lc3BhY2UKICBncm91cF9pbnRlcnZhbDogNW0KICBncm91cF93YWl0OiAzMHMKICByZWNlaXZlcjogIm51bGwiCiAgcmVwZWF0X2ludGVydmFsOiAxMmgKICByb3V0ZXM6CiAgLSBtYXRjaGVyczoKICAgIC0gYWxlcnRuYW1lID0gIldhdGNoZG9nIgogICAgcmVjZWl2ZXI6ICJudWxsIgp0ZW1wbGF0ZXM6Ci0gL2V0Yy9hbGVydG1hbmFnZXIvY29uZmlnLyoudG1wbA=="
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/configmap-dashboard-provider.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.0.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
   name: kube-prometheus-stack-grafana-config-dashboards
   namespace: default
 data:
   provider.yaml: |-
     apiVersion: 1
     providers:
       - name: 'sidecarProvider'
@@ -195,21 +195,21 @@
           foldersFromFilesStructure: true
           path: /tmp/dashboards
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.0.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 data:
   
   plugins: grafana-piechart-panel,grafana-polystat-panel,grafana-clock-panel
   grafana.ini: |
     [analytics]
     check_for_updates = true
     [grafana_net]
@@ -418,103 +418,103 @@
       "https://raw.githubusercontent.com/spegel-org/spegel/refs/heads/main/charts/spegel/monitoring/grafana-dashboard.json" \
     > "/var/lib/grafana/dashboards/grafana-dashboards-system/spegel.json"
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-argocd
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.0.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
     dashboard-provider: grafana-dashboards-argocd
 data:
   {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-kubernetes
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.0.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
     dashboard-provider: grafana-dashboards-kubernetes
 data:
   {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-network
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.0.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
     dashboard-provider: grafana-dashboards-network
 data:
   {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-storage
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.0.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
     dashboard-provider: grafana-dashboards-storage
 data:
   {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-system
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.0.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
     dashboard-provider: grafana-dashboards-system
 data:
   {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/grafana/configmaps-datasources.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-datasource
   namespace: default
   labels:
     grafana_datasource: "1"
     app: kube-prometheus-stack-grafana
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 data:
   datasource.yaml: |-
     apiVersion: 1
     datasources:
     - access: proxy
       isDefault: true
       name: Prometheus
       type: prometheus
@@ -550,42 +550,42 @@
           - HTTP/1.1
           - HTTP/2.0
         prober: http
         timeout: 5s
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/clusterrole.yaml
 kind: ClusterRole
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.0.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
   name: kube-prometheus-stack-grafana-clusterrole
 rules:
   - apiGroups: [""] # "" indicates the core API group
     resources: ["configmaps", "secrets"]
     verbs: ["get", "watch", "list"]
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/role.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
   labels:    
-    helm.sh/chart: kube-state-metrics-6.1.0
+    helm.sh/chart: kube-state-metrics-6.3.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "2.16.0"
+    app.kubernetes.io/version: "2.17.0"
     release: kube-prometheus-stack
   name: kube-prometheus-stack-kube-state-metrics
 rules:
 
 - apiGroups: ["certificates.k8s.io"]
   resources:
   - certificatesigningrequests
   verbs: ["list", "watch"]
 
 - apiGroups: [""]
@@ -725,23 +725,23 @@
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/clusterrole.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
   name: kube-prometheus-stack-operator
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 rules:
 - apiGroups:
   - monitoring.coreos.com
   resources:
   - alertmanagers
@@ -752,20 +752,21 @@
   - prometheuses/finalizers
   - prometheuses/status
   - prometheusagents
   - prometheusagents/finalizers
   - prometheusagents/status
   - thanosrulers
   - thanosrulers/finalizers
   - thanosrulers/status
   - scrapeconfigs
   - servicemonitors
+  - servicemonitors/status
   - podmonitors
   - probes
   - prometheusrules
   verbs:
   - '*'
 - apiGroups:
   - apps
   resources:
   - statefulsets
   verbs:
@@ -835,23 +836,23 @@
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus/clusterrole.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
   name: kube-prometheus-stack-prometheus
   labels:
     app: kube-prometheus-stack-prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 rules:
 # These permissions (to examine all namespaces) are not in the kube-prometheus repo.
 # They're grabbed from https://github.com/prometheus/prometheus/blob/master/documentation/examples/rbac-setup.yml
 # kube-prometheus deliberately defaults to a more restrictive setup that is not appropriate for our general audience.
 - apiGroups: [""]
   resources:
   - nodes
   - nodes/metrics
@@ -870,68 +871,68 @@
   verbs: ["get", "list", "watch"]
 - nonResourceURLs: ["/metrics", "/metrics/cadvisor"]
   verbs: ["get"]
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/clusterrolebinding.yaml
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: kube-prometheus-stack-grafana-clusterrolebinding
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.0.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 subjects:
   - kind: ServiceAccount
     name: kube-prometheus-stack-grafana
     namespace: default
 roleRef:
   kind: ClusterRole
   name: kube-prometheus-stack-grafana-clusterrole
   apiGroup: rbac.authorization.k8s.io
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/clusterrolebinding.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   labels:    
-    helm.sh/chart: kube-state-metrics-6.1.0
+    helm.sh/chart: kube-state-metrics-6.3.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "2.16.0"
+    app.kubernetes.io/version: "2.17.0"
     release: kube-prometheus-stack
   name: kube-prometheus-stack-kube-state-metrics
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: kube-prometheus-stack-kube-state-metrics
 subjects:
 - kind: ServiceAccount
   name: kube-prometheus-stack-kube-state-metrics
   namespace: default
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/clusterrolebinding.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: kube-prometheus-stack-operator
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: kube-prometheus-stack-operator
 subjects:
@@ -942,125 +943,125 @@
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus/clusterrolebinding.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: kube-prometheus-stack-prometheus
   labels:
     app: kube-prometheus-stack-prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: kube-prometheus-stack-prometheus
 subjects:
   - kind: ServiceAccount
     name: kube-prometheus-stack-prometheus
     namespace: default
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/role.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.0.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 rules: []
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/rolebinding.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: RoleBinding
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.0.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: Role
   name: kube-prometheus-stack-grafana
 subjects:
 - kind: ServiceAccount
   name: kube-prometheus-stack-grafana
   namespace: default
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/service.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.0.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 spec:
   type: ClusterIP
   ports:
     - name: http-web
       port: 80
       protocol: TCP
-      targetPort: 3000
+      targetPort: grafana
   selector:
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/service.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-kube-state-metrics
   namespace: default
   labels:    
-    helm.sh/chart: kube-state-metrics-6.1.0
+    helm.sh/chart: kube-state-metrics-6.3.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "2.16.0"
+    app.kubernetes.io/version: "2.17.0"
     release: kube-prometheus-stack
   annotations:
 spec:
   type: "ClusterIP"
   ports:
-  - name: "http"
+  - name: http
     protocol: TCP
     port: 8080
-    targetPort: 8080
+    targetPort: http
   
   selector:    
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/prometheus-node-exporter/templates/service.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-prometheus-node-exporter
   namespace: default
   labels:
-    helm.sh/chart: prometheus-node-exporter-4.47.3
+    helm.sh/chart: prometheus-node-exporter-4.48.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: prometheus-node-exporter
     app.kubernetes.io/name: prometheus-node-exporter
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "1.9.1"
     release: kube-prometheus-stack
     jobLabel: node-exporter
   annotations:
     prometheus.io/scrape: "true"
@@ -1080,23 +1081,23 @@
 kind: Service
 metadata:
   name: kube-prometheus-stack-alertmanager
   namespace: default
   labels:
     app: kube-prometheus-stack-alertmanager
     self-monitor: "true"
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   ports:
   - name: http-web
     port: 9093
     targetPort: 9093
     protocol: TCP
   - name: reloader-web
     appProtocol: http
@@ -1112,23 +1113,23 @@
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-coredns
   labels:
     app: kube-prometheus-stack-coredns
     jobLabel: coredns
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
   namespace: kube-system
 spec:
   clusterIP: None
   ports:
     - name: http-metrics
       port: 9153
       protocol: TCP
       targetPort: 9153
@@ -1139,23 +1140,23 @@
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-kube-controller-manager
   labels:
     app: kube-prometheus-stack-kube-controller-manager
     jobLabel: kube-controller-manager
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
   namespace: kube-system
 spec:
   clusterIP: None
   ports:
     - name: http-metrics
       port: 10257
       protocol: TCP
       targetPort: 10257
@@ -1167,23 +1168,23 @@
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-kube-proxy
   labels:
     app: kube-prometheus-stack-kube-proxy
     jobLabel: kube-proxy
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
   namespace: kube-system
 spec:
   clusterIP: None
   ports:
     - name: http-metrics
       port: 10249
       protocol: TCP
       targetPort: 10249
@@ -1195,23 +1196,23 @@
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-kube-scheduler
   labels:
     app: kube-prometheus-stack-kube-scheduler
     jobLabel: kube-scheduler
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
   namespace: kube-system
 spec:
   clusterIP: None
   ports:
     - name: http-metrics
       port: 10259
       protocol: TCP
       targetPort: 10259
@@ -1222,23 +1223,23 @@
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/service.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-operator
   namespace: default
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 spec:
   ports:
   - name: https
     port: 443
     targetPort: https
@@ -1252,23 +1253,23 @@
 kind: Service
 metadata:
   name: kube-prometheus-stack-prometheus
   namespace: default
   labels:
     app: kube-prometheus-stack-prometheus
     self-monitor: "true"
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   ports:
   - name: http-web
     port: 9090
     targetPort: 9090
   - name: reloader-web
     appProtocol: http
     port: 8080
@@ -1327,21 +1328,21 @@
     app.kubernetes.io/name: prometheus-pushgateway
     app.kubernetes.io/instance: kube-prometheus-stack
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/prometheus-node-exporter/templates/daemonset.yaml
 apiVersion: apps/v1
 kind: DaemonSet
 metadata:
   name: kube-prometheus-stack-prometheus-node-exporter
   namespace: default
   labels:
-    helm.sh/chart: prometheus-node-exporter-4.47.3
+    helm.sh/chart: prometheus-node-exporter-4.48.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: prometheus-node-exporter
     app.kubernetes.io/name: prometheus-node-exporter
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "1.9.1"
     release: kube-prometheus-stack
 spec:
   selector:
     matchLabels:
@@ -1350,21 +1351,21 @@
   revisionHistoryLimit: 10
   updateStrategy:
     rollingUpdate:
       maxUnavailable: 1
     type: RollingUpdate
   template:
     metadata:
       annotations:
         cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
       labels:
-        helm.sh/chart: prometheus-node-exporter-4.47.3
+        helm.sh/chart: prometheus-node-exporter-4.48.0
         app.kubernetes.io/managed-by: Helm
         app.kubernetes.io/component: metrics
         app.kubernetes.io/part-of: prometheus-node-exporter
         app.kubernetes.io/name: prometheus-node-exporter
         app.kubernetes.io/instance: kube-prometheus-stack
         app.kubernetes.io/version: "1.9.1"
         release: kube-prometheus-stack
         jobLabel: node-exporter
     spec:
       automountServiceAccountToken: false
@@ -1460,43 +1461,43 @@
           hostPath:
             path: /
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/deployment.yaml
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.0.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 spec:
   replicas: 1
   revisionHistoryLimit: 10
   selector:
     matchLabels:
       app.kubernetes.io/name: grafana
       app.kubernetes.io/instance: kube-prometheus-stack
   strategy:
     type: RollingUpdate
   template:
     metadata:
       labels:
-        helm.sh/chart: grafana-9.3.1
+        helm.sh/chart: grafana-10.0.0
         app.kubernetes.io/name: grafana
         app.kubernetes.io/instance: kube-prometheus-stack
         app.kubernetes.io/version: "12.2.0"
       annotations:
         checksum/config: 897ab1f752c697c1ab43eee339bf6f8dc6322024f28d9b1fdb5358180b60b4ea
-        checksum/dashboards-json-config: 23eeea2cb683331d3da8550d49072918736ba33525738d8ded28f40ff01c7ea9
+        checksum/dashboards-json-config: a91d55622bea3f397ca09a45a6498966150222f0a1ad35dab5ab96eca934fefe
         checksum/sc-dashboard-provider-config: e3aca4961a8923a0814f12363c5e5e10511bb1deb6cd4e0cbe138aeee493354f
         checksum/secret: 7590fe10cbd3ae3e92a60625ff270e3e7d404731e1c73aaa2df1a78dab2c7768
         kubectl.kubernetes.io/default-container: grafana
     spec:
       
       serviceAccountName: kube-prometheus-stack-grafana
       automountServiceAccountToken: true
       shareProcessNamespace: false
       securityContext:
         fsGroup: 472
@@ -1519,21 +1520,21 @@
               type: RuntimeDefault
           volumeMounts:
             - name: config
               mountPath: "/etc/grafana/download_dashboards.sh"
               subPath: download_dashboards.sh
             - name: storage
               mountPath: "/var/lib/grafana"
       enableServiceLinks: true
       containers:
         - name: grafana-sc-dashboard
-          image: "quay.io/kiwigrid/k8s-sidecar:1.30.3"
+          image: "quay.io/kiwigrid/k8s-sidecar:1.30.10"
           imagePullPolicy: IfNotPresent
           env:
             - name: METHOD
               value: WATCH
             - name: LABEL
               value: "grafana_dashboard"
             - name: LABEL_VALUE
               value: "1"
             - name: FOLDER
               value: "/tmp/dashboards"
@@ -1561,21 +1562,21 @@
             allowPrivilegeEscalation: false
             capabilities:
               drop:
               - ALL
             seccompProfile:
               type: RuntimeDefault
           volumeMounts:
             - name: sc-dashboard-volume
               mountPath: "/tmp/dashboards"
         - name: grafana-sc-datasources
-          image: "quay.io/kiwigrid/k8s-sidecar:1.30.3"
+          image: "quay.io/kiwigrid/k8s-sidecar:1.30.10"
           imagePullPolicy: IfNotPresent
           env:
             - name: METHOD
               value: WATCH
             - name: LABEL
               value: "grafana_datasource"
             - name: LABEL_VALUE
               value: "1"
             - name: FOLDER
               value: "/etc/grafana/provisioning/datasources"
@@ -1713,70 +1714,70 @@
         - name: sc-datasources-volume
           emptyDir: {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/deployment.yaml
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: kube-prometheus-stack-kube-state-metrics
   namespace: default
   labels:    
-    helm.sh/chart: kube-state-metrics-6.1.0
+    helm.sh/chart: kube-state-metrics-6.3.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "2.16.0"
+    app.kubernetes.io/version: "2.17.0"
     release: kube-prometheus-stack
 spec:
   selector:
     matchLabels:      
       app.kubernetes.io/name: kube-state-metrics
       app.kubernetes.io/instance: kube-prometheus-stack
   replicas: 1
   strategy:
     type: RollingUpdate
   revisionHistoryLimit: 10
   template:
     metadata:
       labels:        
-        helm.sh/chart: kube-state-metrics-6.1.0
+        helm.sh/chart: kube-state-metrics-6.3.0
         app.kubernetes.io/managed-by: Helm
         app.kubernetes.io/component: metrics
         app.kubernetes.io/part-of: kube-state-metrics
         app.kubernetes.io/name: kube-state-metrics
         app.kubernetes.io/instance: kube-prometheus-stack
-        app.kubernetes.io/version: "2.16.0"
+        app.kubernetes.io/version: "2.17.0"
         release: kube-prometheus-stack
     spec:
       automountServiceAccountToken: true
       hostNetwork: false
       serviceAccountName: kube-prometheus-stack-kube-state-metrics
       securityContext:
         fsGroup: 65534
         runAsGroup: 65534
         runAsNonRoot: true
         runAsUser: 65534
         seccompProfile:
           type: RuntimeDefault
       dnsPolicy: ClusterFirst
       containers:
       - name: kube-state-metrics
         args:
         - --port=8080
         - --resources=certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,leases,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,volumeattachments
         imagePullPolicy: IfNotPresent
-        image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.16.0
+        image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.17.0
         ports:
         - containerPort: 8080
-          name: "http"
+          name: http
         livenessProbe:
           failureThreshold: 3
           httpGet:
             httpHeaders:
             path: /livez
             port: 8080
             scheme: HTTP
           initialDelaySeconds: 5
           periodSeconds: 10
           successThreshold: 1
@@ -1804,60 +1805,60 @@
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/deployment.yaml
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: kube-prometheus-stack-operator
   namespace: default
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 spec:
   replicas: 1
   revisionHistoryLimit: 10
   selector:
     matchLabels:
       app: kube-prometheus-stack-operator
       release: "kube-prometheus-stack"
   template:
     metadata:
       labels:
         
         app.kubernetes.io/managed-by: Helm
         app.kubernetes.io/instance: kube-prometheus-stack
-        app.kubernetes.io/version: "75.18.1"
+        app.kubernetes.io/version: "77.14.0"
         app.kubernetes.io/part-of: kube-prometheus-stack
-        chart: kube-prometheus-stack-75.18.1
+        chart: kube-prometheus-stack-77.14.0
         release: "kube-prometheus-stack"
         heritage: "Helm"
         app: kube-prometheus-stack-operator
         app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
         app.kubernetes.io/component: prometheus-operator
     spec:
       containers:
         - name: kube-prometheus-stack
           image: "quay.io/prometheus-operator/prometheus-operator:v0.86.1"
           imagePullPolicy: "IfNotPresent"
           args:
             - --kubelet-service=kube-system/kube-prometheus-stack-kubelet
             - --kubelet-endpoints=true
             - --kubelet-endpointslice=false
             - --localhost=127.0.0.1
-            - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.83.0
+            - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.85.0
             - --config-reloader-cpu-request=0
             - --config-reloader-cpu-limit=0
             - --config-reloader-memory-request=0
             - --config-reloader-memory-limit=0
             - --thanos-default-base-image=quay.io/thanos/thanos:v0.39.2
             - --secret-field-selector=type!=kubernetes.io/dockercfg,type!=kubernetes.io/service-account-token,type!=helm.sh/release.v1
             - --web.enable-tls=true
             - --web.cert-file=/cert/cert
             - --web.key-file=/cert/key
             - --web.listen-address=:10250
@@ -2063,21 +2064,21 @@
         - name: storage-volume
           emptyDir: {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/ingress.yaml
 apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.0.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
   annotations:
     cert-manager.io/cluster-issuer: "fredcorp-ca"
     cert-manager.io/common-name: "grafana.talos-genmachine.fredcorp.com"
     traefik.ingress.kubernetes.io/router.entrypoints: "websecure"
     traefik.ingress.kubernetes.io/service.scheme: "https"
 spec:
   ingressClassName: traefik
@@ -2106,23 +2107,23 @@
     cert-manager.io/common-name: prometheus.talos-genmachine.fredcorp.com
     traefik.ingress.kubernetes.io/router.entrypoints: websecure
     traefik.ingress.kubernetes.io/service.scheme: https
   name: kube-prometheus-stack-prometheus
   namespace: default
   labels:
     app: kube-prometheus-stack-prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   ingressClassName: traefik
   rules:
     - host: "prometheus.talos-genmachine.fredcorp.com"
       http:
         paths:
           - path: /
             pathType: Prefix
@@ -2210,23 +2211,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: Alertmanager
 metadata:
   name: kube-prometheus-stack-alertmanager
   namespace: default
   labels:
     app: kube-prometheus-stack-alertmanager
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   image: "quay.io/prometheus/alertmanager:v0.28.1"
   imagePullPolicy: "IfNotPresent"
   version: v0.28.1
   replicas: 1
   listenLocal: false
   serviceAccountName: kube-prometheus-stack-alertmanager
   automountServiceAccountToken: true
@@ -2263,23 +2264,23 @@
 kind: MutatingWebhookConfiguration
 metadata:
   name:  kube-prometheus-stack-admission
   annotations:
     
   labels:
     app: kube-prometheus-stack-admission
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator-webhook
 webhooks:
   - name: prometheusrulemutate.monitoring.coreos.com
     failurePolicy: Ignore
     rules:
       - apiGroups:
           - monitoring.coreos.com
@@ -2303,23 +2304,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: Prometheus
 metadata:
   name: kube-prometheus-stack-prometheus
   namespace: default
   labels:
     app: kube-prometheus-stack-prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   automountServiceAccountToken: true
   alerting:
     alertmanagers:
       - namespace: default
         name: kube-prometheus-stack-alertmanager
         port: http-web
         pathPrefix: "/"
@@ -2392,23 +2393,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-alertmanager.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: alertmanager.rules
     rules:
     - alert: AlertmanagerFailedReload
       annotations:
         description: Configuration has failed to load for {{ $labels.namespace }}/{{ $labels.pod}}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerfailedreload
@@ -2535,23 +2536,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-config-reloaders
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: config-reloaders
     rules:
     - alert: ConfigReloaderSidecarErrors
       annotations:
         description: 'Errors encountered while the {{$labels.pod}} config-reloader sidecar attempts to sync config in {{$labels.namespace}} namespace.
 
@@ -2567,23 +2568,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-general.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: general.rules
     rules:
     - alert: TargetDown
       annotations:
         description: '{{ printf "%.4g" $value }}% of the {{ $labels.job }}/{{ $labels.service }} targets in {{ $labels.namespace }} namespace are down.'
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/general/targetdown
@@ -2635,23 +2636,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-cpu-usage-seconds-tot
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_cpu_usage_seconds_total
     rules:
     - expr: |-
         sum by (cluster, namespace, pod, container) (
           rate(container_cpu_usage_seconds_total{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}[5m])
         ) * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (
@@ -2670,23 +2671,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-memory-cache
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_memory_cache
     rules:
     - expr: |-
         container_memory_cache{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
         * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
           max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2697,23 +2698,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-memory-rss
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_memory_rss
     rules:
     - expr: |-
         container_memory_rss{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
         * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
           max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2724,23 +2725,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-memory-swap
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_memory_swap
     rules:
     - expr: |-
         container_memory_swap{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
         * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
           max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2751,23 +2752,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-memory-working-set-by
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_memory_working_set_bytes
     rules:
     - expr: |-
         container_memory_working_set_bytes{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
         * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
           max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2778,23 +2779,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-resource
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_resource
     rules:
     - expr: |-
         kube_pod_container_resource_requests{resource="memory",job="kube-state-metrics"}  * on (namespace, pod, cluster)
         group_left() max by (namespace, pod, cluster) (
           (kube_pod_status_phase{phase=~"Pending|Running"} == 1)
@@ -2867,23 +2868,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.pod-owner
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.pod_owner
     rules:
     - expr: |-
         max by (cluster, namespace, workload, pod) (
           label_replace(
             label_replace(
@@ -3015,23 +3016,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-apiserver-availability.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - interval: 3m
     name: kube-apiserver-availability.rules
     rules:
     - expr: avg_over_time(code_verb:apiserver_request_total:increase1h[30d]) * 24 * 30
       record: code_verb:apiserver_request_total:increase30d
     - expr: sum by (cluster, code) (code_verb:apiserver_request_total:increase30d{verb=~"LIST|GET"})
@@ -3137,23 +3138,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-apiserver-burnrate.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-apiserver-burnrate.rules
     rules:
     - expr: |-
         (
           (
             # too slow
@@ -3459,23 +3460,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-apiserver-histogram.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-apiserver-histogram.rules
     rules:
     - expr: histogram_quantile(0.99, sum by (cluster, le, resource) (rate(apiserver_request_sli_duration_seconds_bucket{job="apiserver",verb=~"LIST|GET",subresource!~"proxy|attach|log|exec|portforward"}[5m]))) > 0
       labels:
         quantile: '0.99'
         verb: read
@@ -3490,23 +3491,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-apiserver-slos
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-apiserver-slos
     rules:
     - alert: KubeAPIErrorBudgetBurn
       annotations:
         description: The API server is burning too much error budget on cluster {{ $labels.cluster }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubeapierrorbudgetburn
@@ -3567,23 +3568,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-prometheus-general.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-prometheus-general.rules
     rules:
     - expr: count without(instance, pod, node) (up == 1)
       record: count:up1
     - expr: count without(instance, pod, node) (up == 0)
       record: count:up0
@@ -3592,23 +3593,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-prometheus-node-recording.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-prometheus-node-recording.rules
     rules:
     - expr: sum(rate(node_cpu_seconds_total{mode!="idle",mode!="iowait",mode!="steal"}[3m])) BY (instance)
       record: instance:node_cpu:rate:sum
     - expr: sum(rate(node_network_receive_bytes_total[3m])) BY (instance)
       record: instance:node_network_receive_bytes:rate:sum
@@ -3625,23 +3626,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-scheduler.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-scheduler.rules
     rules:
     - expr: histogram_quantile(0.99, sum(rate(scheduler_e2e_scheduling_duration_seconds_bucket{job="kube-scheduler"}[5m])) without(instance, pod))
       labels:
         quantile: '0.99'
       record: cluster_quantile:scheduler_e2e_scheduling_duration_seconds:histogram_quantile
@@ -3682,23 +3683,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-state-metrics
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-state-metrics
     rules:
     - alert: KubeStateMetricsListErrors
       annotations:
         description: kube-state-metrics is experiencing errors at an elevated rate in list operations. This is likely causing it to not be able to expose metrics about Kubernetes objects correctly or at all.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kube-state-metrics/kubestatemetricslisterrors
@@ -3751,23 +3752,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kubelet.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kubelet.rules
     rules:
     - expr: |-
         histogram_quantile(
           0.99,
           sum(rate(kubelet_pleg_relist_duration_seconds_bucket{job="kubelet", metrics_path="/metrics"}[5m])) by (cluster, instance, le)
@@ -3802,23 +3803,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kubernetes-apps
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kubernetes-apps
     rules:
     - alert: KubePodCrashLooping
       annotations:
         description: 'Pod {{ $labels.namespace }}/{{ $labels.pod }} ({{ $labels.container }}) is in waiting state (reason: "CrashLoopBackOff") on cluster {{ $labels.cluster }}.'
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubepodcrashlooping
@@ -4076,92 +4077,124 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kubernetes-resources
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "77.14.0"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-77.14.0
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kubernetes-resources
     rules:
     - alert: KubeCPUOvercommit
       annotations:
         description: Cluster {{ $labels.cluster }} has overcommitted CPU resource requests for Pods by {{ printf "%.2f" $value }} CPU shares and cannot tolerate node failure.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubecpuovercommit
         summary: Cluster has overcommitted CPU resource requests.
       expr: |-
-        (sum(namespace_cpu:kube_pod_container_resource_requests:sum{}) by (cluster) -
-        sum(kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"}) by (cluster) > 0
-        and
-        count by (cluster) (max by (cluster, node) (kube_node_role{job="kube-state-metrics", role="control-plane"})) < 3)
+        # Non-HA clusters.
+        (
+          (
+            sum by (cluster) (namespace_cpu:kube_pod_container_resource_requests:sum{})
+            -
+            sum by (cluster) (kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"}) > 0
+          )
+          and
+          count by (cluster) (max by (cluster, node) (kube_node_role{job="kube-state-metrics", role="control-plane"})) < 3
+        )
         or
-        (sum(namespace_cpu:kube_pod_container_resource_requests:sum{}) by (cluster) -
-        (sum(kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"}) by (cluster) -
-        max(kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"}) by (cluster)) > 0
-        and
-        (sum(kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"}) by (cluster) -
-        max(kube_node_status_allocatable{job="kube-state-metrics",resource="cpu"}) by (cluster)) > 0)
+        # HA clusters.
+        (
+          sum by (cluster) (namespace_cpu:kube_pod_container_resource_requests:sum{})
+          -
+          (
+            # Skip clusters with only one allocatable node.
+            (
+              sum by (cluster) (kube_node_statu
[Truncated: Diff output was too large]
 

@pipelines-github-app pipelines-github-app bot changed the title feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.0.0) feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.0.1) Aug 26, 2025
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-77-prometheus-genmachine branch 3 times, most recently from 11172c7 to dc813ad Compare August 27, 2025 03:05
@pipelines-github-app pipelines-github-app bot changed the title feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.0.1) feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.0.2) Aug 27, 2025
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-77-prometheus-genmachine branch 4 times, most recently from 1001eef to 93d16e5 Compare August 30, 2025 02:59
@pipelines-github-app pipelines-github-app bot changed the title feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.0.2) feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.1.0) Aug 30, 2025
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-77-prometheus-genmachine branch 2 times, most recently from 2e7b1f5 to afd284b Compare August 31, 2025 03:33
@pipelines-github-app pipelines-github-app bot changed the title feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.1.0) feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.1.1) Sep 1, 2025
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-77-prometheus-genmachine branch 2 times, most recently from f9bda8b to b5647c3 Compare September 1, 2025 03:47
@pipelines-github-app pipelines-github-app bot changed the title feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.1.1) feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.2.0) Sep 2, 2025
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-77-prometheus-genmachine branch 3 times, most recently from 00b628a to be4a98a Compare September 3, 2025 02:57
@pipelines-github-app pipelines-github-app bot changed the title feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.2.0) feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.3.0) Sep 3, 2025
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-77-prometheus-genmachine branch 2 times, most recently from 8d85800 to 2eedd62 Compare September 4, 2025 02:58
@pipelines-github-app pipelines-github-app bot changed the title feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.10.0) feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.11.0) Sep 24, 2025
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-77-prometheus-genmachine branch 3 times, most recently from 8dbf0e1 to 7cd9d97 Compare September 26, 2025 03:04
@pipelines-github-app pipelines-github-app bot changed the title feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.11.0) feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.11.1) Sep 26, 2025
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-77-prometheus-genmachine branch 3 times, most recently from bb0074b to d5a8d27 Compare September 28, 2025 03:11
@pipelines-github-app pipelines-github-app bot changed the title feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.11.1) feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.12.0) Sep 28, 2025
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-77-prometheus-genmachine branch 5 times, most recently from 0e22369 to 6e1544a Compare October 4, 2025 02:54
@pipelines-github-app pipelines-github-app bot changed the title feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.12.0) feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.13.0) Oct 4, 2025
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-77-prometheus-genmachine branch 5 times, most recently from 4998460 to cca391a Compare October 8, 2025 03:01
@pipelines-github-app pipelines-github-app bot changed the title feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.13.0) feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 77.14.0) Oct 8, 2025
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-77-prometheus-genmachine branch 6 times, most recently from aa3da1e to a8c5dfd Compare October 14, 2025 03:29
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-77-prometheus-genmachine branch from a8c5dfd to 07133ed Compare October 15, 2025 03:33
| datasource | package               | from    | to      |
| ---------- | --------------------- | ------- | ------- |
| helm       | kube-prometheus-stack | 75.18.1 | 77.14.0 |


Co-authored-by: renovate[bot] <[email protected]>
@pipelines-github-app pipelines-github-app bot force-pushed the renovate/major-77-prometheus-genmachine branch from 07133ed to a512443 Compare October 16, 2025 03:33
Labels

- app/prometheus — Changes made to Prometheus application
- env/genmachine — Changes made in the Talos cluster
- renovate/helm — Changes related to Helm Chart update
- type/major
