Conversation

pipelines-github-app[bot] (Contributor) commented Oct 10, 2025

This PR contains the following updates:

Package | Update | Change
kube-prometheus-stack (source) | major | 75.18.1 -> 78.2.1
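
A bump like this usually lands as a one-line change to a Helm dependency pin. A minimal sketch of the corresponding Chart.yaml entry; the wrapper chart name and version are assumptions, only the dependency itself comes from this PR:

    apiVersion: v2
    name: prometheus            # hypothetical wrapper chart
    version: 0.1.0              # hypothetical
    dependencies:
      - name: kube-prometheus-stack
        repository: https://prometheus-community.github.io/helm-charts
        version: 78.2.1         # bumped from 75.18.1 by this PR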

Warning: Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

Release Notes

prometheus-community/helm-charts (kube-prometheus-stack)

kube-prometheus-stack collects Kubernetes manifests, Grafana dashboards, and Prometheus rules, combined with documentation and scripts, to provide easy-to-operate, end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.

v78.2.1

Compare Source

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-78.2.0...kube-prometheus-stack-78.2.1

v78.2.0

Compare Source

What's Changed

  • [kube-prometheus-stack] explicitly allow a null Prometheus ruleSelector by @ba-work in #6178
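
A minimal values sketch for that fix, assuming the intent is that an explicit null now passes through to the Prometheus CR instead of being replaced by the chart's default selector:

    prometheus:
      prometheusSpec:
        # Explicit null: Prometheus selects no PrometheusRule objects at all
        # (assumed pass-through behavior; previously the chart substituted a default).
        ruleSelector: null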

Full Changelog: prometheus-community/helm-charts@prometheus-druid-exporter-1.2.0...kube-prometheus-stack-78.2.0

v78.1.0

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6226

Full Changelog: prometheus-community/helm-charts@prometheus-pgbouncer-exporter-0.9.0...kube-prometheus-stack-78.1.0

v78.0.0

Compare Source

Full Changelog: prometheus-community/helm-charts@prometheus-27.40.0...kube-prometheus-stack-78.0.0

v77.14.0

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6193

Full Changelog: prometheus-community/helm-charts@alertmanager-1.27.0...kube-prometheus-stack-77.14.0

v77.13.0

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6186

Full Changelog: prometheus-community/helm-charts@prometheus-smartctl-exporter-0.16.0...kube-prometheus-stack-77.13.0

v77.12.1

Compare Source

What's Changed

  • [kube-prometheus-stack] Support supplying jsonData on the default datasource by @ba-work in #6179
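
A values sketch for that feature. The exact key layout under the datasources sidecar is an assumption based on the PR title, so verify against the chart's values.yaml:

    grafana:
      sidecar:
        datasources:
          # Hypothetical: extra jsonData merged into the default Prometheus datasource
          jsonData:
            httpMethod: POST
            timeInterval: 30s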

Full Changelog: prometheus-community/helm-charts@prometheus-postgres-exporter-7.3.0...kube-prometheus-stack-77.12.1

v77.12.0

Compare Source

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-77.11.1...kube-prometheus-stack-77.12.0

v77.11.1

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6165

Full Changelog: prometheus-community/helm-charts@prometheus-postgres-exporter-7.2.0...kube-prometheus-stack-77.11.1

v77.11.0

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6158

Full Changelog: prometheus-community/helm-charts@prometheus-stackdriver-exporter-4.11.0...kube-prometheus-stack-77.11.0

v77.10.0

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6145

Full Changelog: prometheus-community/helm-charts@prometheus-27.38.0...kube-prometheus-stack-77.10.0

v77.9.1

Compare Source

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-77.9.0...kube-prometheus-stack-77.9.1

v77.9.0

Compare Source

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-77.8.0...kube-prometheus-stack-77.9.0

v77.8.0

Compare Source

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-77.7.0...kube-prometheus-stack-77.8.0

v77.7.0

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6137

Full Changelog: prometheus-community/helm-charts@prometheus-kafka-exporter-2.17.0...kube-prometheus-stack-77.7.0

v77.6.2

Compare Source

Full Changelog: prometheus-community/helm-charts@prom-label-proxy-0.15.1...kube-prometheus-stack-77.6.2

v77.6.1

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6123

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-77.6.0...kube-prometheus-stack-77.6.1

v77.6.0

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6122

Full Changelog: prometheus-community/helm-charts@prometheus-conntrack-stats-exporter-0.5.27...kube-prometheus-stack-77.6.0

v77.5.0

Compare Source

What's Changed

  • [kube-prometheus-stack] Added attachMetadata option to additionalServiceMonitors and additionalPodMonitors by @christophemorio in #6106
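
attachMetadata is the Prometheus Operator field that attaches node metadata to discovered targets. A minimal sketch of using it through the chart's additionalServiceMonitors; the monitor name, selector, and port are placeholders:

    prometheus:
      additionalServiceMonitors:
        - name: example-app              # placeholder
          attachMetadata:
            node: true                   # attach node labels to discovered targets
          selector:
            matchLabels:
              app: example-app           # placeholder
          endpoints:
            - port: metrics              # placeholder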

Full Changelog: prometheus-community/helm-charts@prometheus-27.34.0...kube-prometheus-stack-77.5.0

v77.4.0

Compare Source

What's Changed

  • [kube-prometheus-stack] Update Helm release kube-state-metrics to v6.3.0 by @renovate[bot] in #6111

Full Changelog: prometheus-community/helm-charts@kube-state-metrics-6.3.0...kube-prometheus-stack-77.4.0

v77.3.0

Compare Source

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-77.2.1...kube-prometheus-stack-77.3.0

v77.2.1

Compare Source

What's Changed

  • [kube-prometheus-stack] set expected jobLabel for node-exporter PodMonitor by @z0rc in #6101
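
For reference, jobLabel on a PodMonitor names the pod label whose value becomes the Prometheus job name, and the node-exporter pods in the manifest diff below carry jobLabel: node-exporter. A minimal sketch of the relevant PodMonitor fragment (metadata trimmed; field layout per the Prometheus Operator API):

    apiVersion: monitoring.coreos.com/v1
    kind: PodMonitor
    metadata:
      name: node-exporter                # placeholder
    spec:
      # Take the job name from the value of each pod's "jobLabel" label,
      # i.e. "node-exporter" for the pods rendered by this chart.
      jobLabel: jobLabel
      selector:
        matchLabels:
          app.kubernetes.io/name: prometheus-node-exporter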

Full Changelog: prometheus-community/helm-charts@prometheus-27.33.0...kube-prometheus-stack-77.2.1

v77.2.0

Compare Source

What's Changed

  • [kube-prometheus-stack] Update Helm release kube-state-metrics to v6.2.0 by @renovate[bot] in #6104

Full Changelog: prometheus-community/helm-charts@kube-state-metrics-6.2.0...kube-prometheus-stack-77.2.0

v77.1.3

Compare Source

What's Changed

  • [kube-prometheus-stack] support encoded string for thanos sidecar secret by @trouaux in #5999
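
The Thanos sidecar's object storage config is supplied under prometheus.prometheusSpec.thanos.objectStorageConfig. A sketch of the structured form only; the bucket details are placeholders, and the pre-encoded-string variant that PR adds is not shown here, so consult the chart for its exact key:

    prometheus:
      prometheusSpec:
        thanos:
          objectStorageConfig:
            secret:                      # assumed: rendered into a Secret by the chart
              type: S3
              config:
                bucket: example-bucket       # placeholder
                endpoint: s3.example.com     # placeholder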

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-77.1.2...kube-prometheus-stack-77.1.3

v77.1.2

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6100

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-77.1.1...kube-prometheus-stack-77.1.2

v77.1.1

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6099

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-77.1.0...kube-prometheus-stack-77.1.1

v77.1.0

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6094

Full Changelog: prometheus-community/helm-charts@prometheus-mongodb-exporter-3.13.0...kube-prometheus-stack-77.1.0

v77.0.2

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6088

Full Changelog: prometheus-community/helm-charts@kube-state-metrics-6.1.5...kube-prometheus-stack-77.0.2

v77.0.1

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6085

Full Changelog: prometheus-community/helm-charts@prometheus-blackbox-exporter-11.3.1...kube-prometheus-stack-77.0.1

v77.0.0

Compare Source

Full Changelog: prometheus-community/helm-charts@prometheus-redis-exporter-6.16.0...kube-prometheus-stack-77.0.0

v76.5.1

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6080

Full Changelog: prometheus-community/helm-charts@prometheus-ipmi-exporter-0.6.3...kube-prometheus-stack-76.5.1

v76.5.0

Compare Source

Full Changelog: prometheus-community/helm-charts@prometheus-27.32.0...kube-prometheus-stack-76.5.0

v76.4.1

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6070

Full Changelog: prometheus-community/helm-charts@alertmanager-1.25.0...kube-prometheus-stack-76.4.1

v76.4.0

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6059

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-76.3.1...kube-prometheus-stack-76.4.0

v76.3.1

Compare Source

Full Changelog: prometheus-community/helm-charts@alertmanager-snmp-notifier-2.1.0...kube-prometheus-stack-76.3.1

v76.3.0

Compare Source

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-76.2.2...kube-prometheus-stack-76.3.0

v76.2.2

Compare Source

What's Changed

  • [CI] Update actions/create-github-app-token action to v2.1.1 by @renovate[bot] in #6043
  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6046

Full Changelog: prometheus-community/helm-charts@prometheus-adapter-5.1.0...kube-prometheus-stack-76.2.2

v76.2.1

Compare Source

Full Changelog: prometheus-community/helm-charts@prometheus-27.30.0...kube-prometheus-stack-76.2.1

v76.2.0

Compare Source

Full Changelog: prometheus-community/helm-charts@prometheus-adapter-5.0.0...kube-prometheus-stack-76.2.0

v76.1.0

Compare Source

What's Changed

  • [kube-prometheus-stack] Update kube-prometheus-stack dependency non-major updates by @renovate[bot] in #6032

Full Changelog: prometheus-community/helm-charts@prometheus-operator-admission-webhook-0.29.3...kube-prometheus-stack-76.1.0

v76.0.0

Compare Source

Full Changelog: prometheus-community/helm-charts@kube-state-metrics-6.1.4...kube-prometheus-stack-76.0.0


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

pipelines-github-app[bot] added the labels app/prometheus (Changes made to Prometheus application), env/genmachine (Changes made in the Talos cluster), renovate/helm (Changes related to Helm Chart update), and type/major on Oct 10, 2025.
pipelines-github-app[bot] (Contributor, Author) commented Oct 10, 2025

--- main/kube-prometheus-stack_gitops_manifests_prometheus_genmachine_manifest_main.yaml	2025-10-16 03:33:51.672840468 +0000
+++ pr/kube-prometheus-stack_gitops_manifests_prometheus_genmachine_manifest_pr.yaml	2025-10-16 03:33:44.325860139 +0000
@@ -1,49 +1,49 @@
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 automountServiceAccountToken: true
 metadata:
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.1.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
   name: kube-prometheus-stack-grafana
   namespace: default
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 automountServiceAccountToken: true
 metadata:
   labels:    
-    helm.sh/chart: kube-state-metrics-6.1.0
+    helm.sh/chart: kube-state-metrics-6.3.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "2.16.0"
+    app.kubernetes.io/version: "2.17.0"
     release: kube-prometheus-stack
   name: kube-prometheus-stack-kube-state-metrics
   namespace: default
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/prometheus-node-exporter/templates/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: kube-prometheus-stack-prometheus-node-exporter
   namespace: default
   labels:
-    helm.sh/chart: prometheus-node-exporter-4.47.3
+    helm.sh/chart: prometheus-node-exporter-4.48.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: prometheus-node-exporter
     app.kubernetes.io/name: prometheus-node-exporter
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "1.9.1"
     release: kube-prometheus-stack
 automountServiceAccountToken: false
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/alertmanager/serviceaccount.yaml
@@ -52,63 +52,63 @@
 metadata:
   name: kube-prometheus-stack-alertmanager
   namespace: default
   labels:
     app: kube-prometheus-stack-alertmanager
     app.kubernetes.io/name: kube-prometheus-stack-alertmanager
     app.kubernetes.io/component: alertmanager
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 automountServiceAccountToken: true
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: kube-prometheus-stack-operator
   namespace: default
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 automountServiceAccountToken: true
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: kube-prometheus-stack-prometheus
   namespace: default
   labels:
     app: kube-prometheus-stack-prometheus
     app.kubernetes.io/name: kube-prometheus-stack-prometheus
     app.kubernetes.io/component: prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 automountServiceAccountToken: true
 ---
 # Source: kube-prometheus-stack/charts/prometheus-blackbox-exporter/templates/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: kube-prometheus-stack-prometheus-blackbox-exporter
   namespace: default
@@ -133,21 +133,21 @@
   namespace: default
 automountServiceAccountToken: true
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/secret.yaml
 apiVersion: v1
 kind: Secret
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.1.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 type: Opaque
 data:
   
   admin-user: "YWRtaW4="
   admin-password: "cGFzc3dvcmQ="
   ldap-toml: ""
 ---
@@ -155,34 +155,34 @@
 apiVersion: v1
 kind: Secret
 metadata:
   name: alertmanager-kube-prometheus-stack-alertmanager
   namespace: default
   labels:
     app: kube-prometheus-stack-alertmanager
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 data:
   alertmanager.yaml: "Z2xvYmFsOgogIHJlc29sdmVfdGltZW91dDogNW0KaW5oaWJpdF9ydWxlczoKLSBlcXVhbDoKICAtIG5hbWVzcGFjZQogIC0gYWxlcnRuYW1lCiAgc291cmNlX21hdGNoZXJzOgogIC0gc2V2ZXJpdHkgPSBjcml0aWNhbAogIHRhcmdldF9tYXRjaGVyczoKICAtIHNldmVyaXR5ID1+IHdhcm5pbmd8aW5mbwotIGVxdWFsOgogIC0gbmFtZXNwYWNlCiAgLSBhbGVydG5hbWUKICBzb3VyY2VfbWF0Y2hlcnM6CiAgLSBzZXZlcml0eSA9IHdhcm5pbmcKICB0YXJnZXRfbWF0Y2hlcnM6CiAgLSBzZXZlcml0eSA9IGluZm8KLSBlcXVhbDoKICAtIG5hbWVzcGFjZQogIHNvdXJjZV9tYXRjaGVyczoKICAtIGFsZXJ0bmFtZSA9IEluZm9JbmhpYml0b3IKICB0YXJnZXRfbWF0Y2hlcnM6CiAgLSBzZXZlcml0eSA9IGluZm8KLSB0YXJnZXRfbWF0Y2hlcnM6CiAgLSBhbGVydG5hbWUgPSBJbmZvSW5oaWJpdG9yCnJlY2VpdmVyczoKLSBuYW1lOiAibnVsbCIKcm91dGU6CiAgZ3JvdXBfYnk6CiAgLSBuYW1lc3BhY2UKICBncm91cF9pbnRlcnZhbDogNW0KICBncm91cF93YWl0OiAzMHMKICByZWNlaXZlcjogIm51bGwiCiAgcmVwZWF0X2ludGVydmFsOiAxMmgKICByb3V0ZXM6CiAgLSBtYXRjaGVyczoKICAgIC0gYWxlcnRuYW1lID0gIldhdGNoZG9nIgogICAgcmVjZWl2ZXI6ICJudWxsIgp0ZW1wbGF0ZXM6Ci0gL2V0Yy9hbGVydG1hbmFnZXIvY29uZmlnLyoudG1wbA=="
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/configmap-dashboard-provider.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.1.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
   name: kube-prometheus-stack-grafana-config-dashboards
   namespace: default
 data:
   provider.yaml: |-
     apiVersion: 1
     providers:
       - name: 'sidecarProvider'
@@ -195,21 +195,21 @@
           foldersFromFilesStructure: true
           path: /tmp/dashboards
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.1.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 data:
   
   plugins: grafana-piechart-panel,grafana-polystat-panel,grafana-clock-panel
   grafana.ini: |
     [analytics]
     check_for_updates = true
     [grafana_net]
@@ -418,103 +418,103 @@
       "https://raw.githubusercontent.com/spegel-org/spegel/refs/heads/main/charts/spegel/monitoring/grafana-dashboard.json" \
     > "/var/lib/grafana/dashboards/grafana-dashboards-system/spegel.json"
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-argocd
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.1.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
     dashboard-provider: grafana-dashboards-argocd
 data:
   {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-kubernetes
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.1.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
     dashboard-provider: grafana-dashboards-kubernetes
 data:
   {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-network
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.1.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
     dashboard-provider: grafana-dashboards-network
 data:
   {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-storage
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.1.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
     dashboard-provider: grafana-dashboards-storage
 data:
   {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-system
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.1.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
     dashboard-provider: grafana-dashboards-system
 data:
   {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/grafana/configmaps-datasources.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-datasource
   namespace: default
   labels:
     grafana_datasource: "1"
     app: kube-prometheus-stack-grafana
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 data:
   datasource.yaml: |-
     apiVersion: 1
     datasources:
     - access: proxy
       isDefault: true
       name: Prometheus
       type: prometheus
@@ -550,42 +550,42 @@
           - HTTP/1.1
           - HTTP/2.0
         prober: http
         timeout: 5s
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/clusterrole.yaml
 kind: ClusterRole
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.1.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
   name: kube-prometheus-stack-grafana-clusterrole
 rules:
   - apiGroups: [""] # "" indicates the core API group
     resources: ["configmaps", "secrets"]
     verbs: ["get", "watch", "list"]
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/role.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
   labels:    
-    helm.sh/chart: kube-state-metrics-6.1.0
+    helm.sh/chart: kube-state-metrics-6.3.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "2.16.0"
+    app.kubernetes.io/version: "2.17.0"
     release: kube-prometheus-stack
   name: kube-prometheus-stack-kube-state-metrics
 rules:
 
 - apiGroups: ["certificates.k8s.io"]
   resources:
   - certificatesigningrequests
   verbs: ["list", "watch"]
 
 - apiGroups: [""]
@@ -725,23 +725,23 @@
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/clusterrole.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
   name: kube-prometheus-stack-operator
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 rules:
 - apiGroups:
   - monitoring.coreos.com
   resources:
   - alertmanagers
@@ -751,23 +751,27 @@
   - prometheuses
   - prometheuses/finalizers
   - prometheuses/status
   - prometheusagents
   - prometheusagents/finalizers
   - prometheusagents/status
   - thanosrulers
   - thanosrulers/finalizers
   - thanosrulers/status
   - scrapeconfigs
+  - scrapeconfigs/status
   - servicemonitors
+  - servicemonitors/status
   - podmonitors
+  - podmonitors/status
   - probes
+  - probes/status
   - prometheusrules
   verbs:
   - '*'
 - apiGroups:
   - apps
   resources:
   - statefulsets
   verbs:
   - '*'
 - apiGroups:
@@ -835,23 +839,23 @@
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus/clusterrole.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
   name: kube-prometheus-stack-prometheus
   labels:
     app: kube-prometheus-stack-prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 rules:
 # These permissions (to examine all namespaces) are not in the kube-prometheus repo.
 # They're grabbed from https://github.com/prometheus/prometheus/blob/master/documentation/examples/rbac-setup.yml
 # kube-prometheus deliberately defaults to a more restrictive setup that is not appropriate for our general audience.
 - apiGroups: [""]
   resources:
   - nodes
   - nodes/metrics
@@ -870,68 +874,68 @@
   verbs: ["get", "list", "watch"]
 - nonResourceURLs: ["/metrics", "/metrics/cadvisor"]
   verbs: ["get"]
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/clusterrolebinding.yaml
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: kube-prometheus-stack-grafana-clusterrolebinding
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.1.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 subjects:
   - kind: ServiceAccount
     name: kube-prometheus-stack-grafana
     namespace: default
 roleRef:
   kind: ClusterRole
   name: kube-prometheus-stack-grafana-clusterrole
   apiGroup: rbac.authorization.k8s.io
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/clusterrolebinding.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   labels:    
-    helm.sh/chart: kube-state-metrics-6.1.0
+    helm.sh/chart: kube-state-metrics-6.3.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "2.16.0"
+    app.kubernetes.io/version: "2.17.0"
     release: kube-prometheus-stack
   name: kube-prometheus-stack-kube-state-metrics
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: kube-prometheus-stack-kube-state-metrics
 subjects:
 - kind: ServiceAccount
   name: kube-prometheus-stack-kube-state-metrics
   namespace: default
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/clusterrolebinding.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: kube-prometheus-stack-operator
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: kube-prometheus-stack-operator
 subjects:
@@ -942,125 +946,125 @@
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus/clusterrolebinding.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: kube-prometheus-stack-prometheus
   labels:
     app: kube-prometheus-stack-prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: kube-prometheus-stack-prometheus
 subjects:
   - kind: ServiceAccount
     name: kube-prometheus-stack-prometheus
     namespace: default
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/role.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.1.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 rules: []
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/rolebinding.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: RoleBinding
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.1.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: Role
   name: kube-prometheus-stack-grafana
 subjects:
 - kind: ServiceAccount
   name: kube-prometheus-stack-grafana
   namespace: default
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/service.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.1.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 spec:
   type: ClusterIP
   ports:
     - name: http-web
       port: 80
       protocol: TCP
-      targetPort: 3000
+      targetPort: grafana
   selector:
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/service.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-kube-state-metrics
   namespace: default
   labels:    
-    helm.sh/chart: kube-state-metrics-6.1.0
+    helm.sh/chart: kube-state-metrics-6.3.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "2.16.0"
+    app.kubernetes.io/version: "2.17.0"
     release: kube-prometheus-stack
   annotations:
 spec:
   type: "ClusterIP"
   ports:
-  - name: "http"
+  - name: http
     protocol: TCP
     port: 8080
-    targetPort: 8080
+    targetPort: http
   
   selector:    
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/prometheus-node-exporter/templates/service.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-prometheus-node-exporter
   namespace: default
   labels:
-    helm.sh/chart: prometheus-node-exporter-4.47.3
+    helm.sh/chart: prometheus-node-exporter-4.48.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: prometheus-node-exporter
     app.kubernetes.io/name: prometheus-node-exporter
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "1.9.1"
     release: kube-prometheus-stack
     jobLabel: node-exporter
   annotations:
     prometheus.io/scrape: "true"
@@ -1080,23 +1084,23 @@
 kind: Service
 metadata:
   name: kube-prometheus-stack-alertmanager
   namespace: default
   labels:
     app: kube-prometheus-stack-alertmanager
     self-monitor: "true"
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   ports:
   - name: http-web
     port: 9093
     targetPort: 9093
     protocol: TCP
   - name: reloader-web
     appProtocol: http
@@ -1112,23 +1116,23 @@
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-coredns
   labels:
     app: kube-prometheus-stack-coredns
     jobLabel: coredns
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
   namespace: kube-system
 spec:
   clusterIP: None
   ports:
     - name: http-metrics
       port: 9153
       protocol: TCP
       targetPort: 9153
@@ -1139,23 +1143,23 @@
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-kube-controller-manager
   labels:
     app: kube-prometheus-stack-kube-controller-manager
     jobLabel: kube-controller-manager
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
   namespace: kube-system
 spec:
   clusterIP: None
   ports:
     - name: http-metrics
       port: 10257
       protocol: TCP
       targetPort: 10257
@@ -1167,23 +1171,23 @@
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-kube-proxy
   labels:
     app: kube-prometheus-stack-kube-proxy
     jobLabel: kube-proxy
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
   namespace: kube-system
 spec:
   clusterIP: None
   ports:
     - name: http-metrics
       port: 10249
       protocol: TCP
       targetPort: 10249
@@ -1195,23 +1199,23 @@
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-kube-scheduler
   labels:
     app: kube-prometheus-stack-kube-scheduler
     jobLabel: kube-scheduler
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
   namespace: kube-system
 spec:
   clusterIP: None
   ports:
     - name: http-metrics
       port: 10259
       protocol: TCP
       targetPort: 10259
@@ -1222,23 +1226,23 @@
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/service.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-operator
   namespace: default
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 spec:
   ports:
   - name: https
     port: 443
     targetPort: https
@@ -1252,23 +1256,23 @@
 kind: Service
 metadata:
   name: kube-prometheus-stack-prometheus
   namespace: default
   labels:
     app: kube-prometheus-stack-prometheus
     self-monitor: "true"
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   ports:
   - name: http-web
     port: 9090
     targetPort: 9090
   - name: reloader-web
     appProtocol: http
     port: 8080
@@ -1327,21 +1331,21 @@
     app.kubernetes.io/name: prometheus-pushgateway
     app.kubernetes.io/instance: kube-prometheus-stack
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/prometheus-node-exporter/templates/daemonset.yaml
 apiVersion: apps/v1
 kind: DaemonSet
 metadata:
   name: kube-prometheus-stack-prometheus-node-exporter
   namespace: default
   labels:
-    helm.sh/chart: prometheus-node-exporter-4.47.3
+    helm.sh/chart: prometheus-node-exporter-4.48.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: prometheus-node-exporter
     app.kubernetes.io/name: prometheus-node-exporter
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "1.9.1"
     release: kube-prometheus-stack
 spec:
   selector:
     matchLabels:
@@ -1350,21 +1354,21 @@
   revisionHistoryLimit: 10
   updateStrategy:
     rollingUpdate:
       maxUnavailable: 1
     type: RollingUpdate
   template:
     metadata:
       annotations:
         cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
       labels:
-        helm.sh/chart: prometheus-node-exporter-4.47.3
+        helm.sh/chart: prometheus-node-exporter-4.48.0
         app.kubernetes.io/managed-by: Helm
         app.kubernetes.io/component: metrics
         app.kubernetes.io/part-of: prometheus-node-exporter
         app.kubernetes.io/name: prometheus-node-exporter
         app.kubernetes.io/instance: kube-prometheus-stack
         app.kubernetes.io/version: "1.9.1"
         release: kube-prometheus-stack
         jobLabel: node-exporter
     spec:
       automountServiceAccountToken: false
@@ -1460,43 +1464,43 @@
           hostPath:
             path: /
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/deployment.yaml
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.1.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
 spec:
   replicas: 1
   revisionHistoryLimit: 10
   selector:
     matchLabels:
       app.kubernetes.io/name: grafana
       app.kubernetes.io/instance: kube-prometheus-stack
   strategy:
     type: RollingUpdate
   template:
     metadata:
       labels:
-        helm.sh/chart: grafana-9.3.1
+        helm.sh/chart: grafana-10.1.0
         app.kubernetes.io/name: grafana
         app.kubernetes.io/instance: kube-prometheus-stack
         app.kubernetes.io/version: "12.2.0"
       annotations:
         checksum/config: 897ab1f752c697c1ab43eee339bf6f8dc6322024f28d9b1fdb5358180b60b4ea
-        checksum/dashboards-json-config: 23eeea2cb683331d3da8550d49072918736ba33525738d8ded28f40ff01c7ea9
+        checksum/dashboards-json-config: 025a79db5888d323ee72e82fecc8ec5551db9167e93c134578c3fdc06ebf5332
         checksum/sc-dashboard-provider-config: e3aca4961a8923a0814f12363c5e5e10511bb1deb6cd4e0cbe138aeee493354f
         checksum/secret: 7590fe10cbd3ae3e92a60625ff270e3e7d404731e1c73aaa2df1a78dab2c7768
         kubectl.kubernetes.io/default-container: grafana
     spec:
       
       serviceAccountName: kube-prometheus-stack-grafana
       automountServiceAccountToken: true
       shareProcessNamespace: false
       securityContext:
         fsGroup: 472
@@ -1519,21 +1523,21 @@
               type: RuntimeDefault
           volumeMounts:
             - name: config
               mountPath: "/etc/grafana/download_dashboards.sh"
               subPath: download_dashboards.sh
             - name: storage
               mountPath: "/var/lib/grafana"
       enableServiceLinks: true
       containers:
         - name: grafana-sc-dashboard
-          image: "quay.io/kiwigrid/k8s-sidecar:1.30.3"
+          image: "quay.io/kiwigrid/k8s-sidecar:1.30.10"
           imagePullPolicy: IfNotPresent
           env:
             - name: METHOD
               value: WATCH
             - name: LABEL
               value: "grafana_dashboard"
             - name: LABEL_VALUE
               value: "1"
             - name: FOLDER
               value: "/tmp/dashboards"
@@ -1561,21 +1565,21 @@
             allowPrivilegeEscalation: false
             capabilities:
               drop:
               - ALL
             seccompProfile:
               type: RuntimeDefault
           volumeMounts:
             - name: sc-dashboard-volume
               mountPath: "/tmp/dashboards"
         - name: grafana-sc-datasources
-          image: "quay.io/kiwigrid/k8s-sidecar:1.30.3"
+          image: "quay.io/kiwigrid/k8s-sidecar:1.30.10"
           imagePullPolicy: IfNotPresent
           env:
             - name: METHOD
               value: WATCH
             - name: LABEL
               value: "grafana_datasource"
             - name: LABEL_VALUE
               value: "1"
             - name: FOLDER
               value: "/etc/grafana/provisioning/datasources"
@@ -1713,70 +1717,70 @@
         - name: sc-datasources-volume
           emptyDir: {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/deployment.yaml
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: kube-prometheus-stack-kube-state-metrics
   namespace: default
   labels:    
-    helm.sh/chart: kube-state-metrics-6.1.0
+    helm.sh/chart: kube-state-metrics-6.3.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "2.16.0"
+    app.kubernetes.io/version: "2.17.0"
     release: kube-prometheus-stack
 spec:
   selector:
     matchLabels:      
       app.kubernetes.io/name: kube-state-metrics
       app.kubernetes.io/instance: kube-prometheus-stack
   replicas: 1
   strategy:
     type: RollingUpdate
   revisionHistoryLimit: 10
   template:
     metadata:
       labels:        
-        helm.sh/chart: kube-state-metrics-6.1.0
+        helm.sh/chart: kube-state-metrics-6.3.0
         app.kubernetes.io/managed-by: Helm
         app.kubernetes.io/component: metrics
         app.kubernetes.io/part-of: kube-state-metrics
         app.kubernetes.io/name: kube-state-metrics
         app.kubernetes.io/instance: kube-prometheus-stack
-        app.kubernetes.io/version: "2.16.0"
+        app.kubernetes.io/version: "2.17.0"
         release: kube-prometheus-stack
     spec:
       automountServiceAccountToken: true
       hostNetwork: false
       serviceAccountName: kube-prometheus-stack-kube-state-metrics
       securityContext:
         fsGroup: 65534
         runAsGroup: 65534
         runAsNonRoot: true
         runAsUser: 65534
         seccompProfile:
           type: RuntimeDefault
       dnsPolicy: ClusterFirst
       containers:
       - name: kube-state-metrics
         args:
         - --port=8080
         - --resources=certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,leases,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,volumeattachments
         imagePullPolicy: IfNotPresent
-        image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.16.0
+        image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.17.0
         ports:
         - containerPort: 8080
-          name: "http"
+          name: http
         livenessProbe:
           failureThreshold: 3
           httpGet:
             httpHeaders:
             path: /livez
             port: 8080
             scheme: HTTP
           initialDelaySeconds: 5
           periodSeconds: 10
           successThreshold: 1
@@ -1804,60 +1808,60 @@
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/deployment.yaml
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: kube-prometheus-stack-operator
   namespace: default
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 spec:
   replicas: 1
   revisionHistoryLimit: 10
   selector:
     matchLabels:
       app: kube-prometheus-stack-operator
       release: "kube-prometheus-stack"
   template:
     metadata:
       labels:
         
         app.kubernetes.io/managed-by: Helm
         app.kubernetes.io/instance: kube-prometheus-stack
-        app.kubernetes.io/version: "75.18.1"
+        app.kubernetes.io/version: "78.2.1"
         app.kubernetes.io/part-of: kube-prometheus-stack
-        chart: kube-prometheus-stack-75.18.1
+        chart: kube-prometheus-stack-78.2.1
         release: "kube-prometheus-stack"
         heritage: "Helm"
         app: kube-prometheus-stack-operator
         app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
         app.kubernetes.io/component: prometheus-operator
     spec:
       containers:
         - name: kube-prometheus-stack
           image: "quay.io/prometheus-operator/prometheus-operator:v0.86.1"
           imagePullPolicy: "IfNotPresent"
           args:
             - --kubelet-service=kube-system/kube-prometheus-stack-kubelet
             - --kubelet-endpoints=true
             - --kubelet-endpointslice=false
             - --localhost=127.0.0.1
-            - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.83.0
+            - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.86.0
             - --config-reloader-cpu-request=0
             - --config-reloader-cpu-limit=0
             - --config-reloader-memory-request=0
             - --config-reloader-memory-limit=0
             - --thanos-default-base-image=quay.io/thanos/thanos:v0.39.2
             - --secret-field-selector=type!=kubernetes.io/dockercfg,type!=kubernetes.io/service-account-token,type!=helm.sh/release.v1
             - --web.enable-tls=true
             - --web.cert-file=/cert/cert
             - --web.key-file=/cert/key
             - --web.listen-address=:10250
@@ -2063,21 +2067,21 @@
         - name: storage-volume
           emptyDir: {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/ingress.yaml
 apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: default
   labels:
-    helm.sh/chart: grafana-9.3.1
+    helm.sh/chart: grafana-10.1.0
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
     app.kubernetes.io/version: "12.2.0"
   annotations:
     cert-manager.io/cluster-issuer: "fredcorp-ca"
     cert-manager.io/common-name: "grafana.talos-genmachine.fredcorp.com"
     traefik.ingress.kubernetes.io/router.entrypoints: "websecure"
     traefik.ingress.kubernetes.io/service.scheme: "https"
 spec:
   ingressClassName: traefik
@@ -2106,23 +2110,23 @@
     cert-manager.io/common-name: prometheus.talos-genmachine.fredcorp.com
     traefik.ingress.kubernetes.io/router.entrypoints: websecure
     traefik.ingress.kubernetes.io/service.scheme: https
   name: kube-prometheus-stack-prometheus
   namespace: default
   labels:
     app: kube-prometheus-stack-prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   ingressClassName: traefik
   rules:
     - host: "prometheus.talos-genmachine.fredcorp.com"
       http:
         paths:
           - path: /
             pathType: Prefix
@@ -2210,23 +2214,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: Alertmanager
 metadata:
   name: kube-prometheus-stack-alertmanager
   namespace: default
   labels:
     app: kube-prometheus-stack-alertmanager
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   image: "quay.io/prometheus/alertmanager:v0.28.1"
   imagePullPolicy: "IfNotPresent"
   version: v0.28.1
   replicas: 1
   listenLocal: false
   serviceAccountName: kube-prometheus-stack-alertmanager
   automountServiceAccountToken: true
@@ -2263,23 +2267,23 @@
 kind: MutatingWebhookConfiguration
 metadata:
   name:  kube-prometheus-stack-admission
   annotations:
     
   labels:
     app: kube-prometheus-stack-admission
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator-webhook
 webhooks:
   - name: prometheusrulemutate.monitoring.coreos.com
     failurePolicy: Ignore
     rules:
       - apiGroups:
           - monitoring.coreos.com
@@ -2303,42 +2307,42 @@
 apiVersion: monitoring.coreos.com/v1
 kind: Prometheus
 metadata:
   name: kube-prometheus-stack-prometheus
   namespace: default
   labels:
     app: kube-prometheus-stack-prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   automountServiceAccountToken: true
   alerting:
     alertmanagers:
       - namespace: default
         name: kube-prometheus-stack-alertmanager
         port: http-web
         pathPrefix: "/"
         apiVersion: v2
   image: "quay.io/prometheus/prometheus:v3.7.0"
   imagePullPolicy: "IfNotPresent"
   version: v3.7.0
   externalUrl: "http://prometheus.talos-genmachine.fredcorp.com/"
   paused: false
   replicas: 1
   shards: 1
-  logLevel:  info
+  logLevel:  "info"
   logFormat:  logfmt
   listenLocal: false
   enableOTLPReceiver: false
   enableAdminAPI: false
   scrapeInterval: 30s
   retention: "7d"
   tsdb:
     outOfOrderTimeWindow: 0s
   walCompression: true
   routePrefix: "/"
@@ -2392,166 +2396,166 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-alertmanager.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: alertmanager.rules
     rules:
     - alert: AlertmanagerFailedReload
       annotations:
         description: Configuration has failed to load for {{ $labels.namespace }}/{{ $labels.pod}}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerfailedreload
         summary: Reloading an Alertmanager configuration has failed.
       expr: |-
         # Without max_over_time, failed scrapes could create false negatives, see
         # https://www.robustperception.io/alerting-on-gauges-in-prometheus-2-0 for details.
-        max_over_time(alertmanager_config_last_reload_successful{job="kube-prometheus-stack-alertmanager",namespace="default"}[5m]) == 0
+        max_over_time(alertmanager_config_last_reload_successful{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="default"}[5m]) == 0
       for: 10m
       labels:
         severity: critical
     - alert: AlertmanagerMembersInconsistent
       annotations:
         description: Alertmanager {{ $labels.namespace }}/{{ $labels.pod}} has only found {{ $value }} members of the {{$labels.job}} cluster.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagermembersinconsistent
         summary: A member of an Alertmanager cluster has not found all other cluster members.
       expr: |-
         # Without max_over_time, failed scrapes could create false negatives, see
         # https://www.robustperception.io/alerting-on-gauges-in-prometheus-2-0 for details.
-          max_over_time(alertmanager_cluster_members{job="kube-prometheus-stack-alertmanager",namespace="default"}[5m])
+          max_over_time(alertmanager_cluster_members{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="default"}[5m])
         < on (namespace,service,cluster) group_left
-          count by (namespace,service,cluster) (max_over_time(alertmanager_cluster_members{job="kube-prometheus-stack-alertmanager",namespace="default"}[5m]))
+          count by (namespace,service,cluster) (max_over_time(alertmanager_cluster_members{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="default"}[5m]))
       for: 15m
       labels:
         severity: critical
     - alert: AlertmanagerFailedToSendAlerts
       annotations:
         description: Alertmanager {{ $labels.namespace }}/{{ $labels.pod}} failed to send {{ $value | humanizePercentage }} of notifications to {{ $labels.integration }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerfailedtosendalerts
         summary: An Alertmanager instance failed to send notifications.
       expr: |-
         (
-          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",namespace="default"}[15m])
+          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="default"}[15m])
         /
-          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",namespace="default"}[15m])
+          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="default"}[15m])
         )
         > 0.01
       for: 5m
       labels:
         severity: warning
     - alert: AlertmanagerClusterFailedToSendAlerts
       annotations:
         description: The minimum notification failure rate to {{ $labels.integration }} sent from any instance in the {{$labels.job}} cluster is {{ $value | humanizePercentage }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclusterfailedtosendalerts
         summary: All Alertmanager instances in a cluster failed to send notifications to a critical integration.
       expr: |-
         min by (namespace,service, integration) (
-          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",namespace="default", integration=~`.*`}[15m])
+          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="default", integration=~`.*`}[15m])
         /
-          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",namespace="default", integration=~`.*`}[15m])
+          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="default", integration=~`.*`}[15m])
         )
         > 0.01
       for: 5m
       labels:
         severity: critical
     - alert: AlertmanagerClusterFailedToSendAlerts
       annotations:
         description: The minimum notification failure rate to {{ $labels.integration }} sent from any instance in the {{$labels.job}} cluster is {{ $value | humanizePercentage }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclusterfailedtosendalerts
         summary: All Alertmanager instances in a cluster failed to send notifications to a non-critical integration.
       expr: |-
         min by (namespace,service, integration) (
-          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",namespace="default", integration!~`.*`}[15m])
+          rate(alertmanager_notifications_failed_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="default", integration!~`.*`}[15m])
         /
-          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",namespace="default", integration!~`.*`}[15m])
+          ignoring (reason) group_left rate(alertmanager_notifications_total{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="default", integration!~`.*`}[15m])
         )
         > 0.01
       for: 5m
       labels:
         severity: warning
     - alert: AlertmanagerConfigInconsistent
       annotations:
         description: Alertmanager instances within the {{$labels.job}} cluster have different configurations.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerconfiginconsistent
         summary: Alertmanager instances within the same cluster have different configurations.
       expr: |-
         count by (namespace,service,cluster) (
-          count_values by (namespace,service,cluster) ("config_hash", alertmanager_config_hash{job="kube-prometheus-stack-alertmanager",namespace="default"})
+          count_values by (namespace,service,cluster) ("config_hash", alertmanager_config_hash{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="default"})
         )
         != 1
       for: 20m
       labels:
         severity: critical
     - alert: AlertmanagerClusterDown
       annotations:
         description: '{{ $value | humanizePercentage }} of Alertmanager instances within the {{$labels.job}} cluster have been up for less than half of the last 5m.'
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclusterdown
         summary: Half or more of the Alertmanager instances within the same cluster are down.
       expr: |-
         (
           count by (namespace,service,cluster) (
-            avg_over_time(up{job="kube-prometheus-stack-alertmanager",namespace="default"}[5m]) < 0.5
+            avg_over_time(up{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="default"}[5m]) < 0.5
           )
         /
           count by (namespace,service,cluster) (
-            up{job="kube-prometheus-stack-alertmanager",namespace="default"}
+            up{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="default"}
           )
         )
         >= 0.5
       for: 5m
       labels:
         severity: critical
     - alert: AlertmanagerClusterCrashlooping
       annotations:
         description: '{{ $value | humanizePercentage }} of Alertmanager instances within the {{$labels.job}} cluster have restarted at least 5 times in the last 10m.'
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerclustercrashlooping
         summary: Half or more of the Alertmanager instances within the same cluster are crashlooping.
       expr: |-
         (
           count by (namespace,service,cluster) (
-            changes(process_start_time_seconds{job="kube-prometheus-stack-alertmanager",namespace="default"}[10m]) > 4
+            changes(process_start_time_seconds{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="default"}[10m]) > 4
           )
         /
           count by (namespace,service,cluster) (
-            up{job="kube-prometheus-stack-alertmanager",namespace="default"}
+            up{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="default"}
           )
         )
         >= 0.5
       for: 5m
       labels:
         severity: critical
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus/rules-1.14/config-reloaders.yaml
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-config-reloaders
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: config-reloaders
     rules:
     - alert: ConfigReloaderSidecarErrors
       annotations:
         description: 'Errors encountered while the {{$labels.pod}} config-reloader sidecar attempts to sync config in {{$labels.namespace}} namespace.
 
@@ -2567,23 +2571,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-general.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: general.rules
     rules:
     - alert: TargetDown
       annotations:
         description: '{{ printf "%.4g" $value }}% of the {{ $labels.job }}/{{ $labels.service }} targets in {{ $labels.namespace }} namespace are down.'
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/general/targetdown
@@ -2635,23 +2639,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-cpu-usage-seconds-tot
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_cpu_usage_seconds_total
     rules:
     - expr: |-
         sum by (cluster, namespace, pod, container) (
           rate(container_cpu_usage_seconds_total{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}[5m])
         ) * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (
@@ -2670,23 +2674,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-memory-cache
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_memory_cache
     rules:
     - expr: |-
         container_memory_cache{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
         * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
           max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2697,23 +2701,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-memory-rss
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_memory_rss
     rules:
     - expr: |-
         container_memory_rss{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
         * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
           max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2724,23 +2728,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-memory-swap
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_memory_swap
     rules:
     - expr: |-
         container_memory_swap{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
         * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
           max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2751,23 +2755,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-memory-working-set-by
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_memory_working_set_bytes
     rules:
     - expr: |-
         container_memory_working_set_bytes{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
         * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
           max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2778,23 +2782,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-resource
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_resource
     rules:
     - expr: |-
         kube_pod_container_resource_requests{resource="memory",job="kube-state-metrics"}  * on (namespace, pod, cluster)
         group_left() max by (namespace, pod, cluster) (
           (kube_pod_status_phase{phase=~"Pending|Running"} == 1)
@@ -2867,23 +2871,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.pod-owner
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.pod_owner
     rules:
     - expr: |-
         max by (cluster, namespace, workload, pod) (
           label_replace(
             label_replace(
@@ -3015,23 +3019,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-apiserver-availability.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - interval: 3m
     name: kube-apiserver-availability.rules
     rules:
     - expr: avg_over_time(code_verb:apiserver_request_total:increase1h[30d]) * 24 * 30
       record: code_verb:apiserver_request_total:increase30d
     - expr: sum by (cluster, code) (code_verb:apiserver_request_total:increase30d{verb=~"LIST|GET"})
@@ -3137,23 +3141,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-apiserver-burnrate.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-apiserver-burnrate.rules
     rules:
     - expr: |-
         (
           (
             # too slow
@@ -3459,23 +3463,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-apiserver-histogram.rules
   namespace: default
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "75.18.1"
+    app.kubernetes.io/version: "78.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-75.18.1
+    chart: kube-prometheus-stack-78.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"

[Truncated: Diff output was too large]
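Beyond the image and chart-label bumps, the one behavioral change that repeats through the rule diffs above is a tightened PromQL selector: every Alertmanager alert expression now also matches `container="alertmanager"`. Presumably this scopes the alerts to series scraped from the Alertmanager container itself rather than from any other container scraped under the same job; the exact motivation lives in the upstream chart history, not in this diff. A minimal excerpt of the changed field from the AlertmanagerFailedReload rule:

```yaml
# Illustrative excerpt only: the single changed field from the
# AlertmanagerFailedReload rule shown in the diff above.
#
# Chart 75.18.1 matched on job and namespace alone:
#   alertmanager_config_last_reload_successful{job="kube-prometheus-stack-alertmanager",namespace="default"}
#
# Chart 78.2.1 adds a container matcher to the same expression:
expr: |-
  max_over_time(alertmanager_config_last_reload_successful{job="kube-prometheus-stack-alertmanager",container="alertmanager",namespace="default"}[5m]) == 0
```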
 

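If any of the bumped images needs to be pinned or rolled back independently of the chart upgrade, the tags shown in the diff can be overridden from the umbrella chart's values. A hedged sketch, assuming the usual value paths of the bundled grafana and kube-state-metrics subcharts (verify against the 78.2.1 values.yaml before use):

```yaml
# values.yaml override sketch. The key paths below are assumptions based on
# the bundled grafana and kube-state-metrics subcharts; verify them against
# the chart's own values.yaml before applying.
grafana:
  sidecar:
    image:
      registry: quay.io
      repository: kiwigrid/k8s-sidecar
      tag: "1.30.10"        # sidecar bump shown in the diff above
kube-state-metrics:
  image:
    registry: registry.k8s.io
    repository: kube-state-metrics/kube-state-metrics
    tag: "v2.17.0"          # kube-state-metrics bump shown in the diff above
```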
pipelines-github-app bot force-pushed the renovate/major-78-prometheus-genmachine branch from 80adac2 to 514f240 on October 10, 2025 03:29
pipelines-github-app bot changed the title from "feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 78.0.0)" to "feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 78.1.0)" on Oct 11, 2025
pipelines-github-app bot force-pushed the renovate/major-78-prometheus-genmachine branch 3 times, most recently from 9904153 to fea2063 on October 13, 2025 03:13
pipelines-github-app bot changed the title from "feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 78.1.0)" to "feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 78.2.0)" on Oct 13, 2025
pipelines-github-app bot changed the title from "feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 78.2.0)" to "feat(helm)!: Update Chart kube-prometheus-stack (75.18.1 → 78.2.1)" on Oct 14, 2025
pipelines-github-app bot force-pushed the renovate/major-78-prometheus-genmachine branch 3 times, most recently from a0c082c to 56bdbc7 on October 15, 2025 03:33
| datasource | package               | from    | to     |
| ---------- | --------------------- | ------- | ------ |
| helm       | kube-prometheus-stack | 75.18.1 | 78.2.1 |


Co-authored-by: renovate[bot] <[email protected]>
pipelines-github-app bot force-pushed the renovate/major-78-prometheus-genmachine branch from 56bdbc7 to 5fac43c on October 16, 2025 03:33

Labels

- app/prometheus: Changes made to Prometheus application
- env/genmachine: Changes made in the Talos cluster
- renovate/helm: Changes related to Helm Chart update
- type/major
