ixxeL2097 (Member) commented Mar 16, 2025

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| kube-prometheus-stack (source) | major | 69.2.4 -> 70.2.1 |
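
Before merging, the proposed version can be cross-checked against the upstream Helm repository. A minimal sketch, assuming the standard prometheus-community repo URL; the release namespace is read from the rendered manifests in the bot comment below:

```bash
# Add/refresh the upstream repo (the alias is an assumption) and list recent versions.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm search repo prometheus-community/kube-prometheus-stack --versions | head

# Show what is currently deployed (namespace taken from the manifests below).
helm list -n github-runner
```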

Release Notes

prometheus-community/helm-charts (kube-prometheus-stack)

v70.2.1

Compare Source

kube-prometheus-stack collects Kubernetes manifests, Grafana dashboards, and Prometheus rules, combined with documentation and scripts, to provide easy-to-operate, end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.

What's Changed

New Contributors

Full Changelog: prometheus-community/helm-charts@prometheus-snmp-exporter-9.1.0...kube-prometheus-stack-70.2.1

v70.2.0

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@prometheus-27.6.0...kube-prometheus-stack-70.2.0

v70.1.1

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-70.1.0...kube-prometheus-stack-70.1.1

v70.1.0

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-70.0.3...kube-prometheus-stack-70.1.0

v70.0.3

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@prometheus-cloudwatch-exporter-0.27.0...kube-prometheus-stack-70.0.3

v70.0.2

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-70.0.1...kube-prometheus-stack-70.0.2

v70.0.1

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@prometheus-windows-exporter-0.9.1...kube-prometheus-stack-70.0.1

v70.0.0

Compare Source

What's Changed

New Contributors

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-69.8.2...kube-prometheus-stack-70.0.0

v69.8.2

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@alertmanager-1.15.2...kube-prometheus-stack-69.8.2

v69.8.1

Compare Source

What's Changed

New Contributors

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-69.8.0...kube-prometheus-stack-69.8.1

v69.8.0

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-69.7.4...kube-prometheus-stack-69.8.0

v69.7.4

Compare Source

What's Changed

New Contributors

Full Changelog: prometheus-community/helm-charts@prometheus-pingdom-exporter-3.0.3...kube-prometheus-stack-69.7.4

v69.7.3

Compare Source

What's Changed

New Contributors

Full Changelog: prometheus-community/helm-charts@prometheus-adapter-4.13.0...kube-prometheus-stack-69.7.3

v69.7.2

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-69.7.1...kube-prometheus-stack-69.7.2

v69.7.1

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@prometheus-pingdom-exporter-3.0.2...kube-prometheus-stack-69.7.1

v69.7.0

Compare Source

What's Changed

New Contributors

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-69.6.1...kube-prometheus-stack-69.7.0

v69.6.1

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@prometheus-operator-admission-webhook-0.20.0...kube-prometheus-stack-69.6.1

v69.6.0

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@prometheus-snmp-exporter-7.0.1...kube-prometheus-stack-69.6.0

v69.5.2

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@prom-label-proxy-0.10.2...kube-prometheus-stack-69.5.2

v69.5.1

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@prometheus-operator-crds-18.0.1...kube-prometheus-stack-69.5.1

v69.5.0

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@prometheus-operator-admission-webhook-0.19.0...kube-prometheus-stack-69.5.0

v69.4.1

Compare Source

What's Changed

New Contributors

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-69.4.0...kube-prometheus-stack-69.4.1

v69.4.0

Compare Source

What's Changed

New Contributors

Full Changelog: prometheus-community/helm-charts@prometheus-rabbitmq-exporter-2.1.1...kube-prometheus-stack-69.4.0

v69.3.3

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@prometheus-rabbitmq-exporter-2.1.0...kube-prometheus-stack-69.3.3

v69.3.2

Compare Source

What's Changed

Full Changelog: prometheus-community/helm-charts@prometheus-elasticsearch-exporter-6.6.1...kube-prometheus-stack-69.3.2

v69.3.1

Compare Source

What's Changed

New Contributors

Full Changelog: prometheus-community/helm-charts@kube-prometheus-stack-69.3.0...kube-prometheus-stack-69.3.1

v69.3.0

Compare Source

What's Changed

New Contributors

Full Changelog: prometheus-community/helm-charts@prometheus-json-exporter-0.16.0...kube-prometheus-stack-69.3.0
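
Since this is a major bump (69.x -> 70.x), applying it by hand would look roughly like the sketch below. This is illustrative only: the release name and namespace are read from the rendered manifests, `--reuse-values` is an assumption about how values are managed here, and major kube-prometheus-stack releases generally expect the matching Prometheus Operator CRDs to be applied first, per the chart's upgrade notes.

```bash
# Illustrative manual upgrade; not this repository's actual GitOps flow.
helm repo update

# Major versions usually ship updated Prometheus Operator CRDs; apply those
# first, following the chart's upgrade notes for 70.x (CRDs are not upgraded
# by `helm upgrade` itself).

helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --version 70.2.1 --namespace github-runner --reuse-values
```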


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


- [ ] If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.
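
The github-actions comment below diffs the fully rendered manifests between the main branch and this PR. A diff of that shape can be reproduced locally by templating both chart versions with the same values; a minimal sketch, assuming the upstream chart is templated directly with a shared values file (the repo itself renders a wrapper chart, as the `kube-prometheus-stack/charts/kube-prometheus-stack/...` source paths show):

```bash
# Render both versions with identical values and diff the results.
helm template kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --version 69.2.4 --namespace github-runner -f values.yaml > manifest_main.yaml
helm template kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --version 70.2.1 --namespace github-runner -f values.yaml > manifest_pr.yaml
diff -u manifest_main.yaml manifest_pr.yaml
```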

github-actions bot commented Mar 16, 2025

--- main/kube-prometheus-stack_talos_manifests_prom-stack_prod_manifest_main.yaml	2025-03-23 01:07:01.025781703 +0000
+++ pr/kube-prometheus-stack_talos_manifests_prom-stack_prod_manifest_pr.yaml	2025-03-23 01:06:52.653754949 +0000
@@ -1,114 +1,114 @@
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 automountServiceAccountToken: true
 metadata:
   labels:
-    helm.sh/chart: grafana-8.9.0
+    helm.sh/chart: grafana-8.10.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "11.5.1"
+    app.kubernetes.io/version: "11.5.2"
   name: kube-prometheus-stack-grafana
   namespace: github-runner
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 automountServiceAccountToken: true
 metadata:
   labels:    
-    helm.sh/chart: kube-state-metrics-5.29.0
+    helm.sh/chart: kube-state-metrics-5.31.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "2.14.0"
+    app.kubernetes.io/version: "2.15.0"
     release: kube-prometheus-stack
   name: kube-prometheus-stack-kube-state-metrics
   namespace: github-runner
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/prometheus-node-exporter/templates/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: kube-prometheus-stack-prometheus-node-exporter
   namespace: github-runner
   labels:
-    helm.sh/chart: prometheus-node-exporter-4.43.1
+    helm.sh/chart: prometheus-node-exporter-4.45.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: prometheus-node-exporter
     app.kubernetes.io/name: prometheus-node-exporter
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "1.8.2"
+    app.kubernetes.io/version: "1.9.0"
     release: kube-prometheus-stack
 automountServiceAccountToken: false
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/alertmanager/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: kube-prometheus-stack-alertmanager
   namespace: github-runner
   labels:
     app: kube-prometheus-stack-alertmanager
     app.kubernetes.io/name: kube-prometheus-stack-alertmanager
     app.kubernetes.io/component: alertmanager
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 automountServiceAccountToken: true
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: kube-prometheus-stack-operator
   namespace: github-runner
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 automountServiceAccountToken: true
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: kube-prometheus-stack-prometheus
   namespace: github-runner
   labels:
     app: kube-prometheus-stack-prometheus
     app.kubernetes.io/name: kube-prometheus-stack-prometheus
     app.kubernetes.io/component: prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 automountServiceAccountToken: true
 ---
 # Source: kube-prometheus-stack/charts/prometheus-blackbox-exporter/templates/serviceaccount.yaml
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: kube-prometheus-stack-prometheus-blackbox-exporter
   namespace: github-runner
@@ -119,59 +119,59 @@
     app.kubernetes.io/version: "v0.26.0"
     app.kubernetes.io/managed-by: Helm
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/secret.yaml
 apiVersion: v1
 kind: Secret
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: github-runner
   labels:
-    helm.sh/chart: grafana-8.9.0
+    helm.sh/chart: grafana-8.10.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "11.5.1"
+    app.kubernetes.io/version: "11.5.2"
 type: Opaque
 data:
   
   admin-user: "YWRtaW4="
   admin-password: "cGFzc3dvcmQ="
   ldap-toml: ""
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/alertmanager/secret.yaml
 apiVersion: v1
 kind: Secret
 metadata:
   name: alertmanager-kube-prometheus-stack-alertmanager
   namespace: github-runner
   labels:
     app: kube-prometheus-stack-alertmanager
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 data:
   alertmanager.yaml: "Z2xvYmFsOgogIHJlc29sdmVfdGltZW91dDogNW0KaW5oaWJpdF9ydWxlczoKLSBlcXVhbDoKICAtIG5hbWVzcGFjZQogIC0gYWxlcnRuYW1lCiAgc291cmNlX21hdGNoZXJzOgogIC0gc2V2ZXJpdHkgPSBjcml0aWNhbAogIHRhcmdldF9tYXRjaGVyczoKICAtIHNldmVyaXR5ID1+IHdhcm5pbmd8aW5mbwotIGVxdWFsOgogIC0gbmFtZXNwYWNlCiAgLSBhbGVydG5hbWUKICBzb3VyY2VfbWF0Y2hlcnM6CiAgLSBzZXZlcml0eSA9IHdhcm5pbmcKICB0YXJnZXRfbWF0Y2hlcnM6CiAgLSBzZXZlcml0eSA9IGluZm8KLSBlcXVhbDoKICAtIG5hbWVzcGFjZQogIHNvdXJjZV9tYXRjaGVyczoKICAtIGFsZXJ0bmFtZSA9IEluZm9JbmhpYml0b3IKICB0YXJnZXRfbWF0Y2hlcnM6CiAgLSBzZXZlcml0eSA9IGluZm8KLSB0YXJnZXRfbWF0Y2hlcnM6CiAgLSBhbGVydG5hbWUgPSBJbmZvSW5oaWJpdG9yCnJlY2VpdmVyczoKLSBuYW1lOiAibnVsbCIKcm91dGU6CiAgZ3JvdXBfYnk6CiAgLSBuYW1lc3BhY2UKICBncm91cF9pbnRlcnZhbDogNW0KICBncm91cF93YWl0OiAzMHMKICByZWNlaXZlcjogIm51bGwiCiAgcmVwZWF0X2ludGVydmFsOiAxMmgKICByb3V0ZXM6CiAgLSBtYXRjaGVyczoKICAgIC0gYWxlcnRuYW1lID0gIldhdGNoZG9nIgogICAgcmVjZWl2ZXI6ICJudWxsIgp0ZW1wbGF0ZXM6Ci0gL2V0Yy9hbGVydG1hbmFnZXIvY29uZmlnLyoudG1wbA=="
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/configmap-dashboard-provider.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   labels:
-    helm.sh/chart: grafana-8.9.0
+    helm.sh/chart: grafana-8.10.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "11.5.1"
+    app.kubernetes.io/version: "11.5.2"
   name: kube-prometheus-stack-grafana-config-dashboards
   namespace: github-runner
 data:
   provider.yaml: |-
     apiVersion: 1
     providers:
       - name: 'sidecarProvider'
         orgId: 1
         type: file
         disableDeletion: false
@@ -181,24 +181,24 @@
           foldersFromFilesStructure: true
           path: /tmp/dashboards
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: github-runner
   labels:
-    helm.sh/chart: grafana-8.9.0
+    helm.sh/chart: grafana-8.10.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "11.5.1"
+    app.kubernetes.io/version: "11.5.2"
 data:
   
   plugins: grafana-piechart-panel,grafana-polystat-panel,grafana-clock-panel
   grafana.ini: |
     [analytics]
     check_for_updates = true
     [grafana_net]
     url = https://grafana.net
     [log]
     mode = console
@@ -306,58 +306,58 @@
       "https://raw.githubusercontent.com/dotdc/grafana-dashboards-kubernetes/master/dashboards/k8s-views-pods.json" \
     > "/var/lib/grafana/dashboards/grafana-dashboards-kubernetes/k8s-views-pods.json"
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-argocd
   namespace: github-runner
   labels:
-    helm.sh/chart: grafana-8.9.0
+    helm.sh/chart: grafana-8.10.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "11.5.1"
+    app.kubernetes.io/version: "11.5.2"
     dashboard-provider: grafana-dashboards-argocd
 data:
   {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/dashboards-json-configmap.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-dashboards-grafana-dashboards-kubernetes
   namespace: github-runner
   labels:
-    helm.sh/chart: grafana-8.9.0
+    helm.sh/chart: grafana-8.10.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "11.5.1"
+    app.kubernetes.io/version: "11.5.2"
     dashboard-provider: grafana-dashboards-kubernetes
 data:
   {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/grafana/configmaps-datasources.yaml
 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: kube-prometheus-stack-grafana-datasource
   namespace: github-runner
   labels:
     grafana_datasource: "1"
     app: kube-prometheus-stack-grafana
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 data:
   datasource.yaml: |-
     apiVersion: 1
     datasources:
     - name: "Prometheus"
       type: prometheus
       uid: prometheus
       url: http://kube-prometheus-stack-prometheus.github-runner:9090/
@@ -398,42 +398,42 @@
           - HTTP/1.1
           - HTTP/2.0
         prober: http
         timeout: 5s
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/clusterrole.yaml
 kind: ClusterRole
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   labels:
-    helm.sh/chart: grafana-8.9.0
+    helm.sh/chart: grafana-8.10.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "11.5.1"
+    app.kubernetes.io/version: "11.5.2"
   name: kube-prometheus-stack-grafana-clusterrole
 rules:
   - apiGroups: [""] # "" indicates the core API group
     resources: ["configmaps", "secrets"]
     verbs: ["get", "watch", "list"]
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/role.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
   labels:    
-    helm.sh/chart: kube-state-metrics-5.29.0
+    helm.sh/chart: kube-state-metrics-5.31.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "2.14.0"
+    app.kubernetes.io/version: "2.15.0"
     release: kube-prometheus-stack
   name: kube-prometheus-stack-kube-state-metrics
 rules:
 
 - apiGroups: ["certificates.k8s.io"]
   resources:
   - certificatesigningrequests
   verbs: ["list", "watch"]
 
 - apiGroups: [""]
@@ -573,23 +573,23 @@
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/clusterrole.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
   name: kube-prometheus-stack-operator
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 rules:
 - apiGroups:
   - monitoring.coreos.com
   resources:
   - alertmanagers
@@ -683,23 +683,23 @@
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus/clusterrole.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
   name: kube-prometheus-stack-prometheus
   labels:
     app: kube-prometheus-stack-prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 rules:
 # This permission are not in the kube-prometheus repo
 # they're grabbed from https://github.com/prometheus/prometheus/blob/master/documentation/examples/rbac-setup.yml
 - apiGroups: [""]
   resources:
   - nodes
   - nodes/metrics
   - services
@@ -717,68 +717,68 @@
   verbs: ["get", "list", "watch"]
 - nonResourceURLs: ["/metrics", "/metrics/cadvisor"]
   verbs: ["get"]
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/clusterrolebinding.yaml
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: kube-prometheus-stack-grafana-clusterrolebinding
   labels:
-    helm.sh/chart: grafana-8.9.0
+    helm.sh/chart: grafana-8.10.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "11.5.1"
+    app.kubernetes.io/version: "11.5.2"
 subjects:
   - kind: ServiceAccount
     name: kube-prometheus-stack-grafana
     namespace: github-runner
 roleRef:
   kind: ClusterRole
   name: kube-prometheus-stack-grafana-clusterrole
   apiGroup: rbac.authorization.k8s.io
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/clusterrolebinding.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   labels:    
-    helm.sh/chart: kube-state-metrics-5.29.0
+    helm.sh/chart: kube-state-metrics-5.31.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "2.14.0"
+    app.kubernetes.io/version: "2.15.0"
     release: kube-prometheus-stack
   name: kube-prometheus-stack-kube-state-metrics
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: kube-prometheus-stack-kube-state-metrics
 subjects:
 - kind: ServiceAccount
   name: kube-prometheus-stack-kube-state-metrics
   namespace: github-runner
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/clusterrolebinding.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: kube-prometheus-stack-operator
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: kube-prometheus-stack-operator
 subjects:
@@ -789,103 +789,103 @@
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus/clusterrolebinding.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: kube-prometheus-stack-prometheus
   labels:
     app: kube-prometheus-stack-prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: kube-prometheus-stack-prometheus
 subjects:
   - kind: ServiceAccount
     name: kube-prometheus-stack-prometheus
     namespace: github-runner
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/role.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: github-runner
   labels:
-    helm.sh/chart: grafana-8.9.0
+    helm.sh/chart: grafana-8.10.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "11.5.1"
+    app.kubernetes.io/version: "11.5.2"
 rules: []
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/rolebinding.yaml
 apiVersion: rbac.authorization.k8s.io/v1
 kind: RoleBinding
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: github-runner
   labels:
-    helm.sh/chart: grafana-8.9.0
+    helm.sh/chart: grafana-8.10.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "11.5.1"
+    app.kubernetes.io/version: "11.5.2"
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: Role
   name: kube-prometheus-stack-grafana
 subjects:
 - kind: ServiceAccount
   name: kube-prometheus-stack-grafana
   namespace: github-runner
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/service.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: github-runner
   labels:
-    helm.sh/chart: grafana-8.9.0
+    helm.sh/chart: grafana-8.10.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "11.5.1"
+    app.kubernetes.io/version: "11.5.2"
 spec:
   type: ClusterIP
   ports:
     - name: http-web
       port: 80
       protocol: TCP
       targetPort: 3000
   selector:
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/service.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-kube-state-metrics
   namespace: github-runner
   labels:    
-    helm.sh/chart: kube-state-metrics-5.29.0
+    helm.sh/chart: kube-state-metrics-5.31.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "2.14.0"
+    app.kubernetes.io/version: "2.15.0"
     release: kube-prometheus-stack
   annotations:
 spec:
   type: "ClusterIP"
   ports:
   - name: "http"
     protocol: TCP
     port: 8080
     targetPort: 8080
   
@@ -893,27 +893,27 @@
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/prometheus-node-exporter/templates/service.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-prometheus-node-exporter
   namespace: github-runner
   labels:
-    helm.sh/chart: prometheus-node-exporter-4.43.1
+    helm.sh/chart: prometheus-node-exporter-4.45.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: prometheus-node-exporter
     app.kubernetes.io/name: prometheus-node-exporter
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "1.8.2"
+    app.kubernetes.io/version: "1.9.0"
     release: kube-prometheus-stack
     jobLabel: node-exporter
   annotations:
     prometheus.io/scrape: "true"
 spec:
   type: ClusterIP
   ports:
     - port: 9100
       targetPort: 9100
       protocol: TCP
@@ -927,23 +927,23 @@
 kind: Service
 metadata:
   name: kube-prometheus-stack-alertmanager
   namespace: github-runner
   labels:
     app: kube-prometheus-stack-alertmanager
     self-monitor: "true"
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   ports:
   - name: http-web
     port: 9093
     targetPort: 9093
     protocol: TCP
   - name: reloader-web
     appProtocol: http
@@ -959,23 +959,23 @@
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-coredns
   labels:
     app: kube-prometheus-stack-coredns
     jobLabel: coredns
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
   namespace: kube-system
 spec:
   clusterIP: None
   ports:
     - name: http-metrics
       port: 9153
       protocol: TCP
       targetPort: 9153
@@ -986,23 +986,23 @@
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-kube-controller-manager
   labels:
     app: kube-prometheus-stack-kube-controller-manager
     jobLabel: kube-controller-manager
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
   namespace: kube-system
 spec:
   clusterIP: None
   ports:
     - name: http-metrics
       port: 10257
       protocol: TCP
       targetPort: 10257
@@ -1014,23 +1014,23 @@
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-kube-proxy
   labels:
     app: kube-prometheus-stack-kube-proxy
     jobLabel: kube-proxy
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
   namespace: kube-system
 spec:
   clusterIP: None
   ports:
     - name: http-metrics
       port: 10249
       protocol: TCP
       targetPort: 10249
@@ -1042,23 +1042,23 @@
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-kube-scheduler
   labels:
     app: kube-prometheus-stack-kube-scheduler
     jobLabel: kube-scheduler
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
   namespace: kube-system
 spec:
   clusterIP: None
   ports:
     - name: http-metrics
       port: 10259
       protocol: TCP
       targetPort: 10259
@@ -1069,23 +1069,23 @@
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/service.yaml
 apiVersion: v1
 kind: Service
 metadata:
   name: kube-prometheus-stack-operator
   namespace: github-runner
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 spec:
   ports:
   - name: https
     port: 443
     targetPort: https
@@ -1099,23 +1099,23 @@
 kind: Service
 metadata:
   name: kube-prometheus-stack-prometheus
   namespace: github-runner
   labels:
     app: kube-prometheus-stack-prometheus
     self-monitor: "true"
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   ports:
   - name: http-web
     port: 9090
     targetPort: 9090
   - name: reloader-web
     appProtocol: http
     port: 8080
@@ -1151,63 +1151,63 @@
     app.kubernetes.io/name: prometheus-blackbox-exporter
     app.kubernetes.io/instance: kube-prometheus-stack
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/prometheus-node-exporter/templates/daemonset.yaml
 apiVersion: apps/v1
 kind: DaemonSet
 metadata:
   name: kube-prometheus-stack-prometheus-node-exporter
   namespace: github-runner
   labels:
-    helm.sh/chart: prometheus-node-exporter-4.43.1
+    helm.sh/chart: prometheus-node-exporter-4.45.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: prometheus-node-exporter
     app.kubernetes.io/name: prometheus-node-exporter
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "1.8.2"
+    app.kubernetes.io/version: "1.9.0"
     release: kube-prometheus-stack
 spec:
   selector:
     matchLabels:
       app.kubernetes.io/name: prometheus-node-exporter
       app.kubernetes.io/instance: kube-prometheus-stack
   revisionHistoryLimit: 10
   updateStrategy:
     rollingUpdate:
       maxUnavailable: 1
     type: RollingUpdate
   template:
     metadata:
       annotations:
         cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
       labels:
-        helm.sh/chart: prometheus-node-exporter-4.43.1
+        helm.sh/chart: prometheus-node-exporter-4.45.0
         app.kubernetes.io/managed-by: Helm
         app.kubernetes.io/component: metrics
         app.kubernetes.io/part-of: prometheus-node-exporter
         app.kubernetes.io/name: prometheus-node-exporter
         app.kubernetes.io/instance: kube-prometheus-stack
-        app.kubernetes.io/version: "1.8.2"
+        app.kubernetes.io/version: "1.9.0"
         release: kube-prometheus-stack
         jobLabel: node-exporter
     spec:
       automountServiceAccountToken: false
       securityContext:
         fsGroup: 65534
         runAsGroup: 65534
         runAsNonRoot: true
         runAsUser: 65534
       serviceAccountName: kube-prometheus-stack-prometheus-node-exporter
       containers:
         - name: node-exporter
-          image: quay.io/prometheus/node-exporter:v1.8.2
+          image: quay.io/prometheus/node-exporter:v1.9.0
           imagePullPolicy: IfNotPresent
           args:
             - --path.procfs=/host/proc
             - --path.sysfs=/host/sys
             - --path.rootfs=/host/root
             - --path.udev.data=/host/root/run/udev/data
             - --web.listen-address=[$(HOST_IP)]:9100
             - --collector.filesystem.mount-points-exclude=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/.+)($|/)
             - --collector.filesystem.fs-types-exclude=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
           securityContext:
@@ -1284,50 +1284,51 @@
           hostPath:
             path: /
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/deployment.yaml
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: github-runner
   labels:
-    helm.sh/chart: grafana-8.9.0
+    helm.sh/chart: grafana-8.10.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "11.5.1"
+    app.kubernetes.io/version: "11.5.2"
 spec:
   replicas: 1
   revisionHistoryLimit: 10
   selector:
     matchLabels:
       app.kubernetes.io/name: grafana
       app.kubernetes.io/instance: kube-prometheus-stack
   strategy:
     type: RollingUpdate
   template:
     metadata:
       labels:
-        helm.sh/chart: grafana-8.9.0
+        helm.sh/chart: grafana-8.10.4
         app.kubernetes.io/name: grafana
         app.kubernetes.io/instance: kube-prometheus-stack
-        app.kubernetes.io/version: "11.5.1"
+        app.kubernetes.io/version: "11.5.2"
       annotations:
         checksum/config: 66aa9decfacf413aeb07dd69a61ae5c3027b9f6e8f27e212bed467d4a235d5d8
-        checksum/dashboards-json-config: f3f3881e62b00bf2df5ad970735e46633e85c169a792d4f9b388904dc3a599cb
+        checksum/dashboards-json-config: 69670aaf1eb11bca09e5c39950d7f73a2a16807c3cc6f72a90bd7729ac1325b9
         checksum/sc-dashboard-provider-config: e3aca4961a8923a0814f12363c5e5e10511bb1deb6cd4e0cbe138aeee493354f
         checksum/secret: 7590fe10cbd3ae3e92a60625ff270e3e7d404731e1c73aaa2df1a78dab2c7768
         kubectl.kubernetes.io/default-container: grafana
     spec:
       
       serviceAccountName: kube-prometheus-stack-grafana
       automountServiceAccountToken: true
+      shareProcessNamespace: false
       securityContext:
         fsGroup: 472
         runAsGroup: 472
         runAsNonRoot: true
         runAsUser: 472
       initContainers:
         - name: download-dashboards
           image: "docker.io/curlimages/curl:8.9.1"
           imagePullPolicy: IfNotPresent
           command: ["/bin/sh"]
@@ -1342,21 +1343,21 @@
               type: RuntimeDefault
           volumeMounts:
             - name: config
               mountPath: "/etc/grafana/download_dashboards.sh"
               subPath: download_dashboards.sh
             - name: storage
               mountPath: "/var/lib/grafana"
       enableServiceLinks: true
       containers:
         - name: grafana-sc-dashboard
-          image: "quay.io/kiwigrid/k8s-sidecar:1.28.0"
+          image: "quay.io/kiwigrid/k8s-sidecar:1.30.0"
           imagePullPolicy: IfNotPresent
           env:
             - name: METHOD
               value: WATCH
             - name: LABEL
               value: "grafana_dashboard"
             - name: LABEL_VALUE
               value: "1"
             - name: FOLDER
               value: "/tmp/dashboards"
@@ -1384,21 +1385,21 @@
             allowPrivilegeEscalation: false
             capabilities:
               drop:
               - ALL
             seccompProfile:
               type: RuntimeDefault
           volumeMounts:
             - name: sc-dashboard-volume
               mountPath: "/tmp/dashboards"
         - name: grafana-sc-datasources
-          image: "quay.io/kiwigrid/k8s-sidecar:1.28.0"
+          image: "quay.io/kiwigrid/k8s-sidecar:1.30.0"
           imagePullPolicy: IfNotPresent
           env:
             - name: METHOD
               value: WATCH
             - name: LABEL
               value: "grafana_datasource"
             - name: LABEL_VALUE
               value: "1"
             - name: FOLDER
               value: "/etc/grafana/provisioning/datasources"
@@ -1422,21 +1423,21 @@
             allowPrivilegeEscalation: false
             capabilities:
               drop:
               - ALL
             seccompProfile:
               type: RuntimeDefault
           volumeMounts:
             - name: sc-datasources-volume
               mountPath: "/etc/grafana/provisioning/datasources"
         - name: grafana
-          image: "docker.io/grafana/grafana:11.5.1"
+          image: "docker.io/grafana/grafana:11.5.2"
           imagePullPolicy: IfNotPresent
           securityContext:
             allowPrivilegeEscalation: false
             capabilities:
               drop:
               - ALL
             seccompProfile:
               type: RuntimeDefault
           volumeMounts:
             - name: config
@@ -1528,66 +1529,66 @@
           emptyDir:
             {}
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/kube-state-metrics/templates/deployment.yaml
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: kube-prometheus-stack-kube-state-metrics
   namespace: github-runner
   labels:    
-    helm.sh/chart: kube-state-metrics-5.29.0
+    helm.sh/chart: kube-state-metrics-5.31.0
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/component: metrics
     app.kubernetes.io/part-of: kube-state-metrics
     app.kubernetes.io/name: kube-state-metrics
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "2.14.0"
+    app.kubernetes.io/version: "2.15.0"
     release: kube-prometheus-stack
 spec:
   selector:
     matchLabels:      
       app.kubernetes.io/name: kube-state-metrics
       app.kubernetes.io/instance: kube-prometheus-stack
   replicas: 1
   strategy:
     type: RollingUpdate
   revisionHistoryLimit: 10
   template:
     metadata:
       labels:        
-        helm.sh/chart: kube-state-metrics-5.29.0
+        helm.sh/chart: kube-state-metrics-5.31.0
         app.kubernetes.io/managed-by: Helm
         app.kubernetes.io/component: metrics
         app.kubernetes.io/part-of: kube-state-metrics
         app.kubernetes.io/name: kube-state-metrics
         app.kubernetes.io/instance: kube-prometheus-stack
-        app.kubernetes.io/version: "2.14.0"
+        app.kubernetes.io/version: "2.15.0"
         release: kube-prometheus-stack
     spec:
       automountServiceAccountToken: true
       hostNetwork: false
       serviceAccountName: kube-prometheus-stack-kube-state-metrics
       securityContext:
         fsGroup: 65534
         runAsGroup: 65534
         runAsNonRoot: true
         runAsUser: 65534
         seccompProfile:
           type: RuntimeDefault
       containers:
       - name: kube-state-metrics
         args:
         - --port=8080
         - --resources=certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,leases,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,volumeattachments
         imagePullPolicy: IfNotPresent
-        image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.14.0
+        image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0
         ports:
         - containerPort: 8080
           name: "http"
         livenessProbe:
           failureThreshold: 3
           httpGet:
             httpHeaders:
             path: /livez
             port: 8080
             scheme: HTTP
@@ -1618,60 +1619,60 @@
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/templates/prometheus-operator/deployment.yaml
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: kube-prometheus-stack-operator
   namespace: github-runner
   labels:
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app: kube-prometheus-stack-operator
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator
 spec:
   replicas: 1
   revisionHistoryLimit: 10
   selector:
     matchLabels:
       app: kube-prometheus-stack-operator
       release: "kube-prometheus-stack"
   template:
     metadata:
       labels:
         
         app.kubernetes.io/managed-by: Helm
         app.kubernetes.io/instance: kube-prometheus-stack
-        app.kubernetes.io/version: "69.2.4"
+        app.kubernetes.io/version: "70.2.1"
         app.kubernetes.io/part-of: kube-prometheus-stack
-        chart: kube-prometheus-stack-69.2.4
+        chart: kube-prometheus-stack-70.2.1
         release: "kube-prometheus-stack"
         heritage: "Helm"
         app: kube-prometheus-stack-operator
         app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
         app.kubernetes.io/component: prometheus-operator
     spec:
       containers:
         - name: kube-prometheus-stack
-          image: "quay.io/prometheus-operator/prometheus-operator:v0.80.0"
+          image: "quay.io/prometheus-operator/prometheus-operator:v0.81.0"
           imagePullPolicy: "IfNotPresent"
           args:
             - --kubelet-service=kube-system/kube-prometheus-stack-kubelet
             - --kubelet-endpoints=true
             - --kubelet-endpointslice=false
             - --localhost=127.0.0.1
-            - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.80.0
+            - --prometheus-config-reloader=quay.io/prometheus-operator/prometheus-config-reloader:v0.81.0
             - --config-reloader-cpu-request=0
             - --config-reloader-cpu-limit=0
             - --config-reloader-memory-request=0
             - --config-reloader-memory-limit=0
             - --thanos-default-base-image=quay.io/thanos/thanos:v0.37.2
             - --secret-field-selector=type!=kubernetes.io/dockercfg,type!=kubernetes.io/service-account-token,type!=helm.sh/release.v1
             - --web.enable-tls=true
             - --web.cert-file=/cert/cert
             - --web.key-file=/cert/key
             - --web.listen-address=:10250
@@ -1810,24 +1811,24 @@
         configMap:
           name: kube-prometheus-stack-prometheus-blackbox-exporter
 ---
 # Source: kube-prometheus-stack/charts/kube-prometheus-stack/charts/grafana/templates/ingress.yaml
 apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: kube-prometheus-stack-grafana
   namespace: github-runner
   labels:
-    helm.sh/chart: grafana-8.9.0
+    helm.sh/chart: grafana-8.10.4
     app.kubernetes.io/name: grafana
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "11.5.1"
+    app.kubernetes.io/version: "11.5.2"
   annotations:
     cert-manager.io/cluster-issuer: "vault-issuer"
     cert-manager.io/common-name: "grafana.k8s-infra.fredcorp.com"
 spec:
   ingressClassName: nginx
   tls:
     - hosts:
       - grafana.k8s-infra.fredcorp.com
       secretName: grafana-tls-cert
   rules:
@@ -1849,23 +1850,23 @@
   annotations:
     cert-manager.io/cluster-issuer: vault-issuer
     cert-manager.io/common-name: prometheus.k8s-infra.fredcorp.com
   name: kube-prometheus-stack-prometheus
   namespace: github-runner
   labels:
     app: kube-prometheus-stack-prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   ingressClassName: nginx
   rules:
     - host: "prometheus.k8s-infra.fredcorp.com"
       http:
         paths:
           - path: /
             pathType: Prefix
@@ -1916,28 +1917,28 @@
 apiVersion: monitoring.coreos.com/v1
 kind: Alertmanager
 metadata:
   name: kube-prometheus-stack-alertmanager
   namespace: github-runner
   labels:
     app: kube-prometheus-stack-alertmanager
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
-  image: "quay.io/prometheus/alertmanager:v0.28.0"
-  version: v0.28.0
+  image: "quay.io/prometheus/alertmanager:v0.28.1"
+  version: v0.28.1
   replicas: 1
   listenLocal: false
   serviceAccountName: kube-prometheus-stack-alertmanager
   automountServiceAccountToken: true
   externalUrl: http://kube-prometheus-stack-alertmanager.github-runner:9093
   paused: false
   logFormat: "logfmt"
   logLevel:  "info"
   retention: "120h"
   alertmanagerConfigSelector: {}
@@ -1967,23 +1968,23 @@
 kind: MutatingWebhookConfiguration
 metadata:
   name:  kube-prometheus-stack-admission
   annotations:
     
   labels:
     app: kube-prometheus-stack-admission
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
     app.kubernetes.io/name: kube-prometheus-stack-prometheus-operator
     app.kubernetes.io/component: prometheus-operator-webhook
 webhooks:
   - name: prometheusrulemutate.monitoring.coreos.com
     failurePolicy: Ignore
     rules:
       - apiGroups:
           - monitoring.coreos.com
@@ -2007,36 +2008,36 @@
 apiVersion: monitoring.coreos.com/v1
 kind: Prometheus
 metadata:
   name: kube-prometheus-stack-prometheus
   namespace: github-runner
   labels:
     app: kube-prometheus-stack-prometheus
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   automountServiceAccountToken: true
   alerting:
     alertmanagers:
       - namespace: github-runner
         name: kube-prometheus-stack-alertmanager
         port: http-web
         pathPrefix: "/"
         apiVersion: v2
-  image: "quay.io/prometheus/prometheus:v3.1.0"
-  version: v3.1.0
+  image: "quay.io/prometheus/prometheus:v3.2.1"
+  version: v3.2.1
   externalUrl: "http://prometheus.k8s-infra.fredcorp.com/"
   paused: false
   replicas: 1
   shards: 1
   logLevel:  info
   logFormat:  logfmt
   listenLocal: false
   enableAdminAPI: false
   scrapeInterval: 30s
   retention: "7d"
@@ -2094,23 +2095,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-alertmanager.rules
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: alertmanager.rules
     rules:
     - alert: AlertmanagerFailedReload
       annotations:
         description: Configuration has failed to load for {{ $labels.namespace }}/{{ $labels.pod}}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/alertmanager/alertmanagerfailedreload
@@ -2237,23 +2238,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-config-reloaders
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: config-reloaders
     rules:
     - alert: ConfigReloaderSidecarErrors
       annotations:
         description: 'Errors encountered while the {{$labels.pod}} config-reloader sidecar attempts to sync config in {{$labels.namespace}} namespace.
 
@@ -2269,23 +2270,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-general.rules
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: general.rules
     rules:
     - alert: TargetDown
       annotations:
         description: '{{ printf "%.4g" $value }}% of the {{ $labels.job }}/{{ $labels.service }} targets in {{ $labels.namespace }} namespace are down.'
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/general/targetdown
@@ -2337,23 +2338,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-cpu-usage-seconds-tot
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_cpu_usage_seconds_total
     rules:
     - expr: |-
         sum by (cluster, namespace, pod, container) (
           irate(container_cpu_usage_seconds_total{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}[5m])
         ) * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (
@@ -2365,23 +2366,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-memory-cache
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_memory_cache
     rules:
     - expr: |-
         container_memory_cache{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
         * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
           max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2392,23 +2393,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-memory-rss
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_memory_rss
     rules:
     - expr: |-
         container_memory_rss{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
         * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
           max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2419,23 +2420,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-memory-swap
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_memory_swap
     rules:
     - expr: |-
         container_memory_swap{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
         * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
           max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2446,23 +2447,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-memory-working-set-by
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_memory_working_set_bytes
     rules:
     - expr: |-
         container_memory_working_set_bytes{job="kubelet", metrics_path="/metrics/cadvisor", image!=""}
         * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (1,
           max by (cluster, namespace, pod, node) (kube_pod_info{node!=""})
@@ -2473,23 +2474,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.container-resource
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.container_resource
     rules:
     - expr: |-
         kube_pod_container_resource_requests{resource="memory",job="kube-state-metrics"}  * on (namespace, pod, cluster)
         group_left() max by (namespace, pod, cluster) (
           (kube_pod_status_phase{phase=~"Pending|Running"} == 1)
@@ -2562,23 +2563,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-k8s.rules.pod-owner
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: k8s.rules.pod_owner
     rules:
     - expr: |-
         max by (cluster, namespace, workload, pod) (
           label_replace(
             label_replace(
@@ -2630,23 +2631,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-apiserver-availability.rules
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - interval: 3m
     name: kube-apiserver-availability.rules
     rules:
     - expr: avg_over_time(code_verb:apiserver_request_total:increase1h[30d]) * 24 * 30
       record: code_verb:apiserver_request_total:increase30d
     - expr: sum by (cluster, code) (code_verb:apiserver_request_total:increase30d{verb=~"LIST|GET"})
@@ -2760,23 +2761,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-apiserver-burnrate.rules
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-apiserver-burnrate.rules
     rules:
     - expr: |-
         (
           (
             # too slow
@@ -3082,23 +3083,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-apiserver-histogram.rules
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-apiserver-histogram.rules
     rules:
     - expr: histogram_quantile(0.99, sum by (cluster, le, resource) (rate(apiserver_request_sli_duration_seconds_bucket{job="apiserver",verb=~"LIST|GET",subresource!~"proxy|attach|log|exec|portforward"}[5m]))) > 0
       labels:
         quantile: '0.99'
         verb: read
@@ -3113,23 +3114,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-apiserver-slos
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-apiserver-slos
     rules:
     - alert: KubeAPIErrorBudgetBurn
       annotations:
         description: The API server is burning too much error budget on cluster {{ $labels.cluster }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubeapierrorbudgetburn
@@ -3190,23 +3191,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-prometheus-general.rules
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-prometheus-general.rules
     rules:
     - expr: count without(instance, pod, node) (up == 1)
       record: count:up1
     - expr: count without(instance, pod, node) (up == 0)
       record: count:up0
@@ -3215,23 +3216,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-prometheus-node-recording.rules
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-prometheus-node-recording.rules
     rules:
     - expr: sum(rate(node_cpu_seconds_total{mode!="idle",mode!="iowait",mode!="steal"}[3m])) BY (instance)
       record: instance:node_cpu:rate:sum
     - expr: sum(rate(node_network_receive_bytes_total[3m])) BY (instance)
       record: instance:node_network_receive_bytes:rate:sum
@@ -3248,23 +3249,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-scheduler.rules
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-scheduler.rules
     rules:
     - expr: histogram_quantile(0.99, sum(rate(scheduler_e2e_scheduling_duration_seconds_bucket{job="kube-scheduler"}[5m])) without(instance, pod))
       labels:
         quantile: '0.99'
       record: cluster_quantile:scheduler_e2e_scheduling_duration_seconds:histogram_quantile
@@ -3305,23 +3306,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kube-state-metrics
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kube-state-metrics
     rules:
     - alert: KubeStateMetricsListErrors
       annotations:
         description: kube-state-metrics is experiencing errors at an elevated rate in list operations. This is likely causing it to not be able to expose metrics about Kubernetes objects correctly or at all.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kube-state-metrics/kubestatemetricslisterrors
@@ -3374,23 +3375,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kubelet.rules
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kubelet.rules
     rules:
     - expr: histogram_quantile(0.99, sum(rate(kubelet_pleg_relist_duration_seconds_bucket{job="kubelet", metrics_path="/metrics"}[5m])) by (cluster, instance, le) * on (cluster, instance) group_left(node) kubelet_node_name{job="kubelet", metrics_path="/metrics"})
       labels:
         quantile: '0.99'
       record: node_quantile:kubelet_pleg_relist_duration_seconds:histogram_quantile
@@ -3407,23 +3408,23 @@
 apiVersion: monitoring.coreos.com/v1
 kind: PrometheusRule
 metadata:
   name: kube-prometheus-stack-kubernetes-apps
   namespace: github-runner
   labels:
     app: kube-prometheus-stack
     
     app.kubernetes.io/managed-by: Helm
     app.kubernetes.io/instance: kube-prometheus-stack
-    app.kubernetes.io/version: "69.2.4"
+    app.kubernetes.io/version: "70.2.1"
     app.kubernetes.io/part-of: kube-prometheus-stack
-    chart: kube-prometheus-stack-69.2.4
+    chart: kube-prometheus-stack-70.2.1
     release: "kube-prometheus-stack"
     heritage: "Helm"
 spec:
   groups:
   - name: kubernetes-apps
     rules:
     - alert: KubePodCrashLooping
       annotations:
         description: 'Pod {{ $labels.namespace }}/{{ $labels.pod }} ({{ $labels.container }}) is in waiting state (reason: "CrashLoopBackOff") on cluster {{ $labels.cluster }}.'
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubepodcrashlooping
@@ -3491,21 +3492,21 @@
         severity: warning
     - alert: KubeStatefulSetReplicasMismatch
       annotations:
         description: StatefulSet {{ $labels.namespace }}/{{ $labels.statefulset }} has not matched the expected number of replicas for longer than 15 minutes on cluster {{ $labels.cluster }}.
         runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubestatefulsetreplicasmismatch
         summary: StatefulSet has not matched the expected number of replicas.
       expr: |-
         (
           kube_statefulset_status_replicas_ready{job="kube-state-metrics", namespace=~".*"}
             !=
-          kube_statefulset_status_replicas{job="kube-state-metrics", namespace=~".*"}
+          kube_
[Truncated: Diff output was too large]
 
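Since the inline diff above is truncated, here is a minimal sketch for regenerating the full rendered-manifest diff locally. The release name and namespace (`github-runner`) are taken from the manifests above; `values.yaml` is a placeholder for whatever values file this deployment actually uses:

```bash
# Render both chart versions with the same values and diff the output.
# values.yaml is a placeholder -- substitute the real release values.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm template kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --version 69.2.4 -n github-runner -f values.yaml > rendered-69.2.4.yaml
helm template kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --version 70.2.1 -n github-runner -f values.yaml > rendered-70.2.1.yaml
diff -u rendered-69.2.4.yaml rendered-70.2.1.yaml
```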

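The operative changes in the visible part of the diff are the chart-label bumps plus two image upgrades: Alertmanager v0.28.0 -> v0.28.1 and Prometheus v3.1.0 -> v3.2.1. A quick post-merge sanity check that the operator has reconciled the new versions, as a sketch (resource names and namespace are taken from the manifests above):

```bash
# Confirm the operator picked up the new versions after the upgrade.
kubectl -n github-runner get prometheus kube-prometheus-stack-prometheus \
  -o jsonpath='{.spec.version}{"\n"}'    # expect: v3.2.1
kubectl -n github-runner get alertmanager kube-prometheus-stack-alertmanager \
  -o jsonpath='{.spec.version}{"\n"}'    # expect: v0.28.1
# Watch the operator-managed pods roll over to the new images.
kubectl -n github-runner get pods -l app.kubernetes.io/name=prometheus
```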
@ixxeL2097 force-pushed the renovate/helm/major-prom-stack-prod branch 8 times, most recently from a1e85aa to b4fe7b5 on March 22, 2025 01:06
@ixxeL2097 force-pushed the renovate/helm/major-prom-stack-prod branch from b4fe7b5 to dce07e7 on March 23, 2025 01:06
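One caveat worth flagging for a major bump like this: `helm upgrade` does not upgrade CRDs, and kube-prometheus-stack 70.x tracks a newer prometheus-operator, so the monitoring.coreos.com CRDs should be applied manually before upgrading. A sketch following the chart's usual upgrade pattern; the operator version here (v0.81.0) is an assumption and should be verified against the chart's appVersion:

```bash
# Helm skips CRD upgrades; apply the prometheus-operator CRDs by hand first.
# OPERATOR_VERSION is an assumption -- check the 70.2.1 chart's appVersion before running.
OPERATOR_VERSION=v0.81.0
for crd in alertmanagerconfigs alertmanagers podmonitors probes prometheusagents \
           prometheuses prometheusrules scrapeconfigs servicemonitors thanosrulers; do
  kubectl apply --server-side --force-conflicts -f \
    "https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/${OPERATOR_VERSION}/example/prometheus-operator-crd/monitoring.coreos.com_${crd}.yaml"
done
```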