Replies: 1 comment
As for the rest of the cluster, it's managed by Flux and GitOps, so that side works amazingly. It's only the persistence of volumes that is insanely hard to manage.
Hi,
I ran into a quite frustrating issue with etcd (which for some reason was enabled by default on NixOS) where it kept one of the CPU threads pinned at 100%. The solution I've found is to switch k3s to its SQLite backend instead, but in doing so I've hit a major pain point of Kubernetes in general.
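For anyone checking the same thing: as far as I understand k3s defaults, a single node started without --cluster-init uses the SQLite (kine) backend, and embedded etcd only kicks in when that flag (or an etcd join) is set. A small sketch for probing which backend a server is using, assuming the default k3s data directory (pass a different path as the first argument if yours is relocated):

```shell
# Probe a k3s data directory to see which datastore backend is in use.
# The default location on a standard install is /var/lib/rancher/k3s/server/db.
k3s_datastore() {
    db_dir="${1:-/var/lib/rancher/k3s/server/db}"
    if [ -d "$db_dir/etcd" ]; then
        # Embedded etcd keeps its data in an etcd/ subdirectory.
        echo "embedded etcd"
    elif [ -f "$db_dir/state.db" ]; then
        # The kine shim stores everything in a single SQLite file.
        echo "sqlite (kine)"
    else
        echo "unknown"
    fi
}
```

For example, `k3s_datastore` on the node in question should print "embedded etcd" before the switch and "sqlite (kine)" after.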
Since I'm working with a single-node setup, there is absolutely no need for cloud-backed PVs, so early on I decided to use Rancher's own local-path provisioner to handle PVCs.
However, I did run into an issue: how can I sanely back up PVC-to-PV mappings so they survive a cluster rebuild? As in, how can I tell k3s "that PVC named X is at location Y, backed by storageClass Z", etc.? Is there a sane way to restore this information?
So far I've tried:
kubectl get pv -o yaml > backup_pv.yaml
and then restoring that file on the new cluster, with no luck. The PV stays at Released and won't let new PVCs bind to it. Any advice when it comes to this is appreciated.
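In case it helps future readers: my understanding is that a restored PV keeps its spec.claimRef, which still carries the old PVC's uid, so the controller refuses to bind any new claim and leaves it Released. A rough sketch of the usual workaround (the PV name pv-x is a placeholder, and these commands assume kubectl pointed at the rebuilt cluster):

```shell
# After re-applying backup_pv.yaml, drop the stale binding details from the
# claimRef. The claimRef name/namespace stay intact, so a recreated PVC with
# the same name and namespace should re-bind to this same PV.
kubectl patch pv pv-x --type=json -p='[
  {"op": "remove", "path": "/spec/claimRef/uid"},
  {"op": "remove", "path": "/spec/claimRef/resourceVersion"}
]'

# Alternatively, clear the claimRef entirely and let any matching PVC claim it:
kubectl patch pv pv-x -p '{"spec":{"claimRef":null}}'
```

Either way the PV should flip from Released to Available (or bind directly if the PVC already exists). This is a sketch of the generic Kubernetes behavior, not something specific to the local-path provisioner.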
PS: I have also considered volsync, but restoring data on the order of multiple hundreds of gigabytes per cluster rebuild is wildly unreasonable.