Describe the bug
When there are too many events, even a 35 GB volume fills up to 100% capacity. The volume had far too many snapshots (less than 5-7 days' worth, ~1 GB each), but cleanup still has to wait for the default 14-day retention period, which I guess is the expected behaviour. A storage cleanup action based on a configurable usage percentage, settable via the Helm chart, would be of great help 🙏🙏
To Reproduce
Create a scenario where a lot of events are generated, ideally across more than one namespace. This could include scenarios like crashing pods/deployments or frequent jobs that cause state changes of multiple API resources in the cluster that are watched by Kubevious.
Expected behavior
Kubevious should be able to clean up the MySQL data based on a configurable usage percentage supported via the Helm chart.
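For illustration, the requested setting could be exposed through Helm values along these lines. The keys below are hypothetical and do not exist in the current Kubevious chart; they only sketch the shape of the feature request:

```yaml
# Hypothetical values.yaml fragment -- not part of the current chart
mysql:
  storageCleanup:
    enabled: true
    # trigger cleanup once the PV reaches this usage percentage
    usageThresholdPercent: 80
    # how often to check volume usage
    checkInterval: 15m
```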
Screenshots
Snapshot from the MySQL STS pod:

```
Filesystem Type 1M-blocks Used Available Use% Mounted on
overlay overlay 47229 29084 16155 65% /
tmpfs tmpfs 64 0 64 0% /dev
/dev/nvme0n1p9 ext4 47229 29084 16155 65% /etc/hosts
shm tmpfs 64 0 64 0% /dev/shm
/dev/nvme1n1 ext4 35102 35086 0 100% /var/lib/mysql <-- 35 Gig PV vol
tmpfs tmpfs 15207 1 15207 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs tmpfs 7854 0 7854 0% /proc/acpi
tmpfs tmpfs 7854 0 7854 0% /proc/scsi
tmpfs tmpfs 7854 0 7854 0% /sys/firmware
```
```
bash-4.4# cd /var/lib/mysql
bash-4.4# ls -lart
total 26229972
drwxr-xr-x. 1 root root 4096 Oct 7 2022 ..
-rw-r-----. 1 mysql mysql 56 Jun 16 08:18 auto.cnf
drwxr-x---. 2 mysql mysql 4096 Jun 16 08:18 performance_schema
-rw-------. 1 mysql mysql 1680 Jun 16 08:18 private_key.pem
drwxr-x---. 2 mysql mysql 4096 Jun 16 08:18 mysql
drwxr-x---. 2 mysql mysql 4096 Jun 16 08:19 sys
-rw-r-----. 1 mysql mysql 16215 Jun 20 16:09 ib_buffer_pool
-rw-r-----. 1 mysql mysql 1082819304 Jun 28 14:42 kubevious-mysql-0-bin.000063
-rw-r-----. 1 mysql mysql 1082819304 Jun 28 17:58 kubevious-mysql-0-bin.000064
-rw-r-----. 1 mysql mysql 1082819304 Jun 28 21:13 kubevious-mysql-0-bin.000065
-rw-r-----. 1 mysql mysql 1093170684 Jun 29 00:21 kubevious-mysql-0-bin.000066
-rw-r-----. 1 mysql mysql 1085515899 Jun 29 03:36 kubevious-mysql-0-bin.000067
-rw-r-----. 1 mysql mysql 1086384294 Jun 29 06:51 kubevious-mysql-0-bin.000068
-rw-r-----. 1 mysql mysql 1086384291 Jun 29 10:07 kubevious-mysql-0-bin.000069
-rw-r-----. 1 mysql mysql 1086384294 Jun 29 13:22 kubevious-mysql-0-bin.000070
-rw-r-----. 1 mysql mysql 1086384294 Jun 29 16:37 kubevious-mysql-0-bin.000071
-rw-r-----. 1 mysql mysql 1078615188 Jun 29 19:28 kubevious-mysql-0-bin.000072
-rw-r-----. 1 mysql mysql 1082663746 Jun 29 22:18 kubevious-mysql-0-bin.000073
-rw-r-----. 1 mysql mysql 1091156917 Jun 30 01:12 kubevious-mysql-0-bin.000074
-rw-r-----. 1 mysql mysql 1083332656 Jun 30 04:32 kubevious-mysql-0-bin.000075
-rw-r-----. 1 mysql mysql 1088276340 Jun 30 07:48 kubevious-mysql-0-bin.000076
-rw-r-----. 1 mysql mysql 1090787594 Jun 30 10:58 kubevious-mysql-0-bin.000077
-rw-r-----. 1 mysql mysql 1090928788 Jun 30 14:03 kubevious-mysql-0-bin.000078
-rw-r-----. 1 mysql mysql 1088870295 Jun 30 17:04 kubevious-mysql-0-bin.000079
-rw-r-----. 1 mysql mysql 1082569311 Jun 30 20:09 kubevious-mysql-0-bin.000080
-rw-r-----. 1 mysql mysql 1097170062 Jun 30 23:34 kubevious-mysql-0-bin.000081
drwxr-x---. 2 mysql mysql 4096 Jul 1 00:04 kubevious
-rw-r-----. 1 mysql mysql 1090095428 Jul 1 02:46 kubevious-mysql-0-bin.000082
-rw-r-----. 1 mysql mysql 1086163515 Jul 1 06:01 kubevious-mysql-0-bin.000083
-rw-r-----. 1 mysql mysql 1077141984 Jul 1 09:12 kubevious-mysql-0-bin.000084
-rw-r-----. 1 mysql mysql 1111411863 Jul 1 12:08 kubevious-mysql-0-bin.000085
-rw-r-----. 1 mysql mysql 1092864528 Jul 1 14:32 kubevious-mysql-0-bin.000086
-rw-r-----. 1 mysql mysql 775 Jul 1 14:32 kubevious-mysql-0-bin.index
-rw-r-----. 1 mysql mysql 8585216 Jul 1 15:57 '#ib_16384_1.dblwr'
drwxr-x---. 2 mysql mysql 4096 Jul 5 15:18 '#innodb_redo'
-rw-r-----. 1 mysql mysql 12582912 Jul 5 15:18 ibtmp1
-rw-r-----. 1 mysql mysql 588439495 Jul 5 15:19 kubevious-mysql-0-bin.000087
-rw-r-----. 1 mysql mysql 79691776 Jul 5 15:19 ibdata1
-rw-r-----. 1 mysql mysql 16777216 Jul 5 15:19 undo_001
-rw-r-----. 1 mysql mysql 16777216 Jul 5 15:19 undo_002
-rw-r-----. 1 mysql mysql 196608 Jul 5 15:19 '#ib_16384_0.dblwr'
-rw-r-----. 1 mysql mysql 31457280 Jul 5 15:19 mysql.ibd
```
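The listing shows the space is consumed almost entirely by MySQL binary logs (roughly 25 `kubevious-mysql-0-bin.*` files of ~1 GB each). As a manual workaround sketch, assuming shell access to the `kubevious-mysql-0` pod and MySQL root credentials, retention can be shortened and old binlogs purged; the 1-day value below is only an example, not a recommendation:

```sql
-- Run in a MySQL client session inside the pod, e.g.:
--   kubectl exec -it kubevious-mysql-0 -- mysql -u root -p

-- Purge binary logs older than 1 day
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 1 DAY;

-- Shorten binlog retention from the MySQL 8.0 default (30 days) to
-- 1 day; SET PERSIST keeps the setting across restarts
SET PERSIST binlog_expire_logs_seconds = 86400;
```

This only shrinks the binlogs; the configurable, automatic cleanup requested above would still need support in the Helm chart.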
### Environment Details:
- Any platform, Kubernetes 1.21+

Ideally reproducible on any K8s 1.21+ version/browser.