Replies: 2 comments
Sounds like you removed the wrong drive, and the drive that failed did so only intermittently, e.g. due to a loose cable. The drive you actually removed should still have all the data you lost :)
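Once you have the removed drive in your hands again, you can check it without importing anything. A rough sketch, assuming the encrypted data partition is the third one on the disk; the device and mapper names below are placeholders, so adjust them to your layout:

```
# Open the removed drive's encrypted data partition read-only
# (placeholder device name; LUKS under ZFS, as on the other disk).
cryptsetup open --readonly /dev/sdX3 removed_crypt

# Dump the ZFS vdev labels and uberblocks without importing the pool.
# The txg and uberblock timestamps show how recent the on-disk state is.
zdb -l -u /dev/mapper/removed_crypt

# Compare with the drive that is still in the server:
zdb -l -u /dev/mapper/root2_crypt
```

If the removed drive's uberblock timestamps run well past December 10th while the remaining drive's stop around 15:03, that would confirm the drives were mixed up.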
It may sound like that, but I am sure the correct drive was removed. However, I'll check again as soon as I have the drive in my hands again.
Hello,
I'm trying to understand what went wrong on my server.
I've already tried to figure it out with Claude Code on the server, but no luck yet.
Years ago I installed Ubuntu 22.04 LTS with ZFS mirrored across two boot drives.
I only added LUKS encryption to the data partition; the boot partition was also ZFS but not encrypted.
The server booted fine from either drive.
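For reference, this is roughly how the layout can be inspected; the pool names and partition numbering in the comments below are placeholders, not copied from the machine:

```
# Each disk: an unencrypted ZFS boot-pool partition plus a LUKS container
# whose mapper device (root1_crypt / root2_crypt) holds one side of the
# mirrored root pool.
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT

# Mirror health as ZFS sees it:
zpool status -v
```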
On December 10th at 15:03 one drive failed.
During that time the server kept running fine in a degraded state.
A few terabytes were written to the working drive in that period.
Eventually, on December 27th, I realized that a drive had failed.
I got a new drive and shut the server down to replace it.
The shutdown completed and the server did not crash, but I did not see whether there were any error messages during shutdown.
I replaced the faulty drive and booted again.
The system booted from the still-working mirror drive.
But it rolled back all changes that were made after the drive failed on December 10th at 15:03.
All logs on the server show nothing after 15:03, just a gap until the reboot.
No errors can be found anywhere now either; the errors had been written to the logs after 15:03, and I know they were there because I saw them myself prior to the reboot.
The server kept running fine in that time, since we worked on it daily without problems.
It did not slow down or anything; it behaved just fine.
Any ideas why that rollback could have happened?
```
zdb -l /dev/mapper/root2_crypt
LABEL 0
```
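These are further things I could check, if someone can tell me what to look for; the pool name rpool below is a placeholder, root2_crypt is the real mapper device:

```
# Pool command/event history with internal events and timestamps; should show
# imports, resilvers and any rewind/forced imports around the reboot.
zpool history -il rpool

# Kernel-side ZFS events (only since the last module load/boot):
zpool events -v

# Uberblocks on the surviving drive: the active uberblock's timestamp should
# match the newest data the pool actually has after the rollback.
zdb -l -u /dev/mapper/root2_crypt

# Current mirror and resilver state:
zpool status -v
```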