Add systemd branch #61
base: master
Conversation
…ously with other daily/weekly/etc. events.
… to /lib instead of /etc
Thanks for creating this! It seems a lot cleaner to integrate all the functionality into a single target this way, rather than my original method of individually installing the systemd units. One question, though: would this method prevent the user from enabling or disabling units on a per-timer basis? It seems plausible to me, for instance, that some users might want to snap weekly or monthly, but on a rapidly changing dataset might want to avoid hourly or daily snapshots. Overall I like your method, so I might ask you to make a PR I can pull into my fork if the devs aren't interested in integrating this PR into the main branch. |
PR sent! Working with systemd last week, I’ve learned that the “recommended way” is to override the installed systemd units with drop-in files (“Method 1: override”).
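As a concrete sketch of that override approach (the timer name zfs-auto-snapshot-hourly.timer and the schedule are illustrative assumptions, not taken from this PR):

```
# systemctl edit zfs-auto-snapshot-hourly.timer
# ...opens an editor and writes the result to
# /etc/systemd/system/zfs-auto-snapshot-hourly.timer.d/override.conf:

[Timer]
# Clear the installed schedule, then set a custom one.
OnCalendar=
OnCalendar=*-*-* *:30:00
```

Individual timers can also be turned off without touching the rest, e.g. `systemctl disable --now zfs-auto-snapshot-hourly.timer`, which would allow the per-timer control asked about above.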
|
Of course... The definition of a target is an excellent idea. I like this. Can different snapshot timers interfere with each other? Consider the following situation:
|
Yes, I've noticed that auto-snapshotting all ZFS datasets was overdoing it. By default, make install doesn't activate the target. I definitely recommend that anyone with more specific requirements than "snapshot everything" create a custom zfs-auto-snapshot.target in /etc/systemd/system before activating it. There is also the --default-exclude switch for the snapshot script to only snapshot datasets that have been explicitly tagged with the appropriate properties.
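A minimal sketch of that per-dataset tagging (the dataset names are hypothetical; com.sun:auto-snapshot is the property the snapshot script checks):

```
# Opt the whole pool out by default...
zfs set com.sun:auto-snapshot=false rpool
# ...then opt individual datasets back in:
zfs set com.sun:auto-snapshot=true rpool/home
# Or run the script with --default-exclude so that only datasets
# explicitly tagged auto-snapshot=true are snapshotted.
```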
… On Feb 22, 2017, at 7:07 AM, gaerfield ***@***.***> wrote:
Of course... The definition of a target is an excellent idea. I like this.
Can different snapshot-timers interfere with each other. Considering the following situation:
a piece of software knows about ZFS as its storage backend and therefore takes regular snapshots by itself
additionally, the timer scripts perform auto-snapshots
If this could lead into problems, then maybe activating snapshots for all pools by default is not a good idea.
|
Looks like the Makefile might need tweaking to account for individual differences between distros. The /usr/local/lib/systemd/system directory seems to still be /lib/systemd/system/ on Debian Jessie. I know they were planning on merging /usr in time for stretch to be released, but it looks like they reverted that plan due to unforeseen bugs that might not be fixed in time for stretch's upcoming release. So since this layout is going to be standard for at least a few more years, ideally somehow we'd be able to support it. |
I'm thinking that making a .deb package would make it “proper” to install to /lib/systemd/system/ . I would love some help putting that together.
… On Feb 22, 2017, at 10:10 AM, Alex Haydock ***@***.***> wrote:
Looks like the Makefile might need tweaking to account for individual differences between distros. The /usr/local/lib/systemd/system directory seems to still be /lib/systemd/system/ on Debian Jessie.
I know they were planning on merging /usr in time for stretch to be released, but it looks like they reverted that plan <https://lists.debian.org/debian-devel-announce/2017/01/msg00004.html> due to unforeseen bugs that might not be fixed in time for stretch's upcoming release.
So since this layout is going to be standard for at least a few more years, ideally somehow we'd be able to support it.
|
A bit off-topic: Is it possible to determine if changes have happened to a dataset within a timeframe?
The algorithm could be:
Also: once a frequent snapshot happens, all other subsequent snapshots would automatically happen as well. I was looking into inotify(); that's probably a bit over the top (there is no need to know which files changed, only that changes happened). Could the zed daemon be asked for this kind of info, or does this use case need its own daemon? @jakelee8 I would be happy to help, but am off-the-grid for the next 3 weeks ;) (I also have zero experience building a .deb package, so I wouldn't be much help) |
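On the change-detection question raised above: OpenZFS exposes a `written` property (and `written@snapshot`) reporting how much data was written since the most recent (or a named) snapshot, which may cover this use case without inotify or a custom daemon. A sketch, with hypothetical dataset and snapshot names:

```
# Bytes written to tank/data since its most recent snapshot:
zfs get -Hp -o value written tank/data
# Bytes written since a specific named snapshot:
zfs get -Hp -o value written@zfs-auto-snap_daily-2017-02-22 tank/data
# A value of 0 suggests no new snapshot is needed for this period.
```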
@gaerfield Assuming we are talking about userspace, I think this should be possible. Basically, see how much space the current snapshot is taking up. If zero, do not create a new one. Where you run into trouble, though, is with expiration. Let's say you want hourly snapshots going back a day. You take a snapshot, and nothing changes for 24 hours. Now you have one hourly snapshot and it is a day old. Do you delete it? Let's say you answered no. It gets worse. Now, make a change, wait an hour. Now you have two snapshots, one from now and one from 25 hours ago. Do you delete the one from 25 hours ago? It still represents the state from an hour ago. Given that ZFS snapshots are basically free, the naive approach is simple and effective. The only downside I've seen (besides cluttering up the output of |
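The naive retention approach described above can be sketched in a few lines of shell (purely an illustration, not the actual zfs-auto-snapshot logic; prune_old is a hypothetical helper and the snapshot names are made up):

```shell
#!/usr/bin/env bash
# Naive retention: always take the snapshot, then keep only the newest
# $1 entries from a creation-ordered list (oldest first), printing the
# rest so they could be fed to `zfs destroy`.
prune_old() {
  local keep=$1
  # GNU head: print every line except the last $keep.
  head -n -"$keep"
}

# Example: three hourly snapshots, keep the newest two.
printf '%s\n' \
  'tank/data@hourly-10h' \
  'tank/data@hourly-11h' \
  'tank/data@hourly-12h' \
  | prune_old 2
# prints: tank/data@hourly-10h
```

Because snapshots are pruned purely by count, a dataset with no changes still cycles its snapshots out on schedule, which is exactly the trade-off discussed above.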
@rlaager Because the auto-snapshot script automatically destroys more-frequent snapshots between two less-frequent snapshots, I don't see this as a problem (or I haven't understood the use case correctly). Example:
If I need to roll back to before I made my changes, I could use the snapshot from 10:00am. If I need to roll back to somewhere in between, I could use the 7pm snapshot. All smaller changes that happened between 6:00 and 7:00 are lost. This is OK, because it is the default behaviour. If I don't take daily/monthly/yearly snapshots, then the system is configured to keep no snapshots older than 24 hours (or whatever you have configured as the maximum for hourly snapshots). So yes: the last snapshot is 25 hours old, no changes happened in the last 24 hours, and I'm not interested in keeping 25-hour-old snapshots -> delete it. Another question:
Wouldn't this again spin up my disks? (I'm really keen on my disks getting their sleep at night) |
@gaerfield Thank you for offering your help. I'm not familiar with deb packages either, hence the ask for help! There is a debian branch that I assume builds a deb package. For the sake of providing independent changes, I left the deb package feature for another pull request. Very interesting comments on the issues surrounding ZFS auto snapshots. If you find it useful, here's the script I use to back up the snapshots onto a ZFS RAID array. In the … I configured a cache vdev on my SSD. Maybe that could help keep disks powered down when nothing is written? Then again, the caching algorithm may not be smart enough; after all, it doesn't have the necessary prior knowledge to always keep snapshot sizes in the cache. Also, I haven't configured my HDDs to spin down when idle yet, so I can't confirm whether it keeps them spun down at night.
#!/usr/bin/env bash
DEFAULT_EXCLUDE=
function zfs_exists() {
zfs list "$1" &> /dev/null
}
# Create $1 unmounted and excluded from auto-snapshot, or (re)apply those properties if it already exists.
function zfs_create_if_not_exist() {
if ! zfs_exists "$1"; then
zfs create -o canmount=off -o com.sun:auto-snapshot=false "$1" || exit 1
else
zfs set canmount=off "$1" && \
zfs set com.sun:auto-snapshot=false "$1"
fi
}
# Print every filesystem under $1 that is eligible for backup, honouring
# the com.sun:auto-snapshot property and DEFAULT_EXCLUDE.
function zfs_list_filesystems_for_backup() {
local line
zfs list -t filesystem -o name,com.sun:auto-snapshot -rH "$1" \
| grep -v "@zfs-auto-snap_frequent" \
| while read line; do
local autosnap=`echo "$line" | cut -f 2`
if [ "$autosnap" == 'false' ] || \
([ "$autosnap" == '-' ] && [ ! -z "$DEFAULT_EXCLUDE" ]); then
continue
fi
echo "$line" | cut -f 1
done
}
# List the snapshot names (just the @suffix) of the dataset $1, oldest first.
function zfs_list_snapshots() {
zfs list -t snapshot -d 1 -o name -pH "$1" | sed 's/^.*@/@/'
}
function zfs_get_earliest_snapshot() {
zfs_list_snapshots "$1" | head -n 1
}
function zfs_get_latest_snapshot() {
zfs_list_snapshots "$1" | tail -n 1
}
# Defer-destroy snapshots under $1 whose used size is zero, i.e. backups
# that reference no unique data.
function remove_zero_sized_backups() {
zfs list -t snapshot -o name,used -pH -r "$1" \
| grep -P '\t0$' \
| cut -f 1 \
| xargs -r -L 1 -- zfs destroy -d
}
# Incrementally replicate every eligible filesystem under SOURCE into SINK.
function backup_snapshots() {
local SOURCE=$1
local SINK=$2
local ROOT="$2/$1"
local SNAPSHOT_FROM
local SNAPSHOT_UNTIL
local ds
if [ -z "$SOURCE" ] || [ -z "$SINK" ]; then
echo "Usage: $0 SOURCE SINK" 1>&2
exit 1
fi
if ! zfs_exists "$SINK"; then
zfs create \
-o canmount=off \
-o mountpoint=none \
-o com.sun:auto-snapshot=false "$SINK" || exit 1
fi
zfs_list_filesystems_for_backup "$SOURCE" | while read ds; do
zfs_create_if_not_exist "$SINK/$ds" || exit 1
SNAPSHOT_FROM=`zfs_get_latest_snapshot "$SINK/$ds"`
SNAPSHOT_UNTIL=`zfs_get_latest_snapshot "$ds"`
if [ -z "$SNAPSHOT_FROM" ]; then
# No snapshots on the sink yet: do a full send of the latest source snapshot.
zfs send -eL "$ds$SNAPSHOT_UNTIL" | zfs recv -duvF "$ROOT" && \
zfs set canmount=off "$SINK/$ds" && \
zfs set com.sun:auto-snapshot=false "$SINK/$ds"
elif [ "$SNAPSHOT_FROM" != "$SNAPSHOT_UNTIL" ]; then
zfs send -epL -I "$SNAPSHOT_FROM" "$ds$SNAPSHOT_UNTIL" | zfs recv -duv "$ROOT"
fi
done
}
backup_snapshots rpool vpool/BACKUP
remove_zero_sized_backups vpool/BACKUP |
Uuuuh... big script. I'll definitely try this out later (in 3 weeks - dam'n holidays :D). As for the SSD cache, the idea is not bad at all... If the cache changed, then reads/writes must have happened (disks had spun up). Take a snapshot: when its size is 0, then only reads happened. The setup is complicated and not very generic... Hmm... Btw: I had a look into the zed daemon yesterday, but this seems to be a dead end. Sadly the daemon only handles diagnostic events (disk is broken, successful scrubbing), which trigger zedlets (mail, SMS, or something like this). Thanks for the script and tips. |
Looks like #59 does what you want: the |
I'm a little busy at the moment, but I might have the resources to look into making The |
Can someone explain to me the advantages of using systemd instead of cron for this? |
@mookie- Isn't Stack Overflow a more appropriate place to answer this question? See cron-vs-systemd-timers. If you ask me, there's no absolute right or wrong answer here. For new systems I prefer the systemd way (ignoring the discussions about whether systemd is good or bad) because:
|
@gaerfield I didn't ask about "is systemd better than non-systemd". I was interested in the advantages of handling this task with systemd, because it's more effort to maintain two branches instead of one, and (as I don't know much about systemd) I thought simple cron jobs were sufficient for it. But you are right, Stack Overflow is a better place for that question. And thank you for the link. |
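To make the cron-versus-timer comparison concrete, here is a sketch of the two approaches side by side (the paths, unit name, and schedule are illustrative assumptions, not taken from this PR):

```
# cron: one line, e.g. in /etc/cron.d/zfs-auto-snapshot:
#   0 * * * * root /usr/local/sbin/zfs-auto-snapshot --label=hourly //
#
# systemd: an equivalent timer unit, e.g. zfs-auto-snapshot-hourly.timer:
[Unit]
Description=Hourly ZFS auto-snapshot timer

[Timer]
OnCalendar=hourly
# Unlike cron, a persistent timer catches up on runs missed while the
# machine was powered off or asleep.
Persistent=true

[Install]
WantedBy=timers.target
```

One practical difference this shows: timers can be enabled, disabled, and overridden per unit with systemctl, while cron entries are edited as text.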
So what's going on with this? :) I stumbled across https://briankoopman.com/zfs-automated-snapshots/ which seemed to suggest that this project used systemd, but it doesn't look that way to me. Perhaps they used an Arch-modified version of this repository. Anyway, it would be nice if this could be merged at some point. |
This PR is very old and I haven't looked at it for a long time so it's quite possibly out of date now. I've been using Sanoid in the meantime as it's very easy to adapt to use with systemd timers. |
This pull request adds a systemd branch for zfs-auto-snapshot, based on the work of @ajhaydock and @gaerfield. The zfs-auto-snapshot.target configures all the equivalent cron scripts from master.
P.S. Anyone seen https://github.com/zfsnap/zfsnap ?