ZFS Remote Backup
28 Feb 2015

Since no one bought my N54L NAS, I need to do something with it. My first guess was a remote backup, and that's exactly what I did.
So that's why I visited @ronyspitzer this weekend (well, some weekend in the past, ages ago, since I failed to finish this post). I grabbed my hardware, and this is how it looks:

Maybe I should finally get my driving licence, or stop hauling so much stuff from A to B.
But let's talk about the setup. The N54L is loaded with 3 x 2 TB drives plus a 1 TB drive for the system. The first step was to install FreeBSD with root on ZFS, which is really easy with the FreeBSD 10 installer. From the other three drives I built a raidz:
zpool create -O utf8only=on -O normalization=formD -O casesensitivity=mixed -O aclinherit=passthrough tank raidz ada0 ada1 ada2
This is basically the same setup as my Dell T20. A very useful hint for me was the sysctl for the geom debug flags: because I used disks with old partition tables on them, I always got a "Device busy" error. With sysctl kern.geom.debugflags=16 you can force the creation of the
zfs volume anyway.
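The workaround can be sketched like this (note that debugflags=16 disables GEOM's write protection for in-use disks, so it's a good idea to reset it afterwards):

```shell
# Allow writes to disks GEOM considers busy (e.g. old partition tables).
sysctl kern.geom.debugflags=16

# Now the pool creation from above succeeds despite "Device busy".
zpool create -O utf8only=on -O normalization=formD \
    -O casesensitivity=mixed -O aclinherit=passthrough \
    tank raidz ada0 ada1 ada2

# Re-enable the safety check once the pool exists.
sysctl kern.geom.debugflags=0
```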
With the pool in place, I enabled SSH on my NAS with passwordless key login. Maybe I'll write a blog post about that too. (Probably not, but you can find how that's done on teh interwebz.)
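For completeness, the key setup boils down to something like this (user name and host are placeholders, not my actual setup):

```shell
# Generate a key pair on the machine that sends the backups.
ssh-keygen -t ed25519 -f ~/.ssh/backup_key -N ""

# Install the public key on the NAS for the backup user.
ssh-copy-id -i ~/.ssh/backup_key.pub l33tname@nas.example.org

# Verify that the login works without a password prompt.
ssh -i ~/.ssh/backup_key l33tname@nas.example.org hostname
```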

After all this is done, I can finally use my 'master' backup scripts. You probably don't have a user that is allowed to receive snapshots yet, but ZFS has a nice way to handle this:
sudo zfs allow -u l33tname create,receive,mount,userprop,destroy,send,hold,compression,aclinherit tank
This allows everything necessary to receive snapshots on tank. You can check your config with zfs allow tank.
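With that delegation in place, a full send from the primary machine looks roughly like this (snapshot and host names are made up for illustration):

```shell
# Snapshot the source dataset on the primary machine.
zfs snapshot tank/data@2015-02-28

# Stream it over SSH to the unprivileged backup user on the NAS.
zfs send tank/data@2015-02-28 | \
    ssh l33tname@nas.example.org zfs receive tank/data
```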
Because you probably don't want to send the entire dataset every time, you can use the incremental script. That's what I do, every night via cron:
30 2 * * * /root/backup/backup_incremental >> /root/backup/backup.log
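The core of an incremental run generally looks like this (snapshot and host names are again made up; the actual script may differ):

```shell
# Take tonight's snapshot.
zfs snapshot tank/data@2015-03-01

# Send only the delta between the previous and the new snapshot.
zfs send -i tank/data@2015-02-28 tank/data@2015-03-01 | \
    ssh l33tname@nas.example.org zfs receive tank/data
```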
Actually, I did all of this before I blogged about it.