ZFS remote replication script with reporting

In my experimentation with FreeNAS one thing I found lacking was the quality of reports it generated. I suppose one philosophy is that the smaller the e-mail the better, but my philosophy is that the e-mail should still be legible. Some of the e-mails I get from FreeNAS are simply bizarre and cryptic.

FreeNAS has an option to replicate your ZFS volumes to a remote source for backup. As far as I can tell there is no report e-mail when the replication is done, although there may be a cryptic e-mail if anything failed. I have grown used to daily status e-mails from my previous NAS solution (Debian with homegrown scripts). I set out to do this with FreeNAS and added a few extra features along the way.

My script requires that you have already created an appropriate user and private/public key pair for both the source and destination machines (to allow for passwordless logins). Instructions on how to do this are detailed below. You can download the script here.

Notes and observations

I learned quite a bit when creating this script. The end result is a script that e-mails me a beautiful report telling me anything that was added or removed since the last backup.

  • I used dd for greater speed as suggested here
  • I learned from here that the -R switch for ZFS send sends the entire snapshot tree.
  • The ZFS diff command currently has a bug where it does not always report deleted files / folders. It was opened two years ago, closed, and then recently re-opened as it is still an issue. It is the reason my script uses both zfs diff and rsync – so I can continually see the bug in action.
  • When dealing with rsync, remember the / at the end!
  • In bash you can capture the output of a command into a variable.
  • When echoing that variable, enclose it in quotes to preserve the formatting (see the short example after this list).
  • Use the -r flag with sed for extended regex support.
  • In my testing the built-in FreeNAS replication script didn’t appear to replicate the latest snapshot. Interesting…
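
Here is the short example promised above: a minimal illustration of capturing command output into a variable and echoing it with quotes (the zfs list command is just a convenient source of multi-line output):

SNAPSHOT_LIST="$(zfs list -H -t snapshot -o name)"
echo "$SNAPSHOT_LIST"   # quoted: one snapshot per line, formatting preserved
echo $SNAPSHOT_LIST     # unquoted: everything collapses onto a single line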

Below are the preliminary steps that are needed in order for the script to run properly.

Configure a user for replication

Create users

Either manually or through the FreeNAS UI, create a user that will run the backup script. Create that same user on the remote box (the backup server).
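
If you prefer a shell on a FreeBSD-based box, something along these lines creates the user (backup is just the example username used throughout this post; on FreeNAS itself, users created outside the web UI may not survive a reboot, so the UI is the safer route there):

pw useradd backup -m -s /bin/sh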

Generate RSA keys

Log into local host and generate RSA keys to allow for passwordless login to the system

cd .ssh
ssh-keygen

Make note of the filenames you gave it (the default is id_rsa and id_rsa.pub)

Authorize the resulting public key

Log into the remote host and add the public key of the local host to ~/.ssh/authorized_keys of the user you created above. One way to accomplish this is to copy the public key on the main server and paste it into the authorized_keys file on the backup server.
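
If you'd rather skip the copy/paste, you can append the key in a single step with a one-liner like this (it assumes the user is named backup, the key is id_rsa.pub, and the backup server answers at backuphost; adjust to taste):

cat ~/.ssh/id_rsa.pub | ssh backup@backuphost 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'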

On the main server

(assuming the keyfile name is id_rsa)

cd .ssh
less id_rsa.pub

Copy the output on the screen in its entirety

On the backup server

Paste that public key into the authorized_keys file of the backup user

cd .ssh
vi authorized_keys

Allow the new user to mount filesystems

FreeNAS requires you to specifically allow regular users to mount filesystems as described here.

  1. In the web interface under System > Sysctls > Add sysctl:
    Variable: vfs.usermount
    Value: 1
    Enabled: yes
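
The sysctl added through the web interface persists across reboots; to apply it immediately without rebooting, you can also set it from a root shell:

sysctl vfs.usermount=1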

Grant ZFS permissions to the new user

In order for the dataset creation (full backup) feature to work the user we’ve created needs to have specific ZFS permissions granted as outlined here.

Run this command on both the main and backup servers:

zfs allow backup create,destroy,snapshot,rollback,clone,promote,rename,mount,send,receive,quota,reservation,hold storage

where backup is the new user and storage is the dataset name. I’m pretty sure you can make those permissions a little more fine-grained, but I threw a bunch of them in there for simplicity’s sake.
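
To confirm the delegation took effect, run zfs allow with just the dataset name; it prints the permissions currently delegated on that dataset:

zfs allow storage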

Configure HP iLO (optional)

My current backup server is an old HP ProLiant server equipped with HP iLO. I decided to add a section to my script that, when enabled in the variables section, uses iLO to power the machine on. If you do not have iLO, or do not wish to use it to control your backup server, you can skip this section.

First, create a user in iLO and grant it Virtual Power and Reset permissions (all the rest can be denied).

Next, copy the .pub file you created earlier to your computer so you can upload it through the iLO web interface. Make sure an iLO user exists and that the last part of the public key file (the username comment) matches the user you created in HP iLO exactly.

When I first tried this, no matter what I did I couldn’t get passwordless login to work. After much weeping, wailing, and gnashing of teeth, I finally discovered from here that the -f and -C options of the ssh-keygen command are required for iLO to accept the key. I had to regenerate a private/public key pair with the following options, where backup is the user I created in iLO:

ssh-keygen -b 1024 -f backup -C backup
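
With the key accepted, the script (or you) can power the backup server on non-interactively with something along these lines, where backup-ilo stands in for your iLO interface's address and power on is the iLO CLI command:

ssh backup@backup-ilo "power on"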

Refresh owncloud file cache

I came across an issue with owncloud where I had manually placed files in my user directory but the files were not showing up in owncloud. I found from here that you can access the owncloud console directly and trigger a re-scan of your files.

To trigger a re-scan, open up a terminal session to your owncloud server and run the following command:

php /path/of/owncloud/console.php files:scan --all

This will trigger a re-scan of all files for all users. You can replace --all with a user ID if you just want to scan a specific user’s folder instead.
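
For example, to rescan only the files of a hypothetical user named alice:

php /path/of/owncloud/console.php files:scan alice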

Xenserver – The uploaded patch file is invalid

It has been six months since I’ve applied any patches to my Citrix Xenserver hypervisor. Shame on me for not checking for updates. The thing has been humming along without any issues so it was easy to forget about.

In trying to install xenserver patches today I kept getting this error message no matter what I tried:

The uploaded patch file is invalid

After deleting everything I could (including files hanging out in /var/patch) I realized that I was simply Doing It Wrong™. D’oh!

When applying xenserver updates, the expected file extension is .xsupdate. I had been trying to xe patch-upload the downloaded zip files, when I was supposed to have extracted them first. This quick little line unzipped all my patch ZIP files for me in one fell swoop:

find *.zip -exec unzip {} \;

Once everything was unzipped I was able to upload and apply the resulting .xsupdate files without issue.
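
For reference, the upload-and-apply sequence looks roughly like this (the patch filename below is only an example; xe patch-upload prints a UUID, which xe patch-pool-apply then consumes):

xe patch-upload file-name=XS62E014.xsupdate
xe patch-pool-apply uuid=<UUID printed by the previous command>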

Find out free disk space from the command line

The du command can be used in any Unix shell to determine how much space a folder is taking. It’s a quick way to determine which directory is using the most space.

If you run du on the root drive “/” you will get an idea of how much space the drive is using. One unfortunate side effect is that if you have any other drives or filesystems mounted, du will tally their usage as well. Aside from taking a long time, that method returns data we don’t necessarily want.

Fortunately, there are a few switches you can use to fix this problem. The -x switch tells du to ignore other filesystems. Perfect.

There are a few other switches that prove useful. Below is a list of my favorites:

  • -x Ignore other filesystems
  • -h Use human readable numbers (kilobytes, megabytes, and gigabytes) instead of raw bytes
  • -s Provide a summary of the folder instead of listing the size of each file inside the folder
  • * (not a switch, but rather an argument to put after your directory). This summarizes the subfolders in that directory, as opposed to simply returning the size of the directory itself.

For troubleshooting low disk space errors, the following command will give you a good place to start:

du -hsx /*

You can then dig further by altering the above command to reflect a directory instead of /, or simply do * if you want a breakdown of the directory you are currently in.
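
If your sort understands the -h flag (GNU coreutils does), piping through it ranks the results from smallest to largest, which makes the biggest offender easy to spot:

du -hsx /* 2>/dev/null | sort -h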

Compare two latest ZFS snapshots for differences

In my previous post about ZFS snapshots I discussed how to get the latest snapshot name. I came across a need to get the name of the second to last snapshot and then compare that with the latest. A little CLI kung-fu is required for this but nothing too scary.

The command of the day is: zfs diff.

zfs diff storage/mythTV@auto-20141007.1248-2h storage/mythTV@auto-20141007.1323-2h

If you get a permissions error using zfs diff, you’re probably not running as root. You will need to delegate the diff ZFS permission to the account you’re using:

zfs allow backup diff storage

where backup is the account you want to grant permissions for and storage is the dataset you want to grant permissions to.

The next step is to grab the two latest snapshots using the following commands.

Obtain latest snapshot:

zfs list -t snapshot -o name -s creation -r storage/Documents | tail -1

Obtain the second to latest snapshot:

zfs list -t snapshot -o name -s creation -r storage/Documents | tail -2 | sort -r | tail -1

Putting it together in one line:

zfs diff `zfs list -t snapshot -o name -s creation -r storage/Documents | tail -2 | sort -r | tail -1` `zfs list -t snapshot -o name -s creation -r storage/Documents | tail -1`

While doing some testing I came across an unfortunate bug with the ZFS diff function. Sometimes it won’t show files that have been deleted! It indicates that the folder the deleted files were in was modified, but doesn’t go into any further detail. This bug appears to affect all ZFS implementations per here and here. As of this writing there has been no traction on this bug. The frustrating part is that the bug is over two years old.

The workaround for this regrettable bug is to use rsync with the -n parameter to compare snapshots. -n tells rsync to do a dry run only and not actually copy anything.

To use Rsync for comparison, you have to do a little more CLI-fu to massage the output from the zfs list command so it’s acceptable to rsync as well as include the full mountpoint of both snapshots. When working with rsync, don’t forget the trailing slash.

rsync -vahn --delete /mnt/storage/Documents/.zfs/snapshot/`zfs list -t snapshot -o name -s creation -r storage/Documents | tail -2 | sort -r | tail -1 | sed 's/.*@//g'`/ /mnt/storage/Documents/.zfs/snapshot/`zfs list -t snapshot -o name -s creation -r storage/Documents | tail -1 | sed 's/.*@//g'`/

Command breakdown:

Rsync arguments:
-v means verbose (lists files added/deleted)
-a means archive (preserve permissions)
-h means human readable numbers
-n means do a dry run only (no writing)
--delete will delete anything in the destination that’s not in the source (but not really, since we’re doing -n; it will just print what it would delete on the screen)

Sed arguments (for the expression 's/.*@//g'):
s invokes search and replace (substitute)
.*@ is a simple regex matching everything up to and including the @ sign
// is the (empty) replacement: we replace the match with nothing
g tells sed to keep looking for other matches (not really necessary if we know there is only one in the stream)

All these backticks are pretty ugly, so for readability’s sake, save those commands into variables instead. The following is how you would do it in bash:

FIRST_SNAPSHOT="`zfs list -t snapshot -o name -s creation -r storage/Documents | tail -2 | sort -r | tail -1 | sed 's/.*@//g'`"
SECOND_SNAPSHOT="`zfs list -t snapshot -o name -s creation -r storage/Documents | tail -1 | sed 's/.*@//g'`"
rsync -vahn --delete /mnt/storage/Documents/.zfs/snapshot/$FIRST_SNAPSHOT/ /mnt/storage/Documents/.zfs/snapshot/$SECOND_SNAPSHOT/

I think I’ll stop for now.

Blow away ZFS snapshots and watch the progress

For the last month I have had a testing system (FreeNAS) take ZFS snapshots of sample datasets every five minutes. As you can imagine, the snapshot count has risen quite dramatically. I am currently at over 12,000 snapshots.

In testing a backup script I’m working on I’ve discovered that replicating 12,000 snapshots takes a while. The initial data transfer completes in a reasonable time frame but copying each subsequent snapshot takes more time than the original data. Consequently, I decided to blow away all my snapshots. It took a while! I devised this fun little way to watch the progress.

Open two terminal windows. In terminal #1, enter the following:

bash
while [ true ]; do zfs list -H -t snapshot | wc -l; sleep 6; done

The above loads bash and then runs a simple loop to count the total number of snapshots on the system. The sleep command is only there because it takes a few seconds to return the results when you have more than 10,000 snapshots.

Alternatively you could make the output a little prettier by entering the following:

while [ true ]; do REMAINING="`zfs list -H -t snapshot | wc -l`"; echo "Snapshots remaining: $REMAINING" ; sleep 6; done

In terminal #2, enter the following (taken from here):

bash
for snapshot in `zfs list -H -t snapshot | cut -f 1`
do
zfs destroy $snapshot
done

You can now hide terminal #2 and observe terminal #1. It will show you how many snapshots are left, refreshing the number every 6 seconds. Neat.
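
As an aside, if your ZFS version supports the percent-sign range syntax for zfs destroy, you can blow away a whole run of snapshots on a single dataset without a loop (the dataset and snapshot names below are only examples):

zfs destroy -v storage/mythTV@auto-20141001.0000-2h%auto-20141007.1323-2h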

Fix Apache “Could not reliably determine name” error

For too many years now I have been too lazy to investigate the Apache error message I get whenever I restart the service:

 ... waiting apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName

I finally decided to investigate it today and found this post which describes a simple fix: create /etc/apache2/conf.d/name and add the ServerName directive to it.

sudo vim /etc/apache2/conf.d/name
ServerName jeppson.org

Change ServerName to be whatever you would like, and you’re good to go.
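
To sanity-check the new directive and pick it up without a full restart (assuming the Debian-style layout implied by /etc/apache2):

apache2ctl configtest
sudo service apache2 reload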

Fix subsonic after 5.0 upgrade

Subsonic is a great media streaming program that I’ve used for a few years now. It was originally designed for streaming your private music collection but has since added the ability to stream your video collection as well. It’s great for those of us who can’t bring our entire audio/visual library with us but would still like access to it wherever we are.

I run subsonic behind an apache reverse proxy configuration similar to this one to allow it to run on the same server as other websites over port 80 and to allow for HTTPS. (When I set up my subsonic server years ago it had no native support for HTTPS; the only way to have HTTPS was through another web server such as apache.)

After downloading and installing the Subsonic 5.0 upgrade I ran into a couple of issues, detailed below.

Issue #1

This is an issue I have experienced several times over the years: upgrading causes /etc/default/subsonic to be replaced with a clean default version. This is a problem if you have customized your subsonic setup, in my case the context path and port. (My experience is with Debian; I don’t know whether other distros behave the same way.)

Resolution

Before you upgrade subsonic, make a backup copy of /etc/default/subsonic, then restore that copy after the upgrade. If you forgot to make a backup first, edit the new /etc/default/subsonic file and check the following:

  • Make sure the --port and --https-port arguments are correct
  • Re-add --context-path if you had it configured before. In my setup, I have configured --context-path=/subsonic to make my apache rewrite rules easier to manage (see the example line after this list).
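
Put together, the arguments line in /etc/default/subsonic ends up looking something like this (the port values here are only examples):

SUBSONIC_ARGS="--port=4040 --https-port=0 --context-path=/subsonic"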

Issue #2

The video streaming function broke entirely. This was because it was trying to reference a local IP address to stream the videos, despite my apache ProxyPass rule. This problem will only surface if you are running Subsonic behind a reverse proxy.

Resolution

After a few days of searching I finally came across this helpful post. To get video to work, simply add

 ProxyPreserveHost on

to the apache configuration file you used for your reverse proxy, then restart apache. This will fix the video streaming function but you will notice your HTTPS icon change (if you configured HTTPS), notifying you that some content on the page is not encrypted. This is due to subsonic streaming the video in plain HTTP instead of HTTPS.

Unfortunately the fix to that appears to require at least Apache 2.4.5. Since I have an earlier version, I was greeted with this lovely message:

Syntax error on line 15 of /etc/apache2/sites-enabled/subsonic:
Invalid command 'SSLProxyCheckPeerName', perhaps misspelled or defined by a module not included in the server configuration

Since I did not want to upgrade my version of apache, I simply decided to accept the risk of my video streams possibly being intercepted.

Success.

WordPress wp-admin links are incorrect after site move

I’ve been scouring the internet for months for this particular issue. It must not be very common. Ever since I moved my site from one source (a local IP address) to another (a web-facing URL) I have had issues with bad links (things pointing to the old address instead of the new one).

I have mostly resolved them (using methods from this post) but one vexing issue remained: links in wp-admin stayed bad; specifically, the column headers and pagination links in the All Posts section of managing the site still pointed to the backend IP address instead of the domain name of the site.

I found a few bug reports mentioning this but no clear resolution. After investigating ticket 18944 I was put on the right track. One link from that ticket pointed me in the right direction, but the comment that really drove me to the resolution was the last one:

Any proxy configuration is “supported” by WordPress, you just need to remap the server vars based on whatever that particular proxy configuration is using.

This is proxy 101.

That made me realize that when I changed from a local IP to a public facing IP, I also went from direct access to the blog to being behind a reverse proxy. The issue I’ve been having is a proxy issue, not a site move issue. Thanks to the comment above, I learned I need to add a single line to wp-config.php:

$_SERVER[ 'HTTP_HOST' ]   = "jeppson.org";

Replace jeppson.org with the base URL of your site. That’s it! All links are correct now. Brilliant.

WordPress directing to old URL after upgrade to 4.0

I encountered an odd issue after upgrading one of my wordpress sites to version 4.0: the login page suddenly kept trying to redirect to its old address (I had changed addresses some time ago).

I still don’t know how or why this happened, but after some googling the way to fix it was to follow instructions as outlined here.

wp-login.php can be used to (re-)set the URIs. Find this line:

require( dirname(__FILE__) . '/wp-load.php' );

and insert the following lines below:

//FIXME: do comment/remove these hack lines. (once the database is updated)
update_option('siteurl', 'http://your.domain.name/the/path' );
update_option('home', 'http://your.domain.name/the/path' );

You’re done. Test your site to make sure that it works right. If the change involves a new address for your site, make sure you let people know the new address, and consider adding some redirection instructions in your .htaccess file to guide visitors to the new location.

This worked for me – I could now log in with the correct URL.