I recently acquired a shiny new set of SSDs to host my VMs. The catch: the SSDs required a new ZFS array, so I had to figure out how to migrate my VMs to the new array and then instruct Xenserver to use it instead of the old one.
Fortunately, with a bit of research I learned this is fairly painless; thanks to this discussion on the Citrix forums for pointing me in the right direction. To change the server / IP address of an existing NFS storage repository in Xenserver, you must do the following (a UUID lookup example follows the list):
- Shut down any VMs using the affected NFS SRs
- Copy the NFS SRs (the directories containing the .vhd files) to the new NFS server
- xe pbd-unplug uuid=<uuid of pbd pointing to the NFS SR>
- xe pbd-destroy uuid=<uuid of pbd pointing to the NFS SR>
- xe pbd-create host-uuid=<uuid of Xen Host> sr-uuid=<uuid of the NFS SR> device-config-server=<New NFS server name> device-config-serverpath=<NFS Share Name>
- xe pbd-plug uuid=<uuid of the pbd created above>
- Reboot the VMs using NFS SRs
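If you're unsure which UUIDs to plug into those commands, the xe CLI can look them up. A minimal sketch, assuming a hypothetical SR named vm-storage (substitute your own SR's name label):

xe sr-list name-label=vm-storage params=uuid --minimal
xe pbd-list sr-uuid=<uuid from above> params=uuid,host-uuid,currently-attached

The first command prints just the SR's UUID; the second lists the PBD connecting each host in the pool to that SR.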
In my case, since my VMs lived on an existing ZFS volume with snapshots I wanted to preserve, I used ZFS send and receive to transfer the data from my old array to my SSD array. Bonus: most of the copying can happen while the VMs are still running, keeping downtime to a minimum. My ZFS copy procedure was as follows:
- Create recursive snapshot of my VM dataset
zfs snapshot -r storage/VMs@migrate
- Start the initial data transfer (this took quite some time to finish)
zfs send -R storage/VMs@migrate | zfs recv ssd/VMs
- Once the initial bulk transfer completes, take another recursive snapshot and send only the incremental changes (this took much less time):
zfs snapshot -r storage/VMs@migrate2
zfs send -R -i storage/VMs@migrate storage/VMs@migrate2 | zfs recv ssd/VMs
- Shut down all affected VMs and do one more ZFS snapshot & incremental transfer to ensure consistent data:
zfs snapshot -r storage/VMs@migrate3
zfs send -R -i storage/VMs@migrate2 storage/VMs@migrate3 | zfs recv ssd/VMs
In the above examples my source dataset was storage/VMs and the destination dataset was ssd/VMs.
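Before the final cutover it's worth confirming that every snapshot actually landed on the destination. Listing the snapshots on the new dataset should show the same @migrate, @migrate2 and @migrate3 entries as the source:

zfs list -t snapshot -r ssd/VMs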
Once the data was all transferred to the new location, it was time to tell Xenserver about it. I had enough VMs that it was worth my time to write a little script to do it. It’s quick and dirty, but it did the job. Behold:
#!/bin/bash
#Author: Nicholas Jeppson
#A simple script to change a xenserver NFS storage repository address to a new location
#Modify NFS_SERVER, NFS_PATH and/or NFS_VERSION to match your environment.
#Run this script on each xenserver host in your pool. Empty output means the transfer was successful.
#This script takes one argument - the name of the SR to be transferred.
SR_NAME="$1"
#Bail out early if no SR name was given; the UUID lookups below would come up empty
[ -z "$SR_NAME" ] && { echo "Usage: $0 <SR name>"; exit 1; }
NFS_SERVER=10.0.0.1
NFS_PATH=/mnt/ssd/VMs/$SR_NAME
NFS_VERSION=4
#Use grep and awk to grab the necessary UUIDs
HOST_UUID=$(xe host-list | egrep -B3 "$(hostname)$" | grep uuid | awk '{print $5}')
PBD_UUID=$(xe pbd-list | grep -A4 -B4 "$SR_NAME" | grep -B2 "$HOST_UUID" | grep -w '^uuid ( RO)' | awk '{print $5}')
SR_UUID=$(xe pbd-list | grep -A4 -B4 "$SR_NAME" | grep -A2 "$HOST_UUID" | grep 'sr-uuid' | awk '{print $4}')
#Unplug & destroy old NFS location, create new NFS location
xe pbd-unplug uuid="$PBD_UUID"
xe pbd-destroy uuid="$PBD_UUID"
NEW_PBD_UUID=$(xe pbd-create host-uuid="$HOST_UUID" sr-uuid="$SR_UUID" device-config-server="$NFS_SERVER" device-config-serverpath="$NFS_PATH" device-config-nfsversion="$NFS_VERSION")
xe pbd-plug uuid="$NEW_PBD_UUID"
Download the script here (right click / save as)
You can run this script in a simple for loop with something like this:
for SR in <list of SR names separated by a space>; do bash <name of script saved from above> $SR; done
If you named the above script nfs-migrate.sh and you had three SRs to change (blog1, blog2, blog3), then it would be:
for SR in blog1 blog2 blog3; do bash nfs-migrate.sh $SR; done
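Once the loop finishes, you can sanity-check the result by listing each PBD's attachment state; every entry should report currently-attached as true:

xe pbd-list params=sr-uuid,currently-attached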
After I migrated the data and ran that script, my VMs booted up using the new SSD array. Success.