Update 2/28/2015: I have a newer article explaining how to do this in XenServer 6.5.
After a hard drive nearly died on me and threatened to obliterate the VMs living on it, I realized it would be a good idea to have my XenServer installation live on a RAID array.
Following this guide, I was able to successfully migrate my running XenServer installation to a software-based RAID 1, with a few tweaks. In my case I wanted to migrate from a single old drive to two newer ones.
Below are the steps I took to accomplish this.
Partition the new drives
This assumes that your current drive resides on /dev/sda, and your two new drives are /dev/sdb and /dev/sdc.
sgdisk -p /dev/sda
sgdisk --zap-all /dev/sdb
sgdisk --zap-all /dev/sdc
sgdisk --mbrtogpt --clear /dev/sdb
sgdisk --mbrtogpt --clear /dev/sdc
sgdisk --new=1:34:8388641 /dev/sdb
sgdisk --new=1:34:8388641 /dev/sdc
sgdisk --typecode=1:fd00 /dev/sdb
sgdisk --typecode=1:fd00 /dev/sdc
sgdisk --attributes=1:set:2 /dev/sdb
sgdisk --attributes=1:set:2 /dev/sdc
sgdisk --new=2:8388642:16777249 /dev/sdb
sgdisk --new=2:8388642:16777249 /dev/sdc
sgdisk --typecode=2:fd00 /dev/sdb
sgdisk --typecode=2:fd00 /dev/sdc
The third partition (VM storage) had to be tweaked a bit since these drives are larger than the one holding the current XenServer installation. I simply used gdisk instead of sgdisk for this task.
gdisk /dev/sdb
n        #create new partition
<enter>  #accept defaults for partition number, first, and last sectors
<enter>
<enter>
t        #select partition type
3        #select partition number 3
fd00     #set for raid
w        #write changes to disk
Repeat the above steps for the other disk (/dev/sdc in my case).
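If you would rather skip the interactive prompts, the same third partition can be created non-interactively with sgdisk (a sketch, assuming partition 3 should simply fill the remaining space on each disk):
sgdisk --new=3:0:0 /dev/sdb       #partition 3, default start, use all remaining space
sgdisk --typecode=3:fd00 /dev/sdb #set for raid
sgdisk --new=3:0:0 /dev/sdc
sgdisk --typecode=3:fd00 /dev/sdc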
Create the RAID arrays for each partition
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdc3
Watch array build (optional)
cat /proc/mdstat
Alternatively, you can use the watch command to get a real-time view of the RAID build:
watch -n 1 cat /proc/mdstat
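If you would rather not continue until the initial sync has finished, mdadm can block until the arrays are clean (optional; a small sketch, assuming your mdadm build supports the --wait flag):
mdadm --wait /dev/md0 /dev/md1 /dev/md2  #returns once any resync/recovery has completed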
Format & mount the array
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt
Copy the root filesystem to the new array
cp -vxpr / /mnt
Install bootloader on the new disks
mount --bind /dev /mnt/dev
mount -t sysfs none /mnt/sys
mount -t proc none /mnt/proc
chroot /mnt /sbin/extlinux --install /boot
dd if=/mnt/usr/share/syslinux/gptmbr.bin of=/dev/sdb
dd if=/mnt/usr/share/syslinux/gptmbr.bin of=/dev/sdc
Generate new initrd image
chroot /mnt
mkinitrd -v -f --theme=/usr/share/splash --without-multipath /boot/initrd-`uname -r`.img `uname -r`
exit
Modify boot file
Edit /mnt/boot/extlinux.conf and replace every mention of the old root filesystem (root=LABEL=xxx) with root=/dev/md0.
vi /mnt/boot/extlinux.conf
:%s/LABEL=<root label>/\/dev\/md0/
:wq
Reboot
Keep the old drive in, but make sure to boot from one of the member drives of your new array.
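Once the host is back up, it's worth a quick sanity check that dom0 really is running from the array before continuing (device names assume the layout used above):
cat /proc/cmdline   #should include root=/dev/md0
df -h /             #root filesystem should be mounted from /dev/md0
cat /proc/mdstat    #all three arrays should be active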
Create storage repository
Create a new local storage repository on the new RAID array, similar to what is described here.
xe sr-create content-type=user device-config:device=/dev/md2 host-uuid=<UUID of xenserver host> name-label="RAID-1" shared=false type=lvm
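If you don't have the host UUID handy, it can be looked up first, and you can confirm the new SR afterwards (standard xe commands):
xe host-list                      #note the uuid of this xenserver host
xe sr-list name-label="RAID-1"    #verify the new storage repository exists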
Migrate VMs / disks
Migrate any disk images or VMs living on the old drive to the new array.
If these VMs / disks are not powered on or in use, it is as simple as pulling up XenCenter, right-clicking on the VM, clicking Move, and then selecting the new storage repository.
If the VMs are online, you can live migrate them to a different XenServer host, then live migrate them back onto the proper storage repository.
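If you prefer the CLI to XenCenter, the disk moves can also be done with xe (a sketch, assuming XenServer 6.1 or newer where live VDI migration is available; the UUIDs are placeholders):
xe vbd-list vm-uuid=<UUID of VM> params=vdi-uuid                 #find the disks attached to the VM
xe vdi-pool-migrate uuid=<UUID of VDI> sr-uuid=<UUID of new SR>  #move the disk to the new SR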
Remove old storage repository
Follow the instructions found here.
Note: In my case the transfer returned a strange error but was still successful. I had to restart the XAPI toolstack in order for it to let me remove the old storage repository.
xe sr-list name-label="<name of SR to remove>"
xe pbd-list sr-uuid=<UUID of SR above>
xe pbd-unplug uuid=<UUID of pbd above>
xe sr-forget uuid=<UUID of SR>
Final reboot
Shut down, disconnect the old drive, and boot back up from the new array. Success.
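With the old drive disconnected, a final check that all three arrays came up with both members doesn't hurt (same device names as above):
cat /proc/mdstat          #each array should show [UU]
mdadm --detail /dev/md2   #confirm both member disks are listed as active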
Configure e-mail alerts (optional)
Now that you have a working RAID array, you might want to receive e-mail alerts if there are problems with it.
First, build an mdadm.conf
mdadm --detail --scan > /etc/mdadm.conf
Modify mdadm.conf to add your desired e-mail address for notifications
sed -i '1i MAILADDR <e-mail address>' /etc/mdadm.conf
Thanks to this site for the sed -i 1i trick.
Lastly, enable the mdadm monitoring service. I found via this site that this is fairly easy to do. Simply enter these two commands:
service mdmonitor start
chkconfig mdmonitor on
XenServer uses ssmtp to send e-mail. You can follow this guide on how to set it up for SSL if you happen to have an ISP that blocks port 25 (as I do). Otherwise, modify /etc/ssmtp/ssmtp.conf to suit your needs.
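For reference, a minimal /etc/ssmtp/ssmtp.conf might look something like this (illustrative values only; your mail relay, port, and credentials will differ):
root=<e-mail address that receives mail for root>
mailhub=mail.example.com:587   #your ISP or mail provider's SMTP relay
AuthUser=<smtp username>       #only needed if the relay requires authentication
AuthPass=<smtp password>
UseSTARTTLS=YES                #if the relay supports or requires TLS
hostname=<hostname of this xenserver>
#leave FromLineOverride unset (see the note below)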
You can generate a test event from mdadm to make sure e-mail is configured properly:
mdadm --monitor --test /dev/md0 --oneshot
To get e-mail alerts to work right I had to ensure that FromLineOverride was NOT set to yes (default). I also had to add this line to /etc/ssmtp/revaliases:
root:<e-mail address being sent from>
Update 02/03/2015: A commenter made me realize I forgot a step: copying the Control Domain OS to the new RAID array. I've added that step above, after the "Format & mount the array" section.
Update 02/17/2015: If you are using XenServer 6.5, you might come across the following error message when trying to create RAID arrays:
mdadm: unexpected failure opening /dev/md0
If this happens, load the md kernel driver like so:
modprobe md
It should then let you create your arrays.
When do you copy the Control Domain OS?
I just realized I forgot to mention that step in the article. I have updated it to reflect the copy command. Thanks for pointing that out!
I was kind of hoping for magic but I guess I’ll have to settle for reality this one time.
Hi, XenServer 7 doesn't have extlinux… Any clues?
Sorry I don’t have any information on getting Xen 7 to work with RAID. It’s based on CentOS 7 and is a whole different animal to work with. I hope to get it figured out in the coming months but don’t have time to look into it now.