Tag Archives: mdadm

Fix raidcheck error on Xenserver 6.5

After configuring a software RAID to host my VMs on my Xenserver 6.5 instance, I began receiving odd e-mails once a week. The e-mails simply said:

/usr/sbin/raid-check: line 62: declare: -A: invalid option 
declare: usage: declare [-afFirtx] [-p] [name[=value] ...]

It turns out /usr/sbin/raid-check is a bash script called from the file /etc/cron.d/raid-check. It’s a weekly cron job designed to “scrub” the RAID array. I was getting these e-mails because I had configured my xenserver to e-mail me anything sent to root, which includes messages encountered during cron jobs.

The problem comes from line 62 of the raid-check script, which reads:

declare -A check

After reading the syntax of the declare command, I believe the issue is that declare -A (which creates an associative array) is a bash 4 feature, and the bash shipped with Xenserver 6.5 is older than that; notice that -A doesn’t even appear in the usage message above. I commented out that line and replaced it with

declare -a check

That seemed to work. No more weird errors coming from my xenserver.
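
If you want to confirm the root cause on your own host, check which bash version dom0 is running; declare -A (associative arrays) was only added in bash 4:

bash --version # a 3.x version here explains why declare -A is rejected

If that reports bash 4 or newer, the original declare -A line should work unmodified.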

Convert xenserver 6.5 to software RAID 1

I have written previously about how to convert Citrix Xenserver 6.2 to a software RAID 1. When I upgraded to Xenserver 6.5 I found I had to re-install the xenserver instance because the upgrade didn’t recognize the software RAID. When trying to follow my own guide, I found that I couldn’t create the array; it gave the following error message:

mdadm: unexpected failure opening /dev/md0

It turns out 6.5 handles RAID differently. You have to manually load the RAID kernel modules before you can create arrays. I was able to get this running successfully thanks to guidance from this site, specifically comments on it by Olli.
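
If you want to see the problem for yourself, you can check whether the module is present before loading it (md_mod is the same module name the script below uses):

lsmod | grep md_mod # no output here means the RAID module isn't loaded yet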

The majority of this can simply be copy/pasted into the command window, once drive paths have been updated for your specific setup.
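
In my case /dev/sdc is the existing system disk and /dev/sdd is the new mirror member. A quick sanity check with sgdisk before wiping anything helps confirm you have the device names right:

sgdisk -p /dev/sdc # print the partition table of the current system disk
sgdisk -p /dev/sdd # this disk is about to be wiped, so make sure it's the right one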

# Prepare /dev/sdd
sgdisk --zap-all /dev/sdd
sgdisk --mbrtogpt --clear /dev/sdd
sgdisk -R/dev/sdd /dev/sdc # Replicate partition table from /dev/sdc to /dev/sdd
sleep 5 # Sleep 5 seconds here if you script this…
sgdisk --typecode=1:fd00 /dev/sdd
sgdisk --typecode=2:fd00 /dev/sdd
sgdisk --typecode=3:fd00 /dev/sdd
sleep 5 # Sleep 5 seconds here if you script this…
modprobe md_mod # load the RAID module, because it isn't loaded by default (XS 6.5 only)
yes|mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=0.90 /dev/sdd1 missing # Create md0 (root)
yes|mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=0.90 /dev/sdd2 missing # Create md1 (swap)
yes|mdadm --create /dev/md2 --level=1 --raid-devices=2 --metadata=0.90 /dev/sdd3 missing # Create md2 (storage)
sleep 5 # Sleep 5 seconds here if you script this…
mkfs.ext3 /dev/md0 # Create root FS
mount /dev/md0 /mnt # Mount root FS
cp -xR --preserve=all / /mnt # Replicate root files
mdadm --detail --scan > /mnt/etc/mdadm.conf #generate RAID configuration
sed -i 's/LABEL=[a-zA-Z\-]*/\/dev\/md0/' /mnt/etc/fstab # Update fstab for new RAID device
mount --bind /dev /mnt/dev
mount -t sysfs none /mnt/sys
mount -t proc none /mnt/proc
chroot /mnt /sbin/extlinux --install /boot
dd if=/mnt/usr/share/syslinux/gptmbr.bin of=/dev/sdd
chroot /mnt
mkinitrd -v -f --theme=/usr/share/splash --without-multipath /boot/initrd-`uname -r`.img `uname -r`
exit
sed -i 's/LABEL=[a-zA-Z\-]*/\/dev\/md0/' /mnt/boot/extlinux.conf # Update extlinux for new RAID device
cd /mnt && extlinux --raid -i boot/
sgdisk /dev/sdd --attributes=1:set:2

#Unmount filesystems and reboot
cd
umount /mnt/dev
umount /mnt/sys
umount /mnt/proc
umount /mnt
sync
reboot

Tell BIOS to use disk B
After reboot to disk B…

sgdisk -R/dev/sdc /dev/sdd # Replicate partition table from /dev/sdd to /dev/sdc
sgdisk /dev/sdc --attributes=1:set:2
sleep 5 # Sleep 5 seconds here if you script this…
mdadm -a /dev/md0 /dev/sdc1
mdadm -a /dev/md1 /dev/sdc2
mdadm -a /dev/md2 /dev/sdc3 # If this command gives an error, you need to forget/destroy an active SR first
#This next command is the only command you have to manually update before pasting in. Find the UUID of your xenserver host and paste it between the <> below
xe sr-create content-type=user device-config:device=/dev/md2 host-uuid=<UUID of xenserver host> name-label="RAID 1" shared=false type=lvm
# Watch rebuild progress and wait until no arrays are rebuilding before proceeding with any reboot
watch "mdadm --detail /dev/md* | grep rebuild"
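
Alternatively, the same rebuild progress is visible in /proc/mdstat:

watch -n 1 cat /proc/mdstat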

Done!

Convert xenserver installation to software RAID-1

Update 2/28/2015:  I have a newer article explaining how to do this in Xenserver 6.5.

After having a hard drive nearly die on me and threaten to obliterate the VMs living on it, I realized it would be a good idea to have my xenserver installation live on a RAID array.

Following this guide I was able to successfully migrate my running xenserver installation to a software-based RAID 1, with a few tweaks. In my case I wanted to migrate from a single old drive to two newer ones.

Below are the steps I took to accomplish this.

Partition the new drives

This assumes that your current drive resides on /dev/sda, and your two new drives are /dev/sdb and /dev/sdc.

sgdisk -p /dev/sda
sgdisk --zap-all /dev/sdb
sgdisk --zap-all /dev/sdc
sgdisk --mbrtogpt --clear /dev/sdb
sgdisk --mbrtogpt --clear /dev/sdc
sgdisk --new=1:34:8388641 /dev/sdb
sgdisk --new=1:34:8388641 /dev/sdc
sgdisk --typecode=1:fd00 /dev/sdb
sgdisk --typecode=1:fd00 /dev/sdc
sgdisk --attributes=1:set:2 /dev/sdb
sgdisk --attributes=1:set:2 /dev/sdc
sgdisk --new=2:8388642:16777249 /dev/sdb
sgdisk --new=2:8388642:16777249 /dev/sdc
sgdisk --typecode=2:fd00 /dev/sdb
sgdisk --typecode=2:fd00 /dev/sdc
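
Before moving on it’s worth sanity-checking the layout sgdisk just created:

sgdisk -p /dev/sdb # partitions 1 and 2 should both show type FD00 (Linux RAID)
sgdisk -p /dev/sdc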

The third partition (VM storage) had to be tweaked a bit, since these drives are larger than the one the current xenserver installation lives on. I simply used gdisk instead of sgdisk for this task.

gdisk /dev/sdb
n #create new partition
<enter> #accept defaults for partition number, first, and last sectors
<enter>
<enter>
t #select partition type
3 #select partition number 3
fd00  #set for raid
w   #write changes to disk

Repeat above steps for the other disk (/dev/sdc in my case)
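
If you’d rather keep the whole process scriptable, sgdisk can create the third partition as well; specifying 0 for the start and end sectors tells it to use the largest block of free space:

sgdisk --new=3:0:0 /dev/sdb # partition 3 fills the remaining space on the disk
sgdisk --typecode=3:fd00 /dev/sdb
sgdisk --new=3:0:0 /dev/sdc
sgdisk --typecode=3:fd00 /dev/sdc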

Create the RAID arrays for each partition

mdadm --create /dev/md0 --level=1 --raid-devices=2  /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdc3

Watch array build (optional)

cat /proc/mdstat

Alternatively you can use the watch command to get a real-time update of the RAID build:

watch -n 1 cat /proc/mdstat

Format & mount the array

mkfs.ext3 /dev/md0
mount /dev/md0 /mnt

Copy the root filesystem to the new array

cp -vxpr / /mnt

Install bootloader on the new disks

mount --bind /dev /mnt/dev
mount -t sysfs none /mnt/sys
mount -t proc none /mnt/proc
chroot /mnt /sbin/extlinux --install /boot
dd if=/mnt/usr/share/syslinux/gptmbr.bin of=/dev/sdb
dd if=/mnt/usr/share/syslinux/gptmbr.bin of=/dev/sdc

Generate new initrd image

chroot /mnt
mkinitrd -v -f --theme=/usr/share/splash --without-multipath /boot/initrd-`uname -r`.img `uname -r`
exit

Modify boot file

Edit /mnt/boot/extlinux.conf and replace every mention of the old root filesystem (root=LABEL=xxx) with root=/dev/md0.

vi /mnt/boot/extlinux.conf
:%s/LABEL=<root label>/\/dev\/md0/
:wq
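
Alternatively, the same substitution can be scripted with sed (assuming the label contains only letters and dashes):

sed -i 's/LABEL=[a-zA-Z\-]*/\/dev\/md0/' /mnt/boot/extlinux.conf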

Reboot

Keep the old drive in, but make sure to boot from either one of the member drives of your new array.

Create storage repository

Create a new local storage repository on the new RAID array, similar to the process described here.

xe sr-create content-type=user device-config:device=/dev/md2 host-uuid=<UUID of xenserver host> name-label="RAID-1" shared=false type=lvm

Migrate VMs / disks

Migrate any disk images or VMs living on the old drive to the new array.

If these VMs / disks are not powered on or being used, it is as simple as pulling up xencenter, right-clicking on the VM, clicking Move, and then selecting the new storage repository.

If the VMs are online you can live migrate them to a different xenserver, then live migrate them back to the proper storage repository.

Remove old storage repository

Following instructions found here.
Note: In my case the transfer returned a strange error but was still successful. I had to restart the XAPI toolstack in order for it to let me remove the old storage repository.
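
For reference, xenserver includes a helper script that restarts the XAPI toolstack without touching running VMs:

xe-toolstack-restart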

xe sr-list name-label="<name of SR to remove>"
xe pbd-list sr-uuid=<UUID of SR above>
xe pbd-unplug uuid=<UUID of pbd above>
xe sr-forget uuid=<UUID of SR>

Final reboot

Shut down, disconnect the old drive, and boot back up from the new array. Success.

Configure e-mail alerts (optional)

Now that you have a working RAID array you might want to receive e-mail alerts if there are problems with the array.

First, build an mdadm.conf

mdadm --detail --scan > /etc/mdadm.conf

Modify mdadm.conf to add your desired e-mail address for notifications

sed -i '1i MAILADDR <e-mail address>' /etc/mdadm.conf
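
You can verify that the line ended up at the top of the file:

head -n 1 /etc/mdadm.conf # should print your MAILADDR line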

Thanks to this site for the sed -i 1i trick.

Lastly, enable the mdadm monitoring service. I found via this site that this is fairly easy to do.  Simply enter these two commands:

service mdmonitor start
chkconfig mdmonitor on
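
To double-check that the monitor is running now and will start on boot:

service mdmonitor status
chkconfig --list mdmonitor # runlevels 2 through 5 should show "on"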

Xenserver uses ssmtp to send e-mail. You can follow this guide on how to set it up for SSL if you happen to have an ISP that blocks port 25 (as I do). Otherwise, modify /etc/ssmtp/ssmtp.conf to suit your needs.

You can generate a test event from mdadm to make sure e-mail is configured properly:

mdadm --monitor --test /dev/md0 --oneshot

To get e-mail alerts to work right I had to ensure that FromLineOverride was NOT set to yes (the default). I also had to add this line to /etc/ssmtp/revaliases:

root:<e-mail address being sent from>


Update 02/03/2015: A commenter made me realize I forgot a step: copying the Control Domain OS to the new RAID array. I’ve added that step above, after the “Format & mount the array” section.

Update 02/17/2015: If you are using Xenserver 6.5 you might come across the following error message when trying to create RAID arrays:

mdadm: unexpected failure opening /dev/md0

If this happens, load the md kernel driver like so:

modprobe md_mod

It should then let you create your arrays.