Tag Archives: LVM

Rename LVM group in CentOS 7

I recently discovered that all my VMs have the same volume group name – the default assigned when CentOS is installed. My OCD got the best of me and I set out to change these names to reflect each VM's hostname. The problem is that if you rename the volume group containing the root partition, the system will not boot, because /etc/fstab, the GRUB configuration, and the initramfs still reference the old name.

The solution is to run a series of commands to update those references. Thanks to the CentOS forums for the information. In my case I had already made the mistake of renaming the group and ended up with an unbootable system. This is what you have to do to get it working again:

Boot into a Linux rescue shell (from the installer DVD, for example) and activate the volume groups (if not activated by default)

vgchange -ay

Mount the root and boot volumes (replace VG_NAME with the name of your volume group and BOOT_PARTITION with the device name of your boot partition, typically sda1):

mount /dev/VG_NAME/root /mnt
mount /dev/BOOT_PARTITION /mnt/boot
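
If you're not sure which device holds the boot partition, listing the block devices from the rescue shell shows the candidates (the boot partition is typically a small non-LVM partition such as sda1):

lsblk -f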

Mount necessary system devices into our chroot:

mount --bind /proc /mnt/proc
mount --bind /dev /mnt/dev
mount --bind /sys /mnt/sys

Chroot into our broken system:

chroot /mnt/ /bin/bash

Modify fstab and grub files to reflect new volume group name:

sed -i /etc/fstab -e 's/OLD_VG_NAME/NEW_VG_NAME/g'
sed -i /etc/default/grub -e 's/OLD_VG_NAME/NEW_VG_NAME/g'
sed -i /boot/grub2/grub.cfg -e 's/OLD_VG_NAME/NEW_VG_NAME/g'

Run dracut to modify boot images:

dracut -f

Remove your recovery boot CD and reboot into your newly fixed VM

exit
reboot

If you want to avoid having to boot into a recovery environment, do the following steps on the machine whose VG you want to rename:

Rename the volume group:

vgrename OLD_VG_NAME NEW_VG_NAME

Modify necessary boot files to reflect the new name:

sed -i /etc/fstab -e 's/OLD_VG_NAME/NEW_VG_NAME/g'
sed -i /etc/default/grub -e 's/OLD_VG_NAME/NEW_VG_NAME/g'
sed -i /boot/grub2/grub.cfg -e 's/OLD_VG_NAME/NEW_VG_NAME/g'

Rebuild boot images:

dracut -f
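
Before rebooting, a quick sanity check confirms nothing still references the old name (substitute your new volume group name):

vgs
grep NEW_VG_NAME /etc/fstab /etc/default/grub /boot/grub2/grub.cfg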

Resizing LVM storage checklist

This is a short note on what to do when you change the size of the physical disk backing an LVM setup, such as the default configuration in CentOS 7. A consolidated example follows the checklist.

  1. Modify the physical disk size
  2. Modify the partition size
    1. I used fdisk to delete the partition, then re-create it with a larger size
    2. Reboot
  3. Extend the physical volume size
    1. pvresize <path to enlarged partition>
  4. Extend the logical volume size
    1. lvextend -l +100%FREE <lv path>
  5. Extend filesystem size
    1. resize2fs <lv path>
    2. If you're running CentOS 7, the default filesystem is actually XFS, not ext4. In that case, grow it via its mount point:
      xfs_growfs <mount point>
  6. Profit.
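
Here is the whole sequence with the hypothetical CentOS 7 defaults – disk /dev/sda, LVM partition /dev/sda2, volume group centos, logical volume root mounted at /:

pvresize /dev/sda2                       # grow the PV to fill the enlarged partition
lvextend -l +100%FREE /dev/centos/root   # hand all free extents to the LV
xfs_growfs /                             # grow the XFS root filesystem (use resize2fs for ext4)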

Configure iSCSI initiator in CentOS

Below are my notes for configuring a CentOS box to connect to an iSCSI target. This assumes you have already configured an iSCSI target on another machine / NAS. Much of this information comes thanks to this very helpful website.

Install the software package

yum -y install iscsi-initiator-utils

Configure the iqn name for the initiator

vi /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2012-10.net.cpd:san.initiator01
InitiatorAlias=initiator01

Edit the iSCSI initiator configuration

vi /etc/iscsi/iscsid.conf
node.startup = automatic
node.session.auth.authmethod = CHAP
node.session.auth.username = initiator_user
node.session.auth.password = initiator_pass
#The next two lines are for mutual CHAP authentication
node.session.auth.username_in = target_user
node.session.auth.password_in = target_password

Start iSCSI initiator daemon

/etc/init.d/iscsid start
chkconfig --level 235 iscsid on
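
On a systemd-based release such as CentOS 7, the equivalent (assuming the same iscsid service name) would be:

systemctl start iscsid
systemctl enable iscsid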

Discover targets on the iSCSI server (--portal takes the portal's IP address):

iscsiadm --mode discovery -t sendtargets --portal 172.16.201.200
172.16.201.200:3260,1 iqn.2012-10.net.cpd:san.target01

Log in to the iSCSI target:

iscsiadm --mode node --targetname iqn.2012-10.net.cpd:san.target01 --portal 172.16.201.200 --login
Logging in to [iface: default, target: iqn.2012-10.net.cpd:san.target01, portal: 172.16.201.200,3260] (multiple)
Login to [iface: default, target: iqn.2012-10.net.cpd:san.target01, portal: 172.16.201.200,3260] successful.

Verify configuration

This command shows what is put into the iSCSI targets database (the files located in /var/lib/iscsi/):

cat /var/lib/iscsi/send_targets/172.16.201.200,3260/st_config
discovery.startup = manual
discovery.type = sendtargets
discovery.sendtargets.address = 172.16.201.200
discovery.sendtargets.port = 3260
discovery.sendtargets.auth.authmethod = None
discovery.sendtargets.timeo.login_timeout = 15
discovery.sendtargets.use_discoveryd = No
discovery.sendtargets.discoveryd_poll_inval = 30
discovery.sendtargets.reopen_max = 5
discovery.sendtargets.timeo.auth_timeout = 45
discovery.sendtargets.timeo.active_timeout = 30
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768

Verify session is established

iscsiadm --mode session --op show
tcp: [2] 172.16.201.200:3260,1 iqn.2012-10.net.cpd:san.target01

Create LVM volume and mount

Add our iSCSI disk to a new LVM physical volume, volume group, and logical volume

fdisk -l
Disk /dev/sdb: 17.2 GB, 17171480576 bytes
64 heads, 32 sectors/track, 16376 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn't contain a valid partition table
pvcreate /dev/sdb
vgcreate iSCSI /dev/sdb
lvcreate iSCSI -n volume_name -l100%FREE
mkfs.ext4 /dev/iSCSI/volume_name

Add the logical volume to fstab

Make sure to use the mount option _netdev.  Without this option, Linux will try to mount this device before it loads network support.

vi /etc/fstab
/dev/mapper/iSCSI-volume_name    /mnt   ext4   _netdev  0 0
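
To confirm the entry works without a reboot (assuming /mnt as the mount point, as above):

mount -a
df -h /mnt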

Success.

Reclaim lost space in Xenserver 6.5

Storage XenMotion is awesome. It allows me to spin up a second Xenserver host and live migrate VMs to it whenever I need to do maintenance on my primary Xenserver host. I don’t need an intermediary storage device such as a NAS – the two hosts can exchange live, running VMs directly. No downtime!

An unfortunate side effect of using Storage XenMotion is that sometimes it doesn’t clean itself up very well. It takes several snapshots in the migration process and they sometimes get “forgotten about.” This results in inexplicable low disk space errors such as this one:

The specified storage repository has insufficient space

..despite there being plenty of space.

This article explains how to use the coalesce option to reclaim space by issuing the following command:

xe host-call-plugin host-uuid=<host-UUID> plugin=coalesce-leaf fn=leaf-coalesce args:vm_uuid=<VM-UUID>

Unfortunately that didn’t seem to do anything for me. Digging into the storage underpinnings I can see that there are a lot of logical volumes hanging out there not being used:

xe vdi-list sr-uuid=<UUID of SR without space>

This revealed a lot of disks floating around in the SR that aren’t being used (I know this by looking at that same SR inside Xencenter.) Curiously, there were VDIs with identical names but different UUIDs, despite my not having any snapshots of that VM.

I was about to start using the vgscan command to look for active volume groups when I got called away. Hours later, when I got back to my task, I found that all the space had been freed up. Xenserver had done its own garbage collection, albeit slowly. So, if you’ve tried to use Storage XenMotion and found you have no space, give Xenserver some time. You might just find out that it will clean itself up.


Update 05/20/2015

I ran into this problem once more. I read from here that simply initiating a scan of the storage repository is all you need to do to reclaim lost space. Unfortunately, when I ran the scan nothing changed. A check of /var/log/SMlog revealed the following error (thanks to ap’s blog for the guidance):

SM: [30364] ***** sr_scan: EXCEPTION XenAPI.Failure, ['INTERNAL_ERROR', 'Db_exn.Uniqueness_constraint_violation("VDI", "uuid", "3e616c49-adee-44cc-ae94-914df0489803")']
...
Raising exception [40, The SR scan failed  [opterr=['INTERNAL_ERROR', 'Db_exn.Uniqueness_constraint_violation("VDI", "uuid", "3e616c49-adee-44cc-ae94-914df0489803")']]]

For some reason one of the ISOs in one of my SRs was throwing an error – specifically a Xenserver operating system fixup iso, which was causing the coalescing process to abort. I didn’t care if I lost that VDI so I nuked it:

xe vdi-destroy uuid="3e616c49-adee-44cc-ae94-914df0489803"

That got me a little farther, but I still wasn’t seeing any free space. Further inspection of the log revealed this gem:

SMGC: [7088] No space to leaf-coalesce f8f8b129[VHD](20.000G/10.043G/20.047G|ao) (free
 space: 1904214016)

I read that if there isn’t enough space, a coalesce can’t happen on a running VM. I decided to shut down one of my VMs that was hogging space and run the scan again. This time there was progress in the logs. It took a while, but eventually my space was restored!
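
For reference, the rescan itself is just an xe call against the affected SR (same UUID placeholder as above):

xe sr-scan uuid=<UUID of SR without space>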

Moral of the story: if your server isn’t automatically coalescing to free up space, check /var/log/SMlog to see what’s causing it to choke.

FreeNAS on Xenserver with PVHVM support

In my current home setup I have a single server performing many functions thanks to Citrix Xenserver 6.2 and PCI Passthrough. This single box hosts my firewall, webservers, and NAS. My primary motivation for this is power savings – I didn’t want more than one box running 24/7 but still wanted all those separate services, some of which are software appliances that aren’t very customizable.

My current NAS setup is a simple Debian Wheezy virtual machine with the motherboard's on-board SATA controller passed through to it. The VM runs a six-drive software RAID 6 using mdadm with LVM volume management on top of it. Lately, though, I have become concerned with data integrity and my use of commodity drives. This prompted me to investigate ZFS as a replacement for my current setup. ZFS has many features, but the one I’m most interested in is its ability to detect and correct any and all corrupted files / blocks. This will put my mind at ease when it comes to the thousands of files I have which are accessed infrequently.

I decided to try out FreeNAS, a NAS appliance which utilizes ZFS. After searching the forums it quickly became clear that the people at FreeNAS are not too keen on virtualizing their software. There is very little help to be had there in getting it to work in virtual environments. In the case of Xenserver, FreeNAS does work out of the box, but it is considerably slower than bare metal due to its lack of support for Xen HVM drivers.

Fortunately, a friendly FreeNAS user posted a link to his blog outlining how he compiled FreeNAS to work with Xen. Since Xenserver uses Xen (it’s in the name, after all) I was able to use his re-compiled ISO (I was too lazy to compile my own) to test in Xenserver.

There are some bugs to work around, though. Wired dad’s xenified FreeNAS doesn’t appear to like to boot in Xenserver, at least out of the box. It begins to boot but then hangs indefinitely on the following error:

run_interrupt_drive_hooks: still waiting after 60 seconds for xenbusb_nop_confighook_cb

This is the result of a bug in the version of qemu Xenserver uses. The bug causes BSD kernels to really not like the DVD virtual device in the VM and refuse to boot. The solution is to remove the virtual DVD drive. How, then, do you install FreeNAS without a DVD drive?

It turns out that all the FreeNAS installer does is extract an image file to your target drive. That file is an .xz file inside the ISO. To get wired dad’s FreeNAS Xen image to work in Xenserver, one must extract that .xz file from the ISO, expand it to an .img file, and then apply that .img file to the Xenserver virtual machine’s hard disk. The following commands can be run on the Xenserver host machine to accomplish this.

  1. Create a virtual machine with a 2GB hard drive.
  2. Mount the FreeNAS-xen ISO in loopback mode to get at the necessary file
    mkdir temp
    mount -o loop FreeNAS-9.2.1.5-RELEASE-xen-x64.iso temp/
  3. Extract the IMG file from the freeNAS ISO
    xzcat ~/temp/FreeNAS-x64.img.xz | dd of=FreeNAS_x64.img bs=64k
    

    Note that the IMG file is 2GB in size, which is larger than can sit in the root drive of a default install Xenserver. Make sure you extract this file somewhere that has enough space.

  4. Import that IMG file into the virtual disk you created with your VM in step 1.
    cd ..
    xe vdi-import uuid=<UUID of the 2GB disk created in step 1> filename=FreeNAS_x64.img
    

    This results in an error:

    The server failed to handle your request, due to an internal error.  The given message may give details useful for debugging the problem.
    message: Caught exception: VDI_IO_ERROR: [ Device I/O errors ]
    

    This error can be safely ignored – it did indeed copy the necessary files.
    Note: To obtain the UUID of the 2GB disk you created in step 1, run the “xe vdi-list” command and look for the name of the disk (a filtered lookup example follows this list).

  5. Remove the DVD drive from the virtual machine. From Xencenter:
    Shut down the VM
    Mount xs-tools.iso
    Run this command in a command prompt:

    xe vm-cd-remove uuid=<UUID of VM> cd-name=xs-tools.iso
  6. Profit!
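
If you'd rather not scan the full vdi-list output in step 4, the UUID lookup can be narrowed by name (the name-label below is hypothetical – use whatever you named the 2GB disk):

xe vdi-list name-label="FreeNAS disk" params=uuid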

There is one aspect I haven’t gotten to work yet, and that is Xenserver Tools integration. The important bit – paravirtualized networking – has been achieved, so once I get more time I will investigate Xenserver Tools further.