Embed commands in if statements in bash

I’ve recently had to do some bash-fu and thought I’d document it in case I come across the need again. It involved an if statement inside a for loop, where the if statement looked at the result of an external command and acted when conditions were met.

The scenario: an application created folders whose names began with a series of digits. Later it was decided to add a prefix to new folders. This caused a problem: for some users there were two folders with the same numeric sequence – one prefixed, one not – and the program was saving things into both at random. We needed a way to copy information from the numeric-only folders into the prefixed folders, then back up and delete the numeric-only folders. We also wanted to be warned about any file overwrites in the process.

After a bunch of research and experimentation I came up with the following one-line bash script:

for d in [0-9]*; do BN=$(basename "$d"); if [[ $(find . -maxdepth 1 -type d -name "*$d" | grep -o "$d" | wc -l) = 2 ]]; then cp -i -p -r "$d" ../archive/"$d"; cp -i -p -r "$d"/* "PREFIX_$BN"; rm -rf "$d"; fi; done

It does the following:

  • Scans the current directory for files (or folders) whose names begin with numbers
  • Saves the basename of each discovered item to a variable (basename was required to strip the leading ./ that showed up in the results) and uses that variable for the copy command
  • Scans the current directory to see if there is another folder with the same string of numbers in its name (the same name, only with a prefix attached)
  • If there is, copies the non-prefixed folder to an archive location, then copies its contents into the prefixed folder, prompting before overwriting anything
  • Once the copy is complete, deletes the original non-prefixed folder
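
For readability, here is the same logic spread over multiple lines (a functionally equivalent sketch of the one-liner above):

for d in [0-9]*; do
  BN=$(basename "$d")
  # two matches means both a prefixed and a non-prefixed copy of this folder exist
  if [[ $(find . -maxdepth 1 -type d -name "*$d" | grep -o "$d" | wc -l) = 2 ]]; then
    cp -i -p -r "$d" ../archive/"$d"   # archive the numeric-only folder first
    cp -i -p -r "$d"/* "PREFIX_$BN"    # merge its contents into the prefixed folder
    rm -rf "$d"                        # then remove the original
  fi
done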

The big learning moment for me was embedding a bash command in an if statement. The if statement runs the find command, pipes its output to wc -l to count the results, and then compares that count to a value. Pretty handy.
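
Stripped of the specifics, the pattern looks like this (a toy example of my own, not from the original task):

# run a command, capture its output with $( ), and compare inside [[ ]]
if [[ $(date +%u) -gt 5 ]]; then echo "It's the weekend"; fi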

Thanks to these sites for helping me in my journey:

If statement inside for loop: https://unix.stackexchange.com/questions/52800/how-to-do-an-if-statement-from-the-result-of-a-executed-command

Find results only in current directory: https://unix.stackexchange.com/questions/162411/find-maxdepth-0-not-returning-me-any-output

Count results from find command: https://stackoverflow.com/questions/6181324/counting-regex-pattern-matches-in-one-line-using-sed-or-grep

Warn before overwriting files: https://askubuntu.com/questions/236478/how-do-i-make-bash-warn-me-when-overwriting-an-existing-file

Add prefix to filenames in bash

A quick, handy way to add a prefix to files in bash (taken from here):

for f in * ; do mv "$f" "PRE_$f" ; done
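
To preview the renames first, prefix the mv with echo (a habit of mine, not from the original source):

for f in * ; do echo mv "$f" "PRE_$f" ; done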

In my case I wanted to rename all sub-100 filenames to have an extra zero so sorting played nicely with filenames beginning with 100+. To accomplish this I found out about the rename command (thanks to this site.) The command I used to enforce natural sorting was the following:

rename 's/\d+/sprintf("%03d", $&)/e' *

The command matched the first run of digits in each filename, then used sprintf to zero-pad that number to 3 digits. The asterisk instructed the rename command to work on every file. Success.
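
rename also supports a dry run via -n, which prints what would be renamed without touching anything – handy for checking the regex first (the filename below is made up for illustration):

rename -n 's/\d+/sprintf("%03d", $&)/e' *
# e.g. reports that 7_report.txt would be renamed to 007_report.txt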


Batch convert Global security groups to Universal

Recently I came across a need to batch convert global security groups into universal security groups in my work’s Active Directory domain. I needed this so I could then turn them into mail-enabled security groups, which would allow mail to be delivered to members of these groups. Unfortunately, all security groups at this organization were Global in scope.

Seeing as this is a one-domain organization, there is no harm in changing the scope to Universal. Doing this via mouse is very tedious; fortunately we can use a few basic command line tools to automate the task. Thanks to Jeff Guillet for outlining how to do this.

The three magic commands are: dsquery, dsget, and dsmod.

First I wanted to test out a single security group to make sure everything would work. I couldn’t convert it because it was a member of several other global security groups, and that rabbit hole went several levels deep. Piping dsquery, dsget, and dsmod together solved this problem instantly:

dsquery group -limit 0 -name "<Group Name>" | dsget group -memberof | dsmod group -c -q -scope u

The above command first gets the full name of the group specified by the -name parameter. The output is sent to the dsget command to query which groups that group is a member of. The output of that command is sent to the dsmod command, which does the work of actually changing each of those groups to Universal scope:

  • -c tells it to continue on error.
  • -q tells it to not print successful changes.
  • -scope u instructs it to change the group’s scope to Universal.

Any errors will be printed to the console. Depending on how many levels of nesting there are, you may have to run this command several times (see the sketch below) in order to convert the problematic groups to Universal scope.
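
Rerunning by hand works, but an untested convenience sketch in cmd can do it for you (the count of 5 is arbitrary; use %%i instead of %i inside a batch file):

for /L %i in (1,1,5) do dsquery group -limit 0 -name "<Group Name>" | dsget group -memberof | dsmod group -c -q -scope u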

Once that command finishes without error you can modify the group itself to be a Universal group by simply omitting the middle dsget command:

dsquery group -limit 0 -name "<Group Name>" | dsmod group -c -q -scope u

After testing we are now ready to expand this to convert ALL Global security groups to be Universal in scope. If you would like a report of how many groups would be affected, run this command. It will output all groups from the query to the text file Groups.txt:

dsquery group -limit 0 | dsget group -samid -scope -secgrp > Groups.txt

To modify every group, simply omit the -name parameter from the dsquery command we used above with our test group. This will iterate through every group in the directory and pass each one to dsmod, which will modify the scope to be Universal:

dsquery group -limit 0 | dsmod group -c -q -scope u

Some built-in groups can’t be converted due to their nature (Domain Users being one example), so you will have to work around those. You will probably need to run the command a few times until no errors appear.

Profit.


XAPI won’t start in Xenserver 7

I came home yesterday to discover that every last one of my VMs was unresponsive. It was most distressing. I couldn’t even SSH into my xenserver – it was unresponsive too. Its physical console had dropped into an emergency shell. A reboot allowed me to get a physical console again, but my networking and VMs would not start.

In trying to pick up the pieces and put everything back together I ran

systemctl --failed

which revealed several key services not running – namely openvswitch and xapi (very important services.) Manually starting them did nothing – they would silently fail and immediately quit working.

After banging my head against a wall for a bit (I really didn’t want to restore from backup) I stumbled across this post. It states, in essence, that xapi won’t start if the disk is full. I checked disk usage and it said I had a few gigs free, but I thought I’d try the steps in the post anyway. (Worth noting: XenServer 7 gives /var/log its own partition, so the root filesystem can look fine while the log partition is completely full.)
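
A quick way to sanity-check that partition (my addition, not from the post):

df -h /var/log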

ls /var/log

revealed quite a lot of log files. I then decided to just delete all the .gz archived logs:

rm /var/log/*.gz

After doing this, xapi started. I restarted the hypervisor for good measure and everything came up – all back to normal as if nothing had happened.

It’s incredibly frustrating that Xenserver is designed to be a ticking time bomb with its default configuration. If you don’t take care to manually delete old logs, or alternatively send logs to a remote log server, it will crash and burn. This is stupid. That being said, I was impressed that it recovered so gracefully once I freed up some disk space.

If you’re running xenserver, make sure you’re logging somewhere else – or set up a cron job to delete old log files!
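
Something like this in /etc/crontab would do it (a sketch – adjust the 30-day retention to taste):

# purge compressed log archives older than 30 days, daily at 3am
0 3 * * * root find /var/log -name '*.gz' -mtime +30 -delete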


Mount folder from another system over SSH

I recently had a need to mount a folder over SSH so that my file manager could browse the files on a remote system. Two great resources led me to the solution to this problem: sshfs.

I first came across this little tutorial on how to install sshfs on my shiny new Linux Mint 18 box:

sudo apt-get install sshfs
sudo mkdir /mnt/droplet #<--replace "droplet" with whatever you prefer
sudo sshfs -o allow_other,defer_permissions root@xxx.xxx.xxx.xxx:/ /mnt/droplet

Pretty slick. One caveat: defer_permissions is a macOS (osxfuse) option; on Linux it isn’t needed and may be rejected, so it can simply be dropped from the commands. If you want to use a keyfile instead of being prompted for a password, you can use the IdentityFile option:

sudo sshfs -o allow_other,defer_permissions,IdentityFile=~/.ssh/id_rsa root@xxx.xxx.xxx.xxx:/ /mnt/droplet
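
When you’re finished, the share unmounts like any other FUSE filesystem:

sudo fusermount -u /mnt/droplet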

You can have this handled in /etc/fstab for automounting. Thanks to this Arch Linux guide for the info. (The entry below requires systemd.)

user@host:/remote/folder /mount/point  fuse.sshfs noauto,x-systemd.automount,_netdev,users,idmap=user,IdentityFile=/home/user/.ssh/id_rsa,allow_other,reconnect 0 0

I tweaked my /etc/fstab entry a bit because mounting complained that allow_other required a configuration change (more on that below.) Since I’m the only user of this box it didn’t matter to me. Here is my configuration:

nicholas@remote:/ /home/desktop/remote fuse.sshfs noauto,x-systemd.automount,_netdev,users,idmap=user,IdentityFile=/home/desktop/.ssh/keyfile,reconnect,allow_other 0 0

I’m mounting the root folder of my remote machine into a folder named remote on my desktop machine. I generated ssh keyfiles so that no password was required. Now the mount shows up under “Devices” in my file manager and a simple click mounts the folder and gets me there. Sweet.
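
For reference, the configuration change that allow_other asks for is enabling user_allow_other in /etc/fuse.conf:

# /etc/fuse.conf – allow non-root users to pass the allow_other mount option
user_allow_other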

Edit 2/25/18: Added allow_other option per this article


Transfer VMs between Xenserver pools

When Xenserver 7 came out I found myself unable to easily upgrade to it thanks to my custom RAID 1 build. If I wanted Xenserver 7 I would have to blow the whole instance away and start from scratch. This posed a problem because I have a pool of two xenserver hosts: you cannot add a server with a higher xenserver version to a lower-versioned pool; the pool master must always have the highest version of Xenserver installed. My decision to run an mdadm RAID 1 setup on my pool master ultimately meant forced VM downtime for the upgrade, despite having a pool of other xenserver hosts.

After transferring VMs to my secondary host and promoting it to pool master, I wiped my primary xenserver and installed 7. Once it was up and running I essentially had two separate pools. To transfer my VMs back to my primary server I had to resort to the command line.

Offline VM transfer

The xe vm-export and vm-import commands work with stdin/stdout and piping, which is how I transferred my VMs directly between the two pools. Simply pipe an xe vm-export command into an ssh xe vm-import command like so:

xe vm-export uuid=<VM_UUID> filename= | ssh <other_server> xe vm-import filename=/dev/stdin

Note the empty filename parameter – this instructs xenserver to write to standard output instead. Also note that transferring the VM this way regenerates the MAC addresses of its interfaces. If you want to keep the MAC addresses you’ll have to manually re-assign them after the copy is complete.
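
If your version of xe supports it, the vm-import preserve parameter keeps the MAC addresses intact (worth verifying against your version’s documentation before relying on it):

xe vm-export uuid=<VM_UUID> filename= | ssh <other_server> xe vm-import filename=/dev/stdin preserve=true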

Minimal downtime

With the method above you have to shut the VM down for the entire transfer. I had some VMs that I didn’t want down that long. A way around this is to take a snapshot of the VM and then copy the snapshot to the other pool. Note that this method does not retain any changes made inside the VM after you took the snapshot. You will have to manually transfer any file changes that took place during the VM transfer (or be fine with losing them.)

In order to export a snapshot you must first convert it from a template into a VM (thanks to this site for outlining how.) The full procedure is as follows:

  1. Take a snapshot of the VM you want to move
    xe vm-snapshot uuid=<VM_UUID> new-name-label=<snapshotname>
  2. Convert the snapshot template to a VM (the command xe snapshot-list is a handy way to obtain the UUIDs of your snapshots)
    xe template-param-set is-a-template=false ha-always-run=false uuid=<UUID of snapshot>
  3. Transfer the template to the new pool
    xe vm-export uuid=<UUID of snapshot> filename= | ssh <other_server> xe vm-import filename=/dev/stdin
  4. Rename VM and/or modify interface MAC addresses as needed on the new host. Stop the VM on the old host and start it on the new one.
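
Once the imported copy checks out on the new pool, the snapshot-turned-VM left behind on the old pool can be removed. xe vm-uninstall deletes the VM along with its disks (force=true makes it non-interactive):

xe vm-uninstall uuid=<UUID of snapshot> force=true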

I used both methods above to successfully move my VMs from my older 6.5 pool to the newer 7 pool. Success.

PCI Passthrough in Xenserver 7 “Dundee”

I’ve recently upgraded to the latest version of Citrix Xenserver 7 (codenamed “Dundee”.) 7 is based on CentOS 7 and has a massive number of changes under the hood. One such change is how PCI Passthrough is handled.

It took some time to figure PCI Passthrough out. Xenserver 7 uses grub instead of extlinux for the bootloader. It appears to be grub2, but the standard update-grub tool isn’t used – you simply edit the config file directly.

After much searching I found this post, which led me in the right direction. In Xenserver 7, for PCI passthrough support you must do the following:

  • Prepare the VM for PCI passthrough (this part hasn’t changed)
    xe vm-param-set other-config:pci=0/0000:B:D.f uuid=<vm uuid>
  • Modify /boot/grub/grub.cfg and append the following to the end of the module2 line (if you boot from EFI the file to modify is /boot/efi/EFI/xenserver/grub.cfg)
     xen-pciback.hide=(B:D.f)
  • Reboot

You will now be able to pass through hardware to your virtual machines in Xenserver 7. Hooray.
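
For reference, B:D.f stands for the device’s PCI bus:device.function address, which lspci reports. A hypothetical walk-through with a device at 04:00.0 (the address here is made up for illustration):

lspci | grep -i ethernet          # locate the device; suppose it sits at 04:00.0
xe vm-param-set other-config:pci=0/0000:04:00.0 uuid=<vm uuid>
# then append to the module2 line in grub.cfg:
xen-pciback.hide=(04:00.0)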

Unable to update crouton after installing VirtualBox

When trying to update crouton I kept getting the following error message:

Preparing to unpack .../linux-image-3.8.11_20150314_amd64.deb ...
Ok, aborting, since modules for this image already exist.
dpkg: error processing archive /var/cache/apt/archives/linux-image-3.8.11_20150314_amd64.deb (--unpack):

After some digging I found this thread, which explains how to fix it. The source of the problem was that I had installed a modified kernel to allow VirtualBox to work properly. The way to update crouton is to remove the repository for that kernel:

rm /etc/apt/sources.list.d/crouton-packages.list
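
(For reference, a crouton update itself is typically kicked off like this – the installer path and chroot name below are assumptions, not from the original post:)

sudo sh ~/Downloads/crouton -u -n <chroot name>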

After removing that repository the update completed successfully.