Mountpoint check script

I wrote a simple script to check whether a specific mountpoint on a Linux system is still live. It does this by trying to read a specific file on the share; if it can't, it writes the event to a log, unmounts, and then re-mounts the folder. The need arose for instances where a file server has been rebooted and the Linux system loses its connection to the share. This way it will automatically re-mount.

Modify the variables section as needed and then have a cron job run the script as root at whatever interval you want. Enjoy.

#!/bin/bash
#Script to monitor mounted directories to ensure they are properly mounted
#Place a file with the word "mounted" in it inside every mounted directory
#If the script fails to read that file it will log the event, then attempt to unmount and remount the folder
#Updated 8/30/2016 by Nicholas Jeppson

#---------Variable section------------#

#Place mount folder locations here, separated by spaces
#Paths containing spaces need to have quotes around them
LOCATIONS=(/home/njeppson /home/njeppson/Desktop)

#Name of file to try to read
TEST_FILENAME="mountcheck"

#---------End Variable Section--------#
#-----Do not edit below this line-----#

#Read the file; if its contents aren't "mounted", attempt to unmount and re-mount the folder, logging the attempt to /var/log/mountcheck

for FOLDER in "${LOCATIONS[@]}"; do
    if [[ $(cat "$FOLDER/$TEST_FILENAME" 2>/dev/null) != "mounted" ]]; then
        echo "$(date "+%b %d %T") $(hostname) $FOLDER not mounted, remounting." >> /var/log/mountcheck
        umount "$FOLDER"
        mount "$FOLDER"
    fi
done
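
To run the check every five minutes, a root crontab entry along these lines would do it (assuming the script is saved as /usr/local/bin/mountcheck.sh, a hypothetical path, and made executable):

*/5 * * * * /usr/local/bin/mountcheck.sh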

Battle.net won’t update after copying from network folder

I ran into an issue recently where I tried to copy a Battle.net game (Heroes of the Storm) from a backup folder on my NAS onto a new computer. Once the copy completed I couldn't get Battle.net to update the game. It kept failing with error code:

BLZBNTAGT00000840

file update failed for an unknown reason.

After much digging I found this post, which mentions that the updater apparently can't update files with the hidden attribute. The NAS applies the hidden attribute because the file in question has a dot in front of its filename, and for some reason the updater can't touch such files.

The fix is to strip the hidden attribute from all files in the game folder. The easiest way to do this is via the command line: navigate to the folder of the game you copied over and execute the following (-H clears the hidden attribute, .* targets the dot-prefixed filenames, and /S recurses through subdirectories):

attrib -H .* /S

Finally, I can copy Blizzard game backups without agonizing over why they won’t patch.

FreeBSD: allow non-root processes to bind port 80

In experimenting with FreeNAS jails I wanted to allow a web service to use port 80. Normally, ports below 1024 (port 80 among them) are reserved for root-level processes for security reasons. Since this is a FreeBSD jail and not a full-blown system, I'm not worried about that here.

The command to do so is fairly simple (thanks to this page for the information):

sysctl net.inet.ip.portrange.reservedhigh=0

The above command is not permanent; to make it so, add it to /etc/sysctl.conf:

echo "net.inet.ip.portrange.reservedhigh=0" >> /etc/sysctl.conf
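
You can read the setting back to confirm it took effect:

sysctl net.inet.ip.portrange.reservedhigh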

FreeNAS unable to create jails fix

I recently got a shiny new FreeNAS Mini appliance. It's the bee's knees. Previously I was using a virtualized instance of FreeNAS that served me admirably for two years. During the migration I decided to start fresh with my jails configuration and deleted the entire jails dataset. This turned out to be a mistake: I suddenly found I couldn't create any jails or plugins. The plugin download would hang for a long time and then flash a brief message, "Failed to download plugin." Not helpful.

I tried changing the location of my jails in the configuration, to no avail. I even tried nuking my FreeNAS config entirely and starting from scratch. The error still happened! Somehow that configuration survived a factory restore.

I finally found this FreeNAS forum entry, which pointed me in the right direction. It suggested using the warden command to delete the plugin jail template completely and re-install it. When I tried, I got this error:

[nicholas@freenas ~]$ sudo warden template delete pluginjail
ERROR: Not a ZFS volume: /mnt/storage/jails/.warden-template-pluginjail

It was still trying to install the plugin template in my non-existent dataset. I decided to try re-creating the missing dataset and then running the warden delete command again. Success!

[nicholas@freenas ~]$ sudo zfs create storage/jails/.warden-template-pluginjail
[nicholas@freenas ~]$ sudo warden template delete pluginjail

Once you've deleted the template jail via warden, configure the correct path in Jails / Configuration, then re-create the template in the right place by issuing the following:

warden template create -nick pluginjail -tar http://download.freenas.org/jails/9.3/x64/freenas-pluginjail-9.3-RELEASE.tgz

Plugins and jails work again! Success.

The power of find, grep, and xargs

Recently I needed to find folders with two different things in the path: mysql and DB. I toyed around with a bunch of options but finally settled on using xargs. I don't use it much; I should use it more.

The command below takes output from find, greps it twice (thus keeping only paths that contain both terms), and then creates a symbolic link to each result.

 find . -type d | grep mysql | grep DB | xargs -n 1 ln -s

This accomplished what I needed quite well. In a huge stash of folders there was a subset I was interested in that contained both mysql and DB in their paths. Find, grep, and xargs to the rescue.
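
For what it's worth, find can apply both filters itself. Something along these lines should produce the same symlinks (a sketch; the -exec form also has the advantage of tolerating spaces in paths):

find . -type d -path '*mysql*' -path '*DB*' -exec ln -s {} . \;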

Grep handy tips

I found some grep flags that are really handy and wanted to write them down.

Only list exact matches:  -w

When I ran the "netstat -an | grep LISTEN" command, the output included every line containing LISTEN, including LISTENING, which I didn't want to see. Appending -w to the grep command makes sure it only displays whole-word matches.

netstat -an | grep -w LISTEN

Include extra lines above and below the match: -A, -B

When administering XenServer systems, grep's defaults are often useless because Xen likes to put relevant information on different lines. To fix this, use the -A and/or -B flags to specify the number of lines after (-A) or before (-B) the match to include in the results. A real-world example using grep to return the 3 lines above and below the line matching the word Splunk:

xe vdi-list | grep -A3 -B3 Splunk
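
If you want the same amount of context on both sides, -C sets both at once:

xe vdi-list | grep -C3 Splunk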


Embed commands in if statements in bash

I've recently had to do some bash-fu and thought I'd document it in case I come across the need again. It involved an if statement inside a for loop, where the if statement examined the result of an external command and acted when certain conditions were met.

The scenario: an application created folders beginning with a series of digits. Later it was decided to add a prefix to new folders. The problem: there were now pairs of folders with the same numeric sequence, corresponding to the same user, and the program was saving things to both the prefixed and non-prefixed folders at random. We needed a way to copy information from the numeric-only folders into the prefixed folders, then back up and delete the numeric-only folders. We also wanted to be warned about any file overwrites in the process.

After a bunch of research and experimentation I came up with the following one-line bash script:

for d in [0-9]*; do BN=$(basename "$d"); if [[ $(find . -maxdepth 1 -type d -name "*$d" | grep -o "$d" | wc -l) = 2 ]]; then cp -i -p -r "$d" ../archive/"$d"; cp -i -p -r "$d"/* "PREFIX_$BN"; rm -rf "$d"; fi; done

It does the following:

  • Scans the current directory for files or folders whose names begin with numbers
  • Saves the basename of each discovered folder to a variable (basename was required to remove the ./ that showed up in the results) and uses that variable for the copy command
  • Scans the current directory to see if there is another folder with the same string of numbers in its name (the same name, just with a prefix attached)
  • If there is, copies the non-prefixed folder to an archive location, then copies its contents into the prefixed folder, prompting before overwriting anything
  • Once the copy is complete, deletes the original non-prefixed folder

The big learning moment for me was embedding a bash command into an if statement. The if statement runs the find command, pipes to wc -l to count the number of results, and then compares that result to something else. Pretty handy.
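
Expanded over multiple lines for readability, the same one-liner looks like this (identical commands, just whitespace):

for d in [0-9]*; do
    BN=$(basename "$d")
    #Two results means both the numeric folder and its prefixed twin exist
    if [[ $(find . -maxdepth 1 -type d -name "*$d" | grep -o "$d" | wc -l) = 2 ]]; then
        cp -i -p -r "$d" ../archive/"$d"   #archive the non-prefixed folder
        cp -i -p -r "$d"/* "PREFIX_$BN"    #merge contents into the prefixed folder
        rm -rf "$d"                        #remove the original
    fi
done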

Thanks to these sites for helping me in my journey:

If statement inside for loop: https://unix.stackexchange.com/questions/52800/how-to-do-an-if-statement-from-the-result-of-a-executed-command

Find results only in current directory:  https://unix.stackexchange.com/questions/162411/find-maxdepth-0-not-returning-me-any-output

Count results from find command: https://stackoverflow.com/questions/6181324/counting-regex-pattern-matches-in-one-line-using-sed-or-grep

Warn before overwriting files: https://askubuntu.com/questions/236478/how-do-i-make-bash-warn-me-when-overwriting-an-existing-file

Add prefix to filenames in bash

A quick, handy little way to add a prefix to files in bash (taken from here):

for f in * ; do mv "$f" "PRE_$f" ; done
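
One caveat: run it twice and it will happily prefix the already-prefixed names again. A guard like this skips files that already carry the prefix (a minimal sketch):

for f in * ; do [[ $f == PRE_* ]] || mv "$f" "PRE_$f" ; done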

In my case I wanted to pad all sub-100 filenames with leading zeros so sorting played nicely with filenames beginning with 100+. To accomplish this I found out about the rename command (thanks to this site). The command I used to enforce natural sorting was the following:

rename 's/\d+/sprintf("%03d", $&)/e' *

The command matches the first run of digits in each filename and uses sprintf to pad it to 3 digits. The asterisk instructs the rename command to work on every file. Success.
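
As an illustration, given files named 1.txt, 12.txt, and 100.txt (hypothetical names), the first two get padded and the third is already three digits wide:

rename 's/\d+/sprintf("%03d", $&)/e' *
ls
001.txt  012.txt  100.txt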


Batch convert Global security groups to Universal

Recently I came across a need to batch convert global security groups into universal security groups in my work’s Active Directory domain. The reason for this is so I could then turn them into Mail Enabled security groups, which would enable mail to be delivered to members of these groups. Unfortunately all security groups at this organization are Global in scope.

Seeing as this is a one domain organization there is no harm in changing the scope to Universal. Doing this via mouse is very tedious; fortunately we can use a few basic command line tools to automate the task. Thanks to Jeff Guillet for outlining how to do this.

The three magic commands are: dsquery, dsget, and dsmod.

First I wanted to test a single security group to make sure everything would work. I couldn't convert it because it was a member of several other global security groups, a rabbit hole that went several levels deep. Piping dsquery, dsget, and dsmod together solved this problem instantly:

dsquery group -limit 0 -name "<Group Name>" | dsget group -memberof | dsmod group -c -q -scope u

The above command first gets the full name of the group specified by the -name parameter. The output is sent to dsget to query which groups that group is a member of. That output is in turn sent to dsmod, which does the work of actually changing each of those groups to Universal scope:

  • -c tells it to continue on error
  • -q tells it to not print successful changes.
  • -scope u instructs it to change the group’s scope to Universal.

Any errors will be printed to the console. Depending on how many levels of global groups there are you may have to run this command several times in order to convert the problematic groups to Universal scope.

Once that command finishes without error you can modify the group itself to be a universal group by simply omitting the middle dsget command:

dsquery group -limit 0 -name "<Group Name>" | dsmod group -c -q -scope u

After testing, we are now ready to expand this to convert ALL Global security groups to Universal scope. If you would like a report of how many groups would be affected, run this command; it will output all groups from the query to the text file Groups.txt:

dsquery group -limit 0 | dsget group -samid -scope -secgrp > Groups.txt

To modify every group, simply omit the -name parameter from the dsquery command we used with our test group. This will iterate through every group in the directory and pass each one to dsmod, which will modify its scope to Universal:

dsquery group -limit 0 | dsmod group -c -q -scope u

Some built-in groups can't be converted due to their nature (Domain Users being one example), so you will have to work around those. You will probably need to run the command a few times until no errors appear.

Profit.


XAPI won’t start in Xenserver 7

I came home yesterday to discover that every last one of my VMs was unresponsive. It was most distressing. I couldn't even SSH into my XenServer host; it was unresponsive too, and its physical console had dropped into an emergency shell. A reboot got me a physical console again, but my networking and VMs would not start.

In trying to pick up the pieces and put everything back together I ran

systemctl --failed

which revealed several key services not running, namely openvswitch and xapi (very important services). Manually starting them did nothing; they would silently fail and immediately stop again.

After banging my head against the wall for a bit (I really didn't want to restore from backup) I stumbled across this post. It states, in essence, that xapi won't start if the disk is full. I checked disk usage and it said I had a few gigs free, but I thought I'd try the steps in the post anyway.

ls /var/log

revealed quite a lot of log files. I then decided to just delete all the .gz archived logs:

rm /var/log/*.gz

After doing this, xapi started. I restarted the hypervisor for good measure and everything came up – all back to normal as if nothing had happened.

It's incredibly frustrating that XenServer, in its default configuration, is designed to be a ticking time bomb: if you don't take care to manually delete old logs, or alternatively send logs to a remote log server, it will crash and burn. This is stupid. That being said, I was impressed at how gracefully it recovered once I freed up some disk space.

If you're running XenServer, make sure you're logging somewhere else, or set up a cron job to delete old log files!
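
A root crontab entry along these lines would handle it (a sketch; the 30-day retention is an arbitrary choice):

0 3 * * * find /var/log -name '*.gz' -mtime +30 -delete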