Category Archives: CLI

Configure HDHR Viewer XMLTV in CentOS Linux

Recently I accomplished the herculean task of setting up my parents’ cable connection to stream through Plex using an HD Homerun 3 cablecard network tuner. It works! Here is how I got the XMLTV guide working with the HDHR Viewer plugin for Plex on CentOS 7 Linux.

Required reading: http://hdhrviewer.zynine.net/hdhrviewerv2-initial-setup/xmltv-zap2xml/

First, install and configure the required Perl and Java packages:

sudo yum install perl-Compress-Zlib perl-HTML-Parser perl-HTTP-Cookies perl-LWP-Protocol-https perl-JSON gcc cpan java-1.7.0-openjdk-headless 
sudo cpan JSON::XS 
#accept all defaults when prompted

Download the zap2xml perl module (zap2xml.pl) and place it somewhere it can be easily accessed.

Test to make sure the script will run properly:

perl zap2xml.pl -u <zap2it username> -p <zap2it password>

If you get an error like this:

Can't locate Compress/Zlib.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at zap2xml.pl line 26.

It means you haven’t installed the correct perl modules. Double check that you installed them all.

Once we know it runs properly, we need to configure a cron job to run zap2xml daily (to make sure the guide data is always up to date):

crontab -e
#press i to begin inserting
0 0 * * * perl <full path to where you downloaded zap2xml>/zap2xml.pl -u <zap2it e-mail> -p <zap2it password>
#ESC :wq to save and exit

Next, download and unzip the Channel Guide app. I placed it in the same directory as zap2xml to keep things simple.

Test it out to make sure it works:

java -jar channel-guide-app-0.0.3.jar server app-config.yml

If it starts and doesn’t crash, you know it’s working.

Now we want to configure the channel guide app to run on startup:

sudo vi /etc/systemd/system/channelguide.service
[Unit]
Description=Plex Channel Guide

[Service]
TimeoutStartSec=0
ExecStart=/usr/bin/java -jar <full path to channel-guide dir>/channel-guide-app-0.0.3.jar server <full path to channel-guide dir>/app-config.yml

[Install]
WantedBy=multi-user.target

Make sure this systemd service is enabled:

sudo systemctl enable /etc/systemd/system/channelguide.service
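
Before moving on, you can start the service immediately and verify it stays up (standard systemd commands, using the unit name created above):

sudo systemctl start channelguide.service
sudo systemctl status channelguide.service
#look for "active (running)" in the status output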

Lastly, make sure you’ve configured the HDHRViewer plugin in Plex to use XMLTV and the REST API as per the how-to on their site.

Success!

Use FFMPEG to batch convert video files

Below are the tips and tricks I’ve learned in my quest to get all my family home movies working properly in Plex. I ended up having to re-encode most of them to have a proper codec.

ffmpeg is what I ended up using. Thanks to here and here I found a quality setting that I liked: x264 video with a CRF of 18 and the veryslow encoder preset, AAC audio at 192 kbps, in an mp4 container.

ffmpeg -i <file> -acodec libfdk_aac -b:a 192k -ac 2 -vcodec libx264 -crf 18 -preset veryslow <filename>.mp4

I tried to use exiftool to add metadata to the files, but Plex wouldn’t read it. Instead I discovered you can use ffmpeg to encode things like title and date taken directly into the file. The ffmpeg syntax is:

-metadata "key=value"

The metadata values I ended up using were as follows (full example after the list):

  • date – Plex looks at this
  • title – what’s displayed in Plex
  • comment – not seen by Plex but handy to know about anyway
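
Putting it together, a hypothetical full command looks like this (the filenames and metadata values here are just placeholders):

ffmpeg -i input.avi -acodec libfdk_aac -b:a 192k -ac 2 -vcodec libx264 -crf 18 -preset veryslow -metadata "title=Birthday Party 1998" -metadata "date=1998-07-04" -metadata "comment=digitized from VHS" output.mp4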

Encoding

My first pass at encoding combined find with ffmpeg to re-encode all my avi files. I used this command:

find . -name "*.avi" -exec ffmpeg -i {} -acodec libfdk_aac -b:a 192k -ac 2 -vcodec libx264 -crf 18 -preset veryslow {}.mp4 \;

Interrupted

I bit off more than I could chew with the above command. I had to kill the process because it was consuming too much CPU. There are two ways to figure out which file ffmpeg was on:

ps aux|grep ffmpeg

will list the file that ffmpeg is currently working on. Take note of this because you’ll want to remove it so ffmpeg will re-encode it.

You can also run this command to see the last 10 most recently modified files, taken from here:

find $1 -type f -exec stat --format '%Y :%y %n' "{}" \; | sort -nr | cut -d: -f2- | head

Resume

Use the above commands to find and remove the partial file ffmpeg was working on when it was killed. Then append -n to tell ffmpeg to exit rather than overwrite existing files:

find . -name "*.avi" -exec ffmpeg -n -i {} -acodec libfdk_aac -b:a 192k -ac 2 -vcodec libx264 -crf 18 -preset veryslow {}.mp4 \;

Cleanup

The command above gave me a bunch of .avi.mp4 files. To rename them to plain .mp4 files, combine find with some bash-fu (thanks to this site for the info):

find . -name "*.avi.mp4" -exec sh -c 'mv "$1" "${1%.avi.mp4}.mp4"' _ {} \;

Finally, remove original .avi files:

find . -name "*.avi" -exec rm {} \;

Update 2/3/2016: Added a verification step.

Verification

Well, it looks like the conversion I did was not without problems. I only detected them when I tried to change the audio codec I used. You can use ffmpeg to analyze your file to see if there are any errors with it. I learned this thanks to this thread.

The command is:

ffmpeg -v error -i <file to check> -f null -

This will have ffmpeg decode each frame and log any errors it finds. You can have it dump the results to a logfile instead by redirecting the output like so:

ffmpeg -v error -i <file to scan> -f null - >error.log 2>&1

What if you want to check the integrity of multiple files? Use find:

find . -type f -exec sh -c 'ffmpeg -v error -i "{}" -f null - > "{}".log 2>&1' \;

The above command searches for any file in the current directory and its subdirectories, uses ffmpeg to analyze each one, and writes what it finds to a log file with the same name as the file analyzed. If nothing is found, a 0-byte file is created. You can then use find to remove all 0-byte files so that all that’s left are logs for files with errors:

find . -type f -size 0 -exec rm {} \;

You can open each .log file to see what was wrong, or do what I did and treat the mere presence of a log file as a sign that the file needs to be re-encoded from source.

Checking the integrity of the generated files themselves made me realize I had to re-encode part of my home movie library. After re-encoding, there were no more errors. Peace of mind. Hooray!

Change the hostname on a Splunk Indexer

Recently I set out to change the hostname on a Splunk indexer. It should be pretty easy, right? Beware: it can be pretty nasty! Below is my experience.

I started with the basics.

  • hostname command
    hostname <newhostname>
  • Modify /etc/sysconfig/network to make it persistent (CentOS specific)
    sed -i 's/<old hostname>/<new hostname>/g' /etc/sysconfig/network
  • Inform Splunk of the hostname change
    sed -i 's/<old hostname>/<new hostname>/g' $SPLUNK_HOME/etc/system/local/server.conf
  • Restart Splunk

Sadly, that wasn’t the end of it. I noticed right away Splunk complained of a few things:

TcpOutputProc - Forwarding to indexer group default-autolb-group blocked for 300 seconds.
WARN TcpOutputFd - Connect to 10.0.0.10:9997 failed. Connection refused

Running

netstat -an | grep LISTEN

revealed that the server was not even listening on 9997 like it should be. I found this answer indicating it could be an issue with DNS tripping up on that server. I edited $SPLUNK_HOME/etc/system/local/inputs.conf with the following:

[splunktcp://9997]
connection_host = none

but I also noticed that a short time later the server was once again not listening on 9997. Attempting to telnet from the forwarder to the indexer in question showed the same pattern – it worked at first, then quit working. Meanwhile, no events were getting stored on that indexer.

I was pulling my hair out trying to figure out what was happening. Finally I discovered this gem on Splunk Answers:

Are you using the deployment server in your environment? Is it possible your forwarders’ outputs.conf got deployed to your indexer?

On the indexer:
./splunk cmd btool outputs list --debug

Sure enough! After running

./splunk cmd btool outputs list --debug

I discovered this little gem of a stanza:

/opt/splunk/etc/apps/APP_Forwarders/default/outputs.conf [tcpout]

That shouldn’t have been there! Digging into my deployment server, I discovered I had a server class with a blacklist – that is, it included all deployment clients except the ones listed. The blacklist contained the old hostname, so when I changed the indexer’s hostname it no longer matched the blacklist and was therefore deployed a forwarder’s configuration, creating a forwarding loop. My indexer was forwarding everything it received from the forwarder right back to it, which caused Splunk to shut down port 9997 on the offending indexer completely.

After getting all that sorted out, I noticed Splunk was only returning search results from the indexers whose hostnames I had not changed. Everything looked good in the distributed search arena – status was OK on all indexers – yet I still was not getting any results from the renamed indexer, even though it was receiving events! This was turning into a problem: it was creating a blind spot.

Connections great, search status great, deployment status good… I didn’t know what else to do. I finally thought to restart Splunk on the search head that had been talking to the renamed server. Success! Something in the search head must have made it blind to the indexer once its name changed; simply restarting Splunk on the search head fixed it.

In short, if you’re crazy enough to change the name of one of your indexers in a distributed Splunk environment, make sure you do the following:

  • Change hostname on the OS
  • Change ServerName in Splunk config files
    • Add connection_host = none in inputs.conf (optional?)
  • Clean up your deployment server
    • Delete old hostname from clients phoning home
    • MAKE SURE the new hostname won’t be sucked up into an unwanted server class
  • Clean up your search head (see the sketch after this list)
    • Delete old hostname search peer
    • Add new hostname search peer
    • Restart search head
  • Profit
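
For the search head cleanup, the Splunk CLI can handle the peer swap. A rough sketch with placeholder credentials – the exact flags have shifted between Splunk versions, so double-check against your version’s documentation:

#on the search head
./splunk remove search-server -auth admin:<password> -url <old hostname>:8089
./splunk add search-server -host <new hostname>:8089 -auth admin:<password> -remoteUsername admin -remotePassword <remote password>
./splunk restart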

Configure VMWare View Smartcard in Ubuntu

Recently I’ve been required to use a smart card to log into some servers I manage. Configuring my Linux Mint 17.2 machine to pass my smartcard through to those machines via VMWare View has not been straightforward. This guide will walk you through getting smartcard redirection to work with VMWare View in Ubuntu 14.04 Trusty Tahr, which Linux Mint 17.2 is based on. Enjoy.

Procedure

  1. Install the latest version of the VMWare View client (distro versions are often quite out of date) from here
    chmod +x VMware-Horizon-Client-3.5.0-2999900.x64.bundle 
    sudo ./VMware-Horizon-Client-3.5.0-2999900.x64.bundle
  2. Install necessary packages for CommonAccessCard (thanks to this helpful ubuntu writeup)
    sudo apt-get install libpcsclite1 pcscd pcsc-tools
  3. (re)Start the pcscd daemon
    sudo /etc/init.d/pcscd restart
  4. Ensure your smartcard reader is properly identified by running this command:
    pcsc_scan

    If that command is stuck on “Waiting for the first reader…” then you need to install your smartcard drivers. If it sees your smartcard, skip this next step and proceed to step 6.

  5. Install your smartcard driver. This process is different for each card. For the card reader I have (the Identive SCR3500 A Contact Reader), I was able to obtain the drivers after much difficulty from here. The links to the drivers themselves are here (alternate link). In my case I was able to untar and run the install script, which worked beautifully.
  6. Install 32-bit compatibility libraries (only applicable to 64-bit installations) – thanks to this site for the answer and this one for clarification
    sudo dpkg --add-architecture i386
    sudo apt-get update
    sudo apt-get install -y libxml2:i386 libssl1.0.0:i386 libXtst6:i386 libudev1:i386 libpcsclite1:i386 libtheora0:i386 libv4l-0:i386 libpulse0:i386
    sudo ln -sf /lib/i386-linux-gnu/libudev.so.1 /lib/i386-linux-gnu/libudev.so.0
    sudo ln -sf /lib/i386-linux-gnu/libssl.so.1.0.0 /lib/i386-linux-gnu/libssl.so.1.0.1
    sudo ln -sf /lib/i386-linux-gnu/libcrypto.so.1.0.0 /lib/i386-linux-gnu/libcrypto.so.1 
    sudo ln -sf /lib/$(arch)-linux-gnu/libudev.so.1 /lib/$(arch)-linux-gnu/libudev.so.0
  7. (re)Start the vmware-USBArbitrator and vmware-view-USBD services
    sudo /etc/init.d/vmware-USBArbitrator start
    sudo /etc/init.d/vmware-view-USBD start

    For some reason after I did all of this the vmware-view binary was nowhere to be found. It was quite strange. I fixed this issue by removing and re-installing the view client:

    sudo ./VMware-Horizon-Client-3.5.0-2999900.x64.bundle -u vmware-horizon-client
    sudo ./VMware-Horizon-Client-3.5.0-2999900.x64.bundle

    After doing this the binary was there as expected.

  8. Create a config file to instruct the view client to redirect your smartcard reader.
    echo 'viewusb.IncludeFamily = "smart-card"' >> /etc/vmware/config
    echo 'viewusb.AllowSmartcard = "true"' >> /etc/vmware/config

    There is no graphical option to pass devices through like there is in the Windows client. I spent more time than I’d like to admit on this step. It turns out the name of the file is important – it has to simply be called “config.” Place this config file in ~/.vmware (it can also be placed in /etc/vmware/config and/or /usr/lib/vmware/config)

  9. Start vmware-view and enjoy your new smartcard capabilities
    vmware-view

Troubleshooting

If it’s not working, make sure that these services are started

  • pcscd
  • vmware-USBArbitrator
  • vmware-view-USBD

One of these services has been known to crash if you attempt to connect while your smartcard is plugged in. The dance to get around this is to unplug your card reader, re-launch the above services, launch vmware-view, connect to your view server, and then plug in your card reader only after you’ve logged in.
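
A quick way to bounce all three services at once before launching the client – a minimal sketch built from the same init scripts used earlier in this guide:

for svc in pcscd vmware-USBArbitrator vmware-view-USBD; do
  sudo /etc/init.d/$svc restart
done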

Update 2/25/2016: Here is the script I use to make my chromebook work beautifully for remoting into work:

sudo /etc/init.d/pcscd restart
sudo /etc/init.d/vmware-USBArbitrator restart
sudo /etc/init.d/vmware-view-USBD restart
setres 1600 1024
vmware-view
setres 2560 1700

Install Guacamole 0.9.8 in CentOS 7

Lately I embarked on installing the latest version of Guacamole, 0.9.8, on a fresh installation of CentOS 7. Kudos go to the excellent guide I found here. Derek’s guide is for 0.9.7, but it also works for 0.9.8. I ran into a few hangups, but after figuring them out it worked beautifully.

First, fetch the needed binaries:

rpm -Uvh http://mirror.metrocast.net/fedora/epel/7/x86_64/e/epel-release-7-5.noarch.rpm   # EPEL Repo
yum -y install wget   # wget
wget http://download.opensuse.org/repositories/home:/felfert/Fedora_19/home:felfert.repo && mv home\:felfert.repo /etc/yum.repos.d/   # Felfert Repo
yum -y install tomcat libvncserver freerdp libvorbis libguac libguac-client-vnc libguac-client-rdp libguac-client-ssh
yum -y install cairo-devel pango-devel libvorbis-devel openssl-devel gcc pulseaudio-libs-devel libvncserver-devel terminus-fonts \
freerdp-devel uuid-devel libssh2-devel libtelnet libtelnet-devel tomcat-webapps tomcat-admin-webapps java-1.7.0-openjdk.x86_64

Next, install guac server (the latest as of this writing is 0.9.8)

mkdir ~/guacamole && cd ~/
wget http://sourceforge.net/projects/guacamole/files/current/source/guacamole-server-0.9.8.tar.gz
tar -xzf guacamole-server-0.9.8.tar.gz && cd guacamole-server-0.9.8
./configure --with-init-dir=/etc/init.d
make
make install
ldconfig

I received an error while running ./configure:

checking for jpeg_start_compress in -ljpeg... no
configure: error: "libjpeg is required for writing jpeg messages"

It means I didn’t have the libjpeg development libraries installed. Easily fixed:

yum install libjpeg-turbo-devel

Next, install the guacamole war files

mkdir -p /var/lib/guacamole && cd /var/lib/guacamole/
wget http://sourceforge.net/projects/guacamole/files/current/binary/guacamole-0.9.8.war -O guacamole.war
ln -s /var/lib/guacamole/guacamole.war /var/lib/tomcat/webapps/
rm -rf /usr/lib64/freerdp/guacdr.so
ln -s /usr/local/lib/freerdp/guacdr.so /usr/lib64/freerdp/

Next comes configuring the database

#Install database and connector
yum -y install mariadb mariadb-server
mkdir -p ~/guacamole/sqlauth && cd ~/guacamole/sqlauth
wget http://sourceforge.net/projects/guacamole/files/current/extensions/guacamole-auth-jdbc-0.9.8.tar.gz
tar -zxf guacamole-auth-jdbc-0.9.8.tar.gz
wget http://dev.mysql.com/get/Downloads/Connector/j/mysql-connector-java-5.1.32.tar.gz
tar -zxf mysql-connector-java-5.1.32.tar.gz
mkdir -p /usr/share/tomcat/.guacamole/{extensions,lib}
mv guacamole-auth-jdbc-0.9.8/mysql/guacamole-auth-jdbc-mysql-0.9.8.jar /usr/share/tomcat/.guacamole/extensions/
mv mysql-connector-java-5.1.32/mysql-connector-java-5.1.32-bin.jar /usr/share/tomcat/.guacamole/lib/
systemctl restart mariadb.service

#Configure database
mysqladmin -u root password MySQLRootPass
mysql -u root -p   # Enter above password
create database guacdb;
create user 'guacuser'@'localhost' identified by 'guacDBpass';
grant select,insert,update,delete on guacdb.* to 'guacuser'@'localhost';
flush privileges;
quit
cd ~/guacamole/sqlauth/guacamole-auth-jdbc-0.9.8/mysql/schema/
cat ./*.sql | mysql -u root -p guacdb   # Enter SQL root password set above

Now we need to configure guacamole to use our new database.

mkdir -p /etc/guacamole/ && vi /etc/guacamole/guacamole.properties
# MySQL properties
mysql-hostname: localhost
mysql-port: 3306
mysql-database: guacdb
mysql-username: guacuser
mysql-password: guacDBpass

# Additional settings
mysql-disallow-duplicate-connections: false

Link the file you just made to the tomcat configuration directory

ln -s /etc/guacamole/guacamole.properties /usr/share/tomcat/.guacamole/

Cleanup temporary files and enable necessary services on boot

cd ~ && rm -rf guacamole*
systemctl enable tomcat.service && systemctl enable mariadb.service && chkconfig guacd on
systemctl reboot

Lastly, open the firewall up for port 8080 (thanks stack overflow)

firewall-cmd --permanent --add-port=8080/tcp
firewall-cmd --reload

Navigate to guacamole in your browser: http://<IP address>:8080/guacamole. You should see the guacamole login screen.
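
If the login screen doesn’t appear, a quick sanity check from the server itself (plain curl, nothing guacamole-specific) will tell you whether tomcat is serving the app at all:

curl -I http://localhost:8080/guacamole/
#a 200 or a redirect means tomcat is serving guacamole
#"connection refused" points at tomcat or the firewall, not at guacd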

Additional hiccup

This new version of guacamole has a different user interface. It took me longer than I’d like to admit to figure out how to get out of a guacamole session once it’s started. Sessions are now full screen with no obvious way to exit.

The way to exit the full-screen guacamole session is to press the magic key combination of Ctrl+Alt+Shift. It reveals a menu from the side. This is all clearly defined in the user documentation, but my lack of willingness to read it caused me to waste much time. Lesson learned!

Randomize files in a folder

I wanted to make a simple slideshow for a cheap Kindle turned photo frame in my office. Windows Movie Maker (a free, already-installed program) does not have a randomize function when importing photos. I had a lot of photos I wanted imported, and I wanted them randomized. Movie Maker also doesn’t include subfolders for some dumb reason, so I needed a way to grab pictures from various directories and put them in a single directory.

My solution (not movie-maker specific) was to use bash combined with find, ln, and mv to get the files exactly how I want them. The process goes as follows:

  1. Create a temporary folder
  2. Use the find command to find files you want
    1. Use the -type f argument to find only files (don’t replicate directory structure)
    2. Use the -exec argument to call the ln command to create links to each file found
    3. Use the -s argument of ln to create symbolic links
    4. Use the -b argument of ln to ensure duplicate filenames are not overwritten
  3. Invoke a one line bash command to randomize the filenames of those symbolic links

It worked beautifully. The commands I ended up using were as follows:

mkdir temp
cd temp
find /Pictures/2013/ -type f -exec ln -s -b {} . \;
#repeat for each subfolder as needed; if you want all subfolders, just specify the parent directory instead.
find /Pictures/2014/ -type f -exec ln -s -b {} . \;
find /Pictures/2015/ -type f -exec ln -s -b {} . \;

for i in *.JPG; do mv "$i" "$RANDOM.jpg"; done
#repeat for all permutations. The -b argument of ln creates files with tildes in the extension - don't forget about them.
for i in *.jpg; do mv "$i" "$RANDOM.jpg"; done
for i in *.JPG~; do mv "$i" "$RANDOM.jpg"; done
for i in *.jpg~; do mv "$i" "$RANDOM.jpg"; done

The end result was a directory full of pictures with random filenames, ready to be dropped into any crappy slideshow making software of your choosing 🙂
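
One caveat worth adding: $RANDOM can repeat, and mv will happily overwrite a file that already received the same random name. A safer variant of the rename loop (my own tweak, not part of the original process) uses two $RANDOM expansions to shrink the collision odds and -n so any remaining collision is skipped rather than clobbered:

for i in *.JPG *.jpg *.JPG~ *.jpg~; do
  mv -n "$i" "$RANDOM$RANDOM.jpg"
done
#patterns that match nothing just produce a harmless "No such file" error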

Xenserver SSH backup script

Citrix Xenserver is an amazing hypervisor with pretty much every function released to you for free. One thing they do not handle, however, is automated backups.

I have hacked together a backup script for myself that seems to work fairly well. It is my own mix of this and this script along with some logic for e-mail reporting that I came up with myself. It does not require any modification of the xenserver host at all (no need to mount anything!) with the exception of adding a public key to the xenserver’s authorized_keys file.

This script can be run on anything with BASH and the appropriate UNIX tools (even other xenservers if you want) and uses SSH to initiate and transfer the backups to a location of your choosing.

Place this script on the machine you want to be initiating / saving the backups on. It requires that you generate an SSH public/private key pair, which can be done with this command:

ssh-keygen

Add the resulting .pub file’s contents to your xenserver’s /root/.ssh/authorized_keys file (create it if it doesn’t exist). Take note of the location of the private key file generated by that command and put its path in the script.

You can download the script here or view it below. This script has been working pretty well for me. Note it will not work with any VMs that have spaces in their names. I was too lazy to debug this so I just renamed the problem VMs to remove the spaces. Enjoy!

#!/bin/bash

# Modified August 30, 2015 by Nicholas Jeppson
# Taken from http://discussions.citrix.com/topic/345960-xenserver-automated-snapshots-script/ and modified to allow for ssh backups
# Additional insight taken from https://github.com/cepa/xen-backup

# [Usage & Config]
# This script involves two computers: a xenserver machine and a backup machine.
# Put this script on the backup server and run with any account that has privileges to the desired export directory.
#
# This script assumes you have already created a private and public key pair on the backup server
# as well as adding respective the public key to the xenserver authorized_keys file.
#
# [How it works]
# Step1: Snapshots each VM on the xen server
# Step2: Backs up the snapshots to specified location
# Step3: Deletes temporary snapshot created in step 1
# Step4: Deletes old VM backups as defined later in this file
#
# [Note]
# This script will only work with VMs that don't have spaces in their names
# Please make sure you have enough disk space for BACKUP_PATH, or backup will fail
#
# Tested on xenserver 6.5
#
# Modify the variables in the config section below to suit your particular environment's needs.
#

############### Config section ###############

#Location where you want backups to go
BACKUP_PATH="/mnt/backup/OS/"

#SSH configuration
SSH_CIPHER="arcfour128"

#Number of backups to keep
NUM_BACKUPS=2

#Xenserver ssh configuration
#This dictates the address and location of keyfiles as they reside on the xenserver
XEN_ADDRESS="192.168.1.1"
XEN_KEY_LOCATION="/home/backup/.ssh/backup"
XEN_USER="root"

#E-mail configuration
EMAIL_ADDRESS="youremail@provider.here"
EMAIL_SUBJECT="`hostname -s | awk '{print "["toupper($1)"]"}'` VM Backup Report: `date +"%A %b %d %Y"`"

########## End of Config section ###############

ret_code=0

#Replace any spaces found with backslashes because dd doesn't like them
BACKUP_PATH_ESCAPED="`echo $BACKUP_PATH | sed 's/ /\\\ /g'`"

# SSH command
remote_exec() {
  chmod 0600 $XEN_KEY_LOCATION
  ssh -i $XEN_KEY_LOCATION -o "StrictHostKeyChecking no" -c $SSH_CIPHER $XEN_USER@$XEN_ADDRESS "$1"
}

backup() {
  echo "======================================================"
  echo "VM Backup started: `date`"
  begin="$(date +%s)"
  echo "Backup location: ${BACKUP_PATH}"
  echo

  #add a slash to the end of the backup path if it doesn't exist
  if [[ "$BACKUP_PATH" != */ ]]; then
    BACKUP_PATH="$BACKUP_PATH/"
  fi

  #Build array of VM names
  VMNAMES=$(remote_exec "xe vm-list is-control-domain=false | grep name-label | cut -d ':' -f 2 | tr -d ' '")

  for VMNAME in $VMNAMES
  do
    echo "======================================================"
    echo "$VMNAME backup started `date`"
    echo
    before="$(date +%s)"

    # create snapshot
    TIMESTAMP=`date '+%Y%m%d-%H%M%S'`
    SNAPNAME="$VMNAME-$TIMESTAMP"
    SNAPUUID=$(remote_exec "xe vm-snapshot vm=\"$VMNAME\" new-name-label=\"$SNAPNAME\"")

    # export snapshot
    # remote_exec "xe snapshot-export-to-template snapshot-uuid=$SNAPUUID filename= | gzip" | gunzip | dd of="$BACKUP_PATH/$SNAPNAME.xva"
    remote_exec "xe snapshot-export-to-template snapshot-uuid=$SNAPUUID filename=" | dd of="$BACKUP_PATH/$SNAPNAME.xva"

    #if export was unsuccessful, return error
    if [ $? -ne 0 ]; then
      echo "Failed to export snapshot $SNAPNAME.xva"
      ret_code=1
    else
      #calculate backup time, print results
      after="$(date +%s)"
      elapsed=`bc -l <<< "$after-$before"`
      elapsedMin=`bc -l <<< "$elapsed/60"`
      echo "Snapshot of $VMNAME saved to $SNAPNAME.xva"
      echo "Backup completed in `echo $(printf %.2f $elapsedMin)` minutes"

      # destroy snapshot
      remote_exec "xe snapshot-uninstall force=true snapshot-uuid=$SNAPUUID"

      #remove old backups (uses NUM_BACKUPS variable from top)
      BACKUP_COUNT=$(find $BACKUP_PATH -name "$VMNAME*.xva" | wc -l)

      if [[ "$BACKUP_COUNT" -gt "$NUM_BACKUPS" ]]; then
        OLDEST_BACKUP=$(find $BACKUP_PATH -name "$VMNAME*" -print0 | xargs -0 ls -tr | head -n 1)
        echo
        echo "Removing oldest backup: $OLDEST_BACKUP"
        rm $OLDEST_BACKUP
        if [ $? -ne 0 ]; then
          echo "Failed to remove $OLDEST_BACKUP"
        fi
      fi
      echo "======================================================"
    fi
  done

  end="$(date +%s)"
  total_time=`bc -l <<< "$end-$begin"`
  total_time_min=`bc -l <<< "$total_time/60"`
  echo "Backup completed: `date`"
  echo "VM Backup completed in `echo $(printf %.2f $total_time_min)` minutes"
  echo
}

#Run the backup function and save all output to a variable, including stderr
BACKUP_OUTPUT=$(backup 2>&1)

#Clean up the output of the backup function
#Remove records count from dd, do some basic math to make dd's numbers more human readable
BACKUP_OUTPUT_HUMANIZED=$(echo "$BACKUP_OUTPUT" | sed -r '/.*records /d' | tr -d '()' \
| awk '{sub(/.*bytes /, $1/1024/1024/1024" GB "); sub(/in .* secs/, "in "$5/60" mins "); sub(/mins .*/, "mins (" $7/1024/1024" MB/sec)"); print}')

#Send a report e-mail with the backup results
echo "$BACKUP_OUTPUT_HUMANIZED" | mail -s "$EMAIL_SUBJECT" "$EMAIL_ADDRESS"

exit $ret_code
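
To schedule the backups, a cron entry on the backup machine is all that’s left. A hypothetical example (the path is yours to pick) that runs the script every Sunday at 2 AM:

crontab -e
0 2 * * 0 /home/backup/xen-backup.sh
#make sure the script is executable: chmod +x /home/backup/xen-backup.sh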

Avoid prompts when installing FreeBSD ports

The FreeBSD ports tree is wonderful for installing software but sometimes it can be a real pain. Recently I was trying to install Emby in FreeBSD because why not? The instructions were easy enough except for when I ran

make install clean

I was constantly barraged with configuration prompts. I wanted to assume the default on all of these and not have to answer a bunch of questions.

Thanks to stack exchange I learned it’s relatively easy to bypass all these questions. Simply add:

BATCH=yes

to the end of your make install clean statement to assume the defaults for all of the package’s questions. The Emby guide is pretty comprehensive, but I would add this command at the bottom:

make install clean BATCH=yes
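
If you’d rather never see the prompts again, the same variable can be set globally in /etc/make.conf so every port build assumes the defaults:

#as root
echo 'BATCH=yes' >> /etc/make.conf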

Handy.

Measure SSH transfer speeds

SSH is a beautiful thing. In addition to remotely administering machines, you can use it to transfer files. To do this, simply pipe the cat command on both ends. For example, to copy hello.txt on the source host to hi.txt on the destination host, the command would be:

ssh remote_host cat hello.txt | cat > hi.txt

The command takes the contents of hello.txt and pipes it over to the remote host. The cat command on the remote host takes what was piped to it as input, and the > sign instructs it to write that input to hi.txt.

A great way to measure transfer speeds using ssh between two hosts is to take /dev/zero on the source host and output it to /dev/null on the destination host. This bypasses any disk speed bottlenecks and only measures network throughput. Combine this with the pv command to get a nice graphical view of how fast the transfer is going.

ssh remote_host cat /dev/zero | pv | cat > /dev/null

The default options between my machines result in a transfer speed of about 65 megabytes per second.

It turns out that the encryption cipher used makes a big difference in transfer speed. Use the -c option to specify which cipher to use and see how much of a difference it makes. Adding -o Compression=no can also help with transfer speeds.

The fastest cipher I’ve found is arcfour. It’s touted as less secure, but for my local network I can accept the risk (thanks to slashdot for the discussion.)

ssh -c arcfour -o Compression=no remote_host cat /dev/zero | pv | cat > /dev/null

Using arcfour more than doubles the speed for me! Amazing.
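
To compare ciphers systematically, you can loop over a few and push a fixed amount of data through each. A rough sketch, assuming the remote host has dd and your OpenSSH build supports the listed ciphers (run ssh -Q cipher to see which ones yours offers):

for c in arcfour aes128-ctr aes256-ctr; do
  echo "cipher: $c"
  #send 500MB of zeros across and measure the receive rate locally
  ssh -c "$c" -o Compression=no remote_host 'dd if=/dev/zero bs=1M count=500 2>/dev/null' | pv > /dev/null
done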

Allow non-root users to mount disks

I came across a need today to allow a regular (non-root) user to mount disks in Ubuntu 14.04 Trusty Tahr. I usually use sudo but in this case I needed to be able to run photorec as a regular user.

The way to accomplish this is to add the regular user to the disk group. To do so, run this command:

sudo usermod -a -G disk <username>

If you are logged in as that user, you will have to log out and log back in for the new group membership to take effect. Once this is done, you should be able to mount disks without using sudo or being root.
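
After logging back in, you can verify the membership took effect (standard coreutils; substitute your username):

id -nG <username>
#the output should now include "disk"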