Improve FreeNAS NFS performance in XenServer

My home lab consists of a virtualized instance of FreeNAS, Citrix XenServer, and various VMs. Recently I wanted to migrate some of my VMs to an NFS export from FreeNAS. To my dismay, the speed was abysmal (3 MB/s write speeds). This tutorial will walk you through how to improve FreeNAS NFS performance in XenServer by adding a log device (ZIL) to your ZFS pool.

After much research I realized the problem lies with ZFS behind the NFS export. XenServer mounts the NFS share in a way that forces synchronous writes (each write must hit stable storage before it is acknowledged), which slows things down considerably.

The solution: add a ZIL device. Since my FreeNAS is virtualized, I chose the route of adding a virtual disk that lives on an SSD. This process wasn’t straightforward. If you have a virtual FreeNAS, this is how to improve NFS performance:

  1. Add a disk in XenServer. The rule of thumb for size is half the amount of system RAM; I added a 16 GB ZIL disk to be safe.
  2. Add the following tunables in FreeNAS (to allow the OS to properly see Xen hard drives):
    1. Variable hint.ada.0.at, value scbus100 (for the FreeNAS OS disk)
    2. Variable hint.ada.1.at, value scbus100 (for the newly added ZIL disk)
  3. Reboot FreeNAS
  4. In the FreeNAS GUI, open the ZFS Volume Manager, select your volume to expand from the dropdown, and set the new device to be a Log (ZIL) volume. (Alternatively, attach it from the shell with zpool, as sketched below.)
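
If you prefer the shell, the same attachment can be done with zpool directly. This is a minimal sketch, assuming your pool is named tank and the new SSD-backed disk shows up as ada1 (check with camcontrol devlist):

zpool add tank log ada1
zpool status tank   # the disk should now appear under a separate "logs" section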

That’s it! Once I added an SSD-based ZIL device to my ZFS pool, NFS writes went from 3 MB/s to 60 MB/s. Awesome.

Resizing LVM storage checklist

This is a short note on what to do when you change the size of the physical disk backing an LVM setup, such as the default configuration in CentOS 7. A concrete example follows the checklist.

  1. Modify the physical disk size
  2. Modify the partition size
    1. I used fdisk to delete the partition, then re-create it with a larger size
    2. Reboot
  3. Extend the physical volume size
    1. pvresize <path to enlarged partition>
  4. Extend the logical volume size
    1. lvextend -l +100%FREE <lv path>
  5. Extend filesystem size
    1. resize2fs <lv path>
    2. # If you're running CentOS 7, the default filesystem is actually XFS, not ext4. In that case, grow it via its mount point:
      xfs_growfs <mount point>
  6. Profit.
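
For a concrete run-through, here is a hypothetical example assuming the PV sits on /dev/sda2 and the logical volume is the default CentOS 7 root LV at /dev/centos/root:

fdisk /dev/sda                             # delete /dev/sda2, re-create it larger, then reboot
pvresize /dev/sda2                         # grow the physical volume
lvextend -l +100%FREE /dev/centos/root     # grow the LV into the freed space
xfs_growfs /                               # grow the XFS root filesystem (takes the mount point)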

Fix inconsistent mouse cursors in Linux Mint

I love Linux Mint, but one frustration of mine is that the mouse cursors are inconsistent. If you change the mouse theme in the theme settings it will change for most windows, but certain applications such as Chrome or Wine revert to the system default mouse cursor.

I’ve finally found a fix, courtesy of the Ubuntu Forums. The problem lies with the x-cursor-theme being independent of the theme set in Cinnamon. What you have to do is run a command to update the x-cursor-theme.

First, find the name of the mouse cursor you want from a list of your installed themes:

ls /usr/share/icons

Set an environment variable to the theme you want, specifying the folder name of the theme. For example, for the DMZ-Black cursor:

CURSOR=DMZ-Black

Lastly run the command to update your cursor:

gsettings set org.gnome.desktop.interface cursor-theme "$CURSOR" && sudo update-alternatives --set x-cursor-theme /usr/share/icons/$CURSOR/cursor.theme
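
To verify the change took, you can read the key back:

gsettings get org.gnome.desktop.interface cursor-theme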

That’s it! You now have consistent mouse cursors. OCD demons satisfied.

Compile ffmpeg on CentOS 7

Recently I had to compile ffmpeg from scratch on CentOS 7. The reason? I wanted libfdk_aac support. Here are my notes on the procedure. The how-to at https://trac.ffmpeg.org/wiki/CompilationGuide/Centos was actually quite helpful and accurate.

Install necessary dependencies and set up build folder

yum install autoconf automake cmake freetype-devel gcc gcc-c++ git libtool make mercurial nasm pkgconfig zlib-devel
mkdir ~/ffmpeg_sources

Build necessary components
I only needed x264 and libfdk_aac, so that’s all I ended up doing:

#yasm
cd ~/ffmpeg_sources
git clone --depth 1 git://github.com/yasm/yasm.git
cd yasm
autoreconf -fiv
./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin"
make
make install
make distclean
#libx264
cd ~/ffmpeg_sources
git clone --depth 1 git://git.videolan.org/x264
cd x264
PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" --enable-static
make
make install
make distclean
#libfdk_aac
cd ~/ffmpeg_sources
git clone --depth 1 git://git.code.sf.net/p/opencore-amr/fdk-aac
cd fdk-aac
autoreconf -fiv
./configure --prefix="$HOME/ffmpeg_build" --disable-shared
make
make install
make distclean

Compile ffmpeg
I actually specified a git mirror because the sources at the ffmpeg site were glacially slow.

cd ~/ffmpeg_sources
git clone https://github.com/FFmpeg/FFmpeg.git
cd FFmpeg
PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure --prefix="$HOME/ffmpeg_build" --extra-cflags="-I$HOME/ffmpeg_build/include" --extra-ldflags="-L$HOME/ffmpeg_build/lib" --bindir="$HOME/bin" --pkg-config-flags="--static" --enable-gpl --enable-nonfree --enable-libfdk-aac --enable-libx264
make
make install
make distclean
hash -r

Optionally, remove existing ffmpeg

sudo yum remove ffmpeg

That was it! After a bit of compile time, ffmpeg worked with the components I wanted.
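
As a quick sanity check (not part of the original guide), you can confirm the new binary actually includes the encoders you built:

~/bin/ffmpeg -encoders 2>/dev/null | grep -E "libfdk_aac|libx264"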

Install Cinnamon on a Wily Chromebook chroot

I recently installed Ubuntu Wily Werewolf 15.10 as a chroot on my Chromebook Pixel 2. The process wasn’t as straightforward as I thought it would be so I will document it here.

First, I followed my own guide on how to set up a crouton chroot. The install would not complete; it kept complaining about gnome-session-manager. I had to install the chroot with no GUI. This is the command I used (I specify a particular mirror because it’s much faster):

sudo sh ~/Downloads/crouton -r wily -t touch,extension,keyboard,cli-extra -e -n cinnamon -m http://mirrors.xmission.com/ubuntu

Once the initial chroot was set up, I installed Cinnamon:

sudo apt-get install cinnamon-desktop-environment

After all that was installed, I followed my own guide on configuring Cinnamon. I placed the following script at /usr/local/bin/startcinnamon on my Chromebook (not in the chroot):

#!/bin/sh -e
APPLICATION="${0##*/}"

USAGE="$APPLICATION [options]

Wraps enter-chroot to start a Cinnamon session.
By default, it will log into the primary user on the first chroot found.

Options are directly passed to enter-chroot; run enter-chroot to list them."

exec sh -e "`dirname "$0"`/enter-chroot" "$@" xinit

And I created this .xinitrc file within the chroot, in my home directory:

echo "exec cinnamon-session" > ~/.xinitrc

I started Cinnamon by issuing the command:

sudo startcinnamon

I noticed things didn’t look quite right. It turned out I was missing some icons.  Fix this by installing them:

sudo apt-get install gnome-icon-theme-full

I then discovered gnome-terminal wouldn’t run; it would simply crash with exit error 8. It turned out to be due to missing locale settings. The fix was found here, and involves installing the GNOME language pack and setting your locale.

sudo apt-get install language-pack-gnome-en
sudo update-locale LANG="en_US.UTF-8" LANGUAGE="en_US"

To apply the changes you must exit all chroot instances.

That was it! After that bit of tweaking I have an Ubuntu 15.10 chroot working pretty well on my Chromebook Pixel 2.


Fix OpenVPN on Chromebooks

Edit: I’ve updated the script due to more updates in ChromeOS. Find the update here.


Around October of 2015 an update came out for Google Chromebooks that had an unfortunate side effect for me: OpenVPN no longer worked. Despite my having created a tunnel device seconds earlier, OpenVPN complained that the tunnel device didn’t exist.

I finally found a fix for this here. It turns out that the update caused the “shill” process to aggressively kill the tun0 interface. The clever workaround posted by pippo0312 works flawlessly. It involves creating the OpenVPN tunnel in Chrome OS itself rather than in a crouton chroot as I had previously done.

The original script takes an argument for the ovpn file to use. Since I only have one VPN profile you could hardcode that path in the script instead; the version below keeps the argument.

Place the script in /usr/local/bin on your Chrome OS install (not in a chroot) and mark it executable with chmod +x.

#!/bin/sh -e
trap '' 2
# Stop shill and restart it with a nicer attitude towards tun0
sudo stop shill
sudo start shill BLACKLISTED_DEVICES=tun0
# Sleep 10 seconds to allow chromebook to reconnect to the network
sudo sleep 10
sudo openvpn --mktun --dev tun0
sudo sleep 3
# Add google DNS on top of current ones, since openvpn command does not do it
sudo sed -i '1s/^/# new DNS\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n# old DNS\n/' /var/run/shill/resolv.conf
# Launch openvpn, finally...
sudo openvpn --config $1 --dev tun0
# When ctrl-c is hit remove tun0 and cleanup the DNS
sudo openvpn --rmtun --dev tun0
sudo sed -i '/# new DNS/,/# old DNS/d' /var/run/shill/resolv.conf
trap 2

Now you can just run the script with the path to your .ovpn file and VPN works again! Hooray.
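
For example, assuming you saved the script as /usr/local/bin/vpn.sh (the name is arbitrary) and your profile sits in Downloads:

sudo vpn.sh ~/Downloads/client.ovpn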

Linux two-factor user exception

Two-factor authentication is much more secure than password authentication alone. There are times, though, when you will want to create an exception for a specific user. In my case, I wanted to allow a vulnerability scanner to scan my systems. Rather than turn two-factor off for the duration of the scan, I set out to learn how to add an exception for a specific user. I accomplished this on CentOS 6 Linux, but it works on any Linux distribution using PAM.

The solution to my problem is the pam_listfile PAM module. pam_listfile allows you to specify a text file containing a list of either users or groups. You then tell PAM what to do with the matches (allow or deny) as well as what to do if it can’t read the file for some reason.

Thanks to this site I learned the details of what to do. In my case I want a single username to not be prompted for a second authentication factor, while all other users must use two factors. I created the file /etc/scanuser and added the username I wanted to have the exception:

echo "scanuser" > /etc/scanuser

Then I modified /etc/pam.d/password-auth and placed the pam_listfile line after the first authentication factor, but before the second.

vi /etc/pam.d/password-auth
#First authentication factor
auth        required    pam_unix.so

#pam_listfile to check username and see if it's allowed with only one factor or must provide a second
auth        sufficient  pam_listfile.so onerr=fail item=user sense=allow file=/etc/scanuser

#Second authentication factor. This is only reached if the user is not on the list provided in pam_listfile.
auth        required   pam_google_authenticator.so

The PAM configuration is as follows:

  • First factor (pam_unix) required for everyone
  • pam_listfile sufficient for anyone who matches the provided list
  • Second factor (pam_google_authenticator) required for everyone else (anyone who wasn’t on the pam_listfile list)
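
A quick way to test the setup, assuming the exception user is named scanuser (any other username below is a stand-in): log in as each user and watch the prompts.

ssh scanuser@localhost     # should ask for a password only
ssh someuser@localhost     # should ask for a password and then a verification code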

My vulnerability scanner is now happy and I still have two factor authentication enabled for every other user in the system. Success.


Configure ARC Welder to access a local folder

ARC Welder is an amazing tool. It’s a Chrome extension that allows you to take an Android APK file and convert it into a Chrome extension. This means you can run Android apps on any device with Chrome installed. Sweet!

I came across a need for an ARC Welder app to access my local filesystem: I wanted to share a file on my host system with the Android app. After some digging I came across this forum post, which details what you need to do.

The solution is to specify a certain option when using ARC Welder:

{"enableExternalDirectory": true}

You do this on the final screen, before you click Test App / Download Zip.


ARC Welder will prompt you to pick a folder. Once that’s done, you can navigate to the Downloads folder in your app and your linked folder will be there. Pretty slick.

(No, OpenVPN ended up not working, but I wanted to save this knowledge in case I want to try a different app that needs / creates files.)

Configure HDHR Viewer XMLTV in CentOS Linux

Recently I accomplished the herculean task of setting up my parents’ cable connection to stream through Plex using an HD HomeRun 3 CableCARD network tuner. It works! This is how I got the XMLTV guide working for the HDHR Viewer plugin for Plex on CentOS 7 Linux.

Required reading: http://hdhrviewer.zynine.net/hdhrviewerv2-initial-setup/xmltv-zap2xml/

First, install and configure the required Perl and Java packages:

sudo yum install perl-Compress-Zlib perl-HTML-Parser perl-HTTP-Cookies perl-LWP-Protocol-https perl-JSON gcc cpan java-1.7.0-openjdk-headless 
sudo cpan JSON::XS 
#accept all defaults when prompted

Download the zap2xml Perl script (zap2xml.pl) and place it somewhere it can be easily accessed.

Test to make sure the script will run properly:

perl zap2xml.pl -u <zap2it username> -p <zap2it password>

If you get an error like this:

Can't locate Compress/Zlib.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at zap2xml.pl line 26.

It means you haven’t installed the correct Perl modules. Double-check that you installed them all.

Once we know it runs properly, we need to configure a cron job to run zap2xml daily (to make sure the guide data is always up to date).

crontab -e
#press i to begin inserting
0 0 * * * perl <full path to where you downloaded zap2xml>/zap2xml.pl -u <zap2it e-mail> -p <zap2it password>
#ESC :wq to save and exit


Next, download and unzip the Channel Guide app. I placed it in the same directory as zap2xml to keep things simple.

Test it out to make sure it works:

java -jar channel-guide-app-0.0.3.jar server app-config.yml

If it starts and doesn’t crash, you know it’s working.

Now we want to configure the channel guide app to run on startup

sudo vi /etc/systemd/system/channelguide.service
[Unit]
Description=Plex Channel Guide

[Service]
TimeoutStartSec=0
ExecStart=/usr/bin/java -jar <full path to channel-guide dir>/channel-guide-app-0.0.3.jar server <full path to channel-guide dir>/app-config.yml

[Install]
WantedBy=multi-user.target

Make sure this systemd service is enabled:

sudo systemctl enable /etc/systemd/system/channelguide.service
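
Enabling only registers the service for the next boot; to start it immediately as well:

sudo systemctl start channelguide.service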

Lastly, make sure you’ve configured the HDHRViewer plugin in Plex to use XMLTV and the REST API as per the how-to on their site.

Success!

Use ffmpeg to batch convert video files

Below are the tips and tricks I’ve learned in my quest to get all my family home movies working properly in Plex. I ended up having to re-encode most of them to have a proper codec.

ffmpeg is what I ended up using. Thanks to here and here I found a quality setting that I liked: x264 video with a CRF of 18 and the veryslow encoder preset, AAC audio at 192 kbps, in an mp4 container.

ffmpeg -i <file> -acodec libfdk_aac -b:a 192k -ac 2 -vcodec libx264 -crf 18 -preset veryslow <filename>.mp4

I tried to use exiftool to add metadata to the file but Plex wouldn’t read it. Instead I discovered you can use ffmpeg to encode things like the title and date taken directly into the file (a full example follows the list below). The syntax for ffmpeg is

-metadata "key=value"

The metadata values I ended up using were:

  • date – Plex looks at this
  • title – what’s displayed in Plex
  • comment – not seen by Plex but handy to know about anyway
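
Putting the encode and the metadata together, a hypothetical invocation (the filename and tag values are made up):

ffmpeg -i input.avi -acodec libfdk_aac -b:a 192k -ac 2 -vcodec libx264 -crf 18 -preset veryslow -metadata "title=Beach trip 1999" -metadata "date=1999-07-04" -metadata "comment=digitized from Hi8 tape" output.mp4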

Encoding

My first pass at encoding combined find with ffmpeg to re-encode all my avi files. I used this command:

find . -name "*.avi" -exec ffmpeg -i {} -acodec libfdk_aac -b:a 192k -ac 2 -vcodec libx264 -crf 18 -preset veryslow {}.mp4 \;

Interrupted

I bit off more than I could chew with the above command. I had to kill the process because it was chewing up too much CPU. There are two ways to figure out which file ffmpeg is on:

ps aux|grep ffmpeg

will list the file that ffmpeg is currently working on. Take note of it, because you’ll want to remove that partial output file so ffmpeg will re-encode it.

You can also run this command to see the last 10 most recently modified files, taken from here:

find $1 -type f -exec stat --format '%Y :%y %n' "{}" \; | sort -nr | cut -d: -f2- | head

Resume

Use the commands above to identify and remove the partial file ffmpeg was working on when it was killed. Then re-run the encode with -n appended, which tells ffmpeg to exit rather than overwrite existing files:

find . -name "*.avi" -exec ffmpeg -n -i {} -acodec libfdk_aac -b:a 192k -ac 2 -vcodec libx264 -crf 18 -preset veryslow {}.mp4 \;

Cleanup

The command above gave me a bunch of .avi.mp4 files. To clean them up to just be .mp4 files, combine find with some bash-fu (thanks to this site for the info):

find . -name "*.avi.mp4" -exec sh -c 'mv "$1" "${1%.avi.mp4}.mp4"' _ {} \;

Finally, remove original .avi files:

find . -name "*.avi" -exec rm {} \;


Update 2/3/2016: Added a verification step.

Verification

Well, it turns out the conversion I did was not without problems; I only detected them when I tried to change the audio codec. You can use ffmpeg to analyze a file and see if there are any errors in it. I learned this thanks to this thread.

The command is:

ffmpeg -v error -i <file to check> -f null -

This has ffmpeg decode every frame and report any errors it finds. You can have it dump them to a logfile instead by redirecting the output like so:

ffmpeg -v error -i <file to scan> -f null - >error.log 2>&1

What if you want to check the integrity of multiple files? Use find:

find . -type f -exec sh -c 'ffmpeg -v error -i "$1" -f null - > "$1.log" 2>&1' _ {} \;

The above command searches for every file in the current directory and its subdirectories, uses ffmpeg to analyze it, and writes what it finds to a log file named after the analyzed file. If nothing was found, a 0-byte log file is created. You can use find to remove all 0-byte files so that all that’s left are logs of files with errors:

find . -type f -size 0 -exec rm {} \;

You can open each .log file to see what was wrong, or do what I did and treat the mere presence of a log file as a sign that the file needs to be re-encoded from source.

Checking the integrity of the generated files themselves made me realize I had to re-encode part of my home movie library. After re-encoding, there were no more errors. Peace of mind. Hooray!