Category Archives: OS

Xenserver NFS SR from FreeNAS VM hack

I have a Citrix XenServer 6.5 host which hosts a FreeNAS VM that exports an NFS share. I then have that same XenServer host use that NFS export as an SR (storage repository) for other VMs on the same server. It’s unusual, but it saves me from buying a separate server for VM storage.

The problem is that if you reboot the hypervisor, it will fail to connect to the NFS export (because the VM hosting it hasn’t booted yet). Additionally, XenServer does not play well at all with hung NFS mounts. If you try to shut down or reboot your FreeNAS VM while XenServer is still using its NFS export, things start to freeze. You will be unable to do anything to any of your VMs thanks to the hung NFS share. It’s a problem!

My hack around this mess is to have FreeNAS, not XenServer, control attaching the NFS SR and starting and stopping the VMs that live on it.

First, create a public/private key pair for SSH access into the XenServer host:

ssh-keygen

This will generate two files, a private key file and a public (.pub) file. Copy the contents of the .pub file into the XenServer’s authorized_keys file:

echo "PUT_RSA_PUBLIC_KEY_HERE" >> /root/.ssh/authorized_keys

Copy the private key file (same name but without .pub extension) somewhere on your FreeNAS VM.
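
Alternatively, if your workstation has ssh-copy-id available, it appends the key for you in one step (assuming root SSH logins to the XenServer host are enabled and you accepted ssh-keygen’s default key location):

ssh-copy-id -i ~/.ssh/id_rsa.pub root@<ADDRESS_OF_XENSERVER>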

Next, create NFS startup and shutdown scripts. Thanks to linuxcommando for some guidance with this. Replace the -i argument with the path to the SSH private key file generated earlier. You will also need to know the PBD UUID of the NFS store. Discover this by issuing:

xe pbd-list

Copy the UUID for use in the scripts.
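
If you have several PBDs and aren’t sure which one belongs to the NFS SR, you can narrow it down by SR. A quick sketch, assuming your SR’s name-label is “NFS” (adjust to whatever you named yours):

xe sr-list name-label=NFS params=uuid
xe pbd-list sr-uuid=<UUID_FROM_SR_LIST> params=uuid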

vi nfs-startup.sh
#!/bin/bash
#NFS FreeNAS VM startup script

SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i <PRIVATE_KEY_LOCATION> -l root <ADDRESS_OF_XENSERVER>"

#Attach NFS drive first, then start up NFS-reliant VMs
$SSH_COMMAND xe pbd-plug uuid=<UUID_COPIED_FROM_ABOVE>

sleep 10

#Issue a startup command for each of your NFS-based VMs; repeat the line below for each VM you have
$SSH_COMMAND xe vm-start vm="VM_NAME"
...
vi nfs-shutdown.sh
#!/bin/bash
#NFS FreeNAS VM shutdown script
#Shut down NFS-reliant VMs, detach NFS SR

#Re-establish networking to work around the fact that the network goes down before this script is executed within FreeNAS
/sbin/ifconfig -l | /usr/bin/xargs -n 1 -J % /sbin/ifconfig % up
SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i <PRIVATE_KEY_LOCATION> -l root <ADDRESS_OF_XENSERVER>"

#Issue shutdown commands for each of your VMs
$SSH_COMMAND xe vm-shutdown vm="VM_NAME"

sleep 60

$SSH_COMMAND xe pbd-unplug uuid=<UUID_OF_NFS_SR>

#Take the networking interfaces back down for shutdown
/sbin/ifconfig -l | /usr/bin/xargs -n 1 -J % /sbin/ifconfig % down

Don’t forget to mark them executable:

chmod +x nfs-startup.sh
chmod +x nfs-shutdown.sh
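
Before wiring these into FreeNAS, it’s worth a manual dry run to confirm that key-based SSH works and that the SR actually attaches. Something like this, reusing the placeholders from the scripts above:

./nfs-startup.sh
#verify the SR attached; currently-attached should read true
ssh -i <PRIVATE_KEY_LOCATION> root@<ADDRESS_OF_XENSERVER> xe pbd-list uuid=<UUID_COPIED_FROM_ABOVE> params=currently-attached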

Now add the scripts as a startup task and a shutdown task in FreeNAS by going to System / Init/Shutdown Scripts. For startup, select Type: Script, When: postinit and point it to your nfs-startup.sh script. For shutdown, select Type: Script and When: shutdown.

Success! Now whenever your FreeNAS VM is shut down or rebooted, things will be handled in the proper order, which prevents your hypervisor from freezing on a hung NFS mount.


Resizing LVM storage checklist

This is a short note on what to do when you change the size of the physical disk in an LVM setup, such as the default configuration in CentOS 7.

  1. Modify the physical disk size
  2. Modify the partition size
    1. I used fdisk to delete the partition, then re-create with a larger size
    2. Reboot
  3. Extend the physical volume size
    1. pvresize <path to enlarged partition>
  4. Extend the logical volume size
    1. lvextend -l +100%FREE <lv path>
  5. Extend filesystem size
    1. resize2fs <lv path>
    2. #If you're running CentOS 7, the default filesystem is actually XFS, not ext4. In that case:
      xfs_growfs <lv path>
  6. Profit.
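
To make the checklist concrete, here’s the whole dance on a hypothetical CentOS 7 VM where the enlarged disk is /dev/sda, the LVM partition is /dev/sda2, and the root logical volume is /dev/centos/root (your device and volume names will differ):

#after growing the virtual disk and enlarging /dev/sda2 with fdisk + reboot
pvresize /dev/sda2
lvextend -l +100%FREE /dev/centos/root
xfs_growfs /  #xfs_growfs takes the mount point; for ext4 use resize2fs /dev/centos/root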

UPDATE 2024-12-17

I looked this trusty old article up to resize a VM disk but kept running into the error that there was not enough free space, even after doing a pvresize. The culprit is the + sign: without it, -l 100%FREE asks lvextend for an absolute size equal to the VG’s remaining free space, which fails whenever the LV is already larger than that. With the + prefix (-l +100%FREE, or -L +<increased size> to grow by a fixed amount), lvextend adds the space to the LV’s current size, which is what you actually want.

Fix inconsistent mouse cursors in Linux Mint

I love Linux Mint but a frustration of mine is that the mouse cursors are inconsistent. If you change the mouse theme in the theme settings it will change for most windows, but certain windows such as Chrome or Wine revert to the system default mouse cursor.

I’ve finally found a fix courtesy of Ubuntu Forums. The problem lies with the x-cursor-theme being independent of the theme set in Cinnamon. What you have to do is run a command to update the x-cursor-theme.

First, find the name of the mouse cursor you want from a list of your installed themes:

ls /usr/share/icons
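
Note that not everything in that directory is a cursor theme. To list only the themes that actually ship cursors, a quick loop does the trick (a minimal sketch assuming the standard icon path):

for d in /usr/share/icons/*/cursors; do basename "$(dirname "$d")"; done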

Set an environment variable to the theme you want, specifying the theme’s folder name. For example, for the DMZ-Black cursor:

CURSOR=DMZ-Black

Lastly run the command to update your cursor:

gsettings set org.gnome.desktop.interface cursor-theme "$CURSOR" && sudo update-alternatives --set x-cursor-theme /usr/share/icons/$CURSOR/cursor.theme

That’s it! You now have consistent mouse cursors. OCD demons satisfied.

Install Cinnamon on a Wily chromebook chroot

I recently installed Ubuntu Wily Werewolf 15.10 as a chroot on my Chromebook Pixel 2. The process wasn’t as straightforward as I thought it would be, so I will document it here.

First, I followed my own guide on how to set up a crouton chroot. The install would not complete – it was complaining about gnome-session-manager. I had to install the chroot with no GUI. This is the command I used (I specify a specific mirror because it’s much faster):

sudo sh ~/Downloads/crouton -r wily -t touch,extension,keyboard,cli-extra -e -n cinnamon -m http://mirrors.xmission.com/ubuntu

Once the initial chroot was set up, I installed cinnamon:

sudo apt-get install cinnamon-desktop-environment

After all that was installed, I followed my own guide on configuring cinnamon. I placed the following script in /usr/local/bin/startcinnamon on my chromebook (not the chroot)

#!/bin/sh -e
APPLICATION="${0##*/}"

USAGE="$APPLICATION [options]

Wraps enter-chroot to start a Cinnamon session.
By default, it will log into the primary user on the first chroot found.

Options are directly passed to enter-chroot; run enter-chroot to list them."

exec sh -e "`dirname "$0"`/enter-chroot" "$@" xinit

And I placed this file within the chroot, in my home directory:

echo "exec cinnamon-session" > ~/.xinitrc

I then started the Cinnamon session by issuing the command

sudo startcinnamon

I noticed things didn’t look quite right. It turned out I was missing some icons. Fix this by installing them:

sudo apt-get install gnome-icon-theme-full

I then discovered gnome-terminal wouldn’t run – it would simply crash with exit error 8. It turned out this was due to missing locale settings. The fix was found here; it involves installing the gnome language pack and setting your locale.

sudo apt-get install language-pack-gnome-en
sudo update-locale LANG="en_US.UTF-8" LANGUAGE="en_US"

For the changes to take effect, you must exit all chroot instances.

That was it! After that bit of tweaking I have an Ubuntu 15.10 chroot working pretty well on my Chromebook Pixel 2.


Fix openVPN on chromebooks

Edit: I’ve updated the script due to more updates in ChromeOS. Find the update here.


Around October of 2015 an update came out to Google Chromebooks that had an unfortunate side effect for me: openvpn no longer worked. Despite my having created a tunnel device seconds earlier, openvpn complained that a tunnel device didn’t exist.

I finally found a fix for this here. It turns out that the update caused the “shill” process to aggressively kill the tun0 interface. The clever workaround as posted by pippo0312 works flawlessly. It involves creating the openvpn tunnel in Chrome OS itself rather than in a crouton chroot as I had previously done.

The original script takes an argument for the ovpn file to use (the $1 below). Since I only have one VPN profile, I simply replaced $1 with the path to my ovpn file.

Place the script in /usr/local/bin on your chrome install (not a chroot) and mark it executable by issuing chmod +x

#!/bin/sh -e
trap '' 2
# Stop shill and restart it with a nicer attitude towards tun0
sudo stop shill
sudo start shill BLACKLISTED_DEVICES=tun0
# Sleep 10 seconds to allow chromebook to reconnect to the network
sudo sleep 10
sudo openvpn --mktun --dev tun0
sudo sleep 3
# Add google DNS on top of current ones, since openvpn command does not do it
sudo sed -i '1s/^/# new DNS\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n# old DNS\n/' /var/run/shill/resolv.conf
# Launch openvpn, finally...
sudo openvpn --config "$1" --dev tun0
# When ctrl-c is hit remove tun0 and cleanup the DNS
sudo openvpn --rmtun --dev tun0
sudo sed -i '/# new DNS/,/# old DNS/d' /var/run/shill/resolv.conf
trap 2
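
If you keep the $1 argument form and save the script as, say, /usr/local/bin/vpn (a hypothetical name), launching it looks like this; ctrl-c tears the tunnel down and restores DNS:

vpn ~/Downloads/work.ovpn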

Now you can just issue the name of your script and VPN works again! Hooray.

Linux two factor user exception

Two factor authentication is much more secure than simple password authentication. There are times, though, when you will want to create an exception for a specific user. In my case, I wanted to allow a vulnerability scanner to scan my systems. Rather than turn two factor off for the duration of the scan, I set out to learn how to add an exception for a specific user. I accomplished this on CentOS 6 Linux, but it works on any Linux distribution using PAM.

The solution to my problem is the pam_listfile PAM module. pam_listfile allows you to specify a text file containing a list of either users or groups. You then tell PAM what to do with that list (allow or deny) as well as what to do if it can’t read the file for some reason.

Thanks to this site I learned the details of what to do. In my case I want a single username to not be prompted for a second authentication factor; all other users must use two factors. I created the file /etc/scanuser and added the username I wanted to have the exception:

echo "scanuser" > /etc/scanuser

Then I modified /etc/pam.d/password-auth, placing the pam_listfile line after the first authentication factor but before the second.

vi /etc/pam.d/password-auth
#First authentication factor
auth        required    pam_unix.so

#pam_listfile to check username and see if it's allowed with only one factor or must provide a second
auth        sufficient  pam_listfile.so onerr=fail item=user sense=allow file=/etc/scanuser

#Second authentication factor. This is only reached if the user is not on the list provided in pam_listfile.
auth        required   pam_google_authenticator.so

The PAM configuration is as follows:

  • First factor required for everyone (pam_unix)
  • pam_listfile sufficient for anyone who matches the provided list.
  • Second factor required for everyone else (anyone who wasn’t on the pam_listfile list)
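
As an aside, pam_listfile can match on more than usernames. To exempt an entire group instead of a single user, a line like this should work (the list file here, /etc/2fa_exempt_groups, is a made-up name for illustration):

#allow anyone in a group listed in /etc/2fa_exempt_groups to skip the second factor
auth        sufficient  pam_listfile.so onerr=fail item=group sense=allow file=/etc/2fa_exempt_groups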

My vulnerability scanner is now happy and I still have two factor authentication enabled for every other user in the system. Success.


Configure ARC welder to access local folder

ARC Welder is an amazing tool. It’s a chrome extension that allows you to take an android APK file and convert it into a chrome extension. This means you can run android apps on any device with Chrome installed. Sweet!

I came across a need for an arc welder app to access my local filesystem. I wanted to share a file that was on my host system with the android app. After some digging I came across this forum post which details what you need to do.

The solution is to specify a certain option when using ARC Welder:

{"enableExternalDirectory": true}

You do this on the final screen before you click test app / download zip.


Arc welder will prompt you to pick a folder. Once that’s done, you can navigate to the Downloads folder in your app and your linked folder will be there. Pretty slick.

(No, OpenVPN ended up not working, but I wanted to save this knowledge in case I want to try a different app that needs / creates files.)

Configure HDHR Viewer XMLTV in CentOS Linux

Recently I’ve accomplished the herculean task of setting up my parents’ cable connection to stream through Plex using an HD Homerun 3 cablecard network tuner. It works! This is how I got the XMLTV guide working for the HDHR Viewer plugin for Plex on CentOS 7 Linux.

Required reading: http://hdhrviewer.zynine.net/hdhrviewerv2-initial-setup/xmltv-zap2xml/

First, install and configure the required perl and java packages

sudo yum install perl-Compress-Zlib perl-HTML-Parser perl-HTTP-Cookies perl-LWP-Protocol-https perl-JSON gcc cpan java-1.7.0-openjdk-headless 
sudo cpan JSON::XS 
#accept all defaults when prompted

Download the zap2xml perl script (zap2xml.pl) and place it somewhere it can be easily accessed.

Test to make sure the script will run properly:

perl zap2xml.pl -u <zap2it username> -p <zap2it password>

If you get an error like this:

Can't locate Compress/Zlib.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at zap2xml.pl line 26.

It means you haven’t installed the correct perl modules. Double check that you installed them all.
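
A quick way to test whether a specific perl module is present is to load it and do nothing – silence means success, an error means it’s missing:

perl -MCompress::Zlib -e 1
perl -MJSON::XS -e 1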

Once we know it runs properly, we need to configure a cron job to run zap2xml daily (to make sure the guide data is always up to date):

crontab -e
#press i to begin inserting
0 0 * * * perl <full path to where you downloaded zap2xml>/zap2xml.pl -u <zap2it e-mail> -p <zap2it password>
#ESC :wq to save and exit


Next, download and unzip the Channel Guide app. I placed it in the same directory as zap2xml to keep things simple.

Test it out to make sure it works:

java -jar channel-guide-app-0.0.3.jar server app-config.yml

If it starts and doesn’t crash, you know it’s working.

Now we want to configure the channel guide app to run on startup:

sudo vi /etc/systemd/system/channelguide.service
[Unit]
Description=Plex Channel Guide

[Service]
TimeoutStartSec=0
ExecStart=/usr/bin/java -jar <full path to channel-guide dir>/channel-guide-app-0.0.3.jar server <full path to channel-guide dir>/app-config.yml

[Install]
WantedBy=multi-user.target

Make sure this systemd service is enabled:

sudo systemctl enable /etc/systemd/system/channelguide.service
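
Enabling only arranges for the service to start at boot. To start it right away and confirm it’s happy:

sudo systemctl start channelguide.service
sudo systemctl status channelguide.service
sudo journalctl -u channelguide.service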

Lastly, make sure you’ve configured the HDHR Viewer plugin in Plex to use XMLTV and the REST API as per the how-to on their site.

Success!

Change the hostname on a Splunk Indexer

Recently I set about to change the hostname on a Splunk indexer. It should be pretty easy, right? Beware. It can be pretty nasty! Below is my experience.

I started with the basics.

  • hostname command
    hostname <newhostname>
  • Modify /etc/sysconfig/network to make it persistent (CentOS specific)
    sed -i 's/<old hostname>/<new hostname>/g' /etc/sysconfig/network
  • Inform Splunk of the hostname change
    sed -i 's/<old hostname>/<new hostname>/g' $SPLUNK_HOME/etc/system/local/server.conf
  • Restart Splunk
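
Condensed into commands, those basics look like this (a sketch assuming $SPLUNK_HOME is set; on CentOS 7 and newer, hostnamectl set-hostname <new hostname> covers the first two steps):

hostname <new hostname>
sed -i 's/<old hostname>/<new hostname>/g' /etc/sysconfig/network
sed -i 's/<old hostname>/<new hostname>/g' $SPLUNK_HOME/etc/system/local/server.conf
$SPLUNK_HOME/bin/splunk restart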

Sadly, that wasn’t the end of it. I noticed right away Splunk complained of a few things:

TcpOutputProc - Forwarding to indexer group default-autolb-group blocked for 300 seconds.
WARN TcpOutputFd - Connect to 10.0.0.10:9997 failed. Connection refused

Running

netstat -an | grep LISTEN

revealed that the server was not even listening on 9997 like it should be. I found this answer indicating it could be an issue with DNS tripping up on that server. I edited $SPLUNK_HOME/etc/system/local/inputs.conf with the following:

[splunktcp://9997]
connection_host = none

That helped at first, but I noticed a short time later that the server was once again not listening on 9997. Attempting to telnet from the forwarder to the indexer in question revealed the same behavior – it works at first, then quits working. Meanwhile no events were getting stored on that indexer.

I was pulling my hair out trying to figure out what was happening. Finally I discovered this gem on Splunk Answers:

Are you using the deployment server in your environment? Is it possible your forwarders’ outputs.conf got deployed to your indexer?

On the indexer:
./splunk cmd btool outputs list --debug

Sure enough! After running

./splunk cmd btool outputs list --debug

I discovered this little gem of a stanza:

/opt/splunk/etc/apps/APP_Forwarders/default/outputs.conf [tcpout]

That shouldn’t have been there! Digging into my deployment server I discovered that I had a server class with a blacklist – that is, it included all deployment clients except some that I had listed. The blacklist had the old hostname, which meant that when I changed the indexer’s hostname it no longer matched the blacklist and was thus deployed a forwarder’s configuration, causing a forwarding loop. My indexer was forwarding back to the forwarder everything it was getting from the forwarder, causing Splunk to shut down port 9997 on the offending indexer completely.

After getting all that set up I noticed Splunk was only returning searches from the indexers whose hostnames I had not changed. Everything looked good in the distributed search arena – status was OK on all indexers; yet I still was not getting any results from the indexer whose name I had changed, even though it was receiving events! This was turning into a problem. It was creating a blind spot.

Connections great, search status great, deployment status good… I didn’t know what else to do. I finally thought to reload Splunk on the search head that had been talking to the server whose name I changed. Success! Something in the search head must have made it blind to the indexer once its name had changed. Simply restarting Splunk on the search head fixed it.

In short, if you’re crazy enough to change the name of one of your indexers in a distributed Splunk environment, make sure you do the following:

  • Change hostname on the OS
  • Change ServerName in Splunk config files
    • Add connection_host = none in inputs.conf (optional?)
  • Clean up your deployment server
    • Delete old hostname from clients phoning home
    • MAKE SURE the new hostname won’t be sucked up into an unwanted server class
  • Clean up your search head
    • Delete old hostname search peer
    • Add new hostname search peer
    • Restart search head
  • Profit

Configure VMWare View Smartcard in Ubuntu

Recently I’ve been required to use a smart card to log into some servers I manage. Configuring my Linux Mint 17.2 machine to pass my smartcard through to those machines via VMWare View has not been straightforward. This guide will walk you through how to get smartcard redirection to work with VMWare View in Ubuntu 14.04 Trusty Tahr, which Linux Mint 17.2 is based on. Enjoy.

Procedure

  1. Install the latest version of the VMWare View client (distro versions are often quite out of date) from here
    chmod +x VMware-Horizon-Client-3.5.0-2999900.x64.bundle 
    sudo ./VMware-Horizon-Client-3.5.0-2999900.x64.bundle
  2. Install necessary packages for Common Access Card support (thanks to this helpful ubuntu writeup)
    sudo apt-get install libpcsclite1 pcscd pcsc-tools
  3. (re)Start the pcscd daemon
    sudo /etc/init.d/pcscd restart
  4. Ensure your smartcard reader is properly identified by running this command:
    pcsc_scan

    If that command is stuck on “Waiting for the first reader…” then you need to install your smartcard drivers. If it sees your smartcard, skip this next step and proceed to step 6.

  5. Install your smartcard driver. This process is different for each card. For the card reader I have (the Identive SCR3500 A Contact Reader), I was able to obtain the drivers after much difficulty from here. The link to the drivers themselves is here (alternate link). In my case I was able to untar and run the install script, which worked beautifully.
  6. Install 32-bit compatibility libraries (only applicable for 64-bit installations). Thanks to this site for the answer and this one for clarification:
    sudo dpkg --add-architecture i386
    sudo apt-get update
    sudo apt-get install -y libxml2:i386 libssl1.0.0:i386 libxtst6:i386 libudev1:i386 libpcsclite1:i386 libtheora0:i386 libv4l-0:i386 libpulse0:i386
    sudo ln -sf /lib/i386-linux-gnu/libudev.so.1 /lib/i386-linux-gnu/libudev.so.0
    sudo ln -sf /lib/i386-linux-gnu/libssl.so.1.0.0 /lib/i386-linux-gnu/libssl.so.1.0.1
    sudo ln -sf /lib/i386-linux-gnu/libcrypto.so.1.0.0 /lib/i386-linux-gnu/libcrypto.so.1 
    sudo ln -sf /lib/$(arch)-linux-gnu/libudev.so.1 /lib/$(arch)-linux-gnu/libudev.so.0
  7. (re)Start the vmware-USBArbitrator and vmware-view-USBD services
    sudo /etc/init.d/vmware-USBArbitrator start
    sudo /etc/init.d/vmware-view-USBD start

    For some reason after I did all of this the vmware-view binary was nowhere to be found. It was quite strange. I fixed this issue by removing and re-installing the view client:

    sudo ./VMware-Horizon-Client-3.5.0-2999900.x64.bundle -u vmware-horizon-client
    sudo ./VMware-Horizon-Client-3.5.0-2999900.x64.bundle

    After doing this the binary was there as expected.

  8. Create a config file to instruct the view client to redirect your smartcard reader.
    echo 'viewusb.IncludeFamily = "smart-card"' >> /etc/vmware/config
    echo 'viewusb.AllowSmartcard = "true"' >> /etc/vmware/config

    There is no graphical option to pass devices through like there is in the Windows client. I spent more time than I’d like to admit on this step. It turns out the name of the file is important – it has to simply be called “config.” Place this config file in ~/.vmware (it can also be placed in /etc/vmware/config and/or /usr/lib/vmware/config)

  9. Start vmware-view and enjoy your new smartcard capabilities
    vmware-view

Troubleshooting

If it’s not working, make sure that these services are started

  • pcscd
  • vmware-USBArbitrator
  • vmware-view-USBD

One of these services has been known to crash if you attempt to connect while your smartcard is plugged in. The dance to get around this is to unplug your card reader, re-launch the above services, launch vmware-view, connect to your view server, and then, only after you’ve logged in, plug in your card reader.



Update 2/25/2016: Here is the script I use to make my chromebook work beautifully for remoting into work:

#!/bin/sh
sudo /etc/init.d/pcscd restart
sudo /etc/init.d/vmware-USBArbitrator restart
sudo /etc/init.d/vmware-view-USBD restart
setres 1600 1024
vmware-view
setres 2560 1700