Tag Archives: scripting

VGA Passthrough with Threadripper

An unfortunate bug exists for the AMD Threadripper family of CPUs which causes VGA passthrough not to work properly. Fortunately some very clever people have implemented a workaround to allow proper VGA passthrough until a proper Linux kernel patch can be accepted and implemented. See here for the whole story.

Right now my Threadripper 1950X successfully has GPU passthrough thanks to HyenaCheeseHeads’ “java hack” applet. I went this route because I really didn’t want to recompile my ProxMox kernel to get passthrough working. Per the description: “It is a small program that runs as any user with read/write access to sysfs (this small guide assumes “root”). The program monitors any PCIe device that is connected to VFIO-PCI when the program starts, if the device disconnects due to the issues described in this post then the program tries to re-connect the device by rewriting the bridge configuration.” The instructions below are taken from the above Reddit post.

  • Go to https://pastebin.com/iYg3Dngs and hit “Download” (the MD5 sum is supposed to be 91914b021b890d778f4055bcc5f41002)
  • Rename the downloaded file to “ZenBridgeBaconRecovery.java” and put it in a new folder somewhere
  • Go to the folder in a terminal and type “javac ZenBridgeBaconRecovery.java”, this should take a short while and then complete with no errors. You may need to install the Java 8 JDK to get the javac command (use your distribution’s software manager)
  • In the same folder type “sudo java ZenBridgeBaconRecovery”
  • Make sure that the PCIe device that you intend to passthru is listed as monitored with a bridge
  • Now start your VM

In my case (Debian Stretch, ProxMox) I needed to install openjdk-8-jdk-headless:

sudo apt install openjdk-8-jdk-headless
javac ZenBridgeBaconRecovery.java

Next, I have a little script that runs on startup to spawn this as root in a detached tmux session so I don’t have to remember to run it. (If you try to start your VM before running this, it will hose passthrough on your system until you reboot.) Be sure to change the script to point to wherever you compiled ZenBridgeBaconRecovery:

#!/bin/bash
cd /home/nicholas  #change me to suit your needs
sudo java ZenBridgeBaconRecovery

And here is the command I use to run on startup:

tmux new -d '/home/nicholas/passthrough.sh'

Again, be sure to modify the above to point to the path of wherever you saved the above script.
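The post above leaves the boot mechanism open; one simple way to trigger that tmux command at startup (an assumption on my part, not part of the original workaround) is a root @reboot cron entry:

#Hypothetical root crontab entry (sudo crontab -e): launch the passthrough helper at boot
@reboot tmux new -d '/home/nicholas/passthrough.sh'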

So far this works pretty well for me. I hate having to run a java process as sudo, but it’s better than recompiling my kernel.

Update /etc/hosts with current IP for ProxMox

ProxMox virtual environment is a really nice package for managing KVM and container virtualization. One quirk is that it needs an entry in /etc/hosts that points to your system’s actual IP address, not 127.0.0.1 or 127.0.1.1. I wrote a little script to grab the IP of a specified interface and add it to /etc/hosts automatically. You may download it here or see below:

#!/bin/bash
#A simple script to update /etc/hosts with your current IP address for use with ProxMox virtual environment
#Author: Nicholas Jeppson
#Date: 4/25/2018

###Edit these variables to your environment###
INTERFACE="enp4s0" #the interface that has the IP you want to update hosts for
DNS_SUFFIX=""
###End variables section###

#Variables you shouldn't have to change
IP=$(ip addr show $INTERFACE |egrep 'inet '| awk '{print $2}'| cut -d '/' -f1)
HOSTNAME=$(hostname)

#Use sed to add IP to first line in /etc/hosts
sed -i "1s/^/$IP $HOSTNAME $HOSTNAME$DNS_SUFFIX\n/" /etc/hosts

Use grep, awk, and cut to display only your IP address

I needed a quick way to determine my IP address for a script. If you run the ip addr show command it outputs a lot of information I don’t need. I settled on using grep, awk, and cut to get just the information I want:

ip addr show <interface name> |egrep 'inet '| awk '{print $2}'| cut -d '/' -f1
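For illustration, here is what each stage strips away (the interface name and address below are made up):

$ ip addr show enp4s0 | egrep 'inet '
    inet 192.168.1.50/24 brd 192.168.1.255 scope global enp4s0
$ ip addr show enp4s0 | egrep 'inet ' | awk '{print $2}'
192.168.1.50/24
$ ip addr show enp4s0 | egrep 'inet ' | awk '{print $2}' | cut -d '/' -f1
192.168.1.50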

The result is a clean IP address. Beautiful. Thanks to this site for insight into how to use cut.

Simple network folder mount script for Linux

I wrote a simple little network mount script for Linux desktops. I wanted to replicate my Windows box as closely as I could, where a bunch of network drives are mapped at user login. This script relies on having gvfs-mount and the cifs utilities installed (both installed by default in Ubuntu).

#!/bin/bash
#Simple script to mount network drives

#Specify network paths here, one per line
#use forward slash instead of backslash
FOLDER=(
  server1/folder1
  server1/folder2
  server2/folder2/folder3
  server3/
)

#Create a symlink to gvfs mounts in home directory
ln -s $XDG_RUNTIME_DIR/gvfs ~/Drive_Mounts

for mountpoint in "${FOLDER[@]}"
do
  gvfs-mount smb://$mountpoint
done

Mark this script as executable and place it in /usr/local/bin. Then make it a default startup application for all users:

vim /etc/xdg/autostart/drive-mount.desktop
[Desktop Entry]
Name=Mount Network Drives
Type=Application
Exec=/usr/local/bin/drive-mount.sh
Terminal=false

Voila, now you’ve got your samba mount script starting up for every user.
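For reference, the “mark it executable and place it in /usr/local/bin” step looks like this (drive-mount.sh is my assumed filename):

chmod +x drive-mount.sh
sudo cp drive-mount.sh /usr/local/bin/drive-mount.sh

One caveat: on newer distributions gvfs-mount has been deprecated in favor of gio, so you may need to substitute gio mount smb://$mountpoint in the loop.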

Append users to PowerBroker Open RequireMembershipOf

The title isn’t very descriptive. I recently came across a need to script adding users & groups to the “RequireMembershipOf” directive of PowerBroker Open. PowerBroker is a handy tool that really facilitates joining a Linux machine to a Windows domain. It has a lot of configurable options but the one I was interested in was RequireMembershipOf – which as you might expect requires that the person signing into the Linux machine be a member of that list.

The problem with RequireMembershipOf is that, as far as I can tell, it has no append function. It has an add function which frustratingly erases everything that was there before and includes only what you added to the list. I needed a way to append a member to the already existing RequireMembershipOf list. My solution involves bash, sed, and a lot of regex. It boils down to two lines of code:

#take output of show require membership of, remove words multistring & local policy, replace spaces with carat (pbis space representation) and put results into variable (which automatically puts results onto a single line)

add=$(/opt/pbis/bin/config --show RequireMembershipOf | sed 's/\(multistring\)\|\(local policy\)//g' | sed 's/ /^/g')

#run RequireMembershipOf command with previous output and any added users

sudo /opt/pbis/bin/config RequireMembershipOf "$add" "<USER_OR_GROUP_TO_ADD>"

That did the trick.
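Wrapped into a tiny script (my own packaging of the two lines above; the filename is hypothetical), it might look like this:

#!/bin/bash
#pbis-append.sh - append a user or group to the existing RequireMembershipOf list
#Usage: sudo ./pbis-append.sh "DOMAIN\\group_or_user"
add=$(/opt/pbis/bin/config --show RequireMembershipOf | sed 's/\(multistring\)\|\(local policy\)//g' | sed 's/ /^/g')
/opt/pbis/bin/config RequireMembershipOf "$add" "$1"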

Change ZFS-based NFS SR address in Xenserver

I recently acquired a shiny new set of SSDs to host my VMs. The problem: they required a new ZFS array, so I had to figure out how to migrate my VMs to the new array and then instruct Xenserver to use it instead of the old one.

Fortunately, with a bit of research I learned this is fairly painless. Thanks to this discussion on the Citrix forums for pointing me in the right direction. To change the server / IP address of an existing NFS storage repository in Xenserver, you must do the following:

  • Shut down any VMs using the NFS SRs
  • Copy the NFS SRs (the directories containing the .vhd files) to the new NFS server
  • xe pbd-unplug uuid=<uuid of pbd pointing to the NFS SR>
  • xe pbd-destroy uuid=<uuid of pbd pointing to the NFS SR>
  • xe pbd-create host-uuid=<uuid of Xen Host> sr-uuid=<uuid of the NFS SR> device-config-server=<New NFS server name> device-config-serverpath=<NFS Share Name>
  • xe pbd-plug uuid=<uuid of the pbd created above>
  • Reboot the VMs using NFS SRs

In my case since my VMs were on an existing ZFS volume with snapshots I wanted to preserve, I used ZFS send and receive to transfer data from my old array to my SSD array. Bonus: I was able to do this while the VMs were still running to ensure minimal downtime. My ZFS copy procedure was as follows:

  • Create recursive snapshot of my VM dataset
    zfs snapshot -r storage/VMs@migrate
  • Start the initial data transfer (this took quite some time to finish)
    zfs send -R storage/VMs@migrate | zfs recv ssd/VMs
  • Do another incremental snapshot and transfer after the initial huge transfer is complete (this took much less time)
    zfs snapshot -r storage/VMs@migrate2
    zfs send -R -i storage/VMs@migrate storage/VMs@migrate2 | zfs recv ssd/VMs
  • Shutdown all affected VMs and do one more ZFS snapshot & transfer to ensure consistent data:
    zfs snapshot -r storage/VMs@migrate3
    zfs send -R -i storage/VMs@migrate2 storage/VMs@migrate3 | zfs recv ssd/VMs

In the above examples my source dataset was storage/VMs and the destination dataset was ssd/VMs.
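Before cutting over, it is worth confirming every snapshot arrived on the new pool; comparing the snapshot lists on both datasets (a generic sanity check, not part of the original procedure) does the job:

zfs list -t snapshot -r storage/VMs
zfs list -t snapshot -r ssd/VMs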

Once the data was all transferred to the new location it was time to tell Xenserver about it. I had enough VMs that it was worth my time to write a little script to do it. It’s quick and dirty but it did the job. Behold:

#!/bin/bash
#Author: Nicholas Jeppson
#A simple script to change a xenserver NFS storage repository address to a new location
#Modify NFS_SERVER, NFS_PATH and/or NFS_VERSION to match your environment. 
#Run this script on each xenserver host in your pool. Empty output means the transfer was successful.
#This script takes one argument - the name of the SR to be transferred.

SR_NAME="$1"

NFS_SERVER=10.0.0.1
NFS_PATH=/mnt/ssd/VMs/$SR_NAME
NFS_VERSION=4

#Use sed and awk to grab necessary UUIDs
HOST_UUID=$(xe host-list|egrep -B3 `hostname`$ | grep uuid | awk '{print $5}')
PBD_UUID=$(xe pbd-list|grep -A4 -B4 $SR_NAME | grep -B2 $HOST_UUID |grep -w '^uuid ( RO)' | awk '{print $5}')
SR_UUID=$(xe pbd-list|grep -A4 -B4 $SR_NAME | grep -A2 $HOST_UUID | grep 'sr-uuid' | awk '{print $4}')

#Unplug & destroy old NFS location, create new NFS location
xe pbd-unplug uuid=$PBD_UUID
xe pbd-destroy uuid=$PBD_UUID
NEW_PBD_UUID=$(xe pbd-create host-uuid=$HOST_UUID sr-uuid=$SR_UUID device-config-server=$NFS_SERVER device-config-serverpath=$NFS_PATH device-config-nfsversion=$NFS_VERSION)
xe pbd-plug uuid=$NEW_PBD_UUID

Download the script here (right click / save as)

You can run this script in a simple for loop with something like this:

for SR in <list of SR names separated by a space>; do bash <name of script saved from above> $SR; done

If you named the above script nfs-migrate.sh, and you had three SRs to change (blog1, blog2, blog3) then it would be:

for SR in blog1 blog2 blog3; do bash nfs-migrate.sh $SR; done

After I migrated the data and ran that script, my VMs booted up using the new SSD array. Success.

Automatically extract rar files downloaded with transmission

My latest project has been configuring Sonarr to work with Transmission. The challenge was getting these two pieces of software to interface properly. Sonarr would successfully instruct Transmission to download the requested show, but once the download completed it would not import the show into its library. The reason was my torrent tracker – most of its torrents are packaged as multi-part rar files. Sonarr has no mechanism for processing rar files, so I had to get creative.

The solution was to write a simple script and have Transmission execute it after finishing the download. The script uses the find command to look for rar files in the directory Transmission created for that particular torrent. If any rar files are found, it extracts them into that same directory. This is important because Sonarr will only look in the torrent download directory for the completed video file.

After some tweaking I got it working pretty well. Here is the code I used (thanks to this site for the direction):

#!/bin/bash
#A simple script to extract a rar file inside a directory downloaded by Transmission.
#It uses environment variables passed by the transmission client to find and extract any rar files from a downloaded torrent into the folder they were found in.
#Quoting the path handles torrent names that contain spaces
find "$TR_TORRENT_DIR/$TR_TORRENT_NAME" -name "*.rar" -execdir unrar e -o- "{}" \;

Save the above script into a file your transmission client can read and make it executable. Lastly, configure transmission to run this script on torrent completion by modifying your settings.json file (mine was located at /var/lib/transmission/.config/transmission-daemon/settings.json). Modify the following variables (be sure to stop your transmission client before making any changes):

"script-torrent-done-enabled": true, 
"script-torrent-done-filename": "/path/to/where/you/saved/the/script",

That’s it! Sonarr will now properly import shows that were downloaded via multipart rar torrent.
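You can also test the script outside of Transmission by supplying the two environment variables yourself (the paths and script name below are hypothetical):

TR_TORRENT_DIR="/var/lib/transmission/Downloads" TR_TORRENT_NAME="Some.Show.S01E01" bash /usr/local/bin/extract-rar.sh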

Find top 10 requests returning 404 errors

I had a website where I was curious what the top 10 URLs returning 404s were, along with how many hits each of those URLs got. This was after a huge site redesign, so I wanted to know which old links were still being accessed.

Getting a report on this can be accomplished with nothing more than the Linux command line and the log file you’re interested in. It involves combining grep, sed, awk, sort, uniq, and head commands. I enjoyed how well these tools work together so I thought I’d share. Thanks to this site for giving me the inspiration to do this.

This is the command I used to get the information I wanted:

grep '404' _log_file_ | sed 's/, /,/g' | awk '{print $7}' | sort | uniq -c | sort -n -r | head -10

Here is a rundown of each command and why it was used:

  • grep ‘404’ _log_file_ (replace with filename of your apache, tomcat, or varnish access log.) grep reads a file and returns all instances of what you want, in this case I’m looking for the number 404 (page not found HTTP error)
  • sed ‘s/, /,/g’ Sed will edit a stream of text in any way that you specify. The command I gave it (s/, /,/g) tells sed to look for instances of commas followed by spaces and replace them with just commas (eliminating the space after any comma it sees.) This was necessary in my case because sometimes the source IP address field has multiple IP addresses and it messed up the results. This may be optional if your server isn’t sitting behind any type of reverse proxy.
  • awk '{print $7}' Awk has a lot of functions similar to sed – it allows you to do all sorts of things to text. In this case we’re telling awk to display only the 7th column of information (the URL requested in apache and varnish logs is the 7th column)
  • sort This command (absent of arguments) sorts our results alphabetically, which is necessary for the next command to work properly.
  • uniq -c This command eliminates any duplicates in the results. The -c argument adds a number indicating how many times that unique string was found.
  • sort -n -r Sorts the results in reverse numerical order. The -n argument sorts numerically so that 2 follows 1 instead of 10. The -r argument reverses the order so the highest number is at the top of the results instead of the lowest.
  • head -10 outputs the top 10 results. This command is optional if you want to see all the results instead of the top 10. A similar command is tail – if you want to see the last results instead.

This was my output – exactly what I was looking for. Perfect.

2186 http://<sitename>/source/quicken/index.ini
2171 http://<sitename>/img/_sig.png
1947 http://<sitename>/img/email/email1.aspx
1133 http://<sitename>/source/quicken/index.ini
830 http://<sitename>/img/_sig1.png
709 https://<sitename>/img/email/email1.aspx
370 http://<sitename>/apple-touch-icon.png
204 http://<sitename>/apple-touch-icon-precomposed.png
193 http://<sitename>/About-/Plan.aspx
191 http://<sitename>/Contact-Us.aspx
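One caveat (my observation, not from the original post): grep '404' matches the string anywhere in the line, so a URL or byte count containing 404 would also be counted. If your log uses the common/combined format, where the status code is the 9th field, a stricter variant is:

awk '$9 == 404 {print $7}' _log_file_ | sort | uniq -c | sort -n -r | head -10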

Script to change WordPress URL

I wrote up a little script to run when you migrate a WordPress installation from one host to another (hostname change). Once this script is run you can access the site via the hostname of the server it’s running on, then change the URL to whatever you like. This comes in handy when you migrate from one internal host to another, then specify an external hostname once things look the way you like.

Change SQL_COMMAND to reflect the name of the wordpress database on the destination server. Thanks to this site for the guidance in writing the script.

#!/bin/bash

#A simple script to update the wordpress database to reflect a change in hostname
#Run this after changing the hostname / IP of a wordpress server

#Prompt for mysql root password
read -s -p "Enter mysql root password: " SQL_PASSWORD

SQL_COMMAND="mysql -u root -p$SQL_PASSWORD wordpress -e"

#Determine what the old URL was and save to variable
OLD_URL=$(mysql -u root -p$SQL_PASSWORD wordpress -e 'select option_value from wp_options where option_id = 1;' | grep http)
#Get current hostname
HOST=$(hostname)

#SQL statements to update database to new hostname
$SQL_COMMAND "UPDATE wp_options SET option_value = replace(option_value, '$OLD_URL', 'http://$HOST') WHERE option_name = 'home' OR option_name = 'siteurl';"
$SQL_COMMAND "UPDATE wp_posts SET guid = replace(guid, '$OLD_URL','http://$HOST');"
$SQL_COMMAND "UPDATE wp_posts SET post_content = replace(post_content, '$OLD_URL', 'http://$HOST');"
$SQL_COMMAND "UPDATE wp_postmeta SET meta_value = replace(meta_value,'$OLD_URL','http://$HOST');"

Mountpoint check script

I wrote a simple script to check whether a specific mountpoint on a Linux system is still live. It does this by trying to read a specific file on the share; if it cannot, it writes the event to a log, unmounts, and then re-mounts the folder. The need arose for instances where a file server has been rebooted and the Linux system loses the connection to the share. This way it will automatically re-mount.

Modify the variables section as needed and then have a cron job run the script as root at whatever interval you want. Enjoy.

#!/bin/bash
#Script to monitor mount directories to ensure they are properly mounted
#Place a file with the word "mounted" in it inside all mounted directories
#The script will try to read the file and attempt to unmount and remount the folder if it fails to read the file
#Updated 8/30/2016 by Nicholas Jeppson

#---------Variable section------------#

#Place mount folder locations here, separated by space 
#Paths containing spaces need to have quotes around them
LOCATIONS=(/home/njeppson /home/njeppson/Desktop)

#Name of file to try to read
TEST_FILENAME="mountcheck"

#---------End Variable Section--------#
#-----Do not edit below this line-----#

#Read file, if contents don't contain "mounted" then attempt to unmount and re-mount the folder, output attempt to /var/log/mountcheck

for FOLDER in "${LOCATIONS[@]}"; do
  if [[ $(cat "$FOLDER/$TEST_FILENAME") != "mounted" ]]; then
    echo "$(date "+%b %d %T") $(hostname) $FOLDER Not mounted, remounting." >> /var/log/mountcheck
    umount "$FOLDER"
    mount "$FOLDER"
  fi
done
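To finish the setup, seed the marker file in each monitored folder and add a root cron entry (the five-minute interval and script path below are my assumptions):

#Create the marker file the script looks for
echo "mounted" > /home/njeppson/mountcheck
#Root crontab entry (sudo crontab -e): check every 5 minutes
*/5 * * * * /bin/bash /usr/local/bin/mountcheck.sh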