Tag Archives: BASH

mountpoint check script

I have a few NFS mounts that I want to be working at all times. If there is a power outage, sometimes NFS clients come up before the NFS server does, and thus the mounts are not there. I wrote a quick little bash script to fix this utilizing the mountpoint command.

Behold (change the mountpoint(s) to the ones you want to monitor):

#!/bin/bash
#Simple bash script to check mount points and re-mount them if they're not mounted
#Authored by Nick Jeppson 8/11/2019

### Variables ###
# Changes these to suit your needs

MOUNTPOINTS=(/mnt/1 /mnt/2 /mnt/3)  #space separated list of mountpoints to monitor

### End Variables ###


for mount in "${MOUNTPOINTS[@]}"
do
    if  ! mountpoint -q "$mount"
    then
        echo "$mount is not mounted, attempting to mount."
        mount "$mount"
    fi
    #otherwise do nothing
done
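If you save this as /mountcheck.sh (the path the cron entry below assumes), make it executable and give it a test run by hand first; mounting NFS shares generally requires root:

chmod +x /mountcheck.sh
sudo /mountcheck.sh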

I have this set as a cronjob running every 5 minutes

*/5 * * * * /mountcheck.sh

Now the system will periodically try to mount the specified mountpoints if they aren’t already mounted.

Automate USG config deploy with Ubiquiti API in Bash

I have a new Ubiquiti Unifi Security Gateway Pro 4, which is pretty neat; however, the Unifi web interface is pretty limited. Most advanced firewall functions must be configured outside of the GUI. You must create a .json file with the configuration you need, copy that file to the Unifi controller, and then force a provision of the gateway to get it to pick up the new config.
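For reference, the manual process looks something like this (the controller hostname and paths are the same placeholders used in the script below):

scp your_config_file.json root@unifi_server_name:/usr/lib/unifi/data/sites/default/config.gateway.json
# then force a provision of the gateway from the Unifi controller's web UI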

I wanted a way to automate this process, but very frustratingly Ubiquiti hasn’t documented their Unifi Controller API. I had to resort to reverse engineering their API by using my browser’s developer console to figure out which API calls were needed to do what I wanted. I then took the API functions from https://dl.ui.com/unifi/5.10.25/unifi_sh_api (the unifi_sh_api script bundled with the current Unifi controller software) and embedded them into a bash script. Thanks to this forum post for the information on how to do this.

This bash script copies the specified config file to the Unifi controller via SCP, then uses curl to issue the API call that tells the controller to force a provision to the device with the supplied MAC address.

#!/bin/bash
# Written by Nick Jeppson 08/01/2019
# Inspired by posts made from ubiquiti forums: https://community.ui.com/questions/API/82a3a9c7-60da-4ec2-a4d1-cac68e86b53c
# API interface functions taken from unifi_sh_api shipped with controller version 5.10.25, https://dl.ui.com/unifi/5.10.25/unifi_sh_api
#
# This bash script copies the specified config file to the Unifi controller via SCP
# It then uses curl to issue an API call to tell the controller to force a provision to the device with the supplied mac address. 

#### BEGIN VARIABLES ####
#Fill out to match your environment

gateway_mac="12:34:56:78:90:ab" #MAC address of the gateway you wish to manage
config_file="your_config_file.json"   #Path to config file
unifi_server="unifi_server_name"         #Name/IP of unifi controller server
unifi_gateway_path="/usr/lib/unifi/data/sites/default/config.gateway.json"    #Path to config.gateway.json on the controller
ssh_user="root"                 #User to SSH to controller as
username="unifi_admin_username"             #Unifi username
password="unifi_admin_password" #Unifi password
baseurl="https://unifi_server_name:8443" #Unifi URL
site="default"                  #Unifi site the gateway resides in

#### END VARIABLES ####

#Copy updated config to controller
scp $config_file $ssh_user@$unifi_server:$unifi_gateway_path

#API interface functions
cookie=$(mktemp)
curl_cmd="curl --tlsv1 --silent --cookie ${cookie} --cookie-jar ${cookie} --insecure "
unifi_login() {
    # authenticate against unifi controller
    ${curl_cmd} --data "{\"username\":\"$username\", \"password\":\"$password\"}" $baseurl/api/login
}

unifi_logout() {
    # logout
    ${curl_cmd} $baseurl/logout
}

unifi_api() {
    if [ $# -lt 1 ] ; then
        echo "Usage: $0 <uri> [json]"
        echo "    uri example /stat/sta "
        return
    fi
    uri=$1
    shift
    [ "${uri:0:1}" != "/" ] && uri="/$uri"
    json="$@"
    [ "$json" = "" ] && json="{}"
    ${curl_cmd} --data "$json" $baseurl/api/s/$site$uri
}

#Trigger a provision
unifi_login 
unifi_api /cmd/devmgr "{\"mac\": \"$gateway_mac\", \"cmd\": \"force-provision\"}"
unifi_logout
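If you save the script as, say, provision_usg.sh (the name is arbitrary), running it is a single command; it assumes key-based SSH authentication to the controller so the scp step doesn't prompt for a password:

chmod +x provision_usg.sh
./provision_usg.sh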

No more manually clicking provision after manually editing the config file on the controller!

Batch crop images with imagemagick

My scanner adds annoying borders to everything it scans. I wanted to find a way to fix this with the command line. Enter ImageMagick (thanks to this site for the help).

I found one picture and selected the area I wanted to crop it from. I used IrfanView to tell me the dimensions of the desired crop, then passed that info onto the command line. I used a bash for loop to get the job done on the entire directory:

for file in *.jpg; do convert "$file" -crop 4907x6561+53+75 "$file"; done

It worked beautifully.
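If you'd rather not overwrite the originals in place, a small variation on the same loop writes the cropped copies to a separate folder instead (the output/ subdirectory here is just an example):

mkdir -p output
for file in *.jpg; do convert "$file" -crop 4907x6561+53+75 "output/$file"; done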

Find video files in bash

I wanted a quick way to search my files for video types. I found here a quick snippet on how to do so. I augmented it to make the match case insensitive and sort the output. Here is the result:

find FULL_FOLDER_PATH -type f | grep -iE "\.webm$|\.flv$|\.vob$|\.ogg$|\.ogv$|\.drc$|\.gifv$|\.mng$|\.avi$|\.mov$|\.qt$|\.wmv$|\.yuv$|\.rm$|\.rmvb$|\.asf$|\.amv$|\.mp4$|\.m4v$|\.mpg$|\.mpeg$|\.mkv$|\.svi$|\.3gp$|\.f4v$" | sort

Mount encfs folder on startup with systemd

A quick note on how to encrypt a folder with encfs and then mount it on boot via a systemd startup script. In my case the folder is located on a network drive and I wanted it to happen whether I was logged in or not.

Create encfs folder:

encfs <path to encrypted folder> <path to mount decrypted folder>

Follow the prompts to create the folder and set a password.

Next, create a file which will contain your decryption password:

echo "YOUR_PASSWORD" > /home/user/super_secret_password_location
chmod 700 /home/user/super_secret_password_location

Create a simple script to be called by systemd on startup, using cat to pass your password to encfs:

#!/bin/bash
cat /home/user/super_secret_password_location | encfs -S path_to_encrypted_folder path_to_mount_decrypted_folder
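Before wiring this into systemd, it's worth running the script once by hand to confirm the password file decrypts and mounts the folder (the script path below is wherever you saved the one-liner above):

bash /path/to/your/mount_script.sh
ls path_to_mount_decrypted_folder    #your decrypted files should be visible here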

Finally create a systemd unit to run your script on startup:

vim /etc/systemd/system/mount-encrypted.service
[Unit] 
Description=Mount encrypted folder 
After=network.target 

[Service] 
User=<YOUR USER> 
Type=oneshot 
ExecStartPre=/bin/sleep 20 
ExecStart=PATH_TO_SCRIPT
TimeoutStopSec=30 
KillMode=process 

[Install] 
WantedBy=multi-user.target

Then enable the unit:

sudo systemctl daemon-reload
sudo systemctl enable mount-encrypted.service
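To test the unit without rebooting, start it manually and check that the encfs mount shows up (remember the ExecStartPre line sleeps 20 seconds before mounting):

sudo systemctl start mount-encrypted.service
systemctl status mount-encrypted.service
mount | grep encfs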

CPU Pinning in Proxmox

Proxmox uses qemu, which doesn’t implement CPU pinning by itself. If you want to limit a guest VM’s operations to specific CPU cores on the host you need to use taskset. It was a bit confusing to figure out, but fortunately I found this gist by ayufan which handles it beautifully.

Save the following into taskset.sh and edit VMID to the ID of the VM you wish to pin CPUs to. Make sure you have the “expect” package installed.

#!/bin/bash

set -eo pipefail

VMID=200

cpu_tasks() {
	expect <<EOF | sed -n 's/^.* CPU .*thread_id=\(.*\)$/\1/p' | tr -d '\r' || true
spawn qm monitor $VMID
expect ">"
send "info cpus\r"
expect ">"
EOF
}

VCPUS=($(cpu_tasks))
VCPU_COUNT="${#VCPUS[@]}"

if [[ $VCPU_COUNT -eq 0 ]]; then
	echo "* No VCPUS for VM$VMID"
	exit 1
fi

echo "* Detected ${#VCPUS[@]} assigned to VM$VMID..."
echo "* Resetting cpu shield..."

for CPU_INDEX in "${!VCPUS[@]}"
do
	CPU_TASK="${VCPUS[$CPU_INDEX]}"
	echo "* Assigning $CPU_INDEX to $CPU_TASK..."
	taskset -pc "$CPU_INDEX" "$CPU_TASK"
done

Update 9/29/18: Fixed missing done at the end. Also, if you want to offset which cores this script uses, you can do so by modifying the taskset line to do a bit of math on $CPU_INDEX, like so:

        taskset -pc "$[CPU_INDEX+16]"

The above adds 16 to each CPU index, so instead of starting on thread 0 it starts on thread 16.
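Usage is just a matter of running the script on the Proxmox host while the VM is up; you can then spot-check one of the thread IDs it printed to confirm the affinity stuck (the thread ID below is only an example):

chmod +x taskset.sh
./taskset.sh
taskset -pc 12345    #shows the current CPU affinity of that thread; replace 12345 with a thread ID the script printed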

Backup and restore docker container configurations

I came across a need to start afresh with my docker setup. I didn’t want to re-create all the port and volume mappings for my various containers. Fortunately I found a way around this by using docker-autocompose to create .yml files with all my settings and docker-compose to restore them to my new docker host.

Backup

Docker-autocompose source: https://github.com/Red5d/docker-autocompose

git clone https://github.com/Red5d/docker-autocompose.git
cd docker-autocompose
docker build -t red5d/docker-autocompose .

With docker-autocompose created you can then use it to create .yml files for each of your running containers by utilizing a simple BASH for loop:

for image in $(docker ps --format '{{.Names}}'); do docker run -v /var/run/docker.sock:/var/run/docker.sock red5d/docker-autocompose $image > $image.yml; done

Simple.

Restore

To restore, install and use docker-compose:

sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Next we use another simple for loop to go through each .yml file and import them into Docker. The sed piece escapes any $ characters in the .yml files so they will import properly.

for file in *.yml; do sed -i 's/\$/\$\$/g' "$file";
docker-compose -f "$file" up --force-recreate -d; done

You can safely ignore the warnings about orphans.
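Once the loop finishes, you can confirm the containers came back up with the same names:

docker ps --format '{{.Names}}'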

That’s it!

Troubleshooting

ERROR: Invalid interpolation format for “environment” option in service “Transmission”: “PS1=$(whoami)@$(hostname):$(pwd)$ “

This is due to .yml files which contain unescaped $ characters.

Escape any $ with another $ using sed

sed 's/\$/\$\$/g' -i <filename>.yml

ERROR: The Compose file ‘./MariaDB.yml’ is invalid because:
MariaDB.user contains an invalid type, it should be a string

My MariaDB docker .yml file had a user: value that was a number, which docker-compose interpreted as a number instead of a string. I had to modify that particular .yml file and add quotes around the user: value.
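The change itself is tiny; the value just needs quotes so the YAML parser treats it as a string (the 1000 below is only an example value, not what was in my file):

# before - docker-compose sees a number
user: 1000
# after - docker-compose sees a string
user: "1000"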

Accept multiple SSH RSA keys with ssh-keyscan

I came across a new machine that needed to connect to many SSH hosts via ansible. I had a problem where ansible was prompting me for each host to ask if I wanted to accept its RSA key. As I had dozens of hosts, I didn’t want to type yes for every single one; furthermore, the yes command didn’t appear to work. I needed a way to automatically accept all SSH RSA keys from a list of server names. I know you can disable RSA key checking, but I didn’t want to do that.

I eventually found this site, which suggested a small for loop that did the trick beautifully. I modified it to suit my needs.

This little two-liner takes a file (in my case, my ansible hosts file), runs ssh-keyscan against each host in it, and adds the results to the ~/.ssh/known_hosts file. The end result is an automated way to accept many SSH keys.

SERVER_LIST=$(cat /etc/ansible/hosts)
for host in $SERVER_LIST; do ssh-keyscan -H $host >> ~/.ssh/known_hosts; done

Update /etc/hosts with current IP for ProxMox

ProxMox virtual environment is a really nice package for managing KVM and container virtualization. One quirk about it is you need to have an entry in /etc/hosts that points to your system’s IP address, not 127.0.0.1 or 127.0.1.1. I wrote a little script to grab the IP of your specified interface and add it to /etc/hosts automatically for you. You may download it here or see below:

#!/bin/bash
#A simple script to update /etc/hosts with your current IP address for use with ProxMox virtual environment
#Author: Nicholas Jeppson
#Date: 4/25/2018

###Edit these variables to your environment###
INTERFACE="enp4s0" #the interface that has the IP you want to update hosts for
DNS_SUFFIX=""
###End variables section###

#Variables you shouldn't have to change
IP=$(ip addr show $INTERFACE |egrep 'inet '| awk '{print $2}'| cut -d '/' -f1)
HOSTNAME=$(hostname)

#Use sed to add IP to first line in /etc/hosts
sed -i "1s/^/$IP $HOSTNAME $HOSTNAME$DNS_SUFFIX\n/" /etc/hosts