
Mount encfs folder on startup with systemd

A quick note on how to encrypt a folder with encfs and then mount it on boot via a systemd startup script. In my case the folder is located on a network drive and I wanted the mount to happen whether or not I was logged in.

Create encfs folder:

encfs <path to encrypted folder> <path to mount decrypted folder>

Follow the prompts to create the folder and set a password.

Next, create a file to hold your decryption password:

echo "YOUR_PASSWORD" > /home/user/super_secret_password_location
chmod 700 /home/user/super_secret_password_location

Next, create a simple script for systemd to call on startup. It uses cat to pass your password over to encfs. Use absolute paths here, since systemd won't run the script from your home directory:

#!/bin/bash
cat /home/user/super_secret_password_location | encfs -S /path/to/encrypted_folder /path/to/mount_decrypted_folder

Finally create a systemd unit to run your script on startup:

vim /etc/systemd/system/mount-encrypted.service
[Unit] 
Description=Mount encrypted folder 
After=network.target 

[Service] 
User=<YOUR USER> 
Type=oneshot 
ExecStartPre=/bin/sleep 20 
ExecStart=PATH_TO_SCRIPT
TimeoutStopSec=30 
KillMode=process 

[Install] 
WantedBy=multi-user.target

Then enable the unit:

sudo systemctl daemon-reload
sudo systemctl enable mount-encrypted.service
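
You can test the unit immediately without rebooting by starting it manually and checking for the mount (the unit name matches the file created above):

sudo systemctl start mount-encrypted.service
systemctl status mount-encrypted.service
mount | grep encfs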

CPU Pinning in Proxmox

Proxmox uses QEMU, which doesn't implement CPU pinning by itself. If you want to limit a guest VM's operations to specific CPU cores on the host you need to use taskset. It was a bit confusing to figure out, but fortunately I found this gist by ayufan which handles it beautifully.

Save the following into taskset.sh and edit VMID to the ID of the VM you wish to pin CPUs to. Make sure you have the “expect” package installed.

#!/bin/bash

set -eo pipefail

VMID=200

cpu_tasks() {
	expect <<EOF | sed -n 's/^.* CPU .*thread_id=\(.*\)$/\1/p' | tr -d '\r' || true
spawn qm monitor $VMID
expect ">"
send "info cpus\r"
expect ">"
EOF
}

VCPUS=($(cpu_tasks))
VCPU_COUNT="${#VCPUS[@]}"

if [[ $VCPU_COUNT -eq 0 ]]; then
	echo "* No VCPUS for VM$VMID"
	exit 1
fi

echo "* Detected ${#VCPUS[@]} assigned to VM$VMID..."
echo "* Resetting cpu shield..."

for CPU_INDEX in "${!VCPUS[@]}"
do
	CPU_TASK="${VCPUS[$CPU_INDEX]}"
	echo "* Assigning $CPU_INDEX to $CPU_TASK..."
	taskset -pc "$CPU_INDEX" "$CPU_TASK"
done
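Make the script executable and run it after the VM has booted. The pinning only lasts while those vCPU threads exist, so re-run the script (or schedule it) whenever the VM restarts:

chmod +x taskset.sh
./taskset.sh

As written, the loop pins vCPU 0 to host core 0, vCPU 1 to core 1, and so on; adjust the taskset line if you want a different core mapping.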

Backup and restore docker container configurations

I came across a need to start afresh with my docker setup. I didn’t want to re-create all the port and volume mappings for my various containers. Fortunately I found a way around this by using docker-autocompose to create .yml files with all my settings and docker-compose to restore them to my new docker host.

Backup

Docker-autocompose source: https://github.com/Red5d/docker-autocompose

git clone https://github.com/Red5d/docker-autocompose.git
cd docker-autocompose
docker build -t red5d/docker-autocompose .

With docker-autocompose created you can then use it to create .yml files for each of your running containers by utilizing a simple BASH for loop:

for container in $(docker ps --format '{{.Names}}'); do docker run --rm -v /var/run/docker.sock:/var/run/docker.sock red5d/docker-autocompose "$container" > "$container.yml"; done

Simple.

Restore

To restore, install and use docker-compose:

sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Next we use another simple for loop to go through each .yml file and import them into Docker. The sed piece escapes any $ characters in the .yml files so they will import properly.

for file in *.yml; do sed -i 's/\$/\$\$/g' "$file"; docker-compose -f "$file" up --force-recreate -d; done

You can safely ignore the warnings about orphans.
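
To confirm the containers came back up with their port mappings intact, list them:

docker ps --format 'table {{.Names}}\t{{.Ports}}\t{{.Status}}'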

That’s it!

Troubleshooting

ERROR: Invalid interpolation format for "environment" option in service "Transmission": "PS1=$(whoami)@$(hostname):$(pwd)$ "

This is due to .yml files which contain unescaped $ characters.

Escape any $ with another $ using sed

sed 's/\$/\$\$/g' -i <filename>.yml

ERROR: The Compose file './MariaDB.yml' is invalid because:
MariaDB.user contains an invalid type, it should be a string

My MariaDB docker .yml file had a user: value that was a bare number, which docker-compose interpreted as a number instead of a string. I had to modify that particular .yml file and add quotes around the value (for example, changing user: 1000 to user: "1000").

Accept multiple SSH RSA keys with ssh-keyscan

I came across a new machine that needed to connect to many SSH hosts via ansible. I had a problem where ansible was prompting me for each host, asking if I wanted to accept the RSA key. As I had dozens of hosts I didn't want to type yes for every single one; furthermore, the yes command didn't appear to work. I needed a way to automatically accept all SSH RSA keys from a list of server names. I know you can disable RSA key checking but I didn't want to do that.

I eventually found this site which suggested a small for loop, which did the trick beautifully. I modified it to suit my needs.

This little two-liner takes a file (in my case, my ansible hosts file), runs ssh-keyscan against each host in it, and appends the results to the ~/.ssh/known_hosts file. The end result is an automated way to accept many SSH keys.

SERVER_LIST=$(cat /etc/ansible/hosts)
for host in $SERVER_LIST; do ssh-keyscan -H "$host" >> ~/.ssh/known_hosts; done
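
Once the loop has run, you can confirm a given host made it into known_hosts by querying the file with ssh-keygen (the hostname below is an example):

ssh-keygen -F server01.example.com -f ~/.ssh/known_hosts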

Update /etc/hosts with current IP for ProxMox

Proxmox Virtual Environment is a really nice package for managing KVM and container virtualization. One quirk is that it needs an entry in /etc/hosts pointing to your system's real IP address, not 127.0.0.1 or 127.0.1.1. I wrote a little script to grab the IP of your specified interface and add it to /etc/hosts automatically for you. You may download it here or see below:

#!/bin/bash
#A simple script to update /etc/hosts with your current IP address for use with ProxMox virtual environment
#Author: Nicholas Jeppson
#Date: 4/25/2018

###Edit these variables to your environment###
INTERFACE="enp4s0" #the interface that has the IP you want to update hosts for
DNS_SUFFIX=""
###End variables section###

#Variables you shouldn't have to change
IP=$(ip addr show $INTERFACE |egrep 'inet '| awk '{print $2}'| cut -d '/' -f1)
HOSTNAME=$(hostname)

#Use sed to add IP to first line in /etc/hosts
sed -i "1s/^/$IP $HOSTNAME $HOSTNAME$DNS_SUFFIX\n/" /etc/hosts

Rename files to reflect modified time while preserving extension

I'm undergoing a big project of scanning old family scrapbooks. I have two scanners involved, each with its own naming scheme, dumping files of various types (pdf, jpg) into the same folder. I need an accurate chronological view of when things were scanned (nothing was scanned at the exact same moment.) At this point the only way to get this information is to sort the files by modified time, but practically no program other than a file manager looks at files this way. I needed a way to rename all the files to reflect their modified time.

I found a couple different ways to do this, but settled on a bash for loop utilizing the stat, sed, and mv commands.

Challenge 1

Capture the extension of the files.

Using sed:

ls <filename> | sed 's/.*\(\..*\)$/\1/'

Challenge 2

Obtain the modified time of your file in an acceptable format. I do this using the stat command.

stat -c %y <filename>

%y is the best option here – it leaves no room for ambiguity. You could also choose %Y so the filenames aren't so large and don't contain colons (some systems struggle with colons in filenames.) The downside to this is you only get to-the-second precision. In my case I had a few files that shared the same epoch timestamp, which caused problems. More on how to format this can be found here and here.
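
To see the difference, here is what both formats look like for the same file (the filename and timestamps are examples):

stat -c %y scan.jpg    #2018-04-25 14:05:32.123456789 -0700
stat -c %Y scan.jpg    #1524690332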

Stringing it all together

I wrapped it all up in a bash for loop with appropriate variables. This was my final command, which I ran inside the directory I wanted to modify:

for file in *; do name=$(stat -c %y "$file"); ext=$(echo "$file" | sed 's/.*\(\..*\)$/\1/'); mv -n "$file" "$name$ext"; done

The for loop goes through each file one at a time and assigns it to the $file variable.  I then create the name variable which uses the stat command to obtain a precise date modified timestamp of the file. The ext variable is derived from the filename but only keeps the extension (using sed.) The last step uses mv -n (no clobber mode – don’t overwrite anything) to rename the original file to its date modified timestamp. The result: a directory where each file is named precisely when it was modified – a true chronology of what was scanned irrespective of file extension or which scanner created the file. Success.

Append users to PowerBroker Open RequireMembershipOf

The title isn’t very descriptive. I recently came across a need to script adding users & groups to the “RequireMembershipOf” directive of PowerBroker Open. PowerBroker is a handy tool that really facilitates joining a Linux machine to a Windows domain. It has a lot of configurable options but the one I was interested in was RequireMembershipOf – which as you might expect requires that the person signing into the Linux machine be a member of that list.

The problem with RequireMembershipOf is, as far as I can tell, it has no append function. It has an add function which frustratingly erases everything that was there before and includes only what you added onto the list. I needed a way to append a member to the already existing RequireMembershipOf list. My solution involves the usage of bash, sed, and a lot of regex. It boils down to two lines of code:

#Take the output of --show RequireMembershipOf, remove the words multistring & local policy, replace spaces with a caret (the pbis space representation) and put the results into a variable (which automatically puts them onto a single line)

add=$(/opt/pbis/bin/config --show RequireMembershipOf | sed 's/\(multistring\)\|\(local policy\)//g' | sed 's/ /^/g')

#run RequireMembershipOf command with previous output and any added users

sudo /opt/pbis/bin/config RequireMembershipOf "$add" "<USER_OR_GROUP_TO_ADD>"

That did the trick.
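
If you need to do this regularly, the same two lines wrap neatly into a small helper script. This is just a sketch of my own (the script name and argument handling are not part of PowerBroker), taking the user or group to add as its first argument:

#!/bin/bash
#Append a user or group to the PowerBroker RequireMembershipOf list
#Usage: ./append-membership.sh "DOMAIN\user_or_group"
add=$(/opt/pbis/bin/config --show RequireMembershipOf | sed 's/\(multistring\)\|\(local policy\)//g' | sed 's/ /^/g')
sudo /opt/pbis/bin/config RequireMembershipOf "$add" "$1"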

Automatically extract rar files downloaded with transmission

My recent project has been to configure sonarr to work with transmission. The challenge was getting these two pieces of software to properly interface with each other. Sonarr would successfully instruct transmission to download the requested show, but once the download completed it would not import the show into its library. The reason was my torrent tracker – most of its torrents are packaged as multi-part rar files. Sonarr has no mechanism for processing rar files so I had to get creative.

The solution was to write a simple script and have transmission execute it after finishing the download. The script uses the find command to look for rar files in the directory transmission created for that particular torrent. If any rar files are found it will extract them into that same directory. This was important because sonarr will only look in the torrent download directory for the completed video file.

After some tweaking I got it to work pretty well for me. Here is the code I used (thanks to this site for the direction.)

#!/bin/bash
#A simple script to extract a rar file inside a directory downloaded by Transmission.
#It uses environment variables passed by the transmission client to find and extract any rar files from a downloaded torrent into the folder they were found in.
find "$TR_TORRENT_DIR/$TR_TORRENT_NAME" -name "*.rar" -execdir unrar e -o- "{}" \;

Save the above script into a file your transmission client can read and make it executable. Lastly, configure transmission to run this script on torrent completion by modifying your settings.json file (mine was located at /var/lib/transmission/.config/transmission-daemon/settings.json.) Modify the following variables (be sure to stop your transmission client before making any changes):

"script-torrent-done-enabled": true, 
"script-torrent-done-filename": "/path/to/where/you/saved/the/script",

That’s it! Sonarr will now properly import shows that were downloaded via multipart rar torrent.

Script to change WordPress URL

I wrote up a little script to run when you migrate a wordpress installation from one host to another (hostname change.)  Once this script is run you can access the site via the hostname of the server it’s running on and then change the URL to whatever you like. This comes in handy for when you want to migrate one internal host to another, then specify an external hostname once things are looking how you like them.

Change SQL_COMMAND to reflect the name of the wordpress database on the destination server. Thanks to this site for the guidance in writing the script.

#!/bin/bash

#A simple script to update the wordpress database to reflect a change in hostname
#Run this after changing the hostname / IP of a wordpress server

#Prompt for mysql root password
read -s -p "Enter mysql root password: " SQL_PASSWORD

SQL_COMMAND="mysql -u root -p$SQL_PASSWORD wordpress -e"

#Determine what the old URL was and save to variable
OLD_URL=$(mysql -u root -p$SQL_PASSWORD wordpress -e 'select option_value from wp_options where option_id = 1;' | grep http)
#Get current hostname
HOST=$(hostname)

#SQL statements to update database to new hostname
$SQL_COMMAND "UPDATE wp_options SET option_value = replace(option_value, '$OLD_URL', 'http://$HOST') WHERE option_name = 'home' OR option_name = 'siteurl';"
$SQL_COMMAND "UPDATE wp_posts SET guid = replace(guid, '$OLD_URL','http://$HOST');"
$SQL_COMMAND "UPDATE wp_posts SET post_content = replace(post_content, '$OLD_URL', 'http://$HOST');"
$SQL_COMMAND "UPDATE wp_postmeta SET meta_value = replace(meta_value,'$OLD_URL','http://$HOST');"

Rename files for proper sorting in Linux

I often come across files that are named 1..9 and then go on to 10..99. The problem is many Linux programs sort these lexically: 1 comes first, then 10 through 19, then 2, and so on. The sorting is wrong. Fortunately the rename command comes to our rescue:

rename 's/\d+/sprintf("%05d", $&)/e' *.jpg

Running the above command looks for the first string of digits in the name of each JPG file in the current directory and zero-pads it to 5 digits. Now, instead of 1.jpg, your file will be named 00001.jpg. Handy. (Note this is the Perl rename; on some distributions it's packaged as perl-rename or prename.)
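
A quick before-and-after shows the effect (filenames are examples):

ls    #before: 1.jpg  10.jpg  2.jpg  3.jpg
rename 's/\d+/sprintf("%05d", $&)/e' *.jpg
ls    #after: 00001.jpg  00002.jpg  00003.jpg  00010.jpg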

Thanks to this forum for the information.