Tag Archives: scripting

Trigger button to run a script in Home Assistant

I configured a button (RunlessWire Click) to log diaper changes for my new baby. The diaper changes are logged in a Google Sheets spreadsheet. I set up a simple public-facing Google Form that I could run unauthenticated curl requests against, then configured Home Assistant to run that curl command when the button is pressed. Instant diaper logging at the press of a button.

Lessons learned:

  • Zigbee Home Automation (ZHA) does not yet support the Zigbee Green Power protocol, which the RunlessWire Click uses. I had to pair the switches to my Hue hub instead.
    * It looks like they’re getting close to supporting it, though: https://github.com/zigpy/zigpy/pull/1282

Here was my process:

  • Create Google Form
  • Obtain form ID from URL bar
  • Get a pre-filled link to discover the field names: click the three dots at the top right and choose “Get pre-filled link”. Make note of the name for each entry, e.g. entry.1363419348
    Thanks to help from: https://stackoverflow.com/questions/65142364/i-cant-find-name-attribute-while-inspecting-input-elements-of-google-form-ho
  • Curl command is:
    curl https://docs.google.com/forms/<FORM_ID>/formResponse -d ifq -d <ENTRY_NAME>=<ENTRY_VALUE> -d <ADDITIONAL_ENTRY_NAME>=<ADDITIONAL_ENTRY_VALUE> -d submit=Submit
    Thanks to help from: https://eureka.ykyuen.info/2014/07/30/submit-google-forms-by-curl-command/
  • Shell commands go into configuration.yaml:
    shell_command:
      log_pee: <CURL_COMMAND>
      log_poo: <CURL_COMMAND>
    Thanks to help from: https://community.home-assistant.io/t/dont-understand-how-to-use-shell-commands/576580/9
  • Restart Home Assistant to pick up your configuration changes.
  • Configure an automation to call the shell_command service when the button is pressed (see the sketch below)
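Here’s a minimal automation sketch. The trigger depends on how your button surfaces in Home Assistant (mine pairs through the Hue hub), so the entity below is a placeholder; the action calls the shell_command defined in configuration.yaml:

automation:
  - alias: "Log pee diaper change"
    trigger:
      #Placeholder: substitute whatever trigger your button actually exposes
      - platform: state
        entity_id: sensor.diaper_button_action
        to: "on_press"
    action:
      - service: shell_command.log_pee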

Success!

piKVM pushover startup script

I’ve had an issue where I wasn’t sure if my dynamic DNS provider had registered my new address properly. I then realized that I have a piKVM attached to one of my servers that boots on powerup, even if the server does not. I could use this piKVM to help me out.

Thanks to inspiration from Chris Dzombak I was able to whip up a little script that runs on startup. This script waits 5 minutes to allow my firewall and modem to boot up, then sends a Pushover notification to let me know the piKVM is online and what its external IP address is.

To get it working on the piKVM I had to enter RW mode, write and save the script, add execute permissions to the script, then configure a systemd service to run the script at startup.
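Entering read-write mode on the piKVM is a single built-in command (and ro reverts the filesystem to read-only when you’re done):

rw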

Here is the script, saved under /root/boot-pushover.sh

#!/usr/bin/env bash
set -eu

#Wait 5 minutes to allow router bootup
sleep 300

TOKEN="PUSHOVER_APPLICATION_TOKEN"
USER="PUSHOVER_USER_TOKEN"
EXTERNAL_IP="$(curl -s ifconfig.me)"
MESSAGE="$(hostname) is online. External IP: $EXTERNAL_IP"

#Send pushover command to alert it's up and send its external IP
curl -s \
  --form-string "token=$TOKEN" \
  --form-string "user=$USER" \
  --form-string "message=$MESSAGE" \
  https://api.pushover.net/1/messages.json

Set executable: chmod +x /root/boot-pushover.sh

Here is the systemd service, saved under /etc/systemd/system/boot-pushover-notification.service

[Unit]
Description=Send Pushover notification at piKVM startup
Wants=network.target
After=network.target nss-lookup.target

[Service]
Type=oneshot
ExecStart=/root/boot-pushover.sh
RemainAfterExit=yes
User=root
Group=root
RestartSec=15
Restart=on-failure

[Install]
WantedBy=multi-user.target

Reload daemons & enable startup:

systemctl daemon-reload
systemctl enable boot-pushover-notification.service
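After a reboot you can also confirm the script ran by checking the service’s journal (standard systemd, nothing piKVM-specific):

journalctl -u boot-pushover-notification.service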

Test by exiting rw mode and rebooting the piKVM:

ro
reboot

It works really well!

rsync create directory tree on remote host

I ran into an issue where I wanted to use rsync to copy a folder to a remote host into a destination directory that doesn’t yet exist. I was frustrated to find that rsync can’t create a remote directory tree on its own. It kept erroring out with this message:

rsync: mkdir "/opt/splunk/var/run/searchpeers" failed: No such file or directory (2)

I discovered a workaround that let me finally accomplish what I wanted in one line: create the remote directory structure first, then synchronize into it. This is done with the --rsync-path option: specify a mkdir -p command first, then the rsync command after a double ampersand (&&).
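In general form (host and paths are placeholders):

rsync -aP --rsync-path="mkdir -p /remote/target/dir && rsync" /local/source/dir remote_host:/remote/target/dir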

My specific use case was to copy a Splunk search peer bundle from one indexer to another. This was my working one-liner:

rsync -aP --rsync-path="sudo mkdir -p /opt/splunk/var/run/searchpeers && sudo rsync" /opt/splunk/var/run/searchpeers splunk-idx2.jeppson.org:/opt/splunk/var/run/searchpeers

Success.

Synchronize internet calendar to google calendar more frequently

Despite having my own e-mail server I still use Google Calendar for some things. I have an ICS file for the calendar of the Covid vaccination clinic I’m volunteering at. I ran into some frustrating sync problems when I tried to import it into my calendar: Google Calendar’s ICS sync process takes up to 12 hours, and some mobile clients wouldn’t even see the calendar imported from the ICS file.

I luckily found this post from Derek Antrican on Stack Exchange that outlines a script you can configure to run at any given interval; it takes all events in that ICS file and adds/updates/removes events in your calendar to match. It works beautifully. It’s a Google Apps Script that you must copy into your own script.google.com account to run.

First, go to the script here. Then go to Overview (the ⓘ icon) and click “Make a copy” in the top right (the page icon). Once the script is copied to your own script.google.com account, follow the instructions for configuring your desired ICS URLs and other options, then click Run.

My calendars are all synchronized and happy now.

Threadripper / Epyc processor core optimization

I had a pet project (folding@home) where I wanted to maximize computing power. I became frustrated with default CPU scheduling of my folding@home threads. Ideal performance would keep similar threads on the same CPU, but the threads were jumping all over the place, which was impacting performance.

Step one was to figure out which threads belonged to which physical cores. I found on this site that you can use cat to find out what your “sibling threads” are:

cat /sys/devices/system/cpu/cpu{0..15}/topology/thread_siblings_list

The above command is for my Threadripper & Epyc systems, which each have 16 physical cores hyperthreaded into 32 threads. Adjust the {0..15} range to match your number of cores (core 0 being the first core). This was my output:

0,16
1,17
2,18
3,19
4,20
5,21
6,22
7,23
8,24
9,25
10,26
11,27
12,28
13,29
14,30
15,31

Now that I know the sibling threads are offset by 16, I can use this information to optimize my folding@home VMs. I modified my CPU pinning script to take this into account. The script ensures that each VM is pinned only to sibling threads, keeping all of its vCPUs on the same physical cores.

This script should be used with caution. It pins processes to specific CPUs, which limits the kernel scheduler’s ability to move things around if needed. If configured badly this can cause the machine to lock up or VMs to be terminated.

I saw some impressive results spinning up four separate 8 core VMs and pinning them to sibling cores using this script. It almost doubled the rate at which I completed folding@home work units.

And now, the script:

#!/bin/bash
#Properly assign CPU cores to their respective die for EPYC/Threadripper systems
#Based on how hyperthreads are done in these systems
#cat /sys/devices/system/cpu/cpu{0..15}/topology/thread_siblings_list

#The script takes two arguments - the ID of the Proxmox VM to modify, and the core to begin the VM on
#If running this against multiple VMs, make sure to increment this second number by half of the cores of the previous VM
#For example, if I have one 8 core VM and I run this script specifying 0 for the offset, if I spin up a second VM, the second argument would be 4
#this would ensure the second VM starts on core 4 (the 5th core) and assigns sibling cores to match

set -eo pipefail

#take First argument as which VMID to pin CPU cores to, the second argument is which core to start pinning to
VMID=$1
OFFSET=$2

#Determine offset for sibling threads
SIBLING_THREAD_OFFSET=$(cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list| sed 's/,/ /g' | awk '{print $2}')

#Function to determine number of CPU cores a VM has
cpu_tasks() {
	expect <<EOF | sed -n 's/^.* CPU .*thread_id=\(.*\)$/\1/p' | tr -d '\r' || true
spawn qm monitor $VMID
expect ">"
send "info cpus\r"
expect ">"
EOF
}

#Only act if VMID & OFFSET are set
if [[ -z $VMID  || -z $OFFSET ]]
then
	echo "Usage: cpupin.sh <VMID> <OFFSET>"
	exit 1
else
	#Get PIDs of each CPU core for VM, count number of VM cores, and get even/odd PIDs for assignment
	VCPUS=($(cpu_tasks))
	VCPU_COUNT="${#VCPUS[@]}"
	VCPU_EVEN_THREADS=($(for EVEN_THREAD in "${VCPUS[@]}"; do echo $EVEN_THREAD; done | awk '!(NR%2)'))
	VCPU_ODD_THREADS=($(for ODD_THREAD in "${VCPUS[@]}"; do echo $ODD_THREAD; done | awk '(NR%2)'))

	if [[ $VCPU_COUNT -eq 0 ]]; then
		echo "* No VCPUS for VM$VMID"
		exit 1
	fi

	echo "* Detected ${#VCPUS[@]} assigned to VM$VMID..."
	echo "* Resetting cpu shield..."

	#Start at offset CPU number, assign odd numbered PIDs to their own CPU thread, then increment CPU core number
	#0-3 if offset is 0, 4-7 if offset is 4, etc
	ODD_CPU_INDEX=$OFFSET
	for PID in "${VCPU_ODD_THREADS[@]}"
	do
		echo "* Assigning ODD thread $ODD_CPU_INDEX to $PID..."
		taskset -pc "$ODD_CPU_INDEX" "$PID"
		((ODD_CPU_INDEX+=1))
	done

	#Start at offset + CPU count, assign even number PIDs to their own CPU thread, then increment CPU core number
	#16-19 if offset is 0,	20-23 if offset is 4, etc
	EVEN_CPU_INDEX=$(($OFFSET + $SIBLING_THREAD_OFFSET))
	for PID in "${VCPU_EVEN_THREADS[@]}"
	do
		echo "* Assigning EVEN thread $EVEN_CPU_INDEX to $PID..."
		taskset -pc "$EVEN_CPU_INDEX" "$PID"
		((EVEN_CPU_INDEX+=1))
	done
fi
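Usage might look like the following for two hypothetical 8-core VMs with VMIDs 100 and 101. The first VM starts at core 0; since an 8-vCPU VM occupies 4 physical cores, the second VM starts at core 4:

./cpupin.sh 100 0
./cpupin.sh 101 4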

Proxmox HA management script

I was a bit frustrated at the lack of certain functions in Proxmox. I wanted an easy way to tag a VM and manage that tag as a group. My solution was to create HA groups for VMs with various functions. I can then manage each group as a whole, telling all of its VMs to migrate storage or power off & on.

I wrote a script to facilitate this. Right now it only covers powering on, powering off, and migrating the location of the primary disk, but more is to come.

Here’s what I have so far:

#!/bin/bash
#Proxmox HA management script
#Migrates storage, starts, or stops Proxmox HA groups based on the name and function passed to it.
#Usage: manage-HA-group.sh <start|stop|migrate> <ha-group-name> [local|remote]

#Change to the name of your local storage (for migrating from remote to local storage)
LOCAL_STORAGE_NAME="pve-1TB"

function get_vm_name() {
    #Determine the name of the VMID passed to this function
    VM_NAME=$(qm config "$1" | grep '^name:' | awk '{print $2}')
}

function get_group_VMIDs() {
    #Get a list of VMIDs based on the name of the HA group passed to this function
    group_VMIDs=$(ha-manager config | grep -B1 "$1" | grep vm: | sed 's/vm://g')
}

function group_power_state() {
    #Loop through members of HA group passed to this function
    for group in "$1" 
    do
        get_group_VMIDs "$group"
        for VM in $group_VMIDs
        do
            get_vm_name "$VM"
            echo "$OPERATION $VM_NAME in HA group $group"
            ha-manager set "$VM" --state "$VM_STATE"
        done
    done
}

function group_migrate() {
    #This function migrates the VM's first disk (scsi0) to the specified location (local/remote)
    #TODO String to determine all disk IDs:  qm config 115 | grep '^scsi[0-9]:' | tr -d ':' | awk '{print $1}'
    disk="scsi0"    

    #Loop through each VM in specified group name (second argument passed on CLI)
    for group in "$2" 
    do
        get_group_VMIDs "$group"
        for VM in $group_VMIDs
        do
            #Determine the names of each VM in the HA group
            get_vm_name "$VM"

            #Set storage location based on argument
            if [[ "$3" == "remote" ]]; then
                storage="$VM_NAME"
            else
                storage="$LOCAL_STORAGE_NAME"
            fi

            #Move primary disk to desired location
            echo "Migrating $VM_NAME to $3 storage"
            qm move_disk "$VM" "$disk" "$storage" --delete=1

        done
    done
}

case "$1" in 
    start)
        VM_STATE="started"
        OPERATION="Starting"
        group_power_state "$2" 
        ;;
    stop)
        VM_STATE="stopped"
        OPERATION="Stopping"
        group_power_state "$2"
        ;;
    migrate)
        case "$3" in
            local|remote)
                group_migrate "$@"
                ;;
            *)
                echo "Usage: manage-HA-group.sh migrate <ha-group-name> <local|remote>"
                ;;
        esac        
    ;;
    *)
        echo "Usage: manage-HA-group.sh <start|stop|migrate> <ha-group-name> [local|remote]"
        exit 1
        ;;
esac
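Example invocations, assuming a hypothetical HA group named web-servers:

./manage-HA-group.sh start web-servers
./manage-HA-group.sh migrate web-servers local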

apache reverse proxy with basic authentication

I have an old Apache server that’s serving as a reverse proxy for my webcam. I swapped webcams and unfortunately the new one requires authentication. I had to figure out how to get Apache to reverse proxy with the proper authentication. The best information I found was given by user ThR37 at superuser.com.

Essentially you have to use an Apache module called headers to add an HTTP Authorization header to the proxied request. On my Debian system this was not enabled, so I had to enable it (thanks to Andy over at serverfault):

sudo a2enmod headers
#if you're on ubuntu then it's mod_headers

I then needed to generate the Base64-encoded credentials (the “hash”) for the header to pass. This was done via a simple Python script:

#Replace USERNAME:PASSWORD below with your credentials
import base64
hash = base64.b64encode(b'USERNAME:PASSWORD')
print(hash.decode())

Save the above script into a file hash.py and then run it by typing

python hash.py
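Alternatively, the same value can be generated in one line with coreutils (the -n flag keeps a trailing newline out of the encoded credentials):

echo -n 'USERNAME:PASSWORD' | base64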

With headers enabled and hash acquired I just needed to tweak my config by adding a RequestHeader line:

RequestHeader set Authorization "Basic <HASH>"
#Replace <HASH> with hash acquired above
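For context, here’s a sketch of how that line might sit alongside the proxy directives; the backend address is a placeholder and your existing ProxyPass lines will differ:

<Location "/webcam/">
    ProxyPass "http://192.168.1.50/"
    ProxyPassReverse "http://192.168.1.50/"
    RequestHeader set Authorization "Basic <HASH>"
</Location>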

After adding that one line and restarting apache, it worked!

Add colors to bash scripts

A quick note on how to easily add colors to your bash scripts. Thanks to https://stackoverflow.com/questions/5947742/how-to-change-the-output-color-of-echo-in-linux

Here are a few ANSI escape codes for reference

Black        0;30     Dark Gray     1;30
Red          0;31     Light Red     1;31
Green        0;32     Light Green   1;32
Brown/Orange 0;33     Yellow        1;33
Blue         0;34     Light Blue    1;34
Purple       0;35     Light Purple  1;35
Cyan         0;36     Light Cyan    1;36
Light Gray   0;37     White         1;37

#Set colors to variables
RED='\033[0;31m'
GREEN='\033[0;32m'
LIGHTBLUE='\033[0;34m'
NC='\033[0m' #No color

Reference the variables using echo -e like so:

echo -e "${RED} $instance is not a valid instance. Exiting. ${NC}"

Profit.

Run startup / shutdown on every VM in Proxmox HA group

I wanted to run a stop operation on all VMs in one of my HA groups in Proxmox and was frustrated to see there was no easy way to do so. I wrote a quick & dirty bash script that will let me start & stop all VMs within an HA group to do what I wanted.

#!/bin/bash
#Proxmox HA start/stop script
#Takes first argument of the operation to do (start / stop) and any additional arguments for which HA group(s) to do it on, then acts as requested.

if [[ "$1" != "start" && "$1" != "stop" ]]; then
    echo "Please provide desired state (start | stop)"
    exit 1
fi

if [ "$1" == "start" ]; then
    VM_STATE="started"
    OPERATION="Starting"
elif [ "$1" == "stop" ]; then
    VM_STATE="stopped"
    OPERATION="Stopping"
else exit 1 #should not ever get here
fi

#Loop through each argument except for the first
for group in "${@:2}"
do
    group_members=$(ha-manager config | grep -B1 "$group" | grep vm:)
    for VM in $group_members
    do
        echo "$OPERATION $VM in HA group $group"
        ha-manager set "$VM" --state "$VM_STATE"
    done
done
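Example, assuming the script is saved as ha-group-power.sh (the name is arbitrary) and two hypothetical HA groups named web-servers and databases:

./ha-group-power.sh stop web-servers databases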

proxmox suspend & resume scripts

Update 12/17/2019: Added logic to wait for the VM to be suspended before suspending the hypervisor

Update 12/8/2019: After switching VMs I needed to tweak the pair of scripts. I modified them to make all the magic happen on the hypervisor; the VM simply needs to SSH into the hypervisor and call the script. The hypervisor now also needs passwordless SSH access (via public key) to the VM so it can tell the VM to suspend.

#!/bin/sh
#ProxMox suspend script part 1 of 2
#To be run on the VM 
#All this does is call the suspend script on the hypervisor
#This could also just be a bash alias

####### Variables #########
HYPERVISOR=        #Name / IP of the hypervisor
SSH_USER=          #User to SSH into hypervisor as
HYPERVISOR_SCRIPT= #Path to part 2 of the script on the hypervisor

####### End Variables ######

#Execute server suspend script
ssh $SSH_USER@$HYPERVISOR "$HYPERVISOR_SCRIPT" &

And here is the updated part 2, which runs on the hypervisor:

#!/bin/bash
#ProxMox suspend script part 2 of 2
#Script to run on the hypervisor, it waits for VM to suspend and then suspends itself
#It relies on passwordless sudo configured on the VM as well as SSH keys to allow passwordless SSH access to the VM from the hypervisor
#It resumes the VM after it resumes itself
#Called from the VM

########### Variables ###############

VM=             #Name/IP of VM to SSH into
VM_SSH_USER=    #User to ssh into the vm with
VMID=           #VMID of VM you wish to suspend

########### End Variables############

#Tell guest VM to suspend
ssh $VM_SSH_USER@$VM "sudo systemctl suspend"

#Wait until guest VM is suspended, wait 5 seconds between attempts
while [ "$(qm status $VMID)" != "status: suspended" ]
do 
    echo "Waiting for VM to suspend"
    sleep 5 
done

#Suspend hypervisor
systemctl suspend

#Resume the VM after the hypervisor wakes back up
qm resume $VMID

I have a desktop running ProxMox. My GUI is handled via a virtual machine with physical hardware passed through to it. The challenge with this setup is getting suspend & resume to work properly. I got it to work by suspending the VM first, then the host; on resume, I power up the host first, then resume the VM. Doing anything else would cause hardware passthrough problems that forced me to reboot the VM.

I automated the suspend process with two scripts: one for the VM and one for the hypervisor. The first script runs on the VM. It sends an SSH command to the hypervisor (thanks to this post) instructing it to run the second half of the script, then initiates a suspend of the VM.

The second half waits a few seconds to allow the VM to suspend itself, then puts the hypervisor into suspend as well. I had to split this into two scripts because once the VM is suspended, it can’t issue any more commands; suspending the hypervisor must happen after the VM itself is suspended.

Here is script #1 (to be run on the VM). It assumes you have already set up a private/public key pair to allow passwordless login into the hypervisor from the VM.

#!/bin/sh
#ProxMox suspend script part 1 of 2
#To be run on the VM so it suspends before the hypervisor does

####### Variables #########
HYPERVISOR=HYPERVISOR_NAME_OR_IP
SSH_USER=SSH_USER_ON_HYPERVISOR
HYPERVISOR_SCRIPT_LOCATION=NAME_AND_LOCATION_OF_PART2_OF_SCRIPT

####### End Variables ######

#Execute server suspend script, then suspend VM
ssh $SSH_USER@$HYPERVISOR "$HYPERVISOR_SCRIPT_LOCATION" &

#Suspend
systemctl suspend

Here is script #2 (which script #1 calls), to be run on the hypervisor

#!/bin/bash
#ProxMox suspend script part 2 of 2
#Script to run on the hypervisor, it waits for VM to suspend and then suspends itself
#It resumes the VM after it resumes itself

########### Variables ###############

#Specify VMid you wish to suspend
VMID=VMID_OF_VM_YOU_WANT_TO_SUSPEND

########### End Variables############

#Wait 5 seconds before doing anything to allow for VM to suspend
sleep 5

#Suspend hypervisor
systemctl suspend

#Resume the VM after the hypervisor wakes back up
qm resume $VMID

It works on my machine 🙂