Category Archives: CLI

piKVM Pushover startup script

I’ve had an issue where I wasn’t sure if my dynamic DNS provider had registered my new IP address properly. I then realized that I have a piKVM attached to one of my servers that boots on powerup, even if the server does not. I could use this piKVM to help me out.

Thanks to inspiration from Chris Dzombak I was able to whip up a little script that runs on startup. This script waits 5 minutes to allow my firewall and modem to boot up, then sends a Pushover notification to let me know the piKVM is online and what its external IP address is.

To get it working on the piKVM I had to enter read-write mode (the rw command), write and save the script, make it executable, then configure a systemd service to run it at startup.

Here is the script, saved under /root/boot-pushover.sh

#!/usr/bin/env bash
set -eu

# Wait 5 minutes to allow router bootup
sleep 300

TOKEN="PUSHOVER_APPLICATION_TOKEN"
USER="PUSHOVER_USER_TOKEN"
EXTERNAL_IP="$(curl -s ifconfig.me)"
MESSAGE="$(hostname) is online. External IP: $EXTERNAL_IP"

# Send Pushover notification to alert that the piKVM is up, with its external IP
curl -s \
  --form-string "token=$TOKEN" \
  --form-string "user=$USER" \
  --form-string "message=$MESSAGE" \
  https://api.pushover.net/1/messages.json

Set executable: chmod +x /root/boot-pushover.sh
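
As an aside, the fixed five-minute sleep is a blunt instrument. An untested variation would be to replace the sleep 300 (and the later IP lookup) with a loop that polls until the external IP lookup succeeds:

# Sketch: retry every 15 seconds until ifconfig.me answers
until EXTERNAL_IP="$(curl -s --max-time 10 ifconfig.me)"; do
  sleep 15
done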

Here is the systemd service, saved under /etc/systemd/system/boot-pushover-notification.service

[Unit]
Wants=network.target
After=network.target nss-lookup.target

[Service]
Type=oneshot
ExecStart=/root/boot-pushover.sh
RemainAfterExit=yes
User=root
Group=root
RestartSec=15
Restart=on-failure

[Install]
WantedBy=multi-user.target

Reload daemons & enable startup:

systemctl daemon-reload
systemctl enable boot-pushover-notification.service

Test by exiting rw mode and rebooting the piKVM:

ro
reboot
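
Once it comes back up, you can confirm the service ran:

systemctl status boot-pushover-notification.service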

It works really well!

Saltstack Jinja: set variables within a set statement

It took me a while to understand how to insert variables into a Jinja set statement. I read this tutorial, which was helpful but still didn’t get me what I wanted. I wanted to be able to set a variable within a {% set %} statement, something like this:

{% set variable = salt['vault'].read_secret('super/secret/path/{{ variable }}/more/secret/path/{{ another_variable}}', 'username') %}

Except that didn’t work. It simply rendered the curly braces literally instead of inserting the variable.

I finally came across this Stack Overflow page which outlined what I needed to do: a lot like in C, I needed to quote the things I wanted taken literally, then add a + sign to insert the variable, then another + sign for the rest of the parameter. The correct syntax is as follows:

{% set variable = salt['vault'].read_secret('super/secret/path/' + variable + '/more/secret/path/' + another_variable, 'username') %}

This worked beautifully.
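
As a side note, Jinja also has a dedicated string-concatenation operator, ~, which converts its operands to strings. I haven’t tested it in this Salt state, but the equivalent should be:

{% set variable = salt['vault'].read_secret('super/secret/path/' ~ variable ~ '/more/secret/path/' ~ another_variable, 'username') %}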

rsync create directory tree on remote host

I ran into an issue where I wanted to use rsync to copy a folder to a remote host into a destination directory that didn’t yet exist. I was frustrated to find that rsync doesn’t appear to be able to create a remote directory tree on its own. It kept erroring out with this message:

rsync: mkdir "/opt/splunk/var/run/searchpeers" failed: No such file or directory (2)

I discovered this workaround which allowed me to finally accomplish what I wanted in one line: create the remote directory structure first, then synchronize into it. This is done with the --rsync-path option. You can specify the mkdir -p command beforehand, then add the rsync command after a double ampersand (&&).
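
In generic form (hypothetical paths), the pattern looks like this:

rsync -aP --rsync-path="mkdir -p /remote/destination && rsync" /local/source remote-host:/remote/destination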

My specific use case was to copy a Splunk search peer bundle from one indexer to another. This was my working one-liner:

rsync -aP --rsync-path="sudo mkdir -p /opt/splunk/var/run/searchpeers && sudo rsync" /opt/splunk/var/run/searchpeers splunk-idx2.jeppson.org:/opt/splunk/var/run/searchpeers

Success.

Restore files from remote borg repository disk image

My off-site backup involves sending borgbackup archives of VM images to a remote Synology server. I recently needed to restore a single file from one of the VM images stored within this borg backup repository on the remote server. My connection to this server is not very fast, so I didn’t want to wait to download the entire image file to mount it locally.

My solution was to mount the remote borgbackup repository on my local machine over SSH so I could poke around for and copy the specific file I wanted. This requires the borgbackup binary to be present on the remote machine. Since it’s a Synology, I simply copied the standalone binary over.

The restore process was complicated by the fact that the VM disk image is owned by root, so in order to access the file I needed to mount the remote repository as root.

This is the process:

  1. Set BORG_REMOTE_PATH
    1. export BORG_REMOTE_PATH=<PATH_TO_BORG_BINARY_ON_REMOTE_SYSTEM>
  2. (Arch Linux): install python-llfuse
  3. Mount repository over SSH:
    1. borg mount <USER>@<REMOTE_SYSTEM>:<PATH_TO_REMOTE_BORGBACKUP_REPOSITORY>::<BACKUP_NAME> <MOUNT_FOLDER>
  4. Follow the standard disk image mounting process
    1. losetup -Pr -f <PATH_TO_MOUNTED_BORGBACKUP>/<FILENAME_OF_VM_IMAGE>
    2. mount -o ro /dev/loop0p2 /mnt/loop0/
  5. Reverse the process to unmount when done:
    1. umount /mnt/loop0
    2. losetup -d /dev/loop0
    3. borg umount <MOUNT_FOLDER>
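
To make the placeholders concrete, here’s a sketch with hypothetical names (your borg binary path, repository path, archive name, and image filename will differ):

export BORG_REMOTE_PATH=/usr/local/bin/borg
mkdir -p /mnt/borg /mnt/loop0
borg mount backupuser@synology:/volume1/borg/vms::vm-backup-2021-06-01 /mnt/borg
losetup -Pr -f /mnt/borg/webserver.img    # read-only loop device, partitions scanned
mount -o ro /dev/loop0p2 /mnt/loop0/
# ...copy out the file you need, then unwind:
umount /mnt/loop0
losetup -d /dev/loop0
borg umount /mnt/borg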

Success! I was able to restore an individual file within a raw VM image backup on a remote Borgbackup repository using this method.

Access iDRAC6 Java console in macOS

I needed to access my aging Dell PowerEdge R610 iDRAC console on my shiny new 13″ MacBook Pro M1. Unfortunately, just like in Linux, I ran into the “Connection failed” problem described in this post.

It was actually pretty easy to do on a Mac. I installed the latest Java for macOS from Oracle’s website. Once installed, I needed to find the location of the Java home directory on my Mac. I found this Stack Overflow discussion, which directed me to use the /usr/libexec/java_home command.

Armed with that command in a subshell I was able to get to the file I wanted to edit:

sudo vim "$(/usr/libexec/java_home)/lib/security/java.security"

Once there I removed RC4 from the jdk.tls.disabledAlgorithms line. It worked! It was an easier process than on Linux or Windows.
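
If you want to script the change, something like this should do it (a sketch: the disabledAlgorithms list can wrap across backslash-continued lines, so check with grep first; the sed pattern assumes the entry appears as “RC4, ”):

# show the current list (it may continue onto following lines)
grep -A2 'jdk.tls.disabledAlgorithms' "$(/usr/libexec/java_home)/lib/security/java.security"
# remove the RC4 entry in place (macOS/BSD sed requires the empty '' after -i)
sudo sed -i '' 's/RC4, //' "$(/usr/libexec/java_home)/lib/security/java.security"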

Guacamole docker quick and easy

Apache Guacamole is an awesome HTML5 remote access gateway. Unfortunately it can be very frustrating to set up. They have docker images that are supposed to make the process easier, but I still ran into a lot of problems trying to get everything configured and linked.

Fortunately, a docker-compose file exists that makes Guacamole much easier to set up. Simply follow the instructions laid out in the GitHub readme:

  • Install docker & docker-compose
  • Clone their repository, run the initial prep script (for SSL keys & database initialization), and bring it up with docker-compose:
git clone "https://github.com/boschkundendienst/guacamole-docker-compose.git"
cd guacamole-docker-compose
sudo ./prepare.sh
sudo docker-compose up -d

Done! If you didn’t change anything in the docker-compose.yml file, you will have a new instance of Guacamole running on HTTPS port 8443 of your docker host. If you need to make changes (or if you forgot to run the prepare.sh file with sudo), you can run the reset.sh script which will destroy everything. You can then modify docker-compose.yml to suit your needs:

  • Whether to use nginx for HTTPS or just expose guacamole on port 8080 non-https (in case you already have a reverse proxy set up)
  • postgres password

Config files for each container are located within various folders in your guacamole-docker-compose folder. This can all be changed by editing the docker-compose.yml file.

Note this configuration does not work with Wake-on-LAN (WOL), but as I do not use that feature I don’t mind.

Troubleshooting

docker ps will show running containers (docker ps -a shows all containers). If a container that should be running is not, docker logs <container name> gives valuable insight as to why. In my case guacd was erroring out because I hadn’t initialized the database properly. Running the reset.sh script and starting over, this time running prepare.sh as sudo, did the trick.
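
That recovery sequence looked roughly like this (the container name will vary depending on the compose file):

sudo docker ps -a                    # list all containers, running or not
sudo docker logs <container name>    # see why a container exited
sudo ./reset.sh                      # tear everything down (destructive!)
sudo ./prepare.sh                    # regenerate SSL keys & initialize the database
sudo docker-compose up -d            # bring the stack back up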

Mount LVM partitions in FreeBSD

I’ve been playing around with helloSystem, an up-and-coming FreeBSD desktop environment that mirrors the macOS experience quite well. Since it’s based on FreeBSD, I’ve had to brush up on a few FreeBSD-isms that are distinctly different from Linux.

Since I’m dual booting this helloSystem BSD system alongside my Arch Linux install, I want to be able to access files on my Arch system from the BSD system. My Arch system uses LVM, which posed a challenge as LVM is a distinctly Linux thing.

To get it to work I needed to load a couple of kernel modules (thanks to the FreeBSD forums for the help):

  • fuse
  • geom_linux_lvm

You can do this at runtime by using the kldload command:

kldload fuse
kldload /boot/kernel/geom_linux_lvm.ko

To make the kernel module loading survive a reboot, add them to /boot/loader.conf:

geom_linux_lvm_load="YES"
fuse_load="YES"

You can now scan your BSD system for LVM partitions:

geom linux_lvm list

The LVM partitions are listed under /dev/linux_lvm. The last step is to mount them with FUSE:

fuse-ext2 -o rw+ /dev/linux_lvm/NAME_OF_LVM_PARTITION /mnt/DESIRED_MOUNT_FOLDER

rw+ indicates a read/write mount.
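
Putting it all together with a hypothetical volume group and mount point (yours will differ):

kldload fuse
kldload /boot/kernel/geom_linux_lvm.ko
geom linux_lvm list                        # note the volume names it reports
mkdir -p /mnt/arch
fuse-ext2 -o rw+ /dev/linux_lvm/vg0-root /mnt/arch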

Create a local yum repository

I had a need to copy some specific RPM files locally to my machine, but have the general YUM database recognize them (not using yum localinstall). I found this lovely howto that explains how to do it.

In my case, I created a folder for one RPM I wanted in the local yum repository. I then installed the createrepo package, used it on my new directory containing my RPMs, then added a repository file pointing to the new local repository.

mkdir yumlocal
cp <DESIRED RPM FILES> yumlocal
yum install createrepo
cd yumlocal
createrepo .

The last piece was to create a yum repo file, local.repo, under /etc/yum.repos.d:

[local]
name=CentOS-$releasever - local packages for $basearch
baseurl=file:///path/to/yumlocal/
enabled=1
gpgcheck=0
protect=1

That was it! Now I could use yum install <NAME OF PACKAGE IN LOCAL REPO FILE> and it works!
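
One follow-up note: if you later drop more RPMs into the directory, regenerate the repository metadata so yum can see them. Something like:

cd /path/to/yumlocal
createrepo --update .      # refresh the repository metadata
yum clean metadata         # force yum to re-read it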

Transcribe audio with Google Cloud speech-to-text API

I had a few audio files of an interview done with a late relative that I wanted to have Google transcribe for me. I wanted to supply an audio file and have it spit out the results. There are many ways to do this but I went with using the Google Cloud Platform speech-to-text API.

First I signed up for a GCP free trial via https://cloud.google.com/speech-to-text/. For my usage it will remain free, as 0-60 minutes of transcription per month is not charged: https://cloud.google.com/speech-to-text/pricing

Next, I needed to create a GCP storage bucket, as audio longer than 10 minutes cannot reliably be transcribed via the “uploading local file” option. I did this following the documentation at https://cloud.google.com/storage/docs/creating-buckets, which walks you through going to their storage browser and creating a new bucket. From that screen I uploaded my audio files (FLAC in my case).

Then I needed to create API credentials to use. I did this by going to the Speech API console’s credentials tab, creating a service account, and saving the key to my working directory on my local computer.

Also on said computer I installed google-cloud-sdk (on Arch Linux in my case, it was as simple as yay -S google-cloud-sdk)

With the service account JSON file downloaded & google-cloud-sdk installed, I exported the GCP service account credentials into my Bash environment like so:

export GOOGLE_APPLICATION_CREDENTIALS=NAME_OF_SERVICE_ACCOUNT_KEYFILE_DOWNLOADED_EARLIER.json

I created .json files following the format outlined in the command-line usage section of the quickstart documentation. I tweaked it to add the line “model”: “video” to get the API to use the premium video recognition model (it was more accurate for this type of recording). This is what my JSON file looked like:

{
  "config": {
      "encoding":"FLAC",
      "sampleRateHertz": 16000,
      "languageCode": "en-US",
      "enableWordTimeOffsets": false,
      "model": "video"

  },
  "audio": {
      "uri":"gs://googlestorarge-bucket-name/family-memories.flac"
  }
}

I then used curl to send the transcription request to Google. This was my command:

curl -s -H "Content-Type: application/json" -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" https://speech.googleapis.com/v1/speech:longrunningrecognize -d @JSON_FILE_CREATED_ABOVE.json

If all goes well you will get something like this in response:

{
  "name": "4663803355627080910"
}

You can check the status of the transcription, which usually takes about half the length of the audio file, by running this command:

curl -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" -H "Content-Type: application/json; charset=utf-8" "https://speech.googleapis.com/v1/operations/ID_NUMBER_ACQUIRED_ABOVE"

You will either get a progress percentage or, if it’s done, the output of the transcription.
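
Once it finishes, the transcript text is nested inside the JSON response. If you have jq installed, a sketch like this should pull out just the text (field names per the v1 longrunningrecognize response):

curl -s -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  "https://speech.googleapis.com/v1/operations/ID_NUMBER_ACQUIRED_ABOVE" \
  | jq -r '.response.results[].alternatives[0].transcript'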

Success! It took some time to figure out but was still much better than manually transcribing the audio by hand.