Category Archives: CLI

Guacamole docker quick and easy

Apache Guacamole is an awesome HTML5 remote access gateway. Unfortunately, it can be very frustrating to set up. They have docker images that are supposed to make the process easier, but I still ran into a lot of problems trying to get everything configured and linked.

Fortunately, a docker compose file exists to make Guacamole much easier to set up. Simply follow the instructions as laid out in the github readme:

  • Install docker & docker-compose
  • Clone their repository, run the initial prep script (for SSL keys & database initialization), and bring it up with docker-compose:
git clone "https://github.com/boschkundendienst/guacamole-docker-compose.git"
cd guacamole-docker-compose
sudo ./prepare.sh
sudo docker-compose up -d

Done! If you didn’t change anything in the docker-compose.yml file, you will have a new instance of Guacamole running on HTTPS port 8443 of your docker host. If you need to make changes (or if you forgot to run prepare.sh with sudo), you can run the reset.sh script, which will destroy everything, and start over as sketched after this list. You can then modify docker-compose.yml to suit your needs:

  • Whether to use nginx for HTTPS or just expose guacamole on port 8080 non-https (in case you already have a reverse proxy set up)
  • postgres password
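
After making changes, the stack can be torn down and rebuilt. A minimal sketch using the two scripts from the repository:

#Destroy the existing containers and their data (reset.sh destroys everything for this stack)
sudo ./reset.sh
#Edit docker-compose.yml to taste, then bring the stack back up
sudo docker-compose up -d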

Config files for each container are located within various folders in your guacamole-docker-compose folder. This can all be changed by editing the docker-compose.yml file.

Note that this configuration does not work with Wake-on-LAN (WOL), but as I do not use this feature I don’t mind.

Troubleshooting

docker ps will show running containers (docker ps -a shows all containers). If a container that should be running is not, docker logs <container name> gives valuable insight as to why. In my case guacd was erroring out because I hadn’t initialized the database properly. Running the reset.sh script and starting over, this time running prepare.sh as sudo, did the trick.
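
For example, assuming guacd is the container that failed, the check would look like this:

#List all containers, including stopped ones, to spot the failure
docker ps -a
#Inspect the failed container's logs for the reason it exited
docker logs guacd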

Mount LVM partitions in FreeBSD

I’ve been playing around with helloSystem, an up-and-coming FreeBSD desktop environment that mirrors the macOS experience quite well. Since it’s based on FreeBSD, I’ve had to brush up on a few FreeBSD-isms that are distinctly different from Linux.

Since I’m dual booting this helloSystem BSD system alongside my Arch Linux install, I want to be able to access files on my Arch system from the BSD system. My Arch system uses LVM, which posed a challenge as LVM is a distinctly Linux thing.

To get it to work I needed to load a couple of kernel modules (thanks to the FreeBSD forums for the help):

  • fuse
  • geom_linux_lvm

You can do this at runtime by using the kldload command:

kldload fuse
kldload /boot/kernel/geom_linux_lvm.ko

To make the kernel module loading survive a reboot, add them to /boot/loader.conf:

geom_linux_lvm_load="YES"
fuse_load="YES"

You can now scan your BSD system for LVM partitions:

geom linux_lvm list

The LVM partitions are listed under /dev/linux_lvm. The last step is to mount them with FUSE:

fuse-ext2 -o rw+ /dev/linux_lvm/NAME_OF_LVM_PARTITION /mnt/DESIRED_MOUNT_FOLDER

rw+ indicates a read/write mount.
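
For example, with a hypothetical volume group named vg0 containing a logical volume named home (geom_linux_lvm exposes volumes as VOLUMEGROUP-LOGICALVOLUME), the mount might look like this:

#Create a mount point and mount the Linux LVM volume read/write
mkdir -p /mnt/archhome
fuse-ext2 -o rw+ /dev/linux_lvm/vg0-home /mnt/archhome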

Create a local yum repository

I had a need to copy some specific RPM files locally to my machine, but have the general YUM database recognize them (not using yum localinstall). I found this lovely howto that explains how to do it.

In my case, I created a folder for one RPM I wanted in the local yum repository. I then installed the createrepo package, used it on my new directory containing my RPMs, then added a repository file pointing to the new local repository.

mkdir yumlocal
cp <DESIRED RPM FILES> yumlocal
yum install createrepo
cd yumlocal
createrepo .

The last piece was to create a yum repo file, local.repo, in /etc/yum.repos.d/:

[local]
name=CentOS-$releasever - local packages for $basearch
baseurl=file:///path/to/yumlocal/
enabled=1
gpgcheck=0
protect=1

That was it! Now I could use yum install <NAME OF PACKAGE IN LOCAL REPO FILE> and it worked!
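
One thing to keep in mind: if you add or remove RPMs in that folder later, the repository metadata must be regenerated before yum will notice. Something like:

#Refresh the repo metadata after changing the folder contents
createrepo --update .
#Clear yum's cache so the updated metadata is picked up
yum clean all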

Transcribe audio with Google Cloud speech-to-text API

I had a few audio files of an interview done with a late relative that I wanted to have Google transcribe for me. I wanted to supply an audio file and have it spit out the results. There are many ways to do this, but I went with using the Google Cloud Platform speech-to-text API.

First I signed up for a GCP free trial via https://cloud.google.com/speech-to-text/. For my usage it will remain free, as 0-60 minutes of transcription per month is not charged: https://cloud.google.com/speech-to-text/pricing

Next, I needed to create a GCP storage bucket, as audio more than 10 minutes long cannot reliably be transcribed via the “uploading local file” option. I did this following the documentation at https://cloud.google.com/storage/docs/creating-buckets, which walks you through going to their storage browser and creating a new bucket. From that screen I uploaded my audio files (FLAC in my case).

Then I needed to create API credentials to use. I did this by going to the speech API console’s credentials tab, creating a service account, and saving the key to my working directory on my local computer.

Also on said computer I installed google-cloud-sdk (on Arch Linux in my case, it was as simple as yay -S google-cloud-sdk)

With the service account JSON file downloaded and google-cloud-sdk installed, I exported the GCP service account credentials into my BASH environment like so:

export GOOGLE_APPLICATION_CREDENTIALS=NAME_OF_SERVICE_ACCOUNT_KEYFILE_DOWNLOADED_EARLIER.json

I created .json files following the command line usage format outlined in the quickstart documentation. I tweaked mine to add the line "model": "video" to get the API to use the premium video recognition model (as it was more accurate for this type of recording). This is what my JSON file looked like:

{
  "config": {
      "encoding":"FLAC",
      "sampleRateHertz": 16000,
      "languageCode": "en-US",
      "enableWordTimeOffsets": false,
      "model": "video"
  },
  "audio": {
      "uri":"gs://googlestorage-bucket-name/family-memories.flac"
  }
}

I then used curl to send the transcription request to Google. This was my command:

curl -s -H "Content-Type: application/json" \
  -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  https://speech.googleapis.com/v1/speech:longrunningrecognize \
  -d @JSON_FILE_CREATED_ABOVE.json

If all goes well you will get something like this in response:

{
  "name": "4663803355627080910"
}

You can check the status of the transcription, which usually takes about half the length of the audio file, by running this command:

curl -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  "https://speech.googleapis.com/v1/operations/ID_NUMBER_ACQUIRED_ABOVE"

You will either get a percent progress, or if it’s done, the output of the transcription.
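
If you would rather not poll by hand, a small loop can watch the operation for you. A rough sketch (the operation ID and 60-second interval are just examples):

#!/bin/bash
#Poll the long-running transcription operation until it reports done
OPERATION_ID="4663803355627080910"
while true; do
    RESPONSE=$(curl -s -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
        "https://speech.googleapis.com/v1/operations/$OPERATION_ID")
    #The operations API includes "done": true once transcription finishes
    echo "$RESPONSE" | grep -q '"done": true' && break
    echo "Still transcribing..."
    sleep 60
done
echo "$RESPONSE"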

Success! It took some time to figure out but was still much better than transcribing the audio by hand.

Send test syslog messages with nc

I needed to send some test packets over UDP to make sure connectivity was working. I found this site, which outlined how to do it really well:

nc -u <IP/hostname> <port>

Then on the next lines you can send test messages, hitting CTRL+D when done. In my case I wanted to test sending syslog data, so I ran nc -u <hostname> 514, then wrote test messages. The -u specifies UDP and 514 is the standard syslog port. I was then able to confirm on the other end that the messages were received. Handy.
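
This also works non-interactively. A quick sketch (the <14> priority prefix is just an example, marking the message as facility user, severity info):

#Send a single syslog-style test message over UDP, giving up after 1 second
echo "<14>Test message from nc" | nc -u -w1 <hostname> 514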

Move git subdirectory into new repo

I had a need to take a folder in one git repository and create a whole new git repository with it, preserving history for all files inside. My desire to keep git history made the process a bit more complicated than simply copying the directory into a new git repository.

First, create a new folder on the git server. I’m all command line, no GUI yet, so I need to make it a bare repository. (Thanks to geeksforgeeks for how to do this.)

#On the main git "server"
mkdir <reponame>.git
cd <reponame>.git
git init --bare

Now, on the desktop (not the git server), clone a copy of the repository containing your desired folder into a new directory, remove the git origin server, then strip out everything except that directory (thanks to gbayer.com for the info):

#On the desktop
git clone <initial git repository url> <new_directory_name>
cd <new_directory_name>
git remote rm origin
git filter-branch --subdirectory-filter <directory_to_keep_history_of> -- --all
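
Note that newer versions of git deprecate git filter-branch in favor of git filter-repo (a separately installed tool). If you have it available, the equivalent of the last two steps would be something like:

#filter-repo keeps only the chosen directory's history and removes the
#origin remote itself, so the git remote rm step above is unnecessary
git filter-repo --subdirectory-filter <directory_to_keep_history_of>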

Lastly (still on the desktop, in the new repository directory), create a new origin with the path of the new repo you created above:

#On the desktop, inside new_directory_name
git remote add origin <server>:<path_to_new_repo_folder>
git push --set-upstream origin master

Threadripper / Epyc processor core optimization

I had a pet project (folding@home) where I wanted to maximize computing power. I became frustrated with the default CPU scheduling of my folding@home threads. Ideal performance would keep related threads on the same physical CPU, but the threads were jumping all over the place, which was hurting performance.

Step one was to figure out which threads belonged to which physical cores. I found on this site that you can use cat to find out what your “sibling threads” are:

cat /sys/devices/system/cpu/cpu{0..15}/topology/thread_siblings_list

The above command is for my Threadripper & Epyc systems, which each have 16 physical cores hyperthreaded into 32 logical cores. Adjust the {0..15} range to match your number of physical cores (core 0 being the first core). This was my output:

0,16
1,17
2,18
3,19
4,20
5,21
6,22
7,23
8,24
9,25
10,26
11,27
12,28
13,29
14,30
15,31

Now that I know the sibling threads are offset by 16, I can use this information to optimize my folding@home VMs. I modified my CPU pinning script to take this into consideration. The script ensures that each VM is pinned only to sibling threads, so its work stays on the same physical cores.
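
The heavy lifting is done by taskset. For a one-off, the manual equivalent looks like this (PID 12345 is hypothetical):

#Pin process 12345 to physical core 0 and its hyperthread sibling, thread 16
taskset -pc 0,16 12345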

This script should be used with caution. It pins processes to specific CPUs, which limits the kernel scheduler’s ability to move things around if needed. If configured badly this can cause the machine to lock up or VMs to be terminated.

I saw some impressive results spinning up four separate 8 core VMs and pinning them to sibling cores using this script. It almost doubled the rate at which I completed folding@home work units.

And now, the script:

#!/bin/bash
#Properly assign CPU cores to their respective die for EPYC/Threadripper systems
#Based on how hyperthreads are done in these systems
#cat /sys/devices/system/cpu/cpu{0..15}/topology/thread_siblings_list

#The script takes two arguments - the ID of the Proxmox VM to modify, and the core to begin the VM on
#If running this against multiple VMs, make sure to increment this second number by half of the cores of the previous VM
#For example, if I have one 8 core VM pinned with an offset of 0, then for a second VM the second argument would be 4
#this would ensure the second VM starts on core 4 (the 5th core) and assigns sibling cores to match

set -eo pipefail

#take First argument as which VMID to pin CPU cores to, the second argument is which core to start pinning to
VMID=$1
OFFSET=$2

#Determine offset for sibling threads
SIBLING_THREAD_OFFSET=$(cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list| sed 's/,/ /g' | awk '{print $2}')

#Function to determine number of CPU cores a VM has
cpu_tasks() {
	expect <<EOF | sed -n 's/^.* CPU .*thread_id=\(.*\)$/\1/p' | tr -d '\r' || true
spawn qm monitor $VMID
expect ">"
send "info cpus\r"
expect ">"
EOF
}

#Only act if VMID & OFFSET are set
if [[ -z $VMID  || -z $OFFSET ]]
then
	echo "Usage: cpupin.sh <VMID> <OFFSET>"
	exit 1
else
	#Get PIDs of each CPU core for VM, count number of VM cores, and get even/odd PIDs for assignment
	VCPUS=($(cpu_tasks))
	VCPU_COUNT="${#VCPUS[@]}"
	VCPU_EVEN_THREADS=($(for EVEN_THREAD in "${VCPUS[@]}"; do echo $EVEN_THREAD; done | awk '!(NR%2)'))
	VCPU_ODD_THREADS=($(for ODD_THREAD in "${VCPUS[@]}"; do echo $ODD_THREAD; done | awk '(NR%2)'))

	if [[ $VCPU_COUNT -eq 0 ]]; then
		echo "* No VCPUS for VM$VMID"
		exit 1
	fi

	echo "* Detected ${#VCPUS[@]} assigned to VM$VMID..."
	echo "* Resetting cpu shield..."

	#Start at offset CPU number, assign odd numbered PIDs to their own CPU thread, then increment CPU core number
	#0-3 if offset is 0, 4-7 if offset is 4, etc
	ODD_CPU_INDEX=$OFFSET
	for PID in "${VCPU_ODD_THREADS[@]}"
	do
		echo "* Assigning ODD thread $ODD_CPU_INDEX to $PID..."
		taskset -pc "$ODD_CPU_INDEX" "$PID"
		((ODD_CPU_INDEX+=1))
	done

	#Start at offset + sibling thread offset, assign even numbered PIDs to their own CPU thread, then increment CPU core number
	#16-19 if offset is 0,	20-23 if offset is 4, etc
	EVEN_CPU_INDEX=$(($OFFSET + $SIBLING_THREAD_OFFSET))
	for PID in "${VCPU_EVEN_THREADS[@]}"
	do
		echo "* Assigning EVEN thread $EVEN_CPU_INDEX to $PID..."
		taskset -pc "$EVEN_CPU_INDEX" "$PID"
		((EVEN_CPU_INDEX+=1))
	done
fi

Ubuntu 20.04 cloned VM same DHCP IP fix

I cloned an Ubuntu 20.04 VM and was frustrated to see both boxes kept getting the same DHCP IP address despite having different network MAC addresses. I finally found this helpful post, which states that Ubuntu 20.04 uses systemd-networkd for DHCP leases, which behaves differently than dhclient. As wickedchicken states,

systemd-networkd uses a different method to generate the DUID than dhclient. dhclient by default uses the link-layer address, while systemd-networkd uses the contents of /etc/machine-id. Since the VMs were cloned, they have the same machine-id and the DHCP server returns the same IP for both.

To fix, replace the contents of /etc/machine-id on one or both machines. This can be anything, but deleting the file and running systemd-machine-id-setup will create a random machine-id in the same way done on machine setup.

So my fix was to run the following on the cloned machine:

sudo rm /etc/machine-id
sudo systemd-machine-id-setup
sudo reboot

That did the trick!
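
To confirm the fix, compare the IDs on the two machines; after the steps above they should differ:

#Run on both machines and compare the output
cat /etc/machine-id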


For the systems that registered their hostnames under the wrong IPs, I had to take the following actions for my Ubuntu 20.04 desktop as well as my Ubiquiti USG-Pro 4:

Ubiquiti: Clear DHCP lease

clear dhcp lease ip <ip_address>

Ubuntu desktop: Flush DNS

sudo systemd-resolve --flush-caches

Folding@home OpenCL error fix

I decided to contribute my GPU on my Ubuntu-based system to the Folding@home effort for COVID-19. I kept getting this error message for my NVIDIA GeForce GTX 1050 Ti when I tried:

ERROR:WU00:FS00:Failed to start core: OpenCL device matching slot 0 not found, make sure the OpenCL driver is installed or try setting 'opencl-index' manually

I had the nvidia OpenCL packages installed but apparently missed something. I finally found on the Folding@home forum what I was missing: ocl-icd-opencl-dev

sudo apt install ocl-icd-opencl-dev

After running the above command and restarting the FAHClient service, the GPU started folding. For science!
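
One way to sanity-check that the OpenCL device is now visible is the clinfo utility (an assumption on my part; it’s a separate package in the Ubuntu repos):

#Install clinfo and list the detected OpenCL devices
sudo apt install clinfo
clinfo | grep -i 'device name'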


EDIT 5/6/2020: After a re-install I had the issue where the GPU wouldn’t show up at all. In addition to ocl-icd-opencl-dev, it looks like you also need nvidia-cuda-dev.

sudo apt install ocl-icd-opencl-dev nvidia-cuda-dev