Category Archives: CLI

Transcribe audio with Google Cloud Speech-to-Text API

I had a few audio files of an interview with a late relative that I wanted Google to transcribe for me. I wanted to supply an audio file and have it spit out the results. There are many ways to do this, but I went with the Google Cloud Platform Speech-to-Text API.

First I signed up for a GCP free trial via https://cloud.google.com/speech-to-text/. For my usage it will remain free, as the first 60 minutes of transcription per month are not charged: https://cloud.google.com/speech-to-text/pricing

Next, I needed to create a GCP storage bucket, as audio longer than 10 minutes cannot reliably be transcribed via the “uploading local file” option. I did this following the documentation at https://cloud.google.com/storage/docs/creating-buckets, which walks you through going to their storage browser and creating a new bucket. From that screen I uploaded my audio files (FLAC in my case.)

Then I needed to create API credentials. I did this by going to the Speech API console’s credentials tab and creating a service account, then saving the key to my working directory on my local computer.

Also on said computer I installed google-cloud-sdk (on Arch Linux in my case, it was as simple as yay -S google-cloud-sdk.)
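
The SDK also ships with gsutil, so the bucket upload could alternatively be done from the command line instead of the storage browser. A quick sketch, using the same placeholder bucket and file names as below:

gsutil cp family-memories.flac gs://google-storage-bucket-name/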

With the service account JSON file downloaded and google-cloud-sdk installed, I exported the GCP service account credentials into my bash environment like so:

export GOOGLE_APPLICATION_CREDENTIALS=NAME_OF_SERVICE_ACCOUNT_KEYFILE_DOWNLOADED_EARLIER.json

I created .json files following the format outlined in the command line usage section of the quickstart documentation. I tweaked them to add the line “model”: “video” so the API would use the premium video recognition model (it was more accurate for this type of recording.) This is what my JSON file looked like:

{
  "config": {
      "encoding":"FLAC",
      "sampleRateHertz": 16000,
      "languageCode": "en-US",
      "enableWordTimeOffsets": false,
      "model": "video"

  },
  "audio": {
      "uri":"gs://googlestorarge-bucket-name/family-memories.flac"
  }
}
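
One thing to double-check: the sampleRateHertz value should match the actual sample rate of the FLAC file, or the API may reject the request. A quick way to verify it (assuming the flac package, which provides metaflac, is installed):

metaflac --show-sample-rate family-memories.flac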

I then used curl to send the transcription request to Google. This was my command:

curl -s -H "Content-Type: application/json" -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) https://speech.googleapis.com/v1/speech:longrunningrecognize -d @JSON_FILE_CREATED_ABOVE.json

If all goes well you will get something like this in response:

{
  "name": "4663803355627080910"
}

You can check the status of the transcription, which usually takes about half the length of the audio file to complete, by running this command:

curl -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) -H "Content-Type: application/json; charset=utf-8" "https://speech.googleapis.com/v1/operations/ID_NUMBER_ACQUIRED_ABOVE"

You will either get a percent progress or, if it’s done, the output of the transcription.
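
Once the operation reports done, the transcript itself is nested inside the JSON response. A rough way to pull out just the text, assuming jq is installed (the field names follow the longrunningrecognize response format):

curl -s -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) "https://speech.googleapis.com/v1/operations/ID_NUMBER_ACQUIRED_ABOVE" | jq -r '.response.results[].alternatives[0].transcript'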

Success! It took some time to figure out but was still much better than manually transcribing the audio by hand.

Send test syslog messages with nc

I needed to send some test packets over UDP to make sure connectivity was working. I found this site, which outlined how to do it really well:

nc -u <IP/hostname> <port>

Then on the next line you can type test messages, hitting CTRL+D when done. In my case I wanted to test sending syslog data, so I ran nc -u <hostname> 514 and then wrote test messages. The -u specifies UDP, and 514 is the standard syslog port. I was then able to confirm on the other end that the messages were received. Handy.
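
The same test can be done non-interactively by piping a message into nc. A rough sketch (the hostname is a placeholder, and <14> is a syslog priority meaning facility "user", severity "info"):

printf '<14>%s myhost nc-test: hello from nc\n' "$(date '+%b %d %H:%M:%S')" | nc -u -w1 <hostname> 514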

Move git subdirectory into new repo

I had a need to take a folder in one git repository and create a whole new git repository with it, preserving history for all files inside. My desire to keep git history made the process a bit more complicated than simply copying the directory into a new git repository.

First, create a new folder on the git server. I’m all command line, no GUI yet, so I need to make it a bare repository. (Thanks to geeksforgeeks for explaining how to do this.)

#On the main git "server"
mkdir <reponame>.git
cd <reponame>.git
git init --bare

Now, on the desktop (not the git server), clone a copy of the repository containing your desired folder into a new directory, remove the git origin remote, then strip out everything except that directory (thanks to gbayer.com for the info.)

#On the desktop
git clone <initial git repository url> <new_directory_name>
cd <new_directory_name>
git remote rm origin
git filter-branch --subdirectory-filter <directory_to_keep_history_of> -- --all

Lastly (still on the desktop, in the new repository directory) add a new origin pointing at the path of the new bare repo you created above, and push:

#On the desktop, inside new_directory_name
git remote add origin <server>:<path_to_new_repo_folder>
git push --set-upstream origin master
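
Putting it all together with made-up names (a purely hypothetical example; substitute your own server, repository, and folder names), splitting a scripts folder out of a monorepo would look something like this:

#On the main git "server"
mkdir scripts.git
cd scripts.git
git init --bare

#On the desktop
git clone gitserver:/srv/git/monorepo.git scripts-split
cd scripts-split
git remote rm origin
git filter-branch --subdirectory-filter scripts -- --all
git remote add origin gitserver:/srv/git/scripts.git
git push --set-upstream origin master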

Threadripper / Epyc processor core optimization

I had a pet project (folding@home) where I wanted to maximize computing power. I became frustrated with the default CPU scheduling of my folding@home threads. Ideal performance would keep related threads on the same physical CPU, but the threads were jumping all over the place, which hurt performance.

Step one was to figure out which threads belonged to which physical cores. I found on this site that you can use cat to find out what your “sibling threads” are:

cat /sys/devices/system/cpu/cpu{0..15}/topology/thread_siblings_list

The above command is for my Threadripper & Epyc systems, which each have 16 physical cores hyperthreaded to 32 logical cores. Adjust the {0..15} range to match your number of cores (core 0 being the first core.) This was my output:

cat /sys/devices/system/cpu/cpu{0..15}/topology/thread_siblings_list

0,16
1,17
2,18
3,19
4,20
5,21
6,22
7,23
8,24
9,25
10,26
11,27
12,28
13,29
14,30
15,31

Now that I know the sibling threads are offset by 16, I can use this information to optimize my folding@home VMs. I modified my CPU pinning script to take this into account. The script pins each VM to sibling threads only, so all of its work stays on the same physical cores.

This script should be used with caution. It pins processes to specific CPUs, which limits the kernel scheduler’s ability to move things around if needed. If configured badly this can cause the machine to lock up or VMs to be terminated.

I saw some impressive results spinning up four separate 8 core VMs and pinning them to sibling cores using this script. It almost doubled the rate at which I completed folding@home work units.

And now, the script:

#!/bin/bash
#Properly assign CPU cores to their respective die for EPYC/Threadripper systems
#Based on how hyperthreads are done in these systems
#cat /sys/devices/system/cpu/cpu{0..15}/topology/thread_siblings_list

#The script takes two arguments - the ID of the Proxmox VM to modify, and the core to begin the VM on
#If running this against multiple VMs, make sure to increment this second number by half of the cores of the previous VM
#For example, if I have one 8 core VM and I run this script specifying 0 for the offset, if I spin up a second VM, the second argument would be 4
#this would ensure the second VM starts on core 4 (the 5th core) and assigns sibling cores to match

set -eo pipefail

#take First argument as which VMID to pin CPU cores to, the second argument is which core to start pinning to
VMID=$1
OFFSET=$2

#Determine offset for sibling threads
SIBLING_THREAD_OFFSET=$(cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list| sed 's/,/ /g' | awk '{print $2}')

#Function to determine number of CPU cores a VM has
cpu_tasks() {
	expect <<EOF | sed -n 's/^.* CPU .*thread_id=\(.*\)$/\1/p' | tr -d '\r' || true
spawn qm monitor $VMID
expect ">"
send "info cpus\r"
expect ">"
EOF
}

#Only act if VMID & OFFSET are set
if [[ -z $VMID  || -z $OFFSET ]]
then
	echo "Usage: cpupin.sh <VMID> <OFFSET>"
	exit 1
else
	#Get PIDs of each CPU core for VM, count number of VM cores, and get even/odd PIDs for assignment
	VCPUS=($(cpu_tasks))
	VCPU_COUNT="${#VCPUS[@]}"
	VCPU_EVEN_THREADS=($(for EVEN_THREAD in "${VCPUS[@]}"; do echo $EVEN_THREAD; done | awk '!(NR%2)'))
	VCPU_ODD_THREADS=($(for ODD_THREAD in "${VCPUS[@]}"; do echo $ODD_THREAD; done | awk '(NR%2)'))

	if [[ $VCPU_COUNT -eq 0 ]]; then
		echo "* No VCPUS for VM$VMID"
		exit 1
	fi

	echo "* Detected ${#VCPUS[@]} assigned to VM$VMID..."
	echo "* Resetting cpu shield..."

	#Start at offset CPU number, assign odd numbered PIDs to their own CPU thread, then increment CPU core number
	#0-3 if offset is 0, 4-7 if offset is 4, etc
	ODD_CPU_INDEX=$OFFSET
	for PID in "${VCPU_ODD_THREADS[@]}"
	do
		echo "* Assigning ODD thread $ODD_CPU_INDEX to $PID..."
		taskset -pc "$ODD_CPU_INDEX" "$PID"
		((ODD_CPU_INDEX+=1))
	done

	#Start at offset + CPU count, assign even number PIDs to their own CPU thread, then increment CPU core number
	#16-19 if offset is 0,	20-23 if offset is 4, etc
	EVEN_CPU_INDEX=$(($OFFSET + $SIBLING_THREAD_OFFSET))
	for PID in "${VCPU_EVEN_THREADS[@]}"
	do
		echo "* Assigning EVEN thread $EVEN_CPU_INDEX to $PID..."
		taskset -pc "$EVEN_CPU_INDEX" "$PID"
		((EVEN_CPU_INDEX+=1))
	done
fi
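
As a usage sketch (the VMIDs here are hypothetical): for four 8-vCPU VMs on one of these 16-core chips, the offset increases by half of the previous VM's vCPU count, per the comments in the script:

./cpupin.sh 101 0
./cpupin.sh 102 4
./cpupin.sh 103 8
./cpupin.sh 104 12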

Ubuntu 20.04 cloned VM same DHCP IP fix

I cloned an Ubuntu 20.04 VM and was frustrated to see that both boxes kept getting the same DHCP IP address despite having different network MAC addresses. I finally found this helpful post, which explains that Ubuntu 20.04 uses systemd-networkd for DHCP leases, which behaves differently than dhclient. As wickedchicken states,

systemd-networkd uses a different method to generate the DUID than dhclient. dhclient by default uses the link-layer address, while systemd-networkd uses the contents of /etc/machine-id. Since the VMs were cloned, they have the same machine-id and the DHCP server returns the same IP for both.

To fix, replace the contents of /etc/machine-id on one or both machines. This can be anything, but deleting the file and running systemd-machine-id-setup will create a new random machine-id the same way it is done on machine setup.
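
A quick way to confirm this is the culprit is to compare the machine IDs on the original and the clone; on cloned VMs they will be identical:

cat /etc/machine-id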

So my fix was to run the following on the cloned machine:

sudo rm /etc/machine-id
sudo systemd-machine-id-setup
sudo reboot

That did the trick!


For the systems that registered their hostnames under the wrong IPs, I had to take the following actions on my Ubuntu 20.04 desktop as well as my Ubiquiti USG-Pro 4:

Ubiquiti: Clear DHCP lease

clear dhcp lease ip <ip_address>

Ubuntu desktop: Flush DNS

sudo systemd-resolve --flush-caches

Folding@home OpenCL error fix

I decided to contribute my GPU on my Ubuntu-based system to the Folding@Home effort for COVID-19. I kept getting this error message for my NVIDIA GeForce GTX 1050 Ti when I tried:

ERROR:WU00:FS00:Failed to start core: OpenCL device matching slot 0 not found, make sure the OpenCL driver is installed or try setting 'opencl-index' manually

I had the NVIDIA OpenCL packages installed but apparently missed something. I finally found on the Folding@home forum what I was missing: ocl-icd-opencl-dev

sudo apt install ocl-icd-opencl-dev

After running the above command and restarting the FAHClient service, the GPU started folding. For science!


EDIT 5/6/2020: After a re-install I had the issue where the GPU wouldn’t show up at all. In addition to ocl-icd-opencl-dev, it looks like you also need nvidia-cuda-dev.

sudo apt install ocl-icd-opencl-dev nvidia-cuda-dev
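
If the GPU still doesn't show up, one extra sanity check (not part of the original fix) is to confirm the OpenCL stack actually sees the card, for example with the clinfo utility:

sudo apt install clinfo
clinfo | grep -i 'device name'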

Sort by middle of a string

I had a list of items I wanted to sort in a non-standard way:

app-function1.site1.jeppson.org
app-function2.site3.jeppson.org
app-function3.site4.jeppson.org
app-function4.site2.jeppson.org
app-function1.site6.jeppson.org
app-function3.site9.jeppson.org
app-function4.site7.jeppson.org

It’s a generalized list for publication but you get the idea. I wanted to sort by site name. Thanks to this post I found it’s relatively easy. You can tell the sort command which character to use as the field delimiter (-t) and then specify which “column” to use as the sort key (-k).

In my case I sorted by site by specifying the dot character '.' as the delimiter and the second “column” as the key with '-k2'.

The end result was this:

cat apps-by-site-unsorted.txt | sort -t. -k2
app-function1.site1.jeppson.org
app-function4.site2.jeppson.org
app-function2.site3.jeppson.org
app-function3.site4.jeppson.org
app-function1.site6.jeppson.org
app-function4.site7.jeppson.org
app-function3.site9.jeppson.org

Success
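
One caveat: this is a plain lexical sort, so once the site numbers reach double digits, site10 would sort before site2. Since the “site” prefix is fixed width, GNU sort can be told to start the key at the fifth character of the second field and compare numerically (a sketch, assuming GNU sort):

sort -t. -k2.5,2n apps-by-site-unsorted.txt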

git checkout only specific directory from repo

I have a git repo where I just wanted a specific folder, not the entire repo, cloned to one of my virtual machines. Git doesn’t handle this straightforwardly, but thanks to this article I found there is a roundabout way of doing it by combining a git sparse checkout with a git shallow checkout.

Below are the commands to run (I ran these directly in my home directory.) Replace FOLDER with the folder from within the repository you wish to clone.

git init <repo> 
cd <repo>
git remote add origin <url to remote repo> 
git config core.sparsecheckout true 
echo "FOLDER/*" >> .git/info/sparse-checkout 
git pull --depth=1 origin master 

Success! Now this particular machine only has the folder within the repo I want, not the entire git repository.
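
If I later need another folder from the same repo, it can be appended to the sparse-checkout list and the working tree re-applied (a rough sketch; newer git releases also offer the friendlier git sparse-checkout command for the same thing):

echo "OTHER_FOLDER/*" >> .git/info/sparse-checkout
git read-tree -mu HEAD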

JQ select specific value from array

I had some AWS EC2 JSON output that I needed to parse. I wanted to grab a specific value from an array, and it proved to be tricky for a JSON noob like me. I finally found this site which was very helpful: https://garthkerr.com/search-json-array-jq/. In my case I wanted the value of a specific AWS EC2 tag.

The trick is to drill down to the Tags[] array and then pipe that to a select statement. If your tags have dots in them (as mine did), make sure to quote the tag name. Then add .Value to the end of the select statement. This is my query:

jq -r '.Reservations[].Instances[].Tags[] | select (.Key == "EC2.Tag.Name").Value' jsonfile.json

The above query grabs all the tags (an array of Key/Value objects), then searches the result for the specific key “EC2.Tag.Name” and returns the Value associated with it.
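
For context, this is roughly the shape of the describe-instances output that the query walks through (a trimmed, hypothetical example; the tag values are made up). Against this input, the query above would print web-server-01:

{
  "Reservations": [
    {
      "Instances": [
        {
          "Tags": [
            { "Key": "EC2.Tag.Name", "Value": "web-server-01" },
            { "Key": "environment", "Value": "prod" }
          ]
        }
      ]
    }
  ]
}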