Category Archives: CLI

Increment modified date of files in a directory based on first file

I had an issue in Immich where it was sorting pictures by their modified date. The modified dates are random, but the filenames are not. I wanted the album to sort by filename, and to do that I needed each file's modified time to follow the same order as its name. This was my solution (run within the directory in question):

date=$(date -r "$(ls | head -1)" +%s); for file in *.jpg; do touch -m -d "@$date" "$file"; ((date+=1)); done

This bash one-liner does the following:

  • Sets a date variable by taking the modified date of the first file in the directory and converting it to epoch time
  • Goes through each JPG file in the directory and executes a touch command to set the date of that file to the date variable
  • Increments the date variable by 1 before processing the next file

The end result is that the files' modified-date order now matches their filename order.
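
For readability, here is the same logic as a multi-line sketch. Like the one-liner, it assumes GNU date, and it quotes filenames so spaces don't break anything:

# multi-line version of the one-liner above
set -- *                            # positional parameters = directory contents in name order
date=$(date -r "$1" +%s)            # epoch mtime of the first entry in the directory
for file in *.jpg; do
    touch -m -d "@$date" "$file"    # stamp the file with the running timestamp
    ((date+=1))                     # the next file gets a time one second later
done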

Rename directory contents with prefix of directory

Quick snippet to rename every file within a directory so that the name of the directory it resides in becomes a prefix of the file name. If the directory name has spaces in it, they are replaced with underscores in the file name. Run from within the directory in question.

base=$(basename "$PWD"| tr ' ' '_'); for file in *; do [ -f "$file" ] && mv "$file" "${base}_$file"; done

It does the following:

  • Gets the name of the current directory, replacing spaces with underscores, and saves into the variable base
  • Iterates through everything in the directory in a for loop
  • If the item is a regular file, execute the mv command to rename the file to include the contents of the base variable as a prefix
    • It uses bash variable expansion (${base}_$file) to prepend the directory name to the new file name; see the example below
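
For example, run inside a hypothetical directory named Scan Batch 1:

$ ls
001.jpg  002.jpg
$ base=$(basename "$PWD" | tr ' ' '_'); for file in *; do [ -f "$file" ] && mv "$file" "${base}_$file"; done
$ ls
Scan_Batch_1_001.jpg  Scan_Batch_1_002.jpg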

This was helpful when dealing with a scanning project where many files had the same filename in different directories, which confused stacking images within Immich.

Get list of offline hosts with ping, grep & awk

Here is a simple bash one-liner that takes a list of hosts to check via stdin and attempts to ping each host a single time. If no response is received within 1 second, it prints that hostname and moves on to the next host. It’s designed to work with the output of another command that outputs hostnames (for example, an inventory file).

|awk '{print $6}'| xargs -I {} sh -c 'ping -c 1 -w 1 {} | grep -B1 100% | head -1' | awk '{print $2}'

It does the following:
  • Prints the 6th column of the output (you may or may not need this depending on what program is outputting hostnames)
  • xargs takes the output from the previous command and runs the ping command against it in a subshell
    • ping -c1 to only do it once, -w1 to wait 1 second for timeout
    • grep for 100%, grab the line before it (100% in this case means packet loss)
    • head -1 only prints the first line of the ping results
  • The final awk prints only the second field of the resulting ping statistics line, which is the hostname

It takes output like this:

PING examplehost (10.13.12.12) 56(84) bytes of data.

--- examplehost ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

And simply outputs this:

examplehost

but only if the ping failed. No output otherwise.
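
For reference, if your list of hosts is a plain file with one hostname per line, you can drop the leading awk and feed it like this (hosts.txt is a made-up filename):

cat hosts.txt | xargs -I {} sh -c 'ping -c 1 -w 1 {} | grep -B1 100% | head -1' | awk '{print $2}'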

I will note that Anthropic’s Claude Sonnet model helped me come to this conclusion, but not directly. Its suggestions for my problem didn’t work, but they were enough to point me in the right direction. The grep -B1 100% | head -1 portion needs to be grouped together with the ping command in a separate shell, not appended afterward.

Generate list of youtube links from song titles

I needed to get a list of YouTube links from a list of song titles. Thanks to this reddit post I was able to get what I needed. I did have to update it to use yt, the command provided by yewtube, a fork of the referenced mps-youtube package.

After installing yewtube per https://github.com/mps-youtube/yewtube#installation I was able to get what I wanted with this one-liner:

while read song; do echo "$song"; yt search "$song", i 1, q | grep -i link | awk -F ': ' '{ print $2 }'; done < playlist

The above command reads a playlist file containing only artist and song names, prints each song name to the console for reference, then uses yewtube to search YouTube for that song, select the first result, grab the link, and print it to the screen.
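
For reference, the playlist file is just one artist and song per line, something like this (entries made up):

Some Artist - Some Song
Another Artist - Another Song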

I had to double check that the correct version of the song was selected, but for the most part it did exactly what I needed!

Add NVIDIA GPU to LXC container

I followed this guide to get NVIDIA drivers working on my Proxmox machine. However, when I tried to get them working in my container, I couldn’t see how to get nvidia-smi installed. Thankfully this blog had what I needed.

The step I missed was copying & installing the NVIDIA drivers into the container with this flag:

--no-kernel-module
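
In practice, that means running the same .run driver installer inside the container with the flag appended, something like this (the installer filename and version here are just an example; it must match the driver version installed on the host):

./NVIDIA-Linux-x86_64-550.54.14.run --no-kernel-module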

That got me one step closer, but I could not spin up open-webui in a container. I kept getting the error

Error response from daemon: could not select device driver "nvidia" with capabilities: [[gpu]]

The fix was to install the NVIDIA Container Toolkit:

Configure the production repository:

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

Update the packages list from the repository:

sudo apt-get update

Install the NVIDIA Container Toolkit packages:

sudo apt-get install -y nvidia-container-toolkit

An additional hurdle I encountered was this error:

Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running prestart hook #0: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: mount error: failed to add device rules: unable to find any existing device filters attached to the cgroup: bpf_prog_query(BPF_CGROUP_DEVICE) failed: operation not permitted: unknown

I found here that the fix is to change a line in /etc/nvidia-container-runtime/config.toml. Uncomment and change no-cgroups to true.

no-cgroups = true

Success.

Not working after reboot

I had a working config until I rebooted the host. It turns out that two things need to run on the host after boot:

nvidia-persistenced
nvidia-smi

I configured a cron job to run these on reboot:

/etc/cron.d/nvidia:
@reboot root /usr/bin/nvidia-smi
@reboot root /usr/bin/nvidia-persistenced

Update 2025-05-06

I encountered an error when trying to set up alltalk tts:


nvidia-container-cli: mount error: stat failed: /dev/nvidia-modeset: no such file or directory: unknown

It turns out I needed to expose /dev/nvidia-modeset to the container as well. Thanks to this reddit post for the answer. The complete container passthrough config (in the container's config file under /etc/pve/lxc/ on the Proxmox host) is now this:

lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 243:* rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file

Recursively find files with the same filename

I needed a way to find files that share the same filename but are not necessarily identical files. Thankfully Reddit had the solution I was looking for: a combination of find, sort, and a while loop with if statements.

https://www.reddit.com/r/bash/comments/fjsr8v/recursively_find_files_with_same_name_under_a/

# print "basename/fullpath" NUL-terminated, sort by basename so identical names are adjacent;
# the trailing printf '\0' adds an empty record so the final group is flushed by the loop
find . -type f -printf '%f/%p\0' | { sort -z -t/ -k1; printf '\0';} |
while IFS=/ read -r -d '' name file; do
    if [[ "$name" = "$oldname" ]]; then
        repeated+=("$file")  # same basename as the previous record: add it to the group
        continue
    fi
    # basename changed: if the previous group had more than one path, it's a set of duplicates
    if (( ${#repeated[@]} > 1 )); then
        printf '%s\n' "$oldname" "${repeated[@]}" ''
        # do something with list "${repeated[@]}"
    fi
    repeated=("$file")   # start a new group
    oldname=$name
done
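
With a made-up directory tree, the output looks like this: the shared filename first, then each full path in the group, then a blank line:

001.jpg
./album one/001.jpg
./album two/001.jpg

002.jpg
./album one/002.jpg
./album three/002.jpg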

Configure Proxmox Mail Gateway to use AnyMX relay

I needed to configure Proxmox Mail Gateway to use an authenticated SMTP relay for outgoing mail. There is no way to add a username and password in the PMG GUI; however, you can do it in the command line and it follows standard postfix syntax. Thanks to this post for helping me get it set up: https://forum.proxmox.com/threads/relay-username-and-password.129586

To get it to work you have to drop to the CLI and configure your username and password. Then copy the template file over and make your changes there, as editing the postfix configuration directly gets overwritten by subsequent GUI changes.

mkdir /etc/pmg/templates/

cp /var/lib/pmg/templates/main.cf.in /etc/pmg/templates/main.cf.in

Create /etc/postfix/smtp_auth and populate it with your relay host and credentials:

relay.host.tld   username:password

Append the following to the /etc/pmg/templates/main.cf.in template:

smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/smtp_auth
smtp_sasl_security_options = noanonymous

Change permissions on the smtp_auth file, run postmap to generate the hashed database containing the password, and run pmgconfig to refresh the configuration.

chmod 640 /etc/postfix/smtp_auth
postmap /etc/postfix/smtp_auth
pmgconfig sync --restart 1
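
To confirm the relay is being used and authenticating, one option is to watch the mail log on the PMG host while sending a test message (the log location may vary on your system):

tail -f /var/log/mail.log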

This worked for me.

Change ceph network

My notes on changing which network your Proxmox Ceph cluster lives in. In my case I wanted to switch from a 10 gig network to a 40 gig network in a different subnet. Source: https://forum.proxmox.com/threads/ceph-changing-public-network.119116

  1. Change the network configuration in “ceph.conf”
    • Be sure to edit both the cluster network and public network (see the example after this list)
  2. Destroy and recreate monitors (one by one)
  3. Destroy and recreate managers (one by one, leaving the active one for last)
  4. Destroy and recreate metadata servers (one by one, leaving the active one for last)
  5. Restart OSDs one by one (or a few at a time, depending on how many OSDs you have in the cluster) so you avoid restarting the hosts
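
As an example for step 1, the relevant lines in /etc/pve/ceph.conf end up looking something like this (the subnet is made up; use your new network):

[global]
     cluster_network = 10.40.40.0/24
     public_network = 10.40.40.0/24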

Get CEPH running on new Proxmox node

pveceph install --repository no-subscription

Move OSDs to new host

Source: https://forum.proxmox.com/threads/move-osd-to-another-node.33965/page-2

Follow a procedure similar to the one above: down each OSD one by one on the old host, remove the drives, and place them in the new host. Then run the following:

pvscan
ceph-volume lvm activate --all
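
To confirm the OSDs came back up under the new host, check the OSD tree:

ceph osd tree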

Troubleshooting

Unable to remove monitor with unknown status

https://forum.proxmox.com/threads/ceph-cant-remove-monitor-with-unknown-status.63613

rm -r /var/lib/ceph/mon/ceph-pve2/

Remove failed host

I had to edit /etc/pve/ceph.conf manually and remove the failed host’s entries; it wouldn’t work in the Proxmox GUI.

Install Apache Guacamole 1.5.5 with docker-compose

I decided I needed to update my Apache Guacamole instance to the latest version, 1.5.5. Unfortunately the git repo I provided in my last article about it (https://techblog.jeppson.org/2021/03/guacamole-docker-quick-and-easy/) doesn’t appear to work properly, even with a fresh install. So I set about rebuilding from scratch. I found this article which helped me do it. I updated the version from 1.4.0 to 1.5.5 and it worked beautifully.

Make guacamole directory

mkdir guacamole
cd guacamole

Pull down images

docker pull guacamole/guacamole:1.5.5
docker pull guacamole/guacd:1.5.5
docker pull mariadb:10.9.5

Grab database initialization file

docker run --rm guacamole/guacamole:1.5.5 /opt/guacamole/bin/initdb.sh --mysql > initdb.sql

Make initial docker-compose.yml file with just the database for now:

services:
  guacdb:
    container_name: guacamoledb
    image: mariadb:10.9.5
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'MariaDBRootPass'
      MYSQL_DATABASE: 'guacamole_db'
      MYSQL_USER: 'guacamole_user'
      MYSQL_PASSWORD: 'MariaDBUserPass'
    volumes:
      - './db-data:/var/lib/mysql'
volumes:
  db-data:

Copy the SQL script into the container and execute it

docker cp initdb.sql guacamoledb:/initdb.sql
sudo docker exec -it guacamoledb bash
cat /initdb.sql | mysql -u root -p guacamole_db
<insert MYSQL_ROOT_PASSWORD as defined earlier>
exit
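
Alternatively, the same import can be done non-interactively in one line (a sketch that assumes the same container name and root password as above):

sudo docker exec -i guacamoledb mysql -u root -p'MariaDBRootPass' guacamole_db < initdb.sql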

Add the guacd & guacamole sections to your docker-compose.yml file

This is the end result:

services:
  guacdb:
    container_name: guacamoledb
    image: mariadb:10.9.5
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'MariaDBRootPass'
      MYSQL_DATABASE: 'guacamole_db'
      MYSQL_USER: 'guacamole_user'
      MYSQL_PASSWORD: 'MariaDBUserPass'
    volumes:
      - './db-data:/var/lib/mysql'
  guacd:
    container_name: guacd
    image: guacamole/guacd:1.5.5
    restart: unless-stopped
  guacamole:
    container_name: guacamole
    image: guacamole/guacamole:1.5.5
    restart: unless-stopped
    ports:
      - 8080:8080
    environment:
      GUACD_HOSTNAME: "guacd"
      MYSQL_HOSTNAME: "guacdb"
      MYSQL_DATABASE: "guacamole_db"
      MYSQL_USER: "guacamole_user"
      MYSQL_PASSWORD: "MariaDBUserPass"
      TOTP_ENABLED: "true"
    depends_on:
      - guacdb
      - guacd
volumes:
  db-data:

Start docker compose stack

Finally run docker compose up -d to get everything up and running again.

Remove /guacamole in the URL

The article says guacamole must have /guacamole at the end of the URL, but that is not correct. There is an environment variable you can pass to the container to tell the web application to run at the root context instead of the /guacamole subdirectory. If this is your desire, simply add

WEBAPP_CONTEXT: "ROOT"

to the guacamole section in your docker compose file and re-run sudo docker compose up -d

Here is my final docker compose file for Guacamole 1.5.5:

services:
  guacdb:
    container_name: guacamoledb
    image: mariadb:10.9.5
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'MariaDBRootPass'
      MYSQL_DATABASE: 'guacamole_db'
      MYSQL_USER: 'guacamole_user'
      MYSQL_PASSWORD: 'MariaDBUserPass'
    volumes:
      - './db-data:/var/lib/mysql'

  guacd:
    container_name: guacd
    image: guacamole/guacd:1.5.5
    restart: unless-stopped

  guacamole:
    container_name: guacamole
    image: guacamole/guacamole:1.5.5
    restart: unless-stopped
    ports:
      - 8080:8080
    environment:
      GUACD_HOSTNAME: "guacd"
      MYSQL_HOSTNAME: "guacdb"
      MYSQL_DATABASE: "guacamole_db"
      MYSQL_USER: "guacamole_user"
      MYSQL_PASSWORD: "MariaDBUserPass"
      TOTP_ENABLED: "true"
      WEBAPP_CONTEXT: "ROOT"
    depends_on:
      - guacdb
      - guacd

volumes:
  db-data:

Unbind vfio driver from device in Proxmox

I found myself with a Proxmox server that wouldn’t do anything with its network card. It took me a while to realize that at one point I had bound it to a VM. Even after removing it from the VM, the host wouldn’t do anything with it.

Discover which driver a device is using:

lspci -knn

In my case I found the culprit: the network card was still claimed by the vfio-pci driver:

08:00.0 Network controller [0280]: Mellanox Technologies MT27500 Family [ConnectX-3] [15b3:1003]
        Subsystem: Mellanox Technologies MT27500 Family [ConnectX-3] [15b3:0050]
        Kernel driver in use: vfio-pci
        Kernel modules: mlx4_core

I finally found in this post how to tell the kernel to unbind the device from vfio-pci and bind it to the network driver mlx4_core. Using the PCI bus location and device ID from the lspci output above, I was able to reclaim the network adapter for my host:

echo -n "0000:08:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind
echo -n "15b3 1003" > /sys/bus/pci/drivers/vfio-pci/remove_id
echo -n "0000:08:00.0" > /sys/bus/pci/drivers/mlx4_core/bind

Success!