Fix no network after Proxmox 7 upgrade

I upgraded my Proxmox server to version 7 and was dismayed to find it had no network connectivity after a reboot. After much digging I finally found this post, which mentioned:

After installing ifupdown2 everything works fine.

Sure enough, ifupdown2 was no longer installed, and I had configured my networks with it. I had to manually assign an IP address to my node long enough to issue the command:
apt install ifupdown2
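
For reference, temporarily assigning an address by hand can look something like this; the bridge name and addresses below are placeholders, not values from the original fix:

ip addr add 192.168.1.10/24 dev vmbr0
ip link set vmbr0 up
ip route add default via 192.168.1.1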

Once I rebooted, everything came up like it should. Lesson learned: if you use ifupdown2, you must make sure it’s there before you reboot your server!

Reset root password on OPNSense with ZFS root

A lot of the guides for resetting the root password on an OPNSense box assume a UFS root partition. The password recovery steps do not work if you installed OPNSense with a ZFS root partition. If you try to follow them you get a lovely error about an “unrecognized filesystem”.

The process for ZFS (thanks to this article) is instead to run the following commands:

zfs set readonly=off zroot
zfs mount -a

Once that is done, you can proceed with the rest of the steps. Password recovery steps in full:

  1. Press the number 2 immediately on boot to go into single user mode
  2. Press enter when prompted for shell
  3. Make ZFS read/write:
    1. zfs set readonly=off zroot
    2. zfs mount -a
  4. Reset password
    1. opnsense-shell password
  5. Reboot
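
If you're not sure your pool is actually named zroot (the default on a stock install), you can check from that same shell first; these are standard ZFS commands:

zpool list
zfs get readonly zroot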

Saltstack: set variables within a JINJA set parameter

It took me a while to understand how to insert variables into a JINJA set statement. I read this tutorial, which was helpful but still didn’t get me what I wanted. I wanted to be able to set a variable within a {% set %} parameter, something like this:

{% set variable = salt['vault'].read_secret('super/secret/path/{{ variable }}/more/secret/path/{{ another_variable}}', 'username') %}

Except that didn’t work. It simply rendered the brackets instead of inserting the variable.

I finally came across this stackoverflow page which outlined what I needed to do: much like string concatenation in other languages, I needed to quote the parts I wanted taken literally, then add a + sign to insert the variable, then another + sign for the rest of the parameter. The correct syntax is as follows:

{% set variable = salt['vault'].read_secret('super/secret/path/' + variable + '/more/secret/path/' + another_variable, 'username') %}

This worked beautifully.
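
As an aside, Jinja also has a dedicated string-concatenation operator, ~, which stringifies its operands; the same statement could presumably also be written as:

{% set variable = salt['vault'].read_secret('super/secret/path/' ~ variable ~ '/more/secret/path/' ~ another_variable, 'username') %}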

Connect to Ubiquiti L2TP VPN with NetworkManager in Arch

I’ve recently moved and needed to connect to my (still existing) home network from my desktop. I’ve never had to VPN from my desktop before, so here are my notes for getting it working.

Configuration

  1. Install the necessary l2tp, pptp, and libreswan packages (I’m using yay as my package manager)
    yay -Sy community/networkmanager-l2tp community/networkmanager-pptp aur/networkmanager-libreswan aur/libreswan
  2. Configure VPN in GNOME settings (close settings window first if it was already open)
    1. Add VPN / Layer 2 Tunneling Protocol (L2TP)
    2. Gateway: IP/DNS of VPN
    3. User Authentication: Type: password
    4. IPsec Settings: Type: Pre-shared Key (PSK)
    5. PPP settings: for authentication, check only MSCHAPv2 and uncheck everything else. MPPE Security: 128-bit (most secure)

Troubleshooting

If something isn’t working, the popup is not very descriptive. NetworkManager logs are stored in journald, so the best way to troubleshoot is to follow the logs (-f for follow, -u for unit name):

sudo journalctl -f -u NetworkManager

In my case, following the NetworkManager logs I could see I didn’t have libreswan fully installed, and installing the libreswan package fixed it.
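
A quick sanity check that the IPsec stack is actually present is to ask for its version (libreswan ships the ipsec command):

ipsec --version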

rsync create directory tree on remote host

I ran into an issue where I wanted to use rsync to copy a folder to a remote host into a destination directory that didn’t yet exist. I was frustrated to find that rsync doesn’t appear to be able to create a remote directory tree. It would keep erroring out with this message:

rsync: mkdir "/opt/splunk/var/run/searchpeers" failed: No such file or directory (2)

I discovered this workaround which allowed me to finally accomplish what I wanted in one line: create the remote directory structure first, then synchronize into it. This is done with the --rsync-path option. You can specify the mkdir -p command beforehand, then add the rsync command after a double ampersand (&&).
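
In generic form, with placeholder paths rather than my real ones, the trick looks like this:

rsync -aP --rsync-path="mkdir -p /path/on/remote && rsync" /local/source/dir remote_host:/path/on/remote

Note that rsync’s usual trailing-slash semantics still apply: a source of dir/ copies the directory’s contents, while dir copies the directory itself into the destination.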

My specific use case was to copy a Splunk search peer bundle from one indexer to another. This was my working one liner:

rsync -aP --rsync-path="sudo mkdir -p /opt/splunk/var/run/searchpeers && sudo rsync" /opt/splunk/var/run/searchpeers splunk-idx2.jeppson.org:/opt/splunk/var/run/searchpeers

Success.

Restore files from remote borg repository disk image

My off-site backup involves sending borgbackup archives of VM images to a remote Synology server. I recently needed to restore a single file from one of the VM images stored within this borg backup repository on the remote server. My connection to this server is not very fast, so I didn’t want to wait to download the entire image file to mount it locally.

My solution was to mount the remote borgbackup repository on my local machine over SSH so I could poke around for and copy the specific file I wanted. This requires the borgbackup binary to be present on the remote machine. Since it’s a Synology, I simply copied the standalone binary over.

The restore process was complicated by the fact that the VM disk image is owned by root, so in order to access the file I needed to mount the remote repository as root.

This is the process:

  1. Set BORG_REMOTE_PATH
    1. export BORG_REMOTE_PATH=<PATH_TO_BORG_BINARY_ON_REMOTE_SYSTEM>
  2. (Arch Linux): install python-llfuse
  3. Mount repository over SSH:
    1. borg mount <USER>@<REMOTE_SYSTEM>:<PATH_TO_REMOTE_BORGBACKUP_REPOSITORY>::<BACKUP_NAME> <MOUNT_FOLDER>
  4. Follow disk image mounting process
    1. losetup -Pr -f <PATH_TO_MOUNTED_BORGBACKUP>/<FILENAME_OF_VM_IMAGE>
    2. mount -o ro /dev/loop0p2 /mnt/loop0/
  5. Follow reverse to unmount when done:
    1. umount /mnt/loop0
    2. losetup -d /dev/loop0
    3. borg umount <MOUNT_FOLDER>
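
For illustration, here is the whole sequence with made-up values; the user, host, repository path, archive name, and image filename below are all hypothetical:

export BORG_REMOTE_PATH=/usr/local/bin/borg
borg mount backup@synology.example.com:/volume1/borg-repo::vm-backup-2021-06-01 /mnt/borg
losetup -Pr -f /mnt/borg/images/myvm.img
mount -o ro /dev/loop0p2 /mnt/loop0/
cp /mnt/loop0/path/to/the/file ~/restored-file
umount /mnt/loop0
losetup -d /dev/loop0
borg umount /mnt/borg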

Success! I was able to restore an individual file within a raw VM image backup on a remote Borgbackup repository using this method.

Access iDRAC6 Java console in macOS

I needed to access my aging Dell PowerEdge R610 iDRAC console on my shiny new 13″ MacBook Pro M1. Unfortunately, just like in Linux, I ran into the “Connection failed” problem described in this post.

It was actually pretty easy to do on a Mac. I installed the latest Java for Mac from Oracle’s website. Once installed, I needed to find the location of the Java home directory for my Mac. I found this stackoverflow discussion which directed me to use the /usr/libexec/java_home command.

Armed with that command in a subshell I was able to get to the file I wanted to edit:

sudo vim "$(/usr/libexec/java_home)/lib/security/java.security"

Once there I removed RC4 from the

jdk.tls.disabledAlgorithms

line. It worked! It was an easier process than on Linux or Windows.
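
For reference, the edit is just deleting the RC4 token from that list. The exact set of algorithms varies by Java version, but the change looks roughly like this:

jdk.tls.disabledAlgorithms=SSLv3, RC4, DES, MD5withRSA, DH keySize < 1024

becomes

jdk.tls.disabledAlgorithms=SSLv3, DES, MD5withRSA, DH keySize < 1024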

Guacamole docker quick and easy

Apache Guacamole is an awesome HTML5 remote access gateway. Unfortunately it can be very frustrating to set up. They have docker images that are supposed to make the process easier, but I still ran into a lot of problems trying to get everything configured and linked.

Fortunately, a docker compose file exists to make Guacamole much easier to set up. Simply follow the instructions as laid out in the github readme:

  • Install docker & docker-compose
  • Clone their repository, run the initial prep script (for SSL keys & database initialization), and bring it up with docker-compose:
git clone "https://github.com/boschkundendienst/guacamole-docker-compose.git"
cd guacamole-docker-compose
sudo ./prepare.sh
sudo docker-compose up -d

Done! If you didn’t change anything in the docker-compose.yml file, you will have a new instance of Guacamole running on HTTPS port 8443 of your docker host. If you need to make changes (or if you forgot to run the prepare.sh file with sudo), you can run the reset.sh script which will destroy everything. You can then modify docker-compose.yml to suit your needs:

  • Whether to use nginx for HTTPS or just expose guacamole on port 8080 non-https (in case you already have a reverse proxy set up)
  • postgres password

Config files for each container are located within various folders in your guacamole-docker-compose folder. This can all be changed by editing the docker-compose.yml file.

Note this configuration does not work with WOL (Wake-on-LAN), but as I do not use this feature I don’t mind.

Troubleshooting

docker ps will show running containers (docker ps -a shows all containers). If one that should be running is not, docker logs <container name> gives valuable insight as to why. In my case guacd was erroring out because I hadn’t initialized the database properly. Running the reset.sh script and starting over, this time running prepare.sh with sudo, did the trick.
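
For example, to inspect the guacd container (the exact container name on your system may differ; check the docker ps output):

sudo docker ps -a
sudo docker logs guacd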

Synchronize internet calendar to Google Calendar more frequently

Despite having my own e-mail server I still use Google Calendar for some things. I have an ICS file for the calendar of the Covid vaccination clinic I’m volunteering at. I ran into some frustrating sync problems when I tried to import it into my calendar: Google Calendar’s ICS sync process takes up to 12 hours. I also had some mobile clients that wouldn’t even see the calendar imported from the ICS file.

I luckily found this post from Derek Antrican on stack exchange that outlines a script you can configure to run at any given interval, which will take all events in that ICS file and add/update/remove events on your calendar to match. It works beautifully. It’s a Google Apps script that you must copy into your own Google Scripts account to run.

First, go to the script here. Then go to Overview (i) and click “Make a Copy” in the top right (page icon). Once the scripts are copied to your own script.google.com account, follow the instructions for configuring the script for your desired ICS URLs and other options, then click run.

My calendars are all synchronized and happy now.

Mount LVM partitions in FreeBSD

I’ve been playing around with helloSystem, an up-and-coming FreeBSD desktop environment that mirrors the macOS experience quite well. Since it’s based on FreeBSD, I’ve had to brush up on a few FreeBSD-isms that are distinctly different from Linux.

Since I’m dual booting this helloSystem BSD system alongside my Arch Linux install, I want to be able to access files on my Arch system from the BSD system. My Arch system uses LVM, which posed a challenge as LVM is a distinctly Linux thing.

To get it to work I needed to load a couple of modules (thanks to the FreeBSD forums for the help):

  • fuse
  • geom_linux_lvm

You can do this at runtime by using the kldload command:

kldload fuse
kldload /boot/kernel/geom_linux_lvm.ko

To make the kernel module loading survive a reboot, add them to /boot/loader.conf:

geom_linux_lvm_load="YES"
fuse_load="YES"

You can now scan your BSD system for LVM partitions:

geom linux_lvm list

The LVM partitions are listed under /dev/linux_lvm. The last step is to mount them with FUSE:

fuse-ext2 -o rw+ /dev/linux_lvm/NAME_OF_LVM_PARTITION /mnt/DESIRED_MOUNT_FOLDER

rw+ indicates a read/write mount.
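
As a concrete (hypothetical) example, if geom linux_lvm list showed a volume group vg0 containing a logical volume named home, the mount would look like:

mkdir -p /mnt/archhome
fuse-ext2 -o rw+ /dev/linux_lvm/vg0-home /mnt/archhome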