All posts by nicholas

Self host postfix SMTP relay for Zimbra Mail Server

My notes for spinning up a small Debian Linode server to act as an SMTP relay for my home network. (Note: you will have to engage with Linode support to enable mail ports on new accounts.)

Relay server configuration

Install postfix

sudo apt install postfix

Modify main.cf

sudo vim /etc/postfix/main.cf

Under the TLS parameters section, enable opportunistic TLS so outbound mail is encrypted whenever the receiving server supports it:

smtp_tls_security_level = may

I decided not to open postfix up to the internet; instead, the relay sits behind a WireGuard tunnel and postfix is only allowed to relay from that VPN subnet.

Add your subnets and relay restrictions further down:

mynetworks = 127.0.0.0/8 <YOUR_SERVER_SUBNET>
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated reject_unauth_destination
relay_domains = <MY_DOMAIN_NAME>
myhostname = <RELAYSERVER_HOSTNAME>
inet_interfaces = 127.0.0.1, <IP_OF_WIREGUARD_VPN_INTERFACE>
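
After saving main.cf, reload postfix. As a quick sanity check (not part of the original setup), you can connect from a host on the VPN subnet and confirm the relay answers and offers STARTTLS:

sudo systemctl reload postfix
openssl s_client -starttls smtp -connect <IP_OF_WIREGUARD_VPN_INTERFACE>:25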

Zimbra configuration

In the Zimbra admin panel, edit your mail server

Configure / Servers / your_mail_server

MTA section

Add the DNS name and port of the relay system next to “Relay MTA for external delivery”
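
For what it’s worth, the same relay host can also be set from the CLI on the Zimbra server, run as the zimbra user (zimbraMtaRelayHost is the underlying attribute; the hostname and port below are placeholders):

zmprov ms <YOUR_MAIL_SERVER_HOSTNAME> zimbraMtaRelayHost <RELAYSERVER_HOSTNAME>:25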

If it won’t let you save, complaining that ::1 is required, you can add ::1 to MTA Trusted Networks; however, on my Zimbra server this broke postfix. The symptoms were e-mails hanging and not sending. To fix it, log into the Zimbra mail server and run, as the zimbra user:

zmprov ms YOUR_MAIL_DOMAIN_NAME zimbraMtaMyNetworks '127.0.0.1 192.168.0.0/16' (the list of networks you had before, excluding ::1)
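
To double-check what the attribute ended up as afterward (zmhostname just prints the Zimbra server's hostname; this is a read-only query):

zmprov gs $(zmhostname) zimbraMtaMyNetworks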

Then, issue postfix reload

That was it. A simple postfix SMTP relay which only accepts mail from my internal VPN (it doesn’t listen on the external interface at all.)

Troubleshooting

Relayed mail shows a red unlock icon in Gmail (mail getting sent unencrypted)

Per the postfix documentation, I needed to enable opportunistic TLS by adding

smtp_tls_security_level = may

to main.cf

Mail does not send after adding ::1 to MTA Trusted Networks

Remove it via the CLI and reload postfix:

zmprov ms YOUR_MAIL_DOMAIN_NAME zimbraMtaMyNetworks '127.0.0.1 192.168.0.0/16' (list of networks you had before but excluding ::1)
postfix reload

Fix KVMD not starting after updating piKVM

piKVM is amazing. I had one controlling an old desktop of mine for over 18 months with no issues. I decided to update its software and ran into some problems.

The first problem was that pacman returned a 404 when trying to update; sometime in the last year and a half the repository URL had changed. I had to edit /etc/pacman.conf and update the URL:

[pikvm]
Server = https://files.pikvm.org/repos/arch/rpi4
SigLevel = Required DatabaseOptional

After fixing that, running pacman -Syu and answering yes, I rebooted, but found that kvmd would not start. The first symptom: HTTP 500 from nginx. Digging in, I found that two services were failing to start: kvmd-tc358743.service and kvmd-otg.service.

v4l2-ctl[429]: Cannot open device /dev/kvmd-video, exiting
kvmd-otg[398]: RuntimeError: Can't find any UDC

With these two services bailing out, the web UI wouldn’t start. I checked the kernel log and the tc358743 device was not detected at all. I was about to give up and just reflash the device when I noticed two files in /boot: cmdline.txt.pacsave and config.txt.pacsave. I know from my experience with Arch that this means the update clobbered some of my configuration. Diffing the .pacsave files against the live ones showed some very important lines missing:

dtoverlay=tc358743
dtoverlay=disable-bt
dtoverlay=dwc2,dr_mode=peripheral

I restored the .pacsave files and rebooted, and it worked! Everything came back.
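
For reference, the compare-and-restore boiled down to something like this (run rw first if the overlay is still read-only; exact filenames depend on which configs pacman saved, mine were config.txt and cmdline.txt):

diff /boot/config.txt /boot/config.txt.pacsave
diff /boot/cmdline.txt /boot/cmdline.txt.pacsave
cp /boot/config.txt.pacsave /boot/config.txt
cp /boot/cmdline.txt.pacsave /boot/cmdline.txt
reboot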

Next time I won’t wait so long between software updates.

piKVM pushover startup script

I’ve had an issue where I wasn’t sure whether my dynamic DNS entry had registered properly. Then I realized that I have a piKVM attached to one of my servers which boots on power-up even if the server does not, and I could use it to help me out.

Thanks to inspiration from Chris Dzombak I was able to whip up a little script that runs on startup. This script waits 5 minutes to allow for my firewall and modem to boot up, then sends a pushover notification to let me know the piKVM is online and what its external IP address is.

To get it working on the piKVM I had to enter RW mode, write and save the script, make it executable, then configure a systemd service to run it at startup.

Here is the script, saved under /root/boot-pushover.sh

#!/usr/bin/env bash
set -eu

#Wait 5 minutes to allow router bootup
sleep 300

TOKEN="PUSHOVER_APPLICATION_TOKEN"
USER="PUSHOVER_USER_TOKEN"
EXTERNAL_IP="$(curl -s ifconfig.me)"
MESSAGE="$(hostname) is online. External IP: $EXTERNAL_IP"

#Send pushover command to alert it's up and send its external IP
curl -s \
  --form-string "token=$TOKEN" \
  --form-string "user=$USER" \
  --form-string "message=$MESSAGE" \
  https://api.pushover.net/1/messages.json

Set executable: chmod +x /root/boot-pushover.sh

Here is the systemd service, saved under /etc/systemd/system/boot-pushover-notification.service

[Unit]
Wants=network.target
After=network.target nss-lookup.target

[Service]
Type=oneshot
ExecStart=/root/boot-pushover.sh
RemainAfterExit=yes
User=root
Group=root
RestartSec=15
Restart=on-failure

[Install]
WantedBy=multi-user.target

Reload daemons & enable startup:

systemctl daemon-reload
systemctl enable boot-pushover-notification.service
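
Before rebooting, you can also kick the unit off by hand to make sure the script and tokens work (bear in mind the script sleeps for five minutes before sending anything, so either use --no-block or be patient):

systemctl start --no-block boot-pushover-notification.service
systemctl status boot-pushover-notification.service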

Test by exiting rw mode and rebooting the piKVM:

ro
reboot
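
If the notification never arrives, the unit's output ends up in the journal:

journalctl -u boot-pushover-notification.service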

It works really well!

Fix no network after Proxmox 7 upgrade

I upgraded my Proxmox server to version 7 and was dismayed to find it had no network connectivity after the reboot. After much digging I finally found this post, which mentioned:

After installing ifupdown2 everything works fine.

Sure enough, ifupdown2 was not installed anymore, and I had configured my networks with it. I had to manually assign an IP address to my node long enough to issue the command
apt install ifupdown2
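
For the record, temporarily assigning the address beforehand was just a couple of iproute2 commands, roughly like the following (the interface name and addresses are placeholders; depending on what came up on your box, it may be the physical NIC rather than the vmbr0 bridge):

ip link set eno1 up
ip addr add 192.168.1.5/24 dev eno1
ip route add default via 192.168.1.1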

Once I rebooted, everything came up like it should. Lesson learned: if you use ifupdown2, you must make sure it’s there before you reboot your server!

Reset root password on OPNSense with ZFS root

A lot of the guides for resetting the root password on an OPNSense box assume a UFS root partition. The password recovery steps do not work if you installed OPNSense with a ZFS root partition; if you try to follow them anyway, you get a lovely error about an “unrecognized filesystem.”

The process for ZFS (thanks to this article) is instead to run the following commands:

zfs set readonly=off zroot
zfs mount -a

Once that is done, you can proceed with the rest of the steps. Password recovery steps in full:

  1. Press the number 2 immediately on boot to go into single user mode
  2. Press enter when prompted for shell
  3. Make ZFS read/write:
    1. zfs set readonly=off zroot
    2. zfs mount -a
  4. Reset password
    1. opnsense-shell password
  5. Reboot

Saltstack JINJA set variables within set parameter

It took me a while to understand how to insert variables into a JINJA set statement. I read this tutorial, which was helpful but still didn’t get me what I wanted. I wanted to be able to use a variable inside the argument of a {% set %} statement, something like this:

{% set variable = salt['vault'].read_secret('super/secret/path/{{ variable }}/more/secret/path/{{ another_variable}}', 'username') %}

Except that didn’t work. It simply rendered the literal curly braces instead of substituting the variable.

I finally came across this stackoverflow page which outlined what I needed to do: much like ordinary string concatenation, I had to quote the parts I wanted taken literally, then use a + sign to splice in each variable, then another + for the rest of the path. The correct syntax is as follows:

{% set variable = salt['vault'].read_secret('super/secret/path/' + variable + '/more/secret/path/' + another_variable, 'username') %}

This worked beautifully.
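
As a side note, Jinja's ~ operator does the same job and converts non-string values for you, so this form should work equally well (just an alternative, not what I ended up using):

{% set variable = salt['vault'].read_secret('super/secret/path/' ~ variable ~ '/more/secret/path/' ~ another_variable, 'username') %}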

Connect Ubiquiti l2tp vpn with NetworkManager in Arch

I’ve recently moved and needed to connect to my (still existing) home network from my desktop. I’ve never had to VPN from my desktop before, so here are my notes for getting it working.

Configuration

  1. Install the necessary l2tp, pptp, and libreswan packages (I’m using yay as my AUR helper)
    yay -Sy community/networkmanager-l2tp community/networkmanager-pptp aur/networkmanager-libreswan aur/libreswan
  2. Configure VPN in GNOME settings (close settings window first if it was already open)
    1. Add VPN / Layer 2 Tunneling Protocol (L2TP)
    2. Gateway: IP/DNS of VPN
    3. User Authentication: Type: password
    4. IPsec Settings: Type: Pre-shared Key (PSK)
    5. PPP settings: only check MSCHAPv2, uncheck everything else. MPPE Security: 128-bit (most secure)

Troubleshooting

If something isn’t working, the error popup is not very descriptive. NetworkManager logs go to journald, so the best way to troubleshoot is to follow the logs (-f to follow, -u to specify the unit name):

sudo journalctl -f -u NetworkManager

In my case, following the NetworkManager logs showed that libreswan wasn’t fully installed; installing the libreswan package fixed it.
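
Bringing the connection up from a terminal instead of the GNOME toggle also surfaces errors right away (the connection name is whatever you called it in settings):

nmcli connection up id "<VPN_CONNECTION_NAME>"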

rsync create directory tree on remote host

I ran into an issue where I wanted to use rsync to copy a folder to a remote host into a destination directory that didn’t yet exist. I was frustrated to find that rsync can’t create the remote directory tree on its own. It kept erroring out with this message:

rsync: mkdir "/opt/splunk/var/run/searchpeers" failed: No such file or directory (2)

I discovered this workaround, which let me accomplish what I wanted in one line: create the remote directory structure first, then synchronize into it. This is done with the --rsync-path option: specify the mkdir -p command beforehand, then add the real rsync command after a double ampersand (&&).

My specific use case was to copy a Splunk search peer bundle from one indexer to another. This was my working one liner:

rsync -aP --rsync-path="sudo mkdir -p /opt/splunk/var/run/searchpeers && sudo rsync" /opt/splunk/var/run/searchpeers splunk-idx2.jeppson.org:/opt/splunk/var/run/searchpeers

Success.

Restore files from remote borg repository disk image

My off-site backup involves sending borgbackup archives of VM images to a remote Synology server. I recently needed to restore a single file from one of the VM images stored within this borg backup repository on the remote server. My connection to this server is not very fast, so I didn’t want to wait for the entire image file to download just to mount it locally.

My solution was to mount the remote borgbackup repository on my local machine over SSH so I could poke around and copy the specific file I wanted. This requires the borgbackup binary to be present on the remote machine. Since it’s a Synology, I simply copied the standalone binary over.

The restore process was complicated by the fact that the VM disk image is owned by root, so in order to access the file I needed to mount the remote repository as root.

This is the process (a filled-in example follows the list):

  1. Set BORG_REMOTE_PATH
    1. export BORG_REMOTE_PATH=<PATH_TO_BORG_BINARY_ON_REMOTE_SYSTEM>
  2. (Arch Linux): install python-llfuse
  3. Mount repository over SSH:
    1. borg mount <USER>@<REMOTE_SYSTEM>:<PATH_TO_REMOTE_BORGBACKUP_REPOSITORY>::<BACKUP_NAME> <MOUNT_FOLDER>
  4. Follow disk image mounting process
    1. losetup -Pr -f <PATH_TO_MOUNTED_BORGBACKUP>/<FILENAME_OF_VM_IMAGE>
    2. mount -o ro /dev/loop0p2 /mnt/loop0/
  5. Follow reverse to unmount when done:
    1. umount /mnt/loop0
    2. losetup -d /dev/loop0
    3. borg umount <MOUNT_FOLDER>
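
With hypothetical values filled in (the binary path, repository location, archive name, and image filename below are examples, not my real ones), the whole sequence looks like this:

export BORG_REMOTE_PATH=/usr/local/bin/borg
mkdir -p /mnt/borg /mnt/loop0
borg mount backups@synology.local:/volume1/borg/vms::vm-images-2021-06-01 /mnt/borg
losetup -Pr -f /mnt/borg/vm-100-disk-0.raw
mount -o ro /dev/loop0p2 /mnt/loop0/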

Success! I was able to restore an individual file within a raw VM image backup on a remote Borgbackup repository using this method.

Access iDRAC6 Java console in macOS

I needed to access my aging Dell PowerEdge R610 iDRAC console on my shiny new 13″ MacBook Pro M1. Unfortunately, just like on Linux, I ran into the “Connection failed” problem described in this post.

It was actually pretty easy to do on a Mac. I installed the latest Java for Mac from Oracle’s website. Once it was installed, I needed to find the location of the Java home directory on my Mac. I found this stackoverflow discussion, which directed me to the /usr/libexec/java_home command.

Armed with that command in a subshell I was able to get to the file I wanted to edit:

sudo vim "$(/usr/libexec/java_home)/lib/security/java.security"

Once there I removed RC4 from the

jdk.tls.disabledAlgorithms

line. It worked! It was an easier process than on Linux or Windows.
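
For illustration, the edit is just deleting the RC4 entry from that comma-separated list; the surrounding entries vary between Java releases, so treat this as a sketch:

# before (other entries will differ by Java version)
jdk.tls.disabledAlgorithms=SSLv3, RC4, DES, MD5withRSA, DH keySize < 1024
# after
jdk.tls.disabledAlgorithms=SSLv3, DES, MD5withRSA, DH keySize < 1024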