I ran into some issues when trying to configure an OpenVPN tunnel between my Ubiquiti USG Pro 4 and a Debian VPS. I was very disappointed to discover that the version of OpenVPN on the USG only supports TLS 1.0! My issue was the Debian side rejecting that as insecure.
Thankfully, it was fairly painless to get WireGuard configured on the USG Pro 4. I was hesitant to do so at first because I knew every time my USG was updated I would lose the WireGuard package. Fortunately that can be resolved by configuring a post-install script. Thanks to ilar.in, calypte.cc, and this GitHub gist for the steps on how to do so.
Before committing your config.gateway.json code, test it line by line by SSHing into the USG Pro 4 and entering configure mode. Then enter your JSON one line at a time as set commands, with each nested key becoming a space-separated argument:
set interfaces wireguard wg0 address WIREGUARD_ADDRESS
set interfaces wireguard wg0 listen-port WG_LISTEN_PORT
set interfaces wireguard wg0 peer ENDPOINT_CLIENT_PUBLIC_KEY allowed-ips 0.0.0.0/0
set interfaces wireguard wg0 peer ENDPOINT_CLIENT_PUBLIC_KEY endpoint ENDPOINT_ADDRESS:ENDPOINT_PORT
set interfaces wireguard wg0 private-key /config/auth/priv.key
set interfaces wireguard wg0 route-allowed-ips false
commit
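For reference, the corresponding config.gateway.json fragment would look roughly like this (a sketch assuming the standard EdgeOS interfaces hierarchy used by the WireGuard vyatta package; placeholders as above):
{
  "interfaces": {
    "wireguard": {
      "wg0": {
        "address": "WIREGUARD_ADDRESS",
        "listen-port": "WG_LISTEN_PORT",
        "peer": {
          "ENDPOINT_CLIENT_PUBLIC_KEY": {
            "allowed-ips": "0.0.0.0/0",
            "endpoint": "ENDPOINT_ADDRESS:ENDPOINT_PORT"
          }
        },
        "private-key": "/config/auth/priv.key",
        "route-allowed-ips": "false"
      }
    }
  }
}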
If the commit works without error, you can then drop out of configure mode and look at your WireGuard config:
sudo wg show
If all looks well, then copy your config.gateway.json to your controller and trigger a reprovision.
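If traffic isn't passing, the in-kernel WireGuard module's debug messages can be enabled via dynamic debug (a sketch; assumes the USG kernel has dynamic debug support and debugfs is mounted):
sudo sh -c 'echo module wireguard +p > /sys/kernel/debug/dynamic_debug/control'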
With debug on, ping again and check kernel messages (dmesg):
[Tue Dec 21 22:16:12 2021] wireguard: wg0: No peer has allowed IPs matching 10.99.13.2
This showed I didn't have my access control properly configured. Modify /etc/wireguard/wg0.conf on your client and make sure your AllowedIPs are actually letting the traffic through:
AllowedIPs = 10.99.13.0/24
USG not allowing connections
Clients were unable to connect to the USG despite having a good config. Double-check your firewall rules. I had neglected to create a WAN LOCAL rule allowing UDP packets on my WireGuard port. Once that was configured, handshakes completed successfully.
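On a controller-managed USG you'd normally create that rule in the controller UI, but for reference, a sketch of the equivalent EdgeOS rule (assuming the ruleset is named WAN_LOCAL, the USG default; the rule number and description are arbitrary):
set firewall name WAN_LOCAL rule 20 description 'WireGuard'
set firewall name WAN_LOCAL rule 20 action accept
set firewall name WAN_LOCAL rule 20 protocol udp
set firewall name WAN_LOCAL rule 20 destination port WG_LISTEN_PORT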
Firewalls can’t ping each other
I had an issue where the firewalls would pass traffic through, but they couldn’t ping each other. The solution was to add the VPN subnet you created to allowed-ips on both sides of the connection.
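A sketch of what that looks like, reusing the 10.99.13.0/24 tunnel subnet from earlier:
# USG side
set interfaces wireguard wg0 peer ENDPOINT_CLIENT_PUBLIC_KEY allowed-ips 10.99.13.0/24
# Debian side, in the [Peer] section of /etc/wireguard/wg0.conf
AllowedIPs = 10.99.13.0/24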
I have a Debian Linode box acting as a WireGuard server. I wanted to join my OPNsense firewall to it to allow devices behind it to access the box through the WireGuard tunnel. It was not as straightforward as I had hoped, but thankfully I got it all working.
Install the os-wireguard plugin. Then manually drop to the CLI and install the wireguard package as well:
sudo pkg install wireguard
Configure Local instance
Name and listen port can be anything. Tunnel address is the subnet you wish to expose to the other end (i.e., the subnet that should have access to the tunnel).
Leave everything else blank and hit save
Edit your new connection and copy the public key; this will need to be sent to the Debian server.
Configure Endpoint
Name: hostname of the Debian server
Public Key: Public key of server (can be obtained by running wg show on the server)
Shared Secret: blank (unless you’ve configured it on the server)
Allowed IPs: IPs or subnets on the Debian server you wish to expose to the client side (the OPNsense box)
Endpoint address: DNS name of Debian server
Endpoint port: Port Debian wireguard instance is listening on
Enable the VPN
On the General tab, check the Enable WireGuard checkbox and hit Apply.
On the Debian server, take down the tunnel
sudo wg-quick down wg0
Edit the WireGuard config to add the OPNsense box as a peer
sudo vim /etc/wireguard/wg0.conf
[Peer]
PublicKey = <PUBLIC_KEY_YOU_COPIED_IN_LOCAL_INSTANCE_STEP>
AllowedIPs = <IPs or subnets behind the OPNsense side you wish to be exposed to the Debian side>
Bring the tunnel back up:
sudo wg-quick up wg0
Example wg show output below with dummy IPs and placeholder peer keys/endpoints:
sudo wg show
interface: wg0
  public key: f+/J4JO0aL6kwOaudAvZVa1H2mDzR8Nh3Vfeqq+anF8=
  private key: (hidden)
  listening port: 12345

peer: PEER_ONE_PUBLIC_KEY
  endpoint: PEER_ONE_ADDRESS:PORT
  allowed ips: 10.0.0.2/32
  latest handshake: 17 seconds ago
  transfer: 5.14 KiB received, 3.81 KiB sent

peer: PEER_TWO_PUBLIC_KEY
  endpoint: PEER_TWO_ADDRESS:PORT
  allowed ips: 192.168.1.1/32
  latest handshake: 7 minutes, 8 seconds ago
  transfer: 5.89 MiB received, 952.20 MiB sent
The endpoint: line gets populated when a successful VPN connection is made. If it’s missing, the tunnel was not established.
Nothing happens after saving information and enabling tunnel
Make sure the latest wireguard package is installed:
sudo pkg install wireguard
Get more log output by opening a shell on your OPNsense box and running:
sudo /usr/local/etc/rc.d/wireguard start
In my case I was getting this interesting message:
[!] Missing WireGuard kernel support (ifconfig: SIOCIFCREATE2: Invalid argument). Falling back to slow userspace implementation.
[#] wireguard-go wg0
Running wireguard-go is not required because this kernel has first class support for WireGuard. For information on installing the kernel module, please visit https://www.wireguard.com/install/
I fixed this problem by manually installing wireguard with the pkg install command above.
WireGuard config not saving
Make sure to stop the tunnel first; otherwise your changes get overwritten by the daemon.
sudo wg-quick down wg0
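# with the tunnel down, edit /etc/wireguard/wg0.conf, then bring it back up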
sudo wg-quick up wg0
I've had an issue where I wasn't sure if my dynamic DNS provider had updated properly. Then I realized that I have a piKVM attached to one of my servers that boots on powerup, even if the server does not. I could utilize this piKVM to help me out.
Thanks to inspiration from Chris Dzombak I was able to whip up a little script that runs on startup. This script waits 5 minutes to allow for my firewall and modem to boot up, then sends a pushover notification to let me know the piKVM is online and what its external IP address is.
To get it working on the piKVM I had to enter into RW mode, write and save the script, add execute permissions to the script, then configure a systemd service to run the script at startup.
Here is the script, saved under /root/boot-pushover.sh. The Pushover token/user values are placeholders, and the external IP lookup service is an assumption; swap in your own:
#!/usr/bin/env bash
TOKEN="YOUR_PUSHOVER_APP_TOKEN"  # placeholder: your Pushover application token
USER="YOUR_PUSHOVER_USER_KEY"    # placeholder: your Pushover user key
#Wait 5 minutes to allow router bootup
sleep 300
EXTERNAL_IP="$(curl -s https://icanhazip.com)"  # icanhazip.com assumed; any similar service works
MESSAGE="$(hostname) is online. External IP: $EXTERNAL_IP"
#Send pushover command to alert it's up and send its external IP
curl -s \
    --form-string "token=$TOKEN" \
    --form-string "user=$USER" \
    --form-string "message=$MESSAGE" \
    https://api.pushover.net/1/messages.json
Set executable: chmod +x /root/boot-pushover.sh
Here is the systemd service, saved under /etc/systemd/system/boot-pushover-notification.service
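A minimal oneshot unit along these lines does the job (a sketch; waiting for network-online is belt-and-suspenders since the script sleeps anyway):

[Unit]
Description=Send Pushover notification on boot
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/root/boot-pushover.sh

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable boot-pushover-notification.service.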
A lot of the guides for resetting the root password on an OPNsense box assume a UFS root partition. The password recovery steps do not work if you installed OPNsense with a ZFS root partition. If you try to follow them you get a lovely error about an "unrecognized filesystem".
The process for ZFS (thanks to this article) is instead to run the following commands:
zfs set readonly=off zroot
zfs mount -a
Once that is done, you can proceed with the rest of the steps. Password recovery steps in full:
Press the number 2 immediately on boot to go into single user mode
I've recently moved and needed to connect to my (still existing) home network from my desktop. I've never had to VPN from my desktop before, so here are my notes for getting it working.
Install the necessary l2tp, pptp, and libreswan packages (I'm using yay as my package manager):
yay -Sy community/networkmanager-l2tp community/networkmanager-pptp aur/networkmanager-libreswan aur/libreswan
Configure VPN in GNOME settings (close settings window first if it was already open)
I needed to send some test packets over UDP to make sure connectivity was working. I found this site, which outlined how to do it really well:
nc -u <IP/hostname> <port>
Then on the next lines you can type test messages, hitting CTRL+D when done. In my case I wanted to test sending syslog data, so I ran nc -u <hostname> 514 and wrote test messages. The -u specifies UDP and 514 is the syslog port. I was then able to confirm on the other end that the messages were received. Handy.
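For a quick end-to-end check without a real syslog daemon, nc can play the receiving side too (a sketch using OpenBSD-netcat syntax; some variants want -l -p instead):
# receiving host (ports below 1024 need root)
sudo nc -u -l 514
# sending host: type messages, CTRL+D when done
nc -u <hostname> 514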
Assuming you're using NetworkManager for your connections, create a bridge (thanks to cyberciti.biz & the Arch Wiki for information on how to do so). Replace interface names with ones corresponding to your machine:
sudo nmcli connection add type bridge ifname br0 stp no
sudo nmcli connection add type bridge-slave ifname enp4s0 master br0
sudo nmcli connection show
#Make note of the active connection name
sudo nmcli connection down "Wired connection 2" #from above
sudo nmcli connection up bridge-br0
Create a second bridge bound to lo for host-only communication. Change the IP as desired:
sudo nmcli connection add type bridge ifname br99 stp no ip4 192.168.2.1/24
sudo nmcli connection add type bridge-slave ifname lo master br99
sudo nmcli connection up bridge-br99
When creating the passthrough VM, make sure chipset is Q35.
Set the CPU model to host-passthrough (type it in, there is no dropdown for it.)
When adding disks / other devices, set the device model to virtio
Add your GPU by going to Add Hardware and finding it under PCI Host Device.
Windows 10 specific tweaks
If your passthrough VM is going to be windows based, some tweaks are required to get the GPU to work properly within the VM.
Ignore MSRs (blue screen fix)
Later versions of Windows 10 instantly bluescreen with kmode_exception_not_handled unless you pass an option to ignore MSRs. Add the kvm ignore_msrs=1 option in /etc/modprobe.d/kvm.conf to do so. Optionally add the report_ignored_msrs=0 option to squelch the massive amounts of kernel messages generated every time an MSR is ignored.
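In kvm.conf that's a single line:
# /etc/modprobe.d/kvm.conf
options kvm ignore_msrs=1 report_ignored_msrs=0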
Use the virsh edit command to make some tweaks to the VM configuration. We need to hide the fact that this is a VM, otherwise the GPU drivers will not load and will throw Error 43. We need to add a vendor_id in the hyperv section, and create a kvm section enabling hidden state, which hides certain CPU flags that the drivers use to detect if they're in a VM or not.
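The relevant fragments of the domain XML look like this (abbreviated; the vendor_id value is an arbitrary string up to 12 characters):

<features>
  <hyperv>
    <!-- existing hyperv elements (relaxed, vapic, spinlocks, and so on) stay as-is -->
    <vendor_id state='on' value='randomid'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>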
From the CPU topology output (lscpu --extended is one way to see it), I can see that my CPU core 0 is shared by CPUs 0 & 16, meaning CPU 0 and CPU 16 (as seen by the Linux kernel) are hyperthreaded to the same physical CPU core.
Especially for gaming, you want to keep all threads on the same CPU cores (for multithreading) and the same CPU die (on my Threadripper, CPUs 0-7 reside on one physical die, and CPUs 8-15 reside on the other, within the same socket).
In my case I want to dedicate one CPU die to my VM with its accompanying hyperthreads (CPUs 0-7 & hyperthreads 16-23). You can accomplish this using the virsh edit command and creating a cputune section (make sure you have a matching vcpu count for the number of threads you're configuring). Also edit the CPU mode with the proper topology of 1 socket, 1 die, 8 cores with 2 threads. Lastly, configure memory to come only from the NUMA node your VM's CPU cores reside on (read here for more info).
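A sketch of those three pieces for the topology above, where CPU n and CPU n+16 are hyperthread siblings (the dies attribute needs a reasonably recent libvirt, and nodeset='0' assumes the die you picked is NUMA node 0; check with numactl --hardware):

<vcpu placement='static'>16</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='16'/>
  <vcpupin vcpu='2' cpuset='1'/>
  <vcpupin vcpu='3' cpuset='17'/>
  <!-- continue pairing each core with its hyperthread sibling, up through vcpu 15 / cpuset 23 -->
</cputune>
<cpu mode='host-passthrough'>
  <topology sockets='1' dies='1' cores='8' threads='2'/>
</cpu>
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>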
Append default_hugepagesz=1G hugepagesz=1G hugepages=16 to the kernel line in /etc/default/grub and re-run sudo grub-mkconfig -o /boot/grub/grub.cfg
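For the VM to actually use those hugepages, the libvirt domain also needs a memoryBacking element (a minimal sketch):

<memoryBacking>
  <hugepages/>
</memoryBacking>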
Configure FIFO CPU scheduling
The Arch Wiki mentions running qemu-system-x86_64 with taskset and chrt, but doesn't mention how to do so if you're using virt-manager. Fortunately this reddit thread outlined how to accomplish it: libvirt hooks. Create the following script and place it in /etc/libvirt/hooks/qemu, change the VM variable to match the name of your VM, mark that new file as executable (chmod +x /etc/libvirt/hooks/qemu) and restart libvirtd:
#!/bin/bash
#Hook to change VM to FIFO scheduling to decrease latency
#Place this file in /etc/libvirt/hooks/qemu and mark it executable
#Change the VM variable to match the name of your VM
VM="win10"  # placeholder: your VM's name as shown by virsh list
if [ "$1" == "$VM" ] && [ "$2" == "started" ]; then
    if pid=$(pidof qemu-system-x86_64); then
        chrt -f -p 1 $pid
        echo "$(date) changing CPU scheduling to FIFO for VM $1 pid $pid" >> /var/log/libvirthook.log
    else
        echo "$(date) Unable to acquire PID of $1" >> /var/log/libvirthook.log
    fi
fi
#echo $(date) libvirt hook arg1=$1 arg2=$2 arg3=$3 arg4=$4 pid=$pid >> /var/log/libvirthook.log
Update 7/28/20: I no longer do this in favor of the qemu hook script above, which bumps the qemu process to priority 1 on the cores it needs. I'm leaving this section here for historical/additional tweaking purposes.
Update 6/28/20: Additional tuning since I was having some stuttering and framerate issues. Also read here about the emulatorpin option
Dedicate CPUs to the VM (the host will not use them): append the isolcpus, nohz_full & rcu_nocbs kernel parameters in /etc/default/grub.
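With the CPU numbering from above, that would look something like this (a sketch; keep your existing parameters and re-run grub-mkconfig as in the hugepages step):
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet isolcpus=0-7,16-23 nohz_full=0-7,16-23 rcu_nocbs=0-7,16-23"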
I cloned an Ubuntu 20.04 VM and was frustrated to see both boxes kept getting the same DHCP IP address despite having different MAC addresses. I finally found this helpful post, which states Ubuntu 20.04 uses systemd-networkd for DHCP leases, which behaves differently than dhclient. As wickedchicken states,
To fix, replace the contents of /etc/machine-id on one or both machines. This can be anything, but deleting the file and running systemd-machine-id-setup will create a random machine-id in the same way done on machine setup.
So my fix was to run the following on the cloned machine:
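# (reconstructed from the fix quoted above)
sudo rm /etc/machine-id
sudo systemd-machine-id-setup  # regenerates a random machine-id
# then reboot (or renew the DHCP lease) so the new ID is picked up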