Sideload Gears of War 5 on Windows 10

Sideloading Gears 5 is similar to sideloading Gears 4. You need to capture the URL the Store uses to download the game with a proxy tool like Fiddler, then download that URL with a download manager.

Gears 5 comes as an .msixvc file instead of an .eappx file. You can still install it via the Add-AppxPackage PowerShell command.

I ran into issues trying to run Add-AppxPackage from a network drive. It worked after copying the file to a local drive and running the command again.
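
A minimal sketch of the install step, assuming the file was saved locally as C:\Downloads\Gears5.msixvc (a hypothetical path), run from a PowerShell prompt:

#Path below is an assumption -- point it at wherever you saved the download
Add-AppxPackage -Path C:\Downloads\Gears5.msixvc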

Why go through the trouble? Because the Microsoft Store’s DRM is so bad it requires complete re-installs when anything goes wrong. This is very annoying for those of us on less-than-gigabit internet connections trying to reinstall a 60+ GB game.

Transfer Linode VM over SSH

I love Linode for their straightforward pricing. I can use them for temporary infrastructure and not have to worry about getting overcharged. When it comes time to transfer infrastructure back, the process is fairly straightforward. In my case I wanted to keep a disk image of my Linode VM for future use.

The Linode documentation is very good. I used their “copy an image over SSH” article combined with their “rescue and rebuild” article, sprinkled with a bit of gzip compression and pv, to grab my Linode image locally, complete with a progress bar.

First, boot your Linode into rescue mode via Dashboard / Linodes / <name of your linode>, then click on the Rescue tab and map your drives as needed.

Launch console (top right) to get into the recovery shell. In my case I wanted to SSH into my linode to grab the image, so I set a password and started the ssh service:

passwd
/etc/init.d/ssh start

Then on your end, pipe ssh, gzip, pv, and dd together to grab the compressed disk image with progress monitoring:

ssh root@<LINODE_IP> "dd if=/dev/sda | gzip -1 -" | pv | dd of=linode-image.gz

Success.
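
Should you ever need to push the image back, the same pipeline works in reverse (a sketch, assuming the Linode is once again booted into rescue mode with SSH running):

#Restore: decompress locally, stream over SSH, write to the Linode's disk
gunzip -c linode-image.gz | pv | ssh root@<LINODE_IP> "dd of=/dev/sda"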

ProxMox VMs reboot when switch is rebooted

I came across an interesting situation where if I rebooted my Ubiquiti UniFi Switch 24 for a firmware upgrade, all my VM hosts would reboot themselves. It turned out to be due to my having enabled HA in ProxMox. The hosts would temporarily lose connectivity to each other and begin to fence themselves off from the cluster. This caused HA to kill the VMs on those hosts. Then once connectivity was restored everything would eventually come back up.

The proper way to fix this would be to have multiple paths for each host to talk to each other, so if one switch goes down the cluster is still able to communicate. In my case, where I only have one switch, the “poor man’s fix” was to simply disable HA altogether during the switch reboot, as outlined here. Then, once the switch is back up, re-enable HA.

On each node, stop the pve-ha-lrm service. Once it’s stopped on all hosts, stop the pve-ha-crm service. Then reboot your switch.

After the switch is back up, start pve-ha-lrm on each node first, then pve-ha-crm (if it doesn’t auto start itself) to re-enable HA.
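
A sketch of the full sequence, using the stock ProxMox HA service names:

#On every node, in this order:
systemctl stop pve-ha-lrm
#Once pve-ha-lrm is stopped on all nodes:
systemctl stop pve-ha-crm
#...reboot the switch and wait for connectivity...
systemctl start pve-ha-lrm
systemctl start pve-ha-crm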

Using ProxMox as a NAS

Lately I’ve been very unhappy with the latest FreeBSD randomly rebooting during disk resilvering. I simply cannot tolerate random reboots of my fileserver. This, combined with the migration of OpenZFS to the ZFS on Linux code base, means it’s time for me to move from a FreeBSD-based ZFS NAS to a Linux-based one.

Sadly there aren’t many options in this space yet. I wanted something where basic tasks were taken care of, like FreeNAS does, but that also supports ZFS. The solution I settled on was ProxMox, which is primarily a hypervisor but also has ZFS support.

The biggest drawback of ProxMox vs. FreeNAS is the GUI. There are some disk-related GUI options in ProxMox, but it’s mostly VM-focused. Thus, I had to configure my required services via the CLI.

Following are the settings I used when I configured my NAS to run ProxMox.

Repo setup

If you don’t want to pay for a ProxMox subscription, change the PVE enterprise repository to the free version by modifying /etc/apt/sources.list.d/pve-enterprise.list to the following:

deb http://download.proxmox.com/debian/pve buster pve-no-subscription

Then run apt update && apt upgrade.

Email alerts

Postfix configuration

Edit /etc/postfix/main.cf and tweak your mail server config as needed (relayhost).
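
As a sketch, pointing postfix at a smarthost looks like this (the hostname and port are assumptions; use your own relay):

#/etc/postfix/main.cf
relayhost = [mail.example.com]:587

Restart postfix after editing: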

systemctl restart postfix

Forward mail for root to your own email

Edit /etc/aliases and add an alias for root to forward to your desired e-mail address. Add this line:

root: YOUR_EMAIL_ADDRESS

Afterward run:

newaliases

ZFS configuration

Pool Import

Import the pool using the zpool import -f command (-f forces the import even though the pool was last active on a different system):

zpool import -f <POOL_NAME>

By default pools are mounted at the root directory (/). If you want them to live under /mnt instead, use the zfs set mountpoint command:

zfs set mountpoint=/mnt/<POOL_NAME> <POOL_NAME>

Monitoring

Install and configure zfs-zed

apt install zfs-zed

Modify /etc/zfs/zed.d/zed.rc and uncomment ZED_EMAIL_ADDR, ZED_EMAIL_PROG, and ZED_EMAIL_OPTS. Edit them to suit your needs (the default values work fine; they just need to be uncommented). Optionally uncomment ZED_NOTIFY_VERBOSE and set it to 1 if you want more verbose notices like FreeNAS sends (scrub notifications, for example).
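
Once uncommented, the relevant lines look like this (these are the stock defaults, with verbose notifications switched on):

ZED_EMAIL_ADDR="root"
ZED_EMAIL_PROG="mail"
ZED_EMAIL_OPTS="-s '@SUBJECT@' @ADDRESS@"
ZED_NOTIFY_VERBOSE=1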

After modifying /etc/zfs/zed.d/zed.rc, restart zed:

systemctl restart zfs-zed

Scrubbing

By default ProxMox scrubs each of your pools on the second Sunday of every month. This cron job is located in /etc/cron.d/zfsutils-linux. Modify it to your liking.
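
As a sketch, a custom entry scrubbing a hypothetical pool named tank on the first Sunday of each month at 2 AM (cron ORs day-of-month with day-of-week, hence the date test; % must be escaped in crontabs):

#/etc/cron.d/zfsutils-linux
0 2 1-7 * * root [ "$(date +\%w)" -eq 0 ] && /usr/sbin/zpool scrub tank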

Snapshot & Replication

There are many different snapshot & replication scripts out there. I landed on Sanoid. Thanks to SvennD for helping me grasp how to get it working.

Install sanoid:

#Install necessary packages
apt install debhelper libcapture-tiny-perl libconfig-inifiles-perl pv lzop mbuffer git
#Clone repo, build deb, install
git clone https://github.com/jimsalterjrs/sanoid.git
cd sanoid
ln -s packages/debian .
dpkg-buildpackage -uc -us
apt install ../sanoid_*_all.deb

Snapshots

Edit /etc/sanoid/sanoid.conf with a backup and retention schedule for each of your datasets. Example taken from sanoid documentation:

[data/home]
	use_template = production
[data/images]
	use_template = production
	recursive = yes
	process_children_only = yes
[data/images/win7]
	hourly = 4

#############################
# templates below this line #
#############################

[template_production]
        frequently = 0
        hourly = 36
        daily = 30
        monthly = 3
        yearly = 0
        autosnap = yes
        autoprune = yes

Once sanoid.conf is to your liking, create a cron job to launch sanoid every hour (sanoid determines whether any action is needed when executed.)

crontab -e
#Add this line, save and exit
0 * * * * /usr/sbin/sanoid --cron

Replication

syncoid (part of sanoid) easily replicates snapshots. The syntax is pretty straightforward:

syncoid <source> <destination> -r 
#-r means recursive and is optional

For remote locations, specify username@ before the IP/hostname, then a colon and the dataset name. For example:

syncoid root@10.0.0.1:sourceDataset localDataset -r

You can even have a remote source go to a different remote destination, which is pretty neat.
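
For example (hosts and dataset names here are hypothetical):

#Replicate from one remote host to another, recursively
syncoid root@10.0.0.1:tank/data root@10.0.0.2:backup/data -r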

Other syncoid options of interest:

--debug   #see everything happening; useful for logging
--exclude #regular expression to exclude certain datasets
--source-bwlimit #set an upload limit so you don't saturate your bandwidth
--quiet   #don't output anything unless it's an error

Automate synchronization by placing the same syncoid command into a cronjob:

0 */4 * * * /usr/sbin/syncoid --exclude=bigdataset1 --source-bwlimit=1M --recursive pool/data root@192.168.1.100:pool/data
#if you don't want status emails when the cron job runs, add --quiet

NFS

Install the nfs-kernel-server package and specify your NFS exports in /etc/exports.

apt install nfs-kernel-server portmap

Example /etc/exports :

/mnt/example/DIR1 192.168.0.0/16(rw,sync,all_squash,anonuid=0,anongid=0)

Restart nfs-server after modifying your exports:

systemctl restart nfs-server
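
Alternatively, re-export your shares without bouncing the whole service:

exportfs -ra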

Samba

Install samba, configure /etc/samba/smb.conf, and add users.

apt install samba
systemctl enable smbd

/etc/samba/smb.conf syntax is fairly straightforward. See the samba documentation for more information. Example share configuration:

[exampleshare]
comment = Example share
path = /mnt/example
valid users = user1 user2
writable = yes

Add users to the system itself with the adduser command:

adduser user1

Add those same users to samba with the smbpasswd -a command. Example:

smbpasswd -a user1
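
Optionally, validate your smb.conf syntax with testparm before restarting:

testparm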

Restart samba after making changes:

systemctl restart smbd

SMART monitoring

Taken from https://pve.proxmox.com/wiki/Disk_Health_Monitoring:

By default, smartmontools daemon smartd is active and enabled, and scans the disks under /dev/sdX and /dev/hdX every 30 minutes for errors and warnings, and sends an e-mail to root if it detects a problem. 

Edit the file /etc/smartd.conf to suit your needs. You can specify/exclude devices, SMART attributes, etc. there. See here for more information. Restart the smartd service after modifying.
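
As a sketch, an explicit entry for a single disk (the device name and test schedule below are assumptions; comment out the default DEVICESCAN line if you enumerate disks manually):

#/etc/smartd.conf
/dev/sda -a -d sat -m root -s (S/../.././02)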

UPS monitoring

apcupsd was easiest for me to configure, so I went with it. Thanks to this blog for giving me the information to get started.

First, install apcupsd:

apt install apcupsd apcupsd-doc

As soon as it was installed, my console kept getting spammed with IRQ error messages. To stop them I stopped the apcupsd daemon:

systemctl stop apcupsd

Now modify /etc/apcupsd/apcupsd.conf to suit your needs. The section I added for my CyberPower OR2200LCDRT2U was simply:

UPSTYPE usb
#DEVICE is intentionally left blank for USB UPSes
DEVICE

Then modify /etc/default/apcupsd to indicate it’s configured:

#/etc/default/apcupsd
ISCONFIGURED=yes

After configuring, you can start the apcupsd service:

systemctl start apcupsd

To check the status of your UPS, you can run the apcaccess status command:

/sbin/apcaccess status

Log monitoring

Install Logwatch to monitor system events. Here is a good primer on all of Logwatch’s options.

apt install logwatch

Modify /usr/share/logwatch/default.conf/logwatch.conf to suit your needs. By default logwatch runs daily (defined in /etc/cron.daily/00logwatch). I added the following lines to my config to filter out unwanted information:

Service = "-zz-disk_space"
Service = "-postfix"
Service = "-smartd"
Service = "-zz-lm_sensors"

Manually run logwatch to get a preview of what you’ll see:

logwatch --range today --mailto YOUR_EMAIL_ADDRESS

UPDATE 8/29/2020
I did some additional tweaking of logwatch to get it exactly how I like it (thanks to this post and this one at serverfault.)

Defaults for monitored services are located in /usr/share/logwatch/default.conf/services/

You can copy a service’s default file to /etc/logwatch/conf/services/<filename.conf> and then modify it as needed. In my case I wanted to ignore logins for a particular user from a particular machine. This can be done by copying & editing sshd.conf and adding the following:

# Ignore these hosts
*Remove = 192.168.100.1
*Remove = X.Y.123.123
# Ignore these usernames
*Remove = testuser
# Ignore other noise. Note that we need to escape the ()
*Remove = "pam_succeed_if\(sshd:auth\): error retrieving information about user netscan.*"

Troubleshooting

ZFS-ZED not sending email

If ZED isn’t sending emails, it’s likely due to an error in the config. For some reason the default values still need to be uncommented for zed to work, even if left unaltered. Thanks to this post for the info.

Samba share access denied

If you get access denied when trying to write to a SMB share, double check the file permissions on the server level. Execute chmod / chown as appropriate. Example:

chown user1 -R /mnt/example/user1

Mountpoint check script

I have a few NFS mounts that I want to be working at all times. If there is a power outage, sometimes NFS clients come up before the NFS server does, and thus the mounts are not there. I wrote a quick little bash script to fix this utilizing the mountpoint command.

Behold (change the mountpoints to the ones you want to monitor):

#!/bin/bash
#Simple bash script to check mount points and re-mount them if they're not mounted
#Authored by Nick Jeppson 8/11/2019

### Variables ###
# Change these to suit your needs

MOUNTPOINTS=(/mnt/1 /mnt/2 /mnt/3)  #space-separated list of mountpoints to monitor

### End Variables ###


for mount in "${MOUNTPOINTS[@]}"  #expand the whole array; a bare $MOUNTPOINTS would only yield the first element
do
    if ! mountpoint -q "$mount"
    then
        echo "$mount is not mounted, attempting to mount."
        mount "$mount"
    fi
    #otherwise do nothing
done

I have this set as a cron job running every 5 minutes:

*/5 * * * * /mountcheck.sh

Now the system will continually try to mount the specified folders if they aren’t already mounted.

OpenVPN site-to-site VPN between USG and OpenWRT

A new firewall means a new site-to-site VPN configuration. My current iteration is a USG Pro 4 serving as an OpenVPN server and a Netgear Nighthawk R8000 running OpenWRT serving as the VPN client, joining the two networks together.

First, I had to wrap my head around some concepts. To set this up you need three sets of certificates and a DH file:

  • CA: To generate and validate certificates
  • Server: To encrypt/decrypt traffic for the Server
  • Client: To encrypt/decrypt traffic from the Client
  • DH: Not a certificate, but Diffie-Hellman parameters the server needs for key exchange

The server and client will also need openvpn configurations containing matching encryption/hashing methods, CA public key, and protocol/port settings.

Generate certificates

If you already have PKI infrastructure in place you simply need to generate two sets of keys and a DH file for the server/client to use. If you don’t, the easy-rsa project comes to the rescue. This tutorial uses easy-rsa version 3.

I didn’t want to generate the certificates on my firewall so I picked a Debian system to do the certificate generation. First, install easy-rsa:

sudo apt install easy-rsa

In Debian easy-rsa is installed to /usr/share/easy-rsa/

Optional: Set desired variables by moving /usr/share/easy-rsa/vars.example to /usr/share/easy-rsa/vars and un-commenting / editing to suit your needs (in my case I like to extend the life of my certificates beyond two years.)
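
For example, the vars entries controlling certificate lifetime look like this (values are in days; the numbers below are my assumption, pick your own):

set_var EASYRSA_CA_EXPIRE       7300
set_var EASYRSA_CERT_EXPIRE     3650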

Next, create your PKI and generate CA certificates:

/usr/share/easy-rsa/easyrsa init-pki
/usr/share/easy-rsa/easyrsa build-ca

Now create your DH file. Grab a cup of coffee for this one, it can take up to ten minutes to complete:

/usr/share/easy-rsa/easyrsa gen-dh

Then create your server & client certificates. For this guide we are calling the server ovpn-server and the client ovpn-client:

#For the server
/usr/share/easy-rsa/easyrsa gen-req ovpn-server nopass 
/usr/share/easy-rsa/easyrsa sign-req server ovpn-server

#For the client
/usr/share/easy-rsa/easyrsa gen-req ovpn-client nopass
/usr/share/easy-rsa/easyrsa sign-req client ovpn-client

You will be asked for a common name. Remember what you put here; you will need it later. If you just hit enter and accept the default, the common name will match what was passed in the above commands (ovpn-server for the server certificate and ovpn-client for the client certificate.)

Lastly, copy these files to their respective hosts:

USG Server: CA cert, server key & cert, and DH file (substitute the IP of your device):

scp pki/dh.pem pki/ca.crt pki/private/ovpn-server.key  pki/issued/ovpn-server.crt admin@IP_OF_YOUR_USG:/config/auth/

OpenWRT Client: Client key & cert, and CA cert:

scp pki/private/ovpn-client.key pki/issued/ovpn-client.crt pki/ca.crt root@IP_OF_YOUR_OPENWRT:/etc/config/

USG: VPN Server

Documentation for the EdgeRouter is much easier to find than for the USG. Since they use the same operating system I based this off of this guide from Logan Marchione for the EdgeRouter. SSH into your USG and issue the following, substituting the $variables with the values you desire for your network.

Explanation of variables:

VPN_SUBNET: Used for VPN communication. Must be different from both server and client subnets.
SERVER_SUBNET: Subnet on server side you wish to pass to client network
VPN_PORT: Change this to desired listening port for the OpenVPN server
REMOTE_SUBNET: Subnet on client side you wish to pass to server network
REMOTE_NETMASK: Netmask of client subnet
REMOTE_VPN_IP: Static IP you wish to give the client on the VPN subnet.
REMOTE_CERT_NAME: Common name given to client certificate generated previously.

Replace $variables below before pasting into USG terminal:

configure
#OpenVPN config
set interfaces openvpn vtun0
set interfaces openvpn vtun0 description "OpenVPN Site to Site"
set interfaces openvpn vtun0 mode server
set interfaces openvpn vtun0 encryption aes256
set interfaces openvpn vtun0 hash sha256
set interfaces openvpn vtun0 server subnet $VPN_SUBNET
set interfaces openvpn vtun0 server push-route $SERVER_SUBNET
set interfaces openvpn vtun0 tls ca-cert-file /config/auth/ca.crt
set interfaces openvpn vtun0 tls cert-file /config/auth/ovpn-server.crt
set interfaces openvpn vtun0 tls key-file /config/auth/ovpn-server.key
set interfaces openvpn vtun0 tls dh-file /config/auth/dh.pem
set interfaces openvpn vtun0 openvpn-option "--port $VPN_PORT"
set interfaces openvpn vtun0 openvpn-option --tls-server
set interfaces openvpn vtun0 openvpn-option "--comp-lzo yes"
set interfaces openvpn vtun0 openvpn-option --persist-key
set interfaces openvpn vtun0 openvpn-option --persist-tun
set interfaces openvpn vtun0 openvpn-option "--keepalive 10 120"
set interfaces openvpn vtun0 openvpn-option "--user nobody"
set interfaces openvpn vtun0 openvpn-option "--group nogroup"
set interfaces openvpn vtun0 openvpn-option "--route $REMOTE_SUBNET $REMOTE_NETMASK $REMOTE_VPN_IP"
set interfaces openvpn vtun0 server client $REMOTE_CERT_NAME ip $REMOTE_VPN_IP
set interfaces openvpn vtun0 server client $REMOTE_CERT_NAME subnet $REMOTE_SUBNET $REMOTE_NETMASK

#Firewall config
set firewall name WAN_LOCAL rule 50 action accept
set firewall name WAN_LOCAL rule 50 description "OpenVPN Site to Site"
set firewall name WAN_LOCAL rule 50 destination port $VPN_PORT
set firewall name WAN_LOCAL rule 50 log enable
set firewall name WAN_LOCAL rule 50 protocol udp
commit

If the code above commits successfully, the next step is to add the config to config.gateway.json. The USG’s config is managed by its Unifi controller, so for any of the changes made above to stick we must copy them to /usr/lib/unifi/data/sites/default/config.gateway.json on the controller (create the file if it doesn’t already exist.)

A quick shortcut is to run the mca-ctrl -t dump-cfg command, then parse out the parts you want to go into config.gateway.json as outlined in the UniFi documentation. For the lazy, here is the config.gateway.json generated from the above commands (be sure to modify $variables to suit your needs.)

{
  "firewall": {
    "WAN_LOCAL": {
      "rule": {
        "50": {
          "action": "accept",
          "description": "OpenVPN Site to Site",
          "destination": {
            "port": "$VPN_PORT"
          },
          "log": "enable",
          "protocol": "udp"
        }
      }
    }
  },
  "interfaces": {
    "openvpn": {
      "vtun0": {
        "description": "OpenVPN Site to Site",
        "encryption": "aes256",
        "hash": "sha256",
        "mode": "server",
        "openvpn-option": [
          "--port $VPN_PORT",
          "--tls-server",
          "--comp-lzo yes",
          "--persist-key",
          "--persist-tun",
          "--keepalive 10 120",
          "--user nobody",
          "--group nogroup",
          "--route $REMOTE_SUBNET $REMOTE_NETMASK $REMOTE_VPN_IP"
        ],
        "server": {
          "client": {
            "$REMOTE_CERT_NAME": {
              "ip": "$REMOTE_VPN_IP",
              "subnet": [
                "$REMOTE_SUBNET $REMOTE_NETMASK"
              ]
            }
          },
          "push-route": [
            "$SERVER_SUBNET"
          ],
          "subnet": "$VPN_SUBNET"
        },
        "tls": {
          "ca-cert-file": "/config/auth/ca.crt",
          "cert-file": "/config/auth/ovpn-server.crt",
          "dh-file": "/config/auth/dh.pem",
          "key-file": "/config/auth/ovpn-server.key"
        }
      }
    }
  }
}

OpenWRT: VPN client

Configuration is doable from the GUI, but I found it much easier with the command line. I got a lot of the configuration from this gist from braian87b.

Install openvpn and the luci-app-openvpn packages:

opkg update
opkg install openvpn luci-app-openvpn

OpenVPN config files are located in /etc/config. In addition to the certificates we copied there earlier, we will also want to copy the openvpn client configuration to that directory.

Here is the client config file matching the server configuration generated above. Again, remember to replace $variables with your own values. Save it to /etc/config/site2site.conf:

#/etc/config/site2site.conf
client
dev tun
proto udp
remote $DNS_OR_IP_OF_USG_OPENVPN_SERVER $VPN_PORT
cipher AES-256-CBC
auth SHA256
resolv-retry infinite
nobind
comp-lzo yes
persist-key
persist-tun
verb 3
ca /etc/config/ca.crt
cert /etc/config/ovpn-client.crt
key /etc/config/ovpn-client.key
remote-cert-tls server

With the openvpn config file, client certificate & key, and CA certificate we are ready to configure firewall rules and instruct the router to initiate the VPN connection.

# a new OpenVPN instance:
uci set openvpn.site2site=openvpn
uci set openvpn.site2site.enabled='1'
uci set openvpn.site2site.config='/etc/config/site2site.conf'

# a new network interface for tun:
uci set network.site2sitevpn=interface
uci set network.site2sitevpn.proto='none'  #or 'dhcp'
uci set network.site2sitevpn.ifname='tun0'

# a new firewall zone (for VPN):
uci add firewall zone
uci set firewall.@zone[-1].name='vpn'
uci set firewall.@zone[-1].input='ACCEPT'
uci set firewall.@zone[-1].output='ACCEPT'
uci set firewall.@zone[-1].forward='ACCEPT'
uci set firewall.@zone[-1].masq='1'
uci set firewall.@zone[-1].mtu_fix='1'
uci add_list firewall.@zone[-1].network='site2sitevpn'

# enable forwarding from LAN to VPN:
uci add firewall forwarding
uci set firewall.@forwarding[-1].src='lan'
uci set firewall.@forwarding[-1].dest='vpn'

# Finally, you should commit UCI changes:
uci commit

Monitor VPN connection progress by using logread. If all goes well you will see the successful connection established message. If not, you’ll be able to get an idea of what’s wrong.

logread -f

If all goes well you’ll now have a bidirectional VPN between your two sites. However, traffic from the server’s subnet going directly to the client router itself (the OpenWRT device’s IP) will be treated as coming from the WAN interface and will be blocked. If you need to access the OpenWRT device directly from the USG’s subnet, you’ll need to add a firewall rule allowing it:

uci add firewall rule
uci set firewall.@rule[-1]=rule
uci set firewall.@rule[-1].enabled='1'
uci set firewall.@rule[-1].target='ACCEPT'
uci set firewall.@rule[-1].src='wan'
uci set firewall.@rule[-1].name='Allow VPN to access router'
uci set firewall.@rule[-1].src_ip='$SERVER_SUBNET'
uci set firewall.@rule[-1].dest_ip='$INTERNAL_IP_OF_OPENWRT_ROUTER'
uci commit

Troubleshooting

One-sided VPN

I fought for some time with the fact that the VPN was established, but only traffic going from the client network to the server network would work. Traffic from the OpenVPN server subnet to the OpenVPN client subnet would simply hang.

I finally found on the Ubiquiti forums that this is due to OpenVPN’s default behavior of restricting traffic from the server subnet to the client subnet (see the OpenVPN how-to for more information.) The solution is to add lines to the server config informing it of the client network and allowing traffic to it. Below is an example USG config informing it of remote subnet 192.168.230.0/24 and assigning the client an IP of 10.0.76.253:

set interfaces openvpn vtun5 server client client1 ip 10.0.76.253
set interfaces openvpn vtun5 server client client1 subnet 192.168.230.0/24

VPN status stays “stopped” in OpenWRT

The best way to troubleshoot is to look at the logs in real time. SSH to the OpenWRT router and run logread -f, then try to initiate the connection again. The errors there will point you to the problem.

ZFS drive removal ‘part of active pool’ fix

Occasionally I will manually offline a disk in my ZFS pool for one reason or another. Annoyingly I will sometimes get this error when I try to online that same disk back into the pool:

cannot online /dev/sda: cannot relabel '/dev/sda': unable to read disk capacity

The fix, thankfully, is fairly simple: run the following command (make double sure you’re running it on the correct device!)

sudo wipefs -a <DEVICE>

After I ran that command, ZFS automatically picked the disk back up and resilvered it into the pool.

Thanks to this superuser.com discussion for the advice!

Automate USG config deploy with Ubiquiti API in Bash

I have a new Ubiquiti Unifi Security Gateway Pro 4 which is pretty neat; however, the Unifi web interface is pretty limited. Most advanced firewall functions must be configured outside of the GUI. One must create a .json file with the configuration they need, copy that file to the Unifi controller, and then force a provision of the gateway to get it to pick up the new config.

I wanted a way to automate this process but very frustratingly Ubiquiti hasn’t documented their Unifi Controller API. I had to resort to reverse engineering their API by using my browser’s developer console to figure out which API calls were needed to do what I wanted. I then took the API functions from https://dl.ui.com/unifi/5.10.25/unifi_sh_api (the current unifi controller software download link which has unifi_sh_api) and embedded them into a bash script. Thanks to this forum post for the information on how to do this.

This bash script copies the specified config file to the Unifi controller via SCP, then uses curl to issue the API call to tell the controller to force a provision to the device having the supplied mac address.

#!/bin/bash
# Written by Nick Jeppson 08/01/2019
# Inspired by posts made from ubiquiti forums: https://community.ui.com/questions/API/82a3a9c7-60da-4ec2-a4d1-cac68e86b53c
# API interface functions taken from unifi_sh_api shipped with controller version 5.10.25, https://dl.ui.com/unifi/5.10.25/unifi_sh_api
#
# This bash script copies the specified config file to the Unifi controller via SCP
# It then uses curl to issue an API call to tell the controller to force a provision to the device with the supplied mac address. 

#### BEGIN VARIABLES ####
#Fill out to match your environment

gateway_mac="12:34:56:78:90:ab" #MAC address of the gateway you wish to manage
config_file="your_config_file.json"   #Path to config file
unifi_server="unifi_server_name"         #Name/IP of unifi controller server
unifi_gateway_path="/usr/lib/unifi/data/sites/default/config.gateway.json"    #Path to config.gateway.json on the controller
ssh_user="root"                 #User to SSH to controller as
username="unifi_admin_username"             #Unifi username
password="unifi_admin_password" #Unifi password
baseurl="https://unifi_server_name:8443" #Unifi URL
site="default"                  #Unifi site the gateway resides in

#### END VARIABLES ####

#Copy updated config to controller
scp $config_file $ssh_user@$unifi_server:$unifi_gateway_path

#API interface functions
cookie=$(mktemp)
curl_cmd="curl --tlsv1 --silent --cookie ${cookie} --cookie-jar ${cookie} --insecure "
unifi_login() {
    # authenticate against unifi controller
    ${curl_cmd} --data "{\"username\":\"$username\", \"password\":\"$password\"}" $baseurl/api/login
}

unifi_logout() {
    # logout
    ${curl_cmd} $baseurl/logout
}

unifi_api() {
    if [ $# -lt 1 ] ; then
        echo "Usage: $0 <uri> [json]"
        echo "    uri example /stat/sta "
        return
    fi
    uri=$1
    shift
    [ "${uri:0:1}" != "/" ] && uri="/$uri"
    json="$@"
    [ "$json" = "" ] && json="{}"
    ${curl_cmd} --data "$json" $baseurl/api/s/$site$uri
}

#Trigger a provision
unifi_login 
unifi_api /cmd/devmgr "{\"mac\": \"$gateway_mac\", \"cmd\": \"force-provision\"}"
unifi_logout

No more manually clicking provision after manually editing the config file on the controller!

rsync to remote directory containing spaces

I ran into an issue where, if I ran rsync to a remote server with a path that contained spaces, it would simply terminate at the first space instead of copying the whole path. I learned you need some quoting tricks to get this to work properly (thanks to this site for helping me.)

The trick is to enclose the entire remote path, server included, in SINGLE quotes. In addition to the single quotes, you still need to escape every space with a backslash ( \ ):

rsync -aP <source_folder> 'REMOTE_SERVER:/full\ path\ with\ escaped\ spaces/folder/'