Fix no internet in pfSense OpenVPN

I came across an issue with pfSense where I had created an OpenVPN connection, but it would not pass internet traffic. The VPN connection established fine and I could reach all internal hosts, yet all internet traffic simply didn’t work. I checked the firewall logs and there were no firewall denies.

After scratching my head for a while I came across this post, which suggested the cause might be that there was no outbound NAT rule defined for the VPN network I had created.

I went to Firewall / NAT / Outbound and, sure enough, there was no outbound NAT rule for my VPN network. I created one manually and voila! Internet over the VPN!

Fix no Bluetooth devices found in Linux Mint

I had a peculiar issue today where I suddenly lost the ability to see any Bluetooth devices on my Linux Mint 18.2 desktop, which uses a Plugable USB Bluetooth adapter.

The fix involved checking kernel messages for anything insightful. In my case this is what led me to the solution:

[ 608.988353] Bluetooth: hci0: BCM: Patch brcm/BCM20702A1-0a5c-21e8.hcd not found
[ 609.156320] Bluetooth: hci0: BCM: chip id 63
[ 609.172330] Bluetooth: hci0: LPP-3389-WIN
[ 609.173313] Bluetooth: hci0: BCM20702A1 (001.002.014) build 1764
[ 609.173347] bluetooth hci0: Direct firmware load for brcm/BCM20702A1-0a5c-21e8.hcd failed with error -2

After some googling I finally came across the solution here. The fix is to download the firmware for your Bluetooth adapter, place it where the Bluetooth kernel module expects to find it, and then reload the Bluetooth kernel modules.

sudo mkdir -p /lib/firmware/brcm
sudo wget https://s3.amazonaws.com/plugable/bin/fw-0a5c_21e8.hcd -O /lib/firmware/brcm/BCM20702A1-0a5c-21e8.hcd
sudo rmmod btusb bnep bluetooth btrtl btintel btbcm
sudo modprobe btusb bnep bluetooth btrtl btintel btbcm

That did the trick! You can also reboot your machine instead of removing / re-loading the kernel modules and it will accomplish the same thing.
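
To confirm the firmware is being picked up after reloading the modules, check the kernel messages again; assuming the adapter still shows up as hci0, the “Patch … not found” line should be gone:

dmesg | grep -i hci0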

Fix WordPress “Sorry, you are not allowed to access this page.”

I recently came across an issue with my WordPress installation. It’s situated behind a load balancer where SSL is terminated. The load balancer takes HTTPS traffic, then forwards it as plain HTTP on port 80 to the WordPress server.

I was running into a redirect loop after installing WordPress. The solution was to add this bit of code to wp-config.php:

define('FORCE_SSL_ADMIN', true);
// The load balancer terminates SSL, so trust its forwarded-proto header to tell WordPress the original request was HTTPS
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https') {
    $_SERVER['HTTPS'] = 'on';
}

This solves the redirect loop issue but then I ran into a different problem. When I tried to sign into wp-admin I would get this message:

Sorry, you are not allowed to access this page.

After much digging I found this post, which emphasizes that you must place that code BEFORE anything else in wp-config.php (except for the opening PHP tag). Success!

Fix no sound in Wine

Lately I’ve been doing 100% of my gaming in Linux. The latest versions of Wine in Arch Linux have been fantastic (for the most part). I recently installed a game called Gauntlet (a Windows-only Steam game) and for some reason I had no sound. Sound worked fine in other Wine games, just not this one.

After much digging I found this post on the Arch Linux forums which pointed to the culprit: the proper 32-bit sound libraries weren’t installed. The fix was as simple as:

sudo pacman -Sy lib32-alsa-plugins lib32-libpulse lib32-openal
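
If you want to confirm the 32-bit libraries actually landed, pacman’s query mode will list the installed versions (and complain about anything that’s missing):

pacman -Q lib32-alsa-plugins lib32-libpulse lib32-openal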

Success!

Backup your systems with urBackup

In addition to my ZFS snapshots I decided to implement a secondary backup system. I landed on urBackup because it’s easy to use and, more importantly, was easy to set up.

Server Install

Assuming a CentOS 7-based system:

cd /etc/yum.repos.d/
sudo wget http://download.opensuse.org/repositories/home:uroni/CentOS_7/home:uroni.repo
sudo yum -y install urbackup-server
sudo systemctl enable urbackup-server
sudo systemctl start urbackup-server

Open up necessary ports for the server:

sudo firewall-cmd --add-port=55413-55415/tcp --permanent
sudo systemctl reload firewalld

By default the urBackup web interface listens on port 55414. You can serve it on port 80 (and/or 443 for HTTPS) instead by installing nginx and having it proxy the connections for you.

sudo yum -y install nginx
sudo systemctl enable nginx
sudo setsebool -P httpd_can_network_connect 1 #if you're using selinux

Copy the following into /etc/nginx/conf.d/urbackup.conf (make sure to change server_name to suit your needs)

server {
        server_name backup;

        location / {
                proxy_pass http://localhost:55414/;
        }
}

Then start nginx:

sudo systemctl start nginx

You should then be able to access the urBackup console by navigating to the IP / hostname of your backup server in a browser.
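
If you’d rather check from the command line first, a quick header request through the proxy should get a response back from the urBackup web server (this assumes the server_name of backup from the snippet above resolves on your network):

curl -I http://backup/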

Client Install:

urBackup can use a snapshotting system known as dattobd. Use it if you can in order to get more consistent backups; otherwise urBackup will simply copy files from the running host, which isn’t always desirable (databases, for example).

Install dattobd (optional):

sudo yum -y update
# reboot if your kernel ends up being updated
sudo yum -y localinstall https://cpkg.datto.com/datto-rpm/repoconfig/datto-el-rpm-release-$(rpm -E %rhel)-latest.noarch.rpm
sudo yum -y install dkms-dattobd dattobd-utils

Install urbackup client:

TF=`mktemp` && wget "https://hndl.urbackup.org/Client/2.1.15/UrBackup%20Client%20Linux%202.1.15.sh" -O $TF && sudo sh $TF; rm $TF
#Select dattobd when prompted if desired

Configure Firewall:

sudo firewall-cmd --add-port=35621-35623/tcp --permanent
sudo systemctl reload firewalld

Once a client is installed, and assuming it’s on the same network as the backup server, it will automatically add itself and begin backing up. If it doesn’t show up it’s usually a firewall issue.
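
If a client isn’t appearing it can also help to check from the client’s side. Newer 2.x clients include a status subcommand in urbackupclientctl that reports whether a backup server has been found; if your version has it, it’s a quick sanity check before digging into firewall rules:

sudo urbackupclientctl status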

Restore

Restoration of individual files is easily done through the web console. If you have a Windows system, restoring from an image backup is also easy.

Linux hosts

Recovery is trickier if you want to restore a Linux system. Install an empty system of the same distribution and give it the same hostname as the original. Install the client as outlined above, then run:

sudo /usr/local/bin/urbackupclientctl restore-start -b last

Troubleshooting

If for some reason a client does not show up after being removed from the GUI, uninstall and re-install the client software:

sudo /usr/local/sbin/uninstall_urbackupclient
TF=`mktemp` && wget "https://hndl.urbackup.org/Client/2.1.15/UrBackup%20Client%20Linux%202.1.15.sh" -O $TF && sudo sh $TF; rm $TF

Create & Mount disc images in Linux

When working with hard drives it is always a good idea to back the entire thing up before proceeding. I wanted to write down the procedure so I don’t keep forgetting it.

Create disc image

dd does the trick here.

sudo dd if=/dev/<drive device file> of=image.img bs=64M

If you wish to see the progress of the above dd command, you can open up a separate window and send the dd process a USR1 signal:

kill -USR1 `pidof dd`
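
Alternatively, reasonably recent versions of GNU dd (coreutils 8.24 or newer) can report progress on their own, which skips the second window entirely:

sudo dd if=/dev/<drive device file> of=image.img bs=64M status=progress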

Mount disc image read only

You can now disconnect the drive and work with its image instead (great for forensics or dealing with a dying drive.)

On recent Linux systems you can do this with losetup; its -P flag scans the image’s partition table and creates the partition device files for you.

sudo losetup -Pr -f <path to image file>
sudo losetup #find which loop device file corresponds with your image here
sudo mount -o ro /dev/<loopdevice>p<partition number> <mountpoint>
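
If you’re not sure which partition number you’re after, fdisk can read the partition table straight out of the image file before you mount anything:

sudo fdisk -l <path to image file>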

Note: remove the r from the losetup command above if you want to mount the image read/write (required for LVM partitions).

For example, this is what I did on my system for my aunt’s laptop (I was interested in the 2nd partition on her drive, the one containing Windows files):

sudo losetup -Pr -f susan-ssd.img
sudo losetup

NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE                    DIO
/dev/loop0         0      0         0  1 /home/partimag/susan-ssd.img   0

sudo mount -o ro /dev/loop0p2 mount/

When you’re done, unmount the image and delete the image mapping:

sudo umount <path to mount directory>
sudo losetup -d <loop device obtained earlier>

Fix pfSense OpenVPN unable to bind to VIP issue

A while ago I bought multiple static IP addresses from my ISP. I configured the IPs as Virtual IP addresses under Firewall / Virtual IPs. Everything was dandy… until I tried to assign OpenVPN to listen on one of my new IP addresses. No matter what I tried, I could only get it to work if it listened on my gateway IP; none of my other static IP addresses would work. The GUI would let me save the configuration, but if I headed over to Status / OpenVPN I would see the following:

[error] Unable to contact daemon Service not running?

Digging further in the logs by going to Status / System Logs and then selecting the OpenVPN tab revealed the following snippet:

Time Process PID Message
May 15 19:42:00 openvpn 73195 Exiting due to fatal error
May 15 19:42:00 openvpn 73195 TCP/UDP: Socket bind failed on local address [AF_INET]<redacted>:443: Can’t assign requested address

After much digging I finally stumbled upon this post in the pfSense forums. It mentions that in the Firewall / Virtual IPs screen you should not bind your IP addresses to the WAN interface (in the Interface option), but rather bind them to Localhost. That did the trick!

Site to site VPN between pfSense and DD-WRT

I’ve been trying to establish a site-to-site VPN connection between my house and my parents’ for a couple of years now. Each time I try I become frustrated and eventually give up. No longer! I’ve finally gotten a site-to-site VPN working between my pfSense router and my parents’ Netgear Nighthawk R8000 running DD-WRT v3.

It was quite the undertaking for me to get these two systems talking. I drew a lot of inspiration from here. In order to get this to work you need to keep these things in mind:

  • Protocol, port, device type, encryption cipher, hash algorithm, TLS authentication, and certificate settings all need to match
  • VPN IP addresses need to be assigned to both the server and the client
  • Routes for each network need to be established on both devices
  • Firewalls need to be configured to allow traffic to/from each network through the VPN tunnel

I used the following settings to get things working between these two devices:

OpenVPN Server (pfSense)

If you haven’t already, generate a certificate authority and server certificate. Do this in System / Cert Manager and click Add. When generating the certificate make sure the Certificate Type is Server Certificate.

General

  • Server mode: Peer to Peer (SSL / TLS)
  • Protocol: UDP
  • Device mode: tap

Cryptographic settings

  • Enable authentication of TLS packets: checked
  • Automatically generate a shared TLS authentication key: checked
  • Peer certificate authority & Server Certificate:  Select appropriate CA / certificate from the dropdowns here
  • DH Parameter length: 2048
  • Encryption algorithm: AES-256-CBC
  • Auth digest algorithm: SHA256 (256 BIT)
  • Certificate depth: One (Client + Server)

Tunnel Settings

  • IPv4 Tunnel network: Enter a unique network (one not in use on either side) here in CIDR format, e.g. 10.1.1.0/24
  • IPv4 Local Network(s): Enter the networks you would like the remote network to access
  • IPv4 Remote Network(s): Enter the networks you would like the local network to access
  • Compression: Enabled with Adaptive Compression

Advanced Configuration

This may be optional: I found that for some reason the routing table on my pfSense server wasn’t being populated with the remote network, so I added a custom option to take care of that:

route <remote network> <remote subnet mask> <remote VPN IP>

In my example it ended up being “route 192.168.98.0 255.255.255.0 10.54.98.2”

Key export

You will need to export the Certificate Authority certificate, as well as the certificate & private key the DD-WRT client will use. Do this by going to System / Cert Manager; there are small icons to the right of each certificate that you can click to download them.

Export the CA certificate, as well as the certificate and key for whatever was specified in the Server Certificate section of the OpenVPN configuration above.

OpenVPN client (dd-wrt)

Go to Services / VPN and look for the OpenVPN Client section

  • Start OpenVPN Client: Enable
  • Tunnel Device: TAP
  • Tunnel Protocol: UDP
  • Encryption cipher: AES-256 CBC
  • Hash Algorithm: SHA256
  • User Pass Authentication: Disable
  • Advanced Options: Enable
  • LZO Compression: Adaptive
  • NAT: Enable
  • Firewall protection: Disable
  • Bridge TAP to br0: Enable
  • TLS Auth Key: <Paste the contents of the “Key” section under pfSense’s Cryptographic settings area of the OpenVPN server configuration>
  • CA Cert: <Paste contents of downloaded CRT file from pfsense’s CA>
  • Public Client Cert: <Paste contents of downloaded CRT file from pfsense’s certificate section>
  • Private Client Key: <Paste contents of downloaded .key file from pfsense’s certificate section>

IN THEORY this should be all that you have to do. The tunnel should establish and traffic should flow between both networks. Sadly it wasn’t that simple for my setup.

Troubleshooting

One-way VPN

After setting up the tunnel I saw everything was connected, but traffic was unidirectional (the remote site could ping the local network but not vice versa).

On the pfSense router, I ran:

netstat -r | grep 192.168

(change the grep to whatever your remote network is)

I noticed that there was no route to the remote network. To fix this I added a route line to the OpenVPN server’s custom options (adjust to match your networks):

route 192.168.98.0 255.255.255.0 10.54.98.2

Blocked by iptables

Adding the route on the server helped but things still weren’t getting through. I enabled logging on the DD-WRT side, consoled into the router, and ran the following:

watch -n 1 "dmesg | grep 192.168.98"

This revealed a lot of dropped packets coming from my OpenVPN server’s network. After a lot of digging I found this forum post that suggested a couple of custom iptables rules to allow traffic between the bridged network and the OpenVPN interface (adjust interface names as necessary):

iptables -I FORWARD -i br0 -o tap1 -j ACCEPT 
iptables -I FORWARD -i tap1 -o br0 -j ACCEPT

These rules don’t survive a reboot, so you’ll want to enter those two commands in the Administration / Commands section of the DD-WRT web interface and click “Save Firewall”.
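
To verify the rules are in place and actually matching traffic, the packet counters on the FORWARD chain are a good indicator:

iptables -vnL FORWARD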

Success

Finally, after a custom route on the pfSense side and custom iptables rules on the DD-WRT side, a two-way VPN has been established!

CentOS 7, nginx, reverse proxy, & Let’s Encrypt

With StartCom certificates no longer being trusted, I found myself needing a new way to obtain free SSL certificates. Let’s Encrypt is perfect for this. Unfortunately Sophos UTM does not support Let’s Encrypt, so it was time to replace Sophos as my reverse proxy. Enter nginx.

The majority of the information I used to get this up and running came from DigitalOcean, with help from HowtoForge. My solution involves CentOS 7, nginx, and the Let’s Encrypt client.

Install necessary packages

sudo yum install nginx letsencrypt
sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --add-service=https --permanent
sudo firewall-cmd --reload
sudo systemctl enable nginx

Tell SELinux to allow nginx to make HTTP network connections:

sudo setsebool -P httpd_can_network_connect 1

Generate certificates

Generate your SSL certificates with the letsencrypt command. It relies on your site being reachable from the internet on port 80 via public DNS. Replace the arguments below to reflect your setup:

sudo letsencrypt certonly -a webroot --webroot-path=/usr/share/nginx/html -d example.com -d www.example.com

The above command places the certs in /etc/letsencrypt/live/<domain_name>
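
Each domain gets its own directory there containing symlinks named cert.pem, chain.pem, fullchain.pem, and privkey.pem. A quick listing shows the paths you’ll reference from nginx later:

sudo ls -l /etc/letsencrypt/live/example.com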

Sophos UTM certificates

In my case I had a few paid SSL certificates I wanted to copy over from Sophos UTM to nginx. In order to do this I had to massage them a little bit as outlined here.

Download the PKCS#12 (.p12) bundle from Sophos along with the certificate authority file, then use openssl to convert the p12 into a certificate and key that nginx will accept:

openssl pkcs12 -nokeys -in server-cert-key-bundle.p12 -out server.pem
openssl pkcs12 -nocerts -nodes -in server-cert-key-bundle.p12 -out server.key
cat server.pem Downloaded_CA_file.pem > server-ca-bundle.pem

Once you have your keyfiles you can copy them wherever you like and use them in your site-specific SSL configuration file.

Auto renewal

First make sure that the renew command works successfully:

sudo letsencrypt renew

If the output indicates success (typically a message saying the certificates are not yet due for renewal), add a cron job to check for renewal monthly:

sudo crontab -e
30 2 1 * * /usr/bin/letsencrypt renew >> /var/log/le-renew.log
35 2 1 * * /bin/systemctl reload nginx

Configure nginx

Uncomment the https settings block in /etc/nginx/nginx.conf to allow for HTTPS connections.

Generate a strong DH group:

sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

Create SSL configuration snippets in /etc/nginx/conf.d/ssl-<sitename>.conf. Make sure to point them at the SSL certificate files generated by the letsencrypt command:

ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_dhparam /etc/ssl/certs/dhparam.pem;

Here is a sample ssl.conf file:

server {
        listen 443 ssl;

        ssl_certificate /etc/letsencrypt/live/<HOSTNAME>/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/<HOSTNAME>/privkey.pem;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;

        access_log /var/log/<HOSTNAME>.log;

        server_name <HOSTNAME>;

        location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;

                proxy_pass http://<BACKEND_HOSTNAME>/;
        }
}


Redirect HTTP to HTTPS by creating a redirect configuration file (optional):

sudo vim /etc/nginx/conf.d/redirect.conf
server {
        server_name
                <DOMAIN_1>
                ...
                <DOMAIN_N>;

        location /.well-known {
                alias /usr/share/nginx/html/.well-known;
                allow all;
        }

        location / {
                return 301 https://$host$request_uri;
        }
}
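
Before restarting, it’s worth letting nginx validate everything you just created; it will flag typos in any of the conf.d snippets:

sudo nginx -t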


Restart nginx:

sudo systemctl restart nginx

Troubleshooting

HTTPS redirects always go to the host at the top of the list

Solution found here: use the $host variable instead of the $server_name variable in your configuration.

Websockets HTTP 400 error

Websockets require a bit more massaging in the configuration file as outlined here. Modify your site-specific configuration to add these lines:

# we're in the http context here
map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}

server {
        # Add these two headers to the server/location block that proxies the backend
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
}


Fix FreeNAS multipath error with USB drives

I have a Mediasonic 4-bay USB drive enclosure plugged into a FreeNAS system as a backup target. The enclosure randomizes hardware ID information, which confuses FreeNAS. The main symptom is an error message about the multipath state not being optimal.

The fix is to drop to a command line and destroy the mistaken multipath configuration (thanks to this site for the information).

For FreeNAS 9.x:

gmultipath destroy disk1 disk2 disk3
reboot
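
After the reboot you can confirm the bogus multipath providers are gone; gmultipath’s status subcommand should come back empty (or at least no longer list the USB disks):

gmultipath status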

For FreeNAS Corral it’s a bit different: you have to brute-force remove the multipath kernel module (thanks to this site for the explanation).

mv /boot/kernel/geom_multipath.ko /boot/kernel/geom_multipath.ko.old

Keep in mind that you may have to do this after each update.

An additional problem is that for some reason the GUI doesn’t see the drives even though the OS does. If you want SMART checking on those invisible drives you have to do a bit of hackery: create a SMART job for the visible drives, then manually drop into a shell to add the invisible ones. Below are my notes from when I did this (I’m not sure these changes survive a reboot).

sudo vi /etc/local/smartd.conf
/dev/da2 -a -n never -W 0,0,0 -m root -M exec /usr/local/www/freenasUI/tools/smart_alert.py -s S/(01|02|03|04|05|06|07|08|09|10|11|12)/../(1|2|3|4|5|6|7)/(00)

ps aux|grep smartd

sudo kill -9 $(pidof smartd)
sudo /usr/local/sbin/smartd -i 1800 -c /usr/local/etc/smartd.conf -p /var/run/smartd.pid