Category Archives: CLI

Create & Mount disc images in Linux

When working with hard drives it is always a good idea to back the entire thing up before proceeding. I wanted to write down the procedure so I don’t keep forgetting it.

Create disc image

dd does the trick here.

sudo dd if=/dev/<drive device file> of=image.img bs=64M

If you wish to see the progress of the above dd command, you can open up a separate window and send dd the USR1 signal, which makes it print its progress:

kill -USR1 `pidof dd`
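If your dd is new enough (GNU coreutils 8.24 or later), you can skip the signal trick entirely and have dd report its own progress:

sudo dd if=/dev/<drive device file> of=image.img bs=64M status=progress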

Mount disc image read only

You can now disconnect the drive and work with its image instead (great for forensics or dealing with a dying drive.)

In later versions of Linux you can do this with losetup and partprobe.

sudo losetup -Pr -f <path to image file>
sudo losetup #find which loop device file corresponds with your image here
sudo mount -o ro /dev/<loopdevice>p<partition number> <mountpoint>

For example, this is what I did on my system for my aunt’s laptop (I was interested in the 2nd partition on her drive, the one containing Windows files).

Note: remove the r flag from the above losetup command if you want to mount read/write (required for LVM partitions).

sudo losetup -Pr -f susan-ssd.img
sudo losetup

NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE                    DIO
/dev/loop0         0      0         0  1 /home/partimag/susan-ssd.img   0

sudo mount -o ro /dev/loop0p2 mount/

When you’re done, unmount the image and delete the image mapping:

sudo umount <path to mount directory>
sudo losetup -d <loop file obtained earlier>

Site to site VPN between pfSense and DD-WRT

I’ve been trying to establish a site-to-site VPN connection between my house and my parents’ for a couple of years now. Each time I try I become frustrated and eventually give up. No longer! I’ve finally gotten a site-to-site VPN working between my pfSense router and my parents’ Netgear Nighthawk R8000 running DD-WRT v3.

It was quite the undertaking for me to get these two systems talking. I drew a lot of inspiration from here. In order to get this to work you need to keep these things in mind:

  • Protocol, port, device type, encryption cipher, hash algorithm, TLS authentication, and certificate settings all need to match
  • VPN IP addresses need to be assigned to both the server and the client
  • Routes for each network need to be established on both devices
  • Firewalls need to be configured to allow traffic to/from each network through the VPN tunnel

I used the following settings to get things working between these two devices:

OpenVPN Server (pfSense)

If you haven’t already, generate a certificate authority and server certificate. Do this in System / Cert Manager and click Add. When generating the certificate make sure the Certificate Type is Server Certificate.

General

  • Server mode: Peer to Peer (SSL / TLS)
  • Protocol: UDP
  • Device mode: tap

Cryptographic settings

  • Enable authentication of TLS packets: checked
  • Automatically generate a shared TLS authentication key: checked
  • Peer certificate authority & Server Certificate:  Select appropriate CA / certificate from the dropdowns here
  • DH Parameter length: 2048
  • Encryption algorithm: AES-256-CBC
  • Auth digest algorithm: SHA256 (256 BIT)
  • Certificate depth: One (Client + Server)

Tunnel Settings

  • IPv4 Tunnel network: Enter a unique network (one not existing on either side) here in CIDR format, e.g. 10.1.1.0/24
  • IPv4 Local Network(s): Enter the networks you would like the remote network to access
  • IPv4 Remote Network(s): Enter the networks you would like the local network to access
  • Compression: Enabled with Adaptive Compression

Advanced Configuration

This step may be optional. I found that for some reason the routing table wasn’t properly populated with the remote network on my pfSense server, so I added a custom option to take care of that:

route <remote network> <remote subnet mask> <remote VPN IP>

In my example it ended up being “route 192.168.98.0 255.255.255.0 10.54.98.2”

Key export

You will need to export the Certificate Authority certificate as well as the client certificate & private key files for use with dd-wrt. Do this by going to System / Cert Manager. There are little icons to the right of the certificates where you can click to download them.

Export the CA certificate as well as both the certificate and key from whatever was specified in the Server Certificate section from the above OpenVPN configuration.

OpenVPN client (dd-wrt)

Go to Services / VPN and look for the OpenVPN Client section

  • Start OpenVPN Client: Enable
  • Tunnel Device: TAP
  • Tunnel Protocol: UDP
  • Encryption cipher: AES-256 CBC
  • Hash Algorithm: SHA256
  • User Pass Authentication: Disable
  • Advanced Options: Enable
  • LZO Compression: Adaptive
  • NAT: Enable
  • Firewall protection: Disable
  • Bridge TAP to br0: Enable
  • TLS Auth Key: <Paste the contents of the “Key” section under pfSense’s Cryptographic settings area of the OpenVPN server configuration>
  • CA Cert: <Paste contents of downloaded CRT file from pfsense’s CA>
  • Public Client Cert: <Paste contents of downloaded CRT file from pfsense’s certificate section>
  • Private Client Key: <Paste contents of downloaded .key file from pfsense’s certificate section>

IN THEORY this should be all that you have to do. The tunnel should establish and traffic should flow between both networks. Sadly it wasn’t that simple for my setup.

Troubleshooting

One-way VPN

After setting up the tunnel I saw everything was connected, but the traffic was unidirectional (remote could ping the local network but not vice versa.)

On the pfsense router, I ran

netstat -r | grep 192.168

(change the grep to whatever your remote network is)

I noticed that there were no routes to the remote network. To fix this I appended a server option in the OpenVPN server config (adjust this to match your networks)

route 192.168.98.0 255.255.255.0 10.54.98.2

Blocked by iptables

Adding the route on the server helped but things still weren’t getting through. I enabled logging on the DD-WRT side, consoled into the router, and ran the following:

watch -n 1 "dmesg | grep 192.168.98"

This revealed a lot of dropped packets from my OpenVPN server’s network. After a lot of digging I found this forum post that suggested a couple of custom iptables rules to allow traffic between the bridged network and the OpenVPN network (adjust interface names as necessary):

iptables -I FORWARD -i br0 -o tap1 -j ACCEPT 
iptables -I FORWARD -i tap1 -o br0 -j ACCEPT

This doesn’t survive a reboot, so you’ll want to enter those two commands in the Administration / Commands section of the dd-wrt web configuration and click “Save Firewall”
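To confirm the rules are in place (after a reboot, for example) you can list the FORWARD chain and look for the tap1 entries:

iptables -vnL FORWARD | grep tap1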

Success

Finally, after a custom routing rule on the pfsense side and a custom iptables rule on the dd-wrt side, two way VPN has been established!

CentOS7, nginx, reverse proxy, & let’s encrypt

With the loss of trust in StartCom certificates I found myself needing a new way to obtain free SSL certificates. Let’s Encrypt is perfect for this. Unfortunately Sophos UTM does not support Let’s Encrypt. It became time to replace Sophos as my reverse proxy. Enter nginx.

The majority of the information I used to get this up and running came from digitalocean, with help from howtoforge. My solution involves CentOS7, nginx, and the Let’s Encrypt software.

Install necessary packages

sudo yum install nginx letsencrypt
sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --add-service=https --permanent
sudo firewall-cmd --reload
sudo systemctl enable nginx

Inform selinux to allow nginx to make http network connections:

sudo setsebool -P httpd_can_network_connect 1
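You can verify the boolean took effect:

getsebool httpd_can_network_connect

This should report httpd_can_network_connect --> on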

Generate certificates

Generate your SSL certificates with the letsencrypt command. This command relies on being able to reach your site over the internet on port 80 with public DNS. Replace the arguments below to reflect your setup:

sudo letsencrypt certonly -a webroot --webroot-path=/usr/share/nginx/html -d example.com -d www.example.com

The above command places the certs in /etc/letsencrypt/live/<domain_name>
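You can sanity-check the results by listing that directory; it should contain cert.pem, chain.pem, fullchain.pem, and privkey.pem:

sudo ls /etc/letsencrypt/live/example.com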

Sophos UTM certificates

In my case I had a few paid SSL certificates I wanted to copy over from Sophos UTM to nginx. In order to do this I had to massage them a little bit as outlined here.

Download the p12 file from Sophos along with the certificate authority file, then use openssl to convert the p12 into a certificate/key pair nginx will accept.

openssl pkcs12 -nokeys -in server-cert-key-bundle.p12 -out server.pem
openssl pkcs12 -nocerts -nodes -in server-cert-key-bundle.p12 -out server.key
cat server.pem Downloaded_CA_file.pem > server-ca-bundle.pem

Once you have your keyfiles you can copy them wherever you like and use them in your site-specific SSL configuration file.
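For example, the relevant nginx directives would look something like this (a minimal sketch, assuming you copied the files to /etc/nginx/ssl/):

ssl_certificate /etc/nginx/ssl/server-ca-bundle.pem;
ssl_certificate_key /etc/nginx/ssl/server.key;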

Auto renewal

First make sure that the renew command works successfully:

sudo letsencrypt renew

If the output indicates success (a message saying the certificates are not yet due for renewal), add cron jobs to check monthly for renewal:

sudo crontab -e
30 2 1 * * /usr/bin/letsencrypt renew >> /var/log/le-renew.log
35 2 1 * * /bin/systemctl reload nginx

Configure nginx

Uncomment the https settings block in /etc/nginx/nginx.conf to allow for HTTPS connections.

Generate a strong DH group:

sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

Create SSL conf snippets in /etc/nginx/conf.d/ssl-<sitename>.conf. Make sure to include the proper location of your SSL certificate files as generated with the letsencrypt command.

ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_dhparam /etc/ssl/certs/dhparam.pem;

Here is a sample ssl.conf file:

server {
        listen 443 ssl;

        ssl_certificate /etc/letsencrypt/live/<HOSTNAME>/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/<HOSTNAME>/privkey.pem;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;

        access_log /var/log/<HOSTNAME>.log;

        server_name <HOSTNAME>;

        location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;

                proxy_pass http://<BACKEND_HOSTNAME>/;
        }
}

Redirect http to https by creating a redirect configuration file (optional)

sudo vim /etc/nginx/conf.d/redirect.conf

server {
        server_name
                <DOMAIN_1>
                ...
                <DOMAIN_N>;

        location /.well-known {
                alias /usr/share/nginx/html/.well-known;
                allow all;
        }

        location / {
                return 301 https://$host$request_uri;
        }
}

Restart nginx:

sudo systemctl restart nginx
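If the restart fails, test the configuration for syntax errors first:

sudo nginx -t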

Troubleshooting

HTTPS redirects always go to the host at the top of the list

Solution found here: use the $host variable instead of the $server_name variable in your configuration.

Websockets HTTP 400 error

Websockets require a bit more massaging in the configuration file as outlined here. Modify your site-specific configuration to add these lines:

# we're in the http context here
map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}

server {
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}

Fix FreeNAS multipath error with USB drives

I have a Mediasonic 4 bay USB drive enclosure plugged into a FreeNAS system as a backup. This enclosure randomizes hardware ID information, which confuses FreeNAS. The main symptom of this is the error message about multipath not being optimal.

The way to fix this is to drop to a command line and destroy the mistaken multipath configuration (thanks to this site for the information)

For FreeNAS 9.x:

gmultipath destroy disk1 disk2 disk3
reboot

For FreeNAS Corral it’s a bit different. You have to brute force remove the kernel module for multipath (thanks to this site for the explanation)

mv /boot/kernel/geom_multipath.ko /boot/kernel/geom_multipath.ko.old

Keep in mind that you may have to do this after each update.

An additional problem is that for some reason the GUI doesn’t see the drives but the OS does. If you want SMART checking on those other drives you have to do a bit of hackery: create a SMART job for the visible drives, then manually drop into a shell to add the invisible ones. Below are my notes for when I did this (I’m not sure these changes survive a reboot).

sudo vi /etc/local/smartd.conf
/dev/da2 -a -n never -W 0,0,0 -m root -M exec /usr/local/www/freenasUI/tools/smart_alert.py -s S/(01|02|03|04|05|06|07|08|09|10|11|12)/../(1|2|3|4|5|6|7)/(00)

ps aux | grep smartd

kill -9 $(pidof smartd)
sudo /usr/local/sbin/smartd -i 1800 -c /usr/local/etc/smartd.conf -p /var/run/smartd.pid

Install Linux on Chromebook Pixel 2 (Samus)

I’ve run crouton on my Chromebook Pixel 2 (2015, codename Samus) for some time now but I’ve found myself wanting more. VirtualBox, kernel access, graphics, and more don’t perform well in a chroot. Thankfully it’s actually pretty easy to dual boot Chrome OS and Linux on your chromebook thanks to chrx (pronounced “marshmallow”?)

Installation

The first part of installation is identical to setting up crouton:

  • Enter developer mode:
    Press ESC, Refresh, and Power simultaneously (while the chromebook is on). A scary screen will pop up saying the OS is missing or damaged. Press CTRL+D, then press Enter when the OS verification screen comes up.
  • Wait several minutes for developer mode to be installed. Note it will wipe your device to do this.
    • Every time you power on the chromebook from now on you’ll get a scary screen. Press CTRL+D to bypass it (or wait 30 seconds)
    • If you hit space on this screen instead of CTRL+D it will powerwash (nuke) your data

Enable SeaBIOS:

Open up a shell (CTRL + ALT + T, shell, enter) and enter the following

sudo crossystem dev_boot_usb=1 dev_boot_legacy=1

and reboot.
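You can read the flags back with crossystem at any time to verify they stuck; each should print 1:

sudo crossystem dev_boot_usb dev_boot_legacy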

Next, download and run the chrx script twice. The first run will partition and powerwash your system; the second run will actually install GalliumOS (or Ubuntu or Fedora) alongside ChromeOS.

cd ; curl -Os https://chrx.org/go && sudo sh go

Reboot after the partitioning step, then open a shell again and run the script a second time. You can specify a number of arguments to the go script; I wanted to use Cinnamon on Fedora so these are the ones I used:

cd ; curl -Os https://chrx.org/go && sudo sh go -d fedora -e cinnamon -r latest -H <hostname> -U <username> -Z <timezone>

Fedora took quite a long time to install (1 hour in my case.) Just let the script do its thing. Once complete you can reboot and press CTRL + D for ChromeOS or CTRL + L for Linux.

After that, reboot into your new Linux environment!

Cleaning up

There were a few samus-specific things I needed to do.

Locale

For some reason my locale was set to an African country. Correct it as follows (thanks to here). I added the SELinux commands because I was getting permission denied errors without them.

sudo setenforce 0
localectl set-locale LANG=en_US.utf8
sudo setenforce 1

Audio doesn’t work (no sound)

This issue stems from the fact that the sound card is not presented as the first available card. The system defaults to HDMI sound instead. Fortunately this page has instructions on how to fix this. If you’re running GalliumOS default you can follow the instructions from the link above. In my case I had to get a bit creative.

  1. Download the samus patches from here:
     wget https://github.com/GalliumOS/galliumos-samus/archive/master.zip
  2. Extract the subfolders inside said zip file to the root directory
  3. Reboot
  4. Run the following:
     cp -r /etc/skel/.config $HOME
     sudo samus-alsaenable-speakers
     sudo samus-touch-reset

Success! You can now dual boot between full-blown Linux and ChromeOS on your Chromebook Pixel.

Touchpad / touchscreen stop working after resume

Occasionally my touchpad and touchscreen stop responding after resuming from sleep. The galliumOS-samus fix mentioned above has a handy reset script that fixes this. Simply run:

sudo samus-touch-reset

and your touch functionality is restored. I bound this command to a key shortcut to make things easier.

Virtualbox won’t start

After installing virtualbox I got a strange error message when trying to start VMs:

Failed to load VMMR0.r0 (VERR_SUPLIB_OWNER_NOT_ROOT)

I found this mention saying that /usr has to be owned by root. Easy enough to fix:

sudo chown root:root /usr/

Simple network folder mount script for Linux

I wrote a simple little network mount script for Linux desktops. I wanted to replicate my Windows box as best as I could where a bunch of network drives are mapped upon user login. This script relies on having gvfs-mount and the cifs utilities installed (installed by default in Ubuntu.)

#!/bin/bash
#Simple script to mount network drives

#Specify network paths here, one per line
#Use forward slashes instead of backslashes
FOLDER=(
  server1/folder1
  server1/folder2
  server2/folder2/folder3
  server3/
)

#Create a symlink to gvfs mounts in the home directory (skip if it already exists)
[ -e ~/Drive_Mounts ] || ln -s "$XDG_RUNTIME_DIR/gvfs" ~/Drive_Mounts

for mountpoint in "${FOLDER[@]}"
do
  gvfs-mount "smb://$mountpoint"
done

Mark this script as executable and place it in /usr/local/bin:
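For example (assuming you saved the script as drive-mount.sh in your current directory, matching the Exec line in the desktop file below):

sudo cp drive-mount.sh /usr/local/bin/drive-mount.sh
sudo chmod +x /usr/local/bin/drive-mount.sh

Then make it a default startup application for all users: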

sudo vim /etc/xdg/autostart/drive-mount.desktop
[Desktop Entry]
Name=Mount Network Drives
Type=Application
Exec=/usr/local/bin/drive-mount.sh
Terminal=false

Voila, now you’ve got your samba mount script starting up for every user.

Monitor your servers with phpservermonitor

I have a handful of servers and for years I’ve been wanting to get some sort of monitoring in place. Today I tried out php server monitor and found it was pretty easy to set up and use.

Download

The installation process was pretty straightforward.

  • Install PHP, mysql, and apache
  • Create database, user, password, and access rights for mysql
  • Download .tar.gz and extract to /var/www
  • Configure Apache site file to point to phpservermonitor directory
  • Navigate to the IP / URL of your apache server and run the installation script

The above process is documented fairly well on their website. I configured this to run on my Raspberry Pi 2. This was my process:

Install dependencies:

sudo apt-get install php5 php5-curl php5-mysql mysql-server

Configure mysql:

sudo mysql_secure_installation

Create database:

mysql -u root -p
create database phpservermon;
create user 'phpservermon'@'localhost' IDENTIFIED BY 'password';
grant all privileges on phpservermon.* TO 'phpservermon'@'localhost';
flush privileges;

Extract phpservermon to /var/www and grant permissions

tar zxvf <phpservermon_gzip_filename> -C /var/www
sudo chown -R www-data /var/www/*

Configure php:

sudo vim /etc/php5/apache2/php.ini
#uncomment date.timezone and set your timezone
date.timezone = "America/Boise"

Configure apache:

sudo cp /etc/apache2/sites-available/default /etc/apache2/sites-available/phpservermon

#Modify /etc/apache2/sites-available/phpservermon server root to point to directory above, also add a ServerName if desired

sudo a2ensite phpservermon
sudo service apache2 reload

Configure cron (I have it check every minute but you can configure whatever you like)

*/1 * * * * /usr/bin/php /var/www/phpservermon/cron/status.cron.php

Navigate to the web address you’ve configured in apache and follow the wizard.

It’s pretty simple but it works! A nice php application to monitor websites and services.

Hot swapped disk missing in FreeNAS fix

I hot-removed a malfunctioning drive in my FreeNAS unit recently. The problem: its replacement would not show up in available drives. camcontrol devlist wouldn’t reveal the device, even after camcontrol rescan all.

I found from this site that another command exists – camcontrol reset. I found out which bus to reset (instead of resetting all of them) by looking at logs and noticing the scbus number. Once obtained, I ran the following commands (the last number refers to the bus my drive was on)

sudo camcontrol reset 10
sudo camcontrol rescan 10

That did the trick! The drive was suddenly visible by the FreeNAS system once more.
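As an aside, if digging through logs for the scbus number is a pain, camcontrol can print the buses directly with a verbose device listing:

sudo camcontrol devlist -v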

Generate static html files from any website with httrack

I came across a need to take a dynamically generated CakePHP site and turn it into a collection of static HTML files. After some trial and error I came across httrack, which fit the need beautifully (thanks to this site for pointing me there.)

To install httrack run the following (for ubuntu-based distros)

sudo add-apt-repository ppa:upubuntu-com/web
sudo apt-get update
sudo apt-get install webhttrack httrack

Once httrack is installed, simply launch it from the command line:

httrack

Follow the prompts and in no time you’ll have a folder with static HTML files for your entire website. Easy!
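If you’d rather skip the interactive prompts, httrack also takes the URL and an output directory straight on the command line (URL and path here are just placeholders):

httrack "https://example.com" -O ~/website-mirror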

Fix vmware-view RDP issues in Linux

I’ve been using vmware view as a means to remote into my computers at work for some time now. An update to the Linux client appears to have broken my ability to remote into work machines over the RDP protocol. This issue affects multiple distros.

The symptom: after you log into vmware-view and double-click a computer you wish to connect to over RDP, the screen flashes for a second and then takes you right back to where you started – no error message. Frustrating.

If you launch vmware-view in a console you get a little more insight into what’s going on:

RDP Client(10222): WARNING: Unknown -r argument
2017-02-26 14:06:53.817-07:00: vmware-view 7858| RDP Client(10222): 
2017-02-26 14:06:53.817-07:00: vmware-view 7858| RDP Client(10222): Possible arguments are: comport, disk, lptport, printer, sound, clipboard, scard

After much frustration I was able to combine documentation from vmware and freerdp in order to finally get the right combination of arguments to get things working again. I read that freerdp works better than rdesktop with this version, so I tried launching vmware-view with this option:

vmware-view --rdpclient="xfreerdp"

Progress – at least now the error message was different.

RDP Client(20799): [14:04:04:097] [20799:20803] [ERROR][com.freerdp.crypto] - certificate not trusted, aborting.

After more investigation the culprit turned out to be crypto negotiation. Since I’m already connected to the trusted work VMware server, I don’t really care about certificate validation. This is what finally got me up and running. The key components are the rdpclient and /cert-ignore options.

vmware-view --rdpclient="xfreerdp" --xfreerdpOptions="-wallpaper /sound:sys:alsa /cert-ignore"

You can include these options in your ~/.vmware/view-preferences config file so you don’t have to manually add all those switches:

echo 'view.rdpClient = "xfreerdp"
view.xfreerdpOptions = "-wallpaper /sound:sys:alsa /cert-ignore"' >> ~/.vmware/view-preferences

Finally RDP via vmware-view is working in Linux again.


Update 11/18/2018: I was having a very annoying issue with sound on my Debian Jessie box. If the system I was remoted into made any sound, it would corrupt the sound on my machine, making everything tinny and fuzzy. I would have to run pulseaudio -k to fix it.

It turns out this was caused by /sound:sys:alsa above. I changed that to be /sound:sys:pulse and all was well.
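The corresponding line in ~/.vmware/view-preferences becomes:

view.xfreerdpOptions = "-wallpaper /sound:sys:pulse /cert-ignore"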