Category Archives: CLI

Randomize files in a folder

I wanted to make a simple slideshow for a cheap Kindle turned photo frame in my office. Windows Movie Maker (a free, already-installed program) does not have a randomize function when importing photos, and I had a lot of photos I wanted imported in random order. Movie Maker also doesn’t include subfolders for some dumb reason, so I needed a way to grab pictures from various directories and put them in a single directory.

My solution (not Movie Maker-specific) was to use bash combined with find, ln, and mv to get the files exactly how I wanted them. The process goes as follows:

  1. Create a temporary folder
  2. Use the find command to find files you want
    1. Use -type f argument to find only files (don’t replicate directory structure)
    2. Use the -exec argument to call the ln command to create links to each file found
    3. Use the -s argument of ln to create symbolic links
    4. Use the -b argument of ln to ensure duplicate filenames are not overwritten
  3. Invoke a one line bash command to randomize the filenames of those symbolic links

It worked beautifully. The commands I ended up using were as follows:

mkdir temp
cd temp
find /Pictures/2013/ -type f -exec ln -s -b {} . \;
#repeat for each subfolder as needed, unless you want all folders in which case you can just specify the directory beneath it.
find /Pictures/2014/ -type f -exec ln -s -b {} . \;
find /Pictures/2015/ -type f -exec ln -s -b {} . \;

for i in *.JPG; do mv "$i" "$RANDOM.jpg"; done
#repeat for all permutations. The -b argument of ln creates files with tildes in the extension - don't forget about them.
for i in *.jpg; do mv "$i" "$RANDOM.jpg"; done
for i in *.JPG~; do mv "$i" "$RANDOM.jpg"; done
for i in *.jpg~; do mv "$i" "$RANDOM.jpg"; done
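The same thing can also be done in a single pass. Below is just a sketch of an alternative, assuming bash is the shell (for shopt); it matches the extensions case-insensitively and tacks a counter onto each name so that two identical $RANDOM values can never clobber each other:

# one pass: nocaseglob makes *.jpg match .JPG too, nullglob skips patterns with no matches
shopt -s nullglob nocaseglob
n=0
for i in *.jpg *.jpg~; do
  mv "$i" "${RANDOM}_${n}.jpg"   # counter suffix guarantees unique names
  n=$((n+1))
done
shopt -u nullglob nocaseglob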

The end result was a directory full of pictures with random filenames, ready to be dropped into any crappy slideshow making software of your choosing 🙂

Xenserver SSH backup script

Citrix Xenserver is an amazing hypervisor with pretty much every feature available to you for free. One thing it does not handle, however, is automated backups.

I have hacked together a backup script for myself that seems to work fairly well. It is my own mix of this and this script along with some logic for e-mail reporting that I came up with myself. It requires no modification of the xenserver host at all (no need to mount anything!) aside from adding a public key to the xenserver’s authorized_keys file.

This script can be run on anything with BASH and the appropriate UNIX tools (even other xenservers if you want) and uses SSH to initiate and transfer the backups to a location of your choosing.

Place this script on the machine you want to be initiating / saving the backups on. It requires that you generate an SSH public/private key pair, which can be done with this command:

ssh-keygen

Add the resulting .pub file’s contents to your xenserver’s /root/.ssh/authorized_keys file (create it if it doesn’t exist.) Take note of the location of the private key file that was generated with that command and put that path in the script.
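For reference, here is a rough sketch of those two steps using the example paths from the script’s config section below (swap in your own key path and address):

# generate a dedicated, passphrase-less key for the backup job
ssh-keygen -t rsa -N "" -f /home/backup/.ssh/backup
# append the public half to the xenserver's authorized_keys file
ssh-copy-id -i /home/backup/.ssh/backup.pub root@192.168.1.1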

You can download the script here or view it below. This script has been working pretty well for me. Note it will not work with any VMs that have spaces in their names. I was too lazy to debug this so I just renamed the problem VMs to remove the spaces. Enjoy!

#!/bin/bash

# Modified August 30, 2015 by Nicholas Jeppson
# Taken from http://discussions.citrix.com/topic/345960-xenserver-automated-snapshots-script/ and modified to allow for ssh backups
# Additional insight taken from https://github.com/cepa/xen-backup

# [Usage & Config]
# This script involves two computers: a xenserver machine and a backup machine.
# Put this script on the backup server and run with any account that has privileges to the desired export directory.
#
# This script assumes you have already created a private and public key pair on the backup server
# as well as adding the respective public key to the xenserver authorized_keys file.
#
# [How it works]
# Step1: Snapshots each VM on the xen server
# Step2: Backs up the snapshots to specified location
# Step3: Deletes temporary snapshot created in step 1
# Step4: Deletes old VM backups as defined later in this file
#
# [Note]
# This script will only work with VMs that don't have spaces in their names
# Please make sure you have enough disk space for BACKUP_PATH, or backup will fail
#
# Tested on xenserver 6.5
#
# Modify the variables in the config section below to suit your particular environment's needs.
#

############### Config section ###############

#Location where you want backups to go
BACKUP_PATH="/mnt/backup/OS/"

#SSH configuration
SSH_CIPHER="arcfour128"

#Number of backups to keep
NUM_BACKUPS=2

#Xenserver ssh configuration
#This dictates the address and location of keyfiles as they reside on the xenserver
XEN_ADDRESS="192.168.1.1"
XEN_KEY_LOCATION="/home/backup/.ssh/backup"
XEN_USER="root"

#E-mail configuration
EMAIL_ADDRESS="youremail@provider.here"
EMAIL_SUBJECT="`hostname -s | awk '{print "["toupper($1)"]"}'` VM Backup Report: `date +"%A %b %d %Y"`"

########## End of Config section ###############

ret_code=0

#Replace any spaces found with backslashes because dd doesn't like them
BACKUP_PATH_ESCAPED="`echo $BACKUP_PATH | sed 's/ /\\\ /g'`"

# SSH command
remote_exec() {
    chmod 0600 $XEN_KEY_LOCATION
    ssh -i $XEN_KEY_LOCATION -o "StrictHostKeyChecking no" -c $SSH_CIPHER $XEN_USER@$XEN_ADDRESS $1
}

backup() {
    echo "======================================================"
    echo "VM Backup started: `date`"
    begin="$(date +%s)"
    echo "Backup location: ${BACKUP_PATH}"
    echo

    #add a slash to the end of the backup path if it doesn't exist
    if [[ "$BACKUP_PATH" != */ ]]; then
        BACKUP_PATH="$BACKUP_PATH/"
    fi

    #Build list of VM names
    VMNAMES=$(remote_exec "xe vm-list is-control-domain=false | grep name-label | cut -d ':' -f 2 | tr -d ' '")

    for VMNAME in $VMNAMES
    do
        echo "======================================================"
        echo "$VMNAME backup started `date`"
        echo
        before="$(date +%s)"

        # create snapshot
        TIMESTAMP=`date '+%Y%m%d-%H%M%S'`
        SNAPNAME="$VMNAME-$TIMESTAMP"
        SNAPUUID=$(remote_exec "xe vm-snapshot vm=\"$VMNAME\" new-name-label=\"$SNAPNAME\"")

        # export snapshot
        # remote_exec "xe snapshot-export-to-template snapshot-uuid=$SNAPUUID filename= | gzip" | gunzip | dd of="$BACKUP_PATH/$SNAPNAME.xva"
        remote_exec "xe snapshot-export-to-template snapshot-uuid=$SNAPUUID filename=" | dd of="$BACKUP_PATH/$SNAPNAME.xva"

        #if export was unsuccessful, return error
        if [ $? -ne 0 ]; then
            echo "Failed to export snapshot $SNAPNAME.xva"
            ret_code=1
        else
            #calculate backup time, print results
            after="$(date +%s)"
            elapsed=`bc -l <<< "$after-$before"`
            elapsedMin=`bc -l <<< "$elapsed/60"`
            echo "Snapshot of $VMNAME saved to $SNAPNAME.xva"
            echo "Backup completed in `echo $(printf %.2f $elapsedMin)` minutes"

            # destroy snapshot
            remote_exec "xe snapshot-uninstall force=true snapshot-uuid=$SNAPUUID"

            #remove old backups (uses NUM_BACKUPS variable from top)
            BACKUP_COUNT=$(find $BACKUP_PATH -name "$VMNAME*.xva" | wc -l)

            if [[ "$BACKUP_COUNT" -gt "$NUM_BACKUPS" ]]; then
                OLDEST_BACKUP=$(find $BACKUP_PATH -name "$VMNAME*" -print0 | xargs -0 ls -tr | head -n 1)
                echo
                echo "Removing oldest backup: $OLDEST_BACKUP"
                rm $OLDEST_BACKUP
                if [ $? -ne 0 ]; then
                    echo "Failed to remove $OLDEST_BACKUP"
                fi
            fi
            echo "======================================================"
        fi
    done

    end="$(date +%s)"
    total_time=`bc -l <<< "$end-$begin"`
    total_time_min=`bc -l <<< "$total_time/60"`
    echo "Backup completed: `date`"
    echo "VM Backup completed in `echo $(printf %.2f $total_time_min)` minutes"
    echo
}

#Run the backup function and save all output to a variable, including stderr
BACKUP_OUTPUT=$(backup 2>&1)

#Clean up the output of the backup function
#Remove records count from dd, do some basic math to make dd's numbers more human readable
BACKUP_OUTPUT_HUMANIZED=$(echo "$BACKUP_OUTPUT" | sed -r '/.*records /d' | tr -d '()' \
| awk '{sub(/.*bytes /, $1/1024/1024/1024" GB "); sub(/in .* secs/, "in "$5/60" mins "); sub(/mins .*/, "mins (" $7/1024/1024" MB/sec)"); print}')

#Send a report e-mail with the backup results
echo "$BACKUP_OUTPUT_HUMANIZED" | mail -s "$EMAIL_SUBJECT" "$EMAIL_ADDRESS"

exit $ret_code
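An easy way to make the backups truly automated is to run the script from cron on the backup machine. Something like this crontab entry (the script path is just an example) kicks it off nightly at 1 AM; the script handles the e-mail report itself:

# run the xenserver backup every night at 1:00 AM
0 1 * * * /home/backup/xen-backup.sh >> /var/log/xen-backup.log 2>&1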

Avoid prompts when installing FreeBSD ports

The FreeBSD ports tree is wonderful for installing software but sometimes it can be a real pain. Recently I was trying to install Emby in FreeBSD because why not? The instructions were easy enough except for when I ran

make install clean

I was constantly barraged with configuration prompts. I wanted to assume the default on all of these and not have to answer each question.

Thanks to stack exchange I learned it’s relatively easy to bypass all these questions. Simply add:

BATCH=yes

to the end of your make install clean statement to assume the defaults for all of the package’s questions. The Emby guide is pretty comprehensive, but I would amend its make command to the following:

make install clean BATCH=yes

Handy.
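If you’d rather not type it every time, the same knob can (as far as I know) be set globally in /etc/make.conf so that every port build assumes its defaults:

# /etc/make.conf
BATCH=yes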

Measure SSH transfer speeds

SSH is a beautiful thing. In addition to remotely administering machines you can use it to transfer files. To do this one simply pipes the cat command on both ends. For example, to copy hello.txt on the source host to hi.txt on the destination host, the command would be:

ssh remote_host cat hello.txt | cat > hi.txt

The command runs cat hello.txt on the remote host and pipes the file’s contents back over SSH. The cat command on the local host takes that stream as its input, and the > redirects its output into hi.txt.

A great way to measure transfer speeds using ssh between two hosts is to take /dev/zero on the source host and output it to /dev/null on the destination host. This bypasses any disk speed bottlenecks and only measures network throughput. Combine this with the pv command to get a nice graphical view of how fast the transfer is going.

ssh remote_host cat /dev/zero | pv | cat > /dev/null

The default options between my machines result in a transfer speed of about 65 megabytes per second.


It turns out that the encryption cipher used makes a big difference in transfer speeds. Use the -c option to specify which cipher to use and see how much of a difference it makes. Adding -o Compression=no can also help with transfer speeds.

The fastest cipher I’ve found is arcfour. It’s touted as less secure, but for my local network I can accept the risk (thanks to slashdot for the discussion.)

ssh -c arcfour -o Compression=no remote_host cat /dev/zero | pv | cat > /dev/null


Using arcfour more than doubles the speed for me! Amazing.
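If you want to compare several ciphers quickly, a throwaway loop like the one below does the job. The cipher list here is only an example and varies by OpenSSH version (arcfour has been removed from newer releases); timeout cuts each test off after ten seconds and pv -a shows the average rate:

for c in arcfour aes128-ctr chacha20-poly1305@openssh.com; do
  echo "== $c =="
  timeout 10 ssh -c "$c" -o Compression=no remote_host cat /dev/zero | pv -a > /dev/null
done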

Allow non-root users to mount disks

I came across a need today to allow a regular (non-root) user to mount disks in Ubuntu 14.04 Trusty Tahr. I usually use sudo but in this case I needed to be able to run photorec as a regular user.

The way to accomplish this is to add the regular user to the disk group by running this command:

sudo usermod -a -G disk <username>

If you are logged in as that user, you will have to log out and log back in for the new group membership to take effect. Once this is done you should be able to mount disks without using sudo or being root.
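A quick sanity check after logging back in (groups reflects your current session) is to make sure disk shows up:

groups
# 'disk' should appear in the output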

Install Ubuntu chroot on your Chromebook

I recently got a Chromebook Pixel 2015 LS. It is a very nice device. Chromium OS is great but a power user like myself wants a little more functionality out of this beautiful machine.

Fortunately it’s not too difficult to get an Ubuntu chroot running side by side with chromium. The Google developers have made a script to automate the process.

Below is my experience installing an Ubuntu Trusty chroot on my chromebook 2015 LS.

Prepwork

  • Enter developer mode:
    Press ESC, Refresh, and Power simultaneously (while the chromebook is on)

    • A scary screen will pop up saying the OS is missing or damaged. Press CTRL+D, then press Enter when the OS verification screen comes up.
    • Every time you power on the chromebook from now on you’ll get a scary screen. Press CTRL+D to bypass it (or wait 30 seconds)
    • If you hit Space on this screen instead of CTRL+D it will powerwash (nuke) your data
  • Wait several minutes for developer mode to be installed. Note it will wipe your device to do this.

Install Crouton

Now that we’re in developer mode we will use a script called crouton to install an Ubuntu chroot (thanks to lifehacker for the guidance.)

  1. Download Crouton:  https://github.com/dnschneid/crouton
  2. Press CTRL ALT T to open terminal
  3. Type ‘shell’ (without quotes) and hit enter
  4. sudo sh ~/Downloads/crouton -r trusty -t touch,extension,unity-desktop,keyboard,cli-extra -e -n unity
    1. Note the arguments are suited to my needs. You will want to read up on the documentation to decide which options you want, e.g. which desktop environment to install
  5. Install this crouton extension to integrate clipboard sharing (in conjunction with the ‘extension’ parameter above)
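Once the install finishes you launch the chroot from the crosh shell. With the targets and name I used above, something like the following should work (both commands are provided by crouton; adjust the -n name to match yours):

sudo startunity -n unity      # start the Unity session in the 'unity' chroot
sudo enter-chroot -n unity    # or just drop into a command-line shell inside it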

General points of interest / lessons learned

  • Don’t enter the chroot and type startx. It will hard freeze your chromebook.
  • You don’t need to blow your chroot away if you want a different desktop environment, simply install desired environment on your existing chroot
  • To switch between chroots press Ctrl + Alt + Shift + F2 or F3 (back or forward arrows on the top row, not to be confused with the arrows on the bottom right of the keyboard)

High DPI

High DPI screens are a pain to deal with. Here are my tweaks:

  • Go to System settings / Displays / Scale for menu and title bars. I like 1.75
  • Alternatively you can change your resolution. If you mess up and X won’t start properly, delete ~/.config/monitors.xml (thanks to askubuntu)
  • Use the setres script to enable other resolutions in the display manager
    • setres 1440 960
  • Firefox fix tiny text:
    • go to about:config and modify layout.css.devPixelsPerPx, set to 2

Other tweaks:

  • Make trackpad match Chrome:
    • System settings / mouse and trackpad / Check “Natural Scrolling”
  • Remove lens suggestions:
    • Install unity-tweak-tool, notify-osd, overlay-scrollbar, unity-webapps-service
    • Run unity-tweak-tool and uncheck “search online sources” from the search tab
  • Move docky bar to the left:
    • sudo apt-get install gconf-editor
    • Press Alt+F2, enter gconf-editor, and in this configuration editor navigate to “apps -> docky-2 -> Docky -> Interface -> DockPreferences -> Dock1”
    • On the right side there are some properties with their corresponding values, including the position of the dock which you can change from “Bottom” to “Top/Left/Right” to move Docky to the upper part of the desktop.
  • Install Mac OSX theme
  • Install elementary OS chroot

Garbled mouse cursor when switching between chroots

Sometimes the mouse cursor would get all weird when switching between my chroots. The fix is to install the latest Intel drivers within your chroot.

sudo apt-get install software-properties-common python-software-properties
sudo add-apt-repository https://download.01.org/gfx/ubuntu/14.04/main
wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg -O - | sudo apt-key add -
wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg-2 -O - | sudo apt-key add -
sudo apt-get update
sudo apt-get upgrade

That’s it.. for now 🙂

Update 07/27/2015

I discovered that creating chroots was taking a very long time due to the mirror being chosen. I discovered the -m parameter of crouton which allows you to specify a mirror of your choosing. My updated setting is thus:

sudo sh ~/Downloads/crouton -r trusty -t touch,extension,kde-desktop,keyboard,cli-extra -e -n unitykde -m http://mirrors.xmission.com/ubuntu

If you happened to do a CTRL + C to cancel an existing chroot install that was going slowly, you can simply append the -m parameter above along with -u -u to resume with the updated mirror:

sudo sh ~/Downloads/crouton -r trusty -t touch,extension,kde-desktop,keyboard,cli-extra -e -n unitykde -m http://mirrors.xmission.com/ubuntu -u -u

Migrate from Sophos UTM to pfSense part 1

I’ve been using a Sophos UTM virtual appliance as my main firewall / threat manager appliance for about two years now. I’ve had some strange issues with this solution off and on but for the most part it worked. The number of odd issues has begun to build, though.

Recently it decided to randomly drop some connections even though the logs showed no dropped packets. These partial connection failures spanned various networks and devices. I never did figure out what was wrong. After two days of furiously investigating (including disconnecting all devices from the network), the problem went away completely on its own with no action on my part. It was maddening – enough to drive me to pfSense.

As of version 2.2 pfSense can be fully virtualized in Xen, thanks to FreeBSD 10.1. This allowed me the option to migrate. Below are the initial steps I’ve taken to move to pfSense.

Features checklist

I am currently using the following functions in Sophos UTM. My goal is to move these functions to equivalents in pfSense:

  • Network firewall
  • Web Application Firewall, also known as a reverse proxy.
  • NTP server
  • PPPOE client
  • DHCP server
  • DNS server
  • Transparent proxy for content filtering and reporting
  • E-mail server / SPAM protection
  • Intrusion Detection system
  • Anti-virus
  • SOCKS proxy
  • Remote access portal (for downloading VPN configurations, etc)
  • Citrix Xenserver support (for live migration etc)
  • Log all events to a syslog server
  • VPN server
  • Daily / weekly / monthly e-mail reports on bandwidth usage, CPU, most visited sites, etc.

I haven’t migrated all of these function over to pfSense which is why this article is only Part 1. Here is what I have done so far.

Xenserver support

Installing xen tools is fairly straightforward thanks to this article. It’s simply a matter of dropping to a shell on your pfSense VM to install and enable xen tools:

pkg install xe-guest-utilities
echo "xenguest_enable=\"YES\"" >> /etc/rc.conf.local
ln -s /usr/local/etc/rc.d/xenguest /usr/local/etc/rc.d/xenguest.sh 
service xenguest start

PPPoE client

The wizard works fine for configuring PPPoE; however, I experienced some very strange issues with internet speed. Downstream would be fine but upstream would be incredibly slow. Another symptom was NAT / port forwarding appearing not to work at all.

It turns out the issue was pfSense’s virtualized status. There is a bug in the virtio driver that handles virtualized networking. You have to disable all hardware offloading on both the xenserver hypervisor and the pfSense VM to work around the bug. Details on how to do this can be found here. After that fix was implemented, speed and performance went back to normal.
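For reference, the workaround has two halves. In pfSense, tick “Disable hardware checksum offload” under System / Advanced / Networking. On the xenserver side the offloads can be turned off per VIF; the sketch below is only an outline (the VM name and UUID are placeholders), so double-check it against the linked write-up:

# find the VIF UUID for the pfSense VM's interface
xe vif-list vm-name-label=pfsense
# disable offloads on that VIF (replug the VIF or reboot the VM afterward)
xe vif-param-set uuid=<vif-uuid> other-config:ethtool-tx="off"
xe vif-param-set uuid=<vif-uuid> other-config:ethtool-rx="off"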

DNS server

To get this working like it did in Sophos you have to disable the default DNS resolver service and enable the DNS forwarder service instead. Once DNS forwarder is enabled, check the box “register DHCP leases in DNS” so that DHCP hostnames come through to clients.

Syslog

Navigate to the Status / System Logs / Settings tab, tick "Send log messages to remote syslog server" and fill out the appropriate settings.

Note for Splunk users: the Technology Add-on for parsing pfsense logs expects the sourcetype to equal pfsense (not syslog). Create a manual input for logs coming from pfsense so it’s tagged as pfsense and not syslog (thanks to this post for the solution on how to get the TA to work properly.)

VPN

OpenVPN – the wizard ran fine. Install the OpenVPN Client Export Utility package for easy exporting to clients. Once the package is installed go to VPN / OpenVPN and you will see a new tab – Client Export.

Note you will need to create a user and check the "create certificate" checkbox, or add a user certificate to an existing user by going to System / User Manager, editing the user, and clicking the plus next to User Certificates. The export utility will only show users that have valid certificates attached to them. If no users have valid certificates the Client Export tab will be blank.

Firewall

One useful setting to note is to enable NAT reflection. This allows you to access NATed resources as if you were outside the network, even though you are inside it. Do this by going to System / Advanced and clicking on the Firewall / NAT tab. Scroll halfway down to find the Network Address Translation section. Change NAT reflection mode for port forwards to Enable (Pure NAT)

It’s also very helpful to configure host and port aliases by going to Firewall / Aliases. This is roughly equivalent to creating Network and Host definitions in Sophos. When you write firewall rules you can simply use the alias instead of writing out hosts IPs and ports.

So far so good

This is the end of part 1. I’ve successfully moved the following services from Sophos UTM to pfSense:

  • Network firewall
  • PPPOE client
  • Log all events to a syslog server
  • VPN server
  • NTP server
  • DHCP server
  • DNS server
  • Xenserver support

I’m still working on moving the other services over. I have yet to find a viable alternative to the web application firewall, but I haven’t given up.

Use a freedompop phone for OOB management

I’ve been wanting to have an out of band (OOB) way to manage my home servers for some time now. Why OOB? Sometimes the regular band fails you, like when the internet connection goes down or when I remote into my firewall and fat finger a setting causing the WAN link to go down. Everything is still up, I just can’t get to it remotely. I then have to drive home to fix it all like a luser. This is especially difficult if I’m out of town.

Enter Freedompop. Freedompop is a Sprint MVNO that offers data-only 4G access for cheap – in this case, completely free if you stay under 500 MB of data a month. I got a freedompop phone so my wife could play Ingress before the iOS client came out. Once the iOS Ingress client was released, the freedompop phone began collecting dust.

Note that this would all be a lot simpler if I just did things the “proper” way, such as purchasing a freedompop access point or paying for a tethering plan.. but what’s the fun in that? My solution is quite convoluted and silly – it uses all three types of SSH tunnels –  but it works.. and it was fun!

Hardware

  • Local, out of band server: An old laptop with a broken screen running Xubuntu 14.04
    • Ethernet cable attached to my local network
  • Remote, regularly banded SSH server: My parents’ dd-wrt powered router (any remote ssh server you have access to will do.)
  • Phone: Sprint Samsung Galaxy S3 activated on the Freedompop network
    • Attached to the out of band server via USB

Software

  • openssh-server: Install this on your out of band server so you can SSH into it
    • ssh-keygen: used to generate RSA private/public key pair to allow for passwordless SSH logins
  • screen: Allows programs to keep running even if SSH is disconnected (optional)
  • autossh: Monitors the state of your ssh connection and will continually attempt to re-connect if the connection is lost, effectively creating a persistent tunnel.
  • tsocks: Allows you to tunnel all traffic for a specified command through a SOCKS proxy
  • Android SSH Server (I use SSH server by Icecold apps.) This is an SSH server for android devices, no root required.

Procedure

My strategy uses a system of tunnels through the intertubes to accomplish what I want.

  1. Install the OS, openssh-server, autossh, and tsocks on your OOB server, then disconnect it from the internet while keeping it on your local network (I manually configured the IP to not have a default gateway.)
  2. Install an ssh server on your phone
  3. Tether your phone to the computer via USB
  4. Create a dynamic SOCKS proxy tunnel between the OOB server and your phone (Freedompop appears to block internet traffic through USB tethers unless you have a tethering plan, so I had to get creative.)
  5. Configure tsocks on your OOB server to point to the socks proxy established in step 4
  6. Use autossh in conjunction with tsocks to initiate a reverse tunnel between your OOB server and the remote SSH server over the 4G connection from your freedompop phone
  7. (From some other network) SSH into your remote SSH server and create a local tunnel pointing to the remote tunnel created in step 6.
  8. SSH into your locally created tunnel, and… profit?

Adventure, here we come.

Create private / public keys

First, since we’re going to be creating a persistent tunnel, passwordless login is required. We do this by generating an RSA private/public key pair. Create the key pair on your server as per these instructions:

cd ~/.ssh
ssh-keygen -t rsa

Generating public/private rsa key pair.
Enter file in which to save the key (/home/nicholas/.ssh/id_rsa): oob
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in oob.
Your public key has been saved in oob.pub.

Copy the public key generated (oob.pub in my case) to your phone.

Configure phone SSH server

Create an SSH server on your phone. Configure the user you created to use the public key generated above. Once that’s configured, start the SSH server on the phone, plug the phone into a USB cable with the other end in the server, and activate USB tethering in Android settings.

On the OOB server, find out the IP address of the tether by issuing the route command. Look for the gateway on the usb0 interface.

route
Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.42.129 0.0.0.0 UG 0 0 0 usb0

In my case the gateway IP is 192.168.42.129.

Configure OOB server

Initiate an autossh connection to the usb0 gateway, creating a dynamic (socks) proxy in the process.

autossh 192.168.42.129 -l nicholas -i ~/.ssh/oob -p 34097 -D9999

Argument breakdown:

  • -l  username to log in as
  • -i keyfile to use (passwordless login, optional but recommended)
  • -p port to ssh to. This will be random and told to you by the android ssh server on the phone.
  • -D port for your dynamic (socks) proxy to bind to. This can be anything of your choosing.

The phone <-> OOB server tunnel is now established. This tunnel will be used to provide 4G internet access to your server.

Next, configure tsocks to use our newly created tunnel. The options we want to modify are Local Networks, server, and server_port.

sudo vi /etc/tsocks.conf

local = 192.168.0.0/255.255.0.0
server = 127.0.0.1
server_port = 9999

You can now use the 4G internet if you prepend tsocks in front of the program you want to use the internet.
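A quick way to confirm the proxy is actually carrying traffic (curl and the IP-lookup site here are just examples) is to check what public address the tunnel presents; it should come back with an IP belonging to the phone’s 4G carrier:

tsocks curl -s https://icanhazip.com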

Update 07/01/2015 I discovered my off site router changes SSH host keys every reboot. This was causing SSH to fail due to host key mismatches. Disable strict SSH host key checking per this post to get around this:

vi ~/.ssh/config

Host *
    StrictHostKeyChecking no

Update 08/04/2015

I found an even better way than the one above to avoid SSH key changing errors. Simply add the following options to your ssh command:

-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null

That takes care of changing ssh keys for good. Thanks to this site for the info.

Establish tunnel with external server

Now that we have 4G internet we can use autossh to call out to an external SSH server (my parents’ router in my case.) This time we will be initiating a reverse tunnel. It will cause the remote server to listen on a specified port and tunnel all traffic on that port through the SSH tunnel to your local server. Note that you will have to copy your public key generated earlier to this remote host as well.

tsocks autossh <remote server IP/DNS> -p <remote port> -l<remote user> -i <remote keyfile> -R5448:localhost:22

The local <-> remote server tunnel is now established. The remote port to listen on (-R argument) can be anything of your choosing. Remember what port you used for the last step.

Automating connections

Since the SSH connections are interactive I’ve found it easier to run these commands via screen as I outline in this post. To have these tunnels form automatically on startup we will have to make a quick and dirty upstart script as detailed here and further clarified here.

vi /etc/init/tunnel.conf
author "Your name goes here - optional"
description "What your daemon does shortly - optional"

start on started dbus
stop on stopping dbus

# console output # if you want daemon to spit its output to console... ick
# respawn # it will respawn if crashed/killed

script
 screen -dm bash -c "autossh 192.168.42.129 -l nicholas -i /home/nicholas/.ssh/oob -p34097 -D9999"
 sleep 5
 screen -dm bash -c "tsocks autossh dana.jeppson.org -l root -p 443 -i /home/nicholas/.ssh/oob -R5448:localhost:22"
end script
sudo initctl reload-configuration

Accessing your server out of band

Now that we have a tunnel established between our local and remote servers, we can access our local server through the remote server. On the remote server:

ssh localhost -p5448

The -p option of ssh specifies which port to connect to. Since we have a reverse tunnel listening on port 5448, the server will take the ssh connection you’ve initiated and send it through the intertubes to your OOB server over its 4G connection.

If you would rather SSH into your OOB server directly from your laptop instead of through your remote SSH server, you will need to create more tunnels, this time regular local port forwarding tunnels. Why would you possibly want more tunnels? If you want to access things like SSH, VNC or RDP for servers on your network through your OOB tunnel directly to your laptop, it will be necessary to create even more tunnels through the tubes.

First tunnel (to expose the OOB server’s SSH port to your laptop)

ssh <remote server> -L2222:localhost:5448

-L specifies which port your laptop will listen on. The other two parts specify where traffic hitting that port gets sent: to localhost:5448 from the perspective of the remote SSH server.

Second tunnel (to expose ports of servers of your choosing to your laptop, as well as give you shell access to your local OOB server). You can add as many -L arguments as you wish, one for each address/port combination you wish to access.

ssh localhost -p2222 -L3333:192.168.1.10:3389 ... ...

Why?

If you’ve made it this far, congratulations. This was an exercise in accessing my home network even if the internet connection goes down. You could bypass half of these tunnels if you set up an openvpn server on your out of band server, but that’s a tutorial for another time.

If you followed this madness you would have the following tunnels through the tubes:

  • SOCKS proxy tunnel from server to phone
  • Remote port forward tunnel from OOB server to remote server
  • Local port forward from remote server to your computer
  • Local port forward(s) from your computer to anything on your local network through the tunnel created above

Upgrade Linux Mint 16 to 17.1

I realized recently that my desktop system was quite out of date. It had worked so well for so long that I didn’t notice it had reached end of support. I was running Linux Mint 16 – Petra.

Thanks to this site the upgrade was fairly painless – a few repository updates, an upgrade, and a reboot. Simple! The steps I took are below.

Update all repositories

Use sed in conjunction with find to quickly and easily update all your repository files from saucy to trusty, and from petra to rebecca, making a backup of files modified.

sudo find /etc/apt/sources.list.d/ -type f -exec sed --in-place=.bak 's/saucy/trusty/' {} \;
sudo find /etc/apt/sources.list.d/ -type f -exec sed --in-place=.bak 's/petra/rebecca/' {} \;
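A quick sanity check that nothing was missed (this should print nothing; the .bak backups are intentionally excluded by the *.list glob):

grep -l -e saucy -e petra /etc/apt/sources.list.d/*.list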

Update your system

This took a while. It had to download 1.5GB of data and install it.

sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get upgrade

Cleanup

After running the upgrade I got a notice that many packages were installed but no longer required. To remove unnecessary packages after the upgrade:

sudo apt-get autoremove

Install new language settings:

sudo apt-get install mintlocale

Install gvfs-backends (for Thunar)

sudo apt-get install gvfs-backends

Reboot

Flawless! It worked on the first try. Awesome.

Use BASH to append an incremental suffix to directories

In migrating Splunk indexes I came across the need to add an incrementing suffix of _(numbers) to a bunch of directories. For example, renaming directories

test
test1
test2
test3

to

test_100
test1_101
test2_102
test3_103

After a bunch of searching I settled on this approach of using a simple bash loop to accomplish this:

I=100; for F in db_*; do mv $F $F\_$I; I=$((I+1)); done

There is an initial variable, I, which is set to 100. The for loop goes through each item beginning with db_ and renames it, appending the suffix for that iteration (_###). The last step increments the value of I.

Change the initial value of I to whatever you want your suffix to begin with, and change db_* to whatever matches the files you want to rename. Simple, but it works.
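For what it’s worth, a slightly more defensive version of the same loop quotes the names (so it survives spaces) and uses ${F} braces instead of the backslash trick:

I=100; for F in db_*; do mv -- "$F" "${F}_${I}"; I=$((I+1)); done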