Category Archives: CLI

Install Ubuntu chroot on your Chromebook

I recently got a Chromebook Pixel 2015 LS. It is a very nice device. Chromium OS is great, but a power user like me wants a little more functionality out of this beautiful machine.

Fortunately it’s not too difficult to get an Ubuntu chroot running side by side with Chromium OS. The Google developers have made a script to automate the process.

Below is my experience installing an Ubuntu Trusty chroot on my Chromebook Pixel 2015 LS.

Prepwork

  • Enter developer mode:
    Press ESC, Refresh, and Power simultaneously (while the chromebook is on.) A scary screen will pop up saying the OS is missing or damaged. Press CTRL+D, then press Enter when the OS verification screen comes up.
  • Wait several minutes for developer mode to be installed. Note it will wipe your device to do this.
  • From now on, every time you power on the chromebook you’ll get that scary screen again. Press CTRL+D to bypass it (or wait 30 seconds.)
  • If you hit Space on this screen instead of CTRL+D it will powerwash (nuke) your data.

Install Crouton

Now that we’re in developer mode we will use a script called crouton to install an Ubuntu chroot (thanks to lifehacker for the guidance.)

  1. Download Crouton:  https://github.com/dnschneid/crouton
  2. Press CTRL ALT T to open terminal
  3. Type ‘shell’ (without quotes) and hit enter
  4. sudo sh ~/Downloads/crouton -r trusty -t touch,extension,unity-desktop,keyboard,cli-extra -e -n unity
    1. Note the arguments are suited to my needs. You will want to read up on the documentation to decide which options you want (e.g. which desktop environment.)
  5. Install this crouton extension to integrate the clipboard (in conjunction with the ‘extension’ parameter above)

General points of interest / lessons learned

  • Don’t enter the chroot and type startx. It will hard freeze your chromebook.
  • You don’t need to blow your chroot away if you want a different desktop environment; simply install the desired environment on your existing chroot (see the sketch below.)
  • To switch between chroots press Ctrl + Alt + Shift + F2 or F3 (the back and forward arrows on the top row, not to be confused with the arrows on the bottom right of the keyboard.)
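
For example, here is a minimal sketch of adding another desktop environment to the chroot created above (the xfce target is just an illustration; check the crouton documentation for current target names):

sudo sh ~/Downloads/crouton -u -n unity -t xfce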

High DPI

High DPI screens are a pain to deal with. Here are my tweaks:

  • Go to System settings / Displays / Scale for menu and title bars. I like 1.75
  • Alternatively you can change your resolution. If you mess up and X won’t start properly, delete ~/.config/monitors.xml (thanks to askubuntu)
  • Use the setres script to enable other resolutions in the display manager
    • setres 1440 960
  • Fix tiny text in Firefox:
    • go to about:config and modify layout.css.devPixelsPerPx, set to 2

Other tweaks:

  • Make trackpad match Chrome:
    • System settings / mouse and trackpad / Check “Natural Scrolling”
  • Remove lens suggestions:
    • Install unity-tweak-tool, notify-osd, overlay-scrollbar, unity-webapps-service
    • Run unity-tweak-tool and uncheck “search online sources” from the search tab
  • Move docky bar to the left (a command-line alternative is sketched after this list):
    • sudo apt-get install gconf-editor
    • Press Alt+F2, enter gconf-editor, and in the configuration editor navigate to “apps -> docky-2 -> Docky -> Interface -> DockPreferences -> Dock1”
    • On the right side there are properties with their corresponding values, including the position of the dock, which you can change from “Bottom” to “Left” (or “Top”/“Right”) to move Docky where you want it.
  • Install Mac OSX theme
  • Install elementary OS chroot
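
If you’d rather skip the GUI, the same Docky setting can probably be flipped from a terminal with gconftool-2. The key path comes from the gconf-editor navigation above, but the exact value names are an assumption, so verify them in gconf-editor first:

gconftool-2 --type string --set /apps/docky-2/Docky/Interface/DockPreferences/Dock1/Position "Left"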

Garbled mouse cursor when switching between chroots

Sometimes the mouse cursor would get all weird when switching between my chroots. The fix is to install the latest Intel drivers within your chroot.

sudo apt-get install software-properties-common python-software-properties
sudo add-apt-repository "deb https://download.01.org/gfx/ubuntu/14.04/main trusty main"
wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg -O - | sudo apt-key add -
wget --no-check-certificate https://download.01.org/gfx/RPM-GPG-KEY-ilg-2 -O - | sudo apt-key add -
sudo apt-get update
sudo apt-get upgrade

That’s it.. for now 🙂

Update 07/27/2015

I discovered that creating chroots was taking a very long time due to the mirror being chosen. Crouton’s -m parameter allows you to specify a mirror of your choosing. My updated command is thus:

sudo sh ~/Downloads/crouton -r trusty -t touch,extension,kde-desktop,keyboard,cli-extra -e -n unitykde -m http://mirrors.xmission.com/ubuntu

If you happened to do a CTRL + C to cancel an existing chroot install that was going slowly, you can simply append the -m parameter above along with -u -u to resume with the updated mirror:

sudo sh ~/Downloads/crouton -r trusty -t touch,extension,kde-desktop,keyboard,cli-extra -e -n unitykde -m http://mirrors.xmission.com/ubuntu -u -u

Migrate from Sophos UTM to pfSense part 1

I’ve been using a Sophos UTM virtual appliance as my main firewall / threat manager appliance for about two years now. I’ve had some strange issues with this solution off and on but for the most part it worked. The number of odd issues has begun to build, though.

Recently it decided to randomly drop some connections even though logs showed no dropped packets. The partial connections spanned across various networks and devices. I never did figure out what was wrong. After two days of furiously investigating (including disconnecting all devices from the network), the problem went away completely on its own with no action on my part. It was maddening – enough to drive me to pfSense.

As of version 2.2 pfSense can be fully virtualized in Xen, thanks to FreeBSD 10.1. This allowed me the option to migrate. Below are the initial steps I’ve taken to move to pfSense.

Features checklist

I am currently using the following functions in Sophos UTM. My goal is to move these functions to equivalents in pfSense:

  • Network firewall
  • Web Application Firewall, also known as a reverse proxy.
  • NTP server
  • PPPoE client
  • DHCP server
  • DNS server
  • Transparent proxy for content filtering and reporting
  • E-mail server / SPAM protection
  • Intrusion Detection system
  • Anti-virus
  • SOCKS proxy
  • Remote access portal (for downloading VPN configurations, etc)
  • Citrix Xenserver support (for live migration etc)
  • Log all events to a syslog server
  • VPN server
  • Daily / weekly / monthly e-mail reports on bandwidth usage, CPU, most visited sites, etc.

I haven’t migrated all of these functions over to pfSense, which is why this article is only Part 1. Here is what I have done so far.

Xenserver support

Installing xen tools is fairly straightforward thanks to this article. It’s simply a matter of dropping to a shell on your pfSense VM to install and enable xen tools:

pkg install xe-guest-utilities
echo "xenguest_enable=\"YES\"" >> /etc/rc.conf.local
ln -s /usr/local/etc/rc.d/xenguest /usr/local/etc/rc.d/xenguest.sh 
service xenguest start

PPPoE client

The wizard works fine for configuring PPPoE; however, I experienced some very strange issues with internet speed. Downstream would be fine but upstream would be incredibly slow. Another symptom was NAT / port forwarding appearing not to work at all.

It turns out the issue was pfSense’s virtualized status. There is a bug in the virtio driver that handles virtualized networking. You have to disable all hardware offloading on both the xenserver hypervisor and the pfSense VM to work around the bug. Details on how to do this can be found here. After that fix was implemented, speed and performance went back to normal.
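
On the pfSense side this means ticking “Disable hardware checksum offload” under System / Advanced / Networking. On the XenServer side, the sketch below shows roughly what’s involved (the VM name label and the placeholder UUID are assumptions; substitute your own values from xe vif-list):

# on the XenServer host (dom0): list the pfSense VM's virtual interfaces
xe vif-list vm-name-label=pfsense
# disable TX checksum offloading on each VIF returned above
xe vif-param-set uuid=<vif-uuid> other-config:ethtool-tx="off"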

DNS server

To get this working like it did in Sophos you have to disable the default DNS resolver service and enable the DNS forwarder service instead. Once DNS forwarder is enabled, check the box “register DHCP leases in DNS” so that DHCP hostnames come through to clients.

Syslog

Navigate to Status / System Logs / Settings tab, tick “Send log messages to remote syslog server”, and fill out the appropriate settings.

Note for Splunk users: the Technology Add-on for parsing pfsense logs expects the sourcetype to equal pfsense (not syslog). Create a manual input for logs coming from pfsense so it’s tagged as pfsense and not syslog (thanks to this post for the solution on how to get the TA to work properly.)
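
For reference, a minimal sketch of what that manual input might look like in inputs.conf on the Splunk side (the port is an assumption; use whatever port your pfSense box actually sends syslog to):

[udp://5140]
connection_host = ip
sourcetype = pfsense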

VPN

OpenVPN – the wizard ran fine. Install the OpenVPN Client Export Utility package for easy exporting to clients. Once the package is installed, go to VPN / OpenVPN and you will see a new tab – Client Export.

Note: you will need to create a user and check the “create certificate” checkbox, or add a certificate to an existing user by going to System / User Manager, editing the user, and clicking the plus next to User Certificates. The export utility will only show users that have valid certificates attached to them. If no users have valid certificates, the Client Export tab will be blank.

Firewall

One useful setting to note is to enable NAT reflection. This allows you to access NATed resources as if you were outside the network, even though you are inside it. Do this by going to System / Advanced and clicking on the Firewall / NAT tab. Scroll halfway down to find the Network Address Translation section. Change NAT reflection mode for port forwards to Enable (Pure NAT)

It’s also very helpful to configure host and port aliases by going to Firewall / Aliases. This is roughly equivalent to creating Network and Host definitions in Sophos. When you write firewall rules you can simply use the alias instead of writing out hosts IPs and ports.

So far so good

This is the end of part 1. I’ve successfully moved the following services from Sophos UTM to pfSense:

  • Network firewall
  • PPPoE client
  • Log all events to a syslog server
  • VPN server
  • NTP server
  • DHCP server
  • DNS server
  • Xenserver support

I’m still working on moving the other services over. I’ve yet to find a viable alternative to the web application firewall but I haven’t given up yet.

Use a freedompop phone for OOB management

I’ve been wanting to have an out of band (OOB) way to manage my home servers for some time now. Why OOB? Sometimes the regular band fails you, like when the internet connection goes down or when I remote into my firewall and fat finger a setting causing the WAN link to go down. Everything is still up, I just can’t get to it remotely. I then have to drive home to fix it all like a luser. This is especially difficult if I’m out of town.

Enter Freedompop. Freedompop is a Sprint MVNO that offers data-only 4G access for cheap – in this case, completely free if you stay under 500 MB of data a month. I got a freedompop phone so my wife could play Ingress before the iOS client came out. Once the iOS Ingress client was released, the freedompop phone began collecting dust.

Note that this would all be a lot simpler if I just did things the “proper” way, such as purchasing a freedompop access point or paying for a tethering plan.. but what’s the fun in that? My solution is quite convoluted and silly – it uses all three types of SSH tunnels –  but it works.. and it was fun!

Hardware

  • Local, out of band server: An old laptop with a broken screen running Xubuntu 14.04
    • Ethernet cable attached to my local network
  • Remote, regularly banded SSH server: My parents’ dd-wrt powered router (any remote ssh server you have access to will do.)
  • Phone: Sprint Samsung Galaxy S3 activated on the Freedompop network
    • Attached to the out of band server via USB

Software

  • openssh-server: Install this on your out of band server so you can SSH into it
    • ssh-keygen: used to generate RSA private/public key pair to allow for passwordless SSH logins
  • screen: Allows programs to keep running even if SSH is disconnected (optional)
  • autossh: Monitors the state of your ssh connection and will continually attempt to re-connect if the connection is lost, effectively creating a persistent tunnel.
  • tsocks: Allows you to tunnel all traffic for a specified command through a SOCKS proxy
  • Android SSH Server (I use SSH server by Icecold apps.) This is an SSH server for android devices, no root required.

Procedure

My strategy uses a system of tunnels through the intertubes to accomplish what I want.

  1. Install the OS, openssh-server, autossh, and tsocks on your OOB server, then disconnect it from the internet while keeping it on your local network (I manually configured the IP to not have a default gateway.)
  2. Install an ssh server on your phone
  3. Tether your phone to the computer via USB
  4. Create a dynamic SOCKS proxy tunnel between the OOB server and your phone (Freedompop appears to block internet traffic through USB tethers unless you have a tethering plan, so I had to get creative.)
  5. Configure tsocks on your OOB server to point to the socks proxy established in step 4
  6. Use autossh in conjunction with tsocks to initiate a reverse tunnel between your OOB server and the remote SSH server over the 4G connection from your freedompop phone
  7. (From some other network) SSH into your remote SSH server and create a local tunnel pointing to the remote tunnel created in step 6.
  8. SSH into your locally created tunnel, and… profit?

Adventure, here we come.

Create private / public keys

First, since we’re going to be creating a persistent tunnel, passwordless login is required. We do this by generating an RSA private/public key pair. Create the key pair on your server as per these instructions:

cd ~/.ssh
ssh-keygen -t rsa

Generating public/private rsa key pair.
Enter file in which to save the key (/home/nicholas/.ssh/id_rsa): oob
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in oob.
Your public key has been saved in oob.pub.

Copy the public key generated (oob.pub in my case) to your phone.

Configure phone SSH server

Create an SSH server on your phone. Configure the user you created to use the public key generated above. Once that’s configured, start the SSH server on the phone, connect the phone to the server with a USB cable, and activate USB tethering in Android settings.

On the OOB server, find the IP address of the tether by issuing the route command. Look for the gateway on the usb0 interface.

route
Destination     Gateway          Genmask   Flags  Metric  Ref  Use  Iface
default         192.168.42.129   0.0.0.0   UG     0       0    0    usb0

In my case the gateway IP is 192.168.42.129.

Configure OOB server

Initiate an autossh connection to the usb0 gateway, creating a dynamic (socks) proxy in the process.

autossh 192.168.42.129 -l nicholas -i ~/.ssh/oob -p 34097 -D9999

Argument breakdown:

  • -l  username to log in as
  • -i keyfile to use (passwordless login, optional but recommended)
  • -p port to ssh to. This will be random and told to you by the android ssh server on the phone.
  • -D port for your dynamic (socks) proxy to bind to. This can be anything of your choosing.

The phone <-> OOB server tunnel is now established. This tunnel will be used to provide 4G internet access to your server.

Next, configure tsocks to use our newly created tunnel. The options we want to modify are Local Networks, server, and server_port.

sudo vi /etc/tsocks.conf

local = 192.168.0.0/255.255.0.0
server = 127.0.0.1
server_port = 9999

You can now use the 4G internet by prepending tsocks to whatever program you want to route through the tunnel.
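
For example, a quick way to verify the proxy is carrying traffic (ifconfig.me is just one of many what’s-my-IP services):

tsocks wget -qO- http://ifconfig.me

If everything is working, the address returned should be the phone’s 4G address rather than anything on your local network.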

Update 07/01/2015

I discovered my off-site router changes SSH host keys every reboot. This was causing SSH to fail due to host key mismatches. Disable strict SSH host key checking per this post to get around this:

vi ~/.ssh/config

Host *
    StrictHostKeyChecking no

Update 08/04/2015

I found an even better way than the one above to avoid SSH key changing errors. Simply add the following options to your ssh command:

-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null

That takes care of changing ssh keys for good. Thanks to this site for the info.

Establish tunnel with external server

Now that we have 4G internet we can use autossh to call out to an external SSH server (my parents’ router in my case.) This time we will be initiating a reverse tunnel. It will cause the remote server to listen on a specified port and tunnel all traffic on that port through the SSH tunnel to your local server. Note that you will have to copy your public key generated earlier to this remote host as well.

tsocks autossh <remote server IP/DNS> -p <remote port> -l<remote user> -i <remote keyfile> -R5448:localhost:22

The local <-> remote server tunnel is now established. The remote port to listen on (the -R argument) can be anything of your choosing. Remember which port you used; you’ll need it later.

Automating connections

Since the SSH connections are interactive I’ve found it easier to run these commands via screen as I outline in this post. To have these tunnels form automatically on startup we will have to make a quick and dirty upstart script as detailed here and further clarified here.

vi /etc/init/tunnel.conf
author "Your name goes here - optional"
description "What your daemon does shortly - optional"

start on started dbus
stop on stopping dbus

# console output # if you want daemon to spit its output to console... ick
# respawn # it will respawn if crashed/killed

script
 screen -dm bash -c "autossh 192.168.42.129 -l nicholas -i /home/nicholas/.ssh/oob -p34097 -D9999"
 sleep 5
 screen -dm bash -c "tsocks autossh dana.jeppson.org -l root -p 443 -i /home/nicholas/.ssh/oob -R5448:localhost:22"
end script
sudo initctl reload-configuration
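
With the configuration reloaded you can start the job by hand to test it (it will also come up on its own at the next boot once dbus starts):

sudo start tunnel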

Accessing your server out of band

Now that we have a tunnel established between our local and remote servers, we can access our local server through the remote server. On the remote server:

ssh localhost -p5448

The -p option of ssh specifies which port to connect to. Since we have a reverse tunnel listening on port 5448, the server will take the ssh connection you’ve initiated and send it through the intertubes to your OOB server over its 4G connection.

If you would rather SSH into your OOB server directly from your laptop instead of through your remote SSH server, you will need to create more tunnels, this time regular local port forwards. The same goes for reaching services like SSH, VNC, or RDP on other machines on your home network: each address/port combination needs its own tunnel through the tubes.

First tunnel (to expose the OOB server’s SSH port to your laptop)

ssh <remote server> -L2222:localhost:5448

-L specifies which port your laptop will listen on. The other two parts specify where traffic arriving on that port gets sent (from the perspective of the remote SSH server.)

Second tunnel (to expose ports of servers of your choosing to your laptop, as well as give you shell access to your local OOB server.) You can add as many -L arguments as you wish, one for each address/port combination you wish to access.

ssh localhost -p2222 -L3333:192.168.1.10:3389 ... ...
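
Once that second session is up, anything pointed at the forwarded local port rides the whole chain of tunnels. Using the illustration values above, an RDP client on the laptop would connect like so (rdesktop is just one example client):

rdesktop localhost:3333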

Why?

If you’ve made it this far, congratulations. This was an exercise in accessing my home network even if the internet connection goes down. You could bypass half of these tunnels if you set up an openvpn server on your out of band server, but that’s a tutorial for another time.

If you followed this madness you would have the following tunnels through the tubes:

  • SOCKS proxy tunnel from server to phone
  • Remote port forward tunnel from OOB server to remote server
  • Local port forward from remote server to your computer
  • Local port forward(s) from your computer to anything on your local network through the tunnel created above

Upgrade Linux Mint 16 to 17.1

I realized recently that my desktop system is quite out of date. It has worked so well for so long that I didn’t notice it had reached end of support. I was running Linux Mint 16 – Petra.

Thanks to this site the upgrade was fairly painless – a few repository updates, an upgrade, and a reboot. Simple! The steps I took are below.

Update all repositories

Use sed in conjunction with find to quickly and easily update all your repository files from saucy to trusty and from petra to rebecca, making a backup of each file modified.

sudo find /etc/apt/sources.list.d/ -type f -exec sed --in-place=.bak 's/saucy/trusty/' {} \;
sudo find /etc/apt/sources.list.d/ -type f -exec sed --in-place=.bak 's/petra/rebecca/' {} \;

Update your system

This took a while. It had to download 1.5GB of data and install it.

sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get upgrade

Cleanup

After running the upgrade I got a notice that many packages were installed but no longer required. To remove unnecessary packages after the upgrade:

sudo apt-get autoremove

Install new language settings:

sudo apt-get install mintlocale

Install gvfs-backends (for Thunar)

sudo apt-get install gvfs-backends

Reboot

Flawless! It worked on the first try. Awesome.

Use BASH to append an incremental suffix to directories

In migrating Splunk indexes I came across the need to add an incrementing suffix of _(numbers) to a bunch of directories. For example, renaming directories

test
test1
test2
test3

to

test_100
test1_101
test2_102
test3_103

After a bunch of searching I settled on a simple bash loop to accomplish this:

I=100; for F in db_*; do mv "$F" "${F}_${I}"; I=$((I+1)); done

There is an initial variable, I, which is set to 100. The for loop goes through each directory beginning with db_ and renames it, appending the proper suffix for that iteration (_###). The last step is to increment the value of I.

Replace 100 with whatever you want your suffix to begin with and change db_* to match whatever directories you want to rename. Simple, but it works.

Fix Splunk lockout after exceeded quota

Recently I came across a situation with my home install of Splunk (free license) where the 500MB quota was exceeded three days in a row. I hadn’t checked Splunk for a few days so I was completely blindsided by it. The consequence of going over quota three days in a row? Losing the ability to do any searches in Splunk, which is a real downer.

The easiest, although least convenient, way to fix being locked out is to wait it out. If you go 30 days in a row without violating the license, Splunk will unlock itself. Splunk will still receive and index events during that time. The inability to search makes it really difficult to track down what the problem is, though, and I wasn’t happy waiting for 30 days before getting Splunk back.

Poking around on the Splunk forums I discovered that there is a way to get Splunk back – perform a fresh install and then migrate your database and settings over to it. This involves backing up a few things, then copying them over the fresh install’s default folders:

  • $SPLUNK_HOME/var/lib/splunk/defaultdb   #Default Splunk index, where all my data is held. If you have other indexes in here you’ll want to copy them too.
  • $SPLUNK_HOME/etc  #all your configuration files

Simply back up the above folders, install Splunk on a new machine, launch Splunk first so it will generate all the default files, then copy the files over to the new instance.

I went a step further and planned for the future. I wrote a quick and dirty script that will do all of this for you, even on the same machine – no need to copy to another machine. The script assumes you’re running a Red Hat derivative and have the correct Splunk install file in a predictable location. Update the locations of the Splunk directories and install files as needed and run as root.

#!/bin/bash

#Backup important directories
mkdir /opt/splunkbackup/
cp -al /opt/splunk/etc /opt/splunkbackup/
cp -al /opt/splunk/var/lib/splunk/defaultdb /opt/splunkbackup/

#Nuke splunk
/opt/splunk/bin/splunk stop
rm -rf /opt/splunk

#Reload from fresh start
rpm -iv --replacepkgs /home/nicholas/splunk-6.2.2-255606-linux-2.6-x86_64.rpm
/opt/splunk/bin/splunk start --accept-license

#Restore configuration files and indexes
/opt/splunk/bin/splunk stop
rm -rf /opt/splunk/etc
cp -al /opt/splunkbackup/etc /opt/splunk/
rm -rf /opt/splunk/var/lib/splunk/defaultdb
cp -al /opt/splunkbackup/defaultdb /opt/splunk/var/lib/splunk/
chown splunk:splunk -R /opt/splunk/
/opt/splunk/bin/splunk start

#Remove splunk backup
rm -rf /opt/splunkbackup

This will restore your searches, settings, and data. It won’t restore audit and other internal Splunk information, however. This script worked marvelously in getting my Splunk back.

Remove and re-install a Debian package

I wanted to completely blow away owncloud on one of my Debian servers today. I did apt-get remove owncloud and removed the owncloud directory. That should be all I need to do, right? Alas, not so.

It turns out that a simple apt-get remove leaves files behind, and a simple apt-get install doesn’t restore files that have gone missing. In order to completely blow away the package, you must use the purge command (thanks to this site for the inspiration.)

First, purge the package as well as related packages

sudo apt-get purge owncloud-*

Then, re-install the package with the --reinstall flag

sudo apt-get install --reinstall owncloud

Done! The package has come back anew with all necessary files.

Using screen to run interactive programs at startup

Oftentimes I encounter programs that I want to run on startup but that weren’t necessarily designed to be run automatically. Sometimes such a program has interactive output you’ll want to see later, yet you still want it launched at boot.

The solution to this particular problem is using screen in combination with su and bash. In my situation, I want to run the HDSurfer plugin on bootup as a different user. The solution I came up with is as follows (thanks to superuser.com and stackoverflow.com for the guidance I needed to set this up.)

Install screen

Screen is like having a separate X window session to keep a program running, except it is for console programs. You can attach and detach to this screen whenever you’d like and not worry about the program terminating.

sudo apt-get install screen

Create a script to run your program with all required arguments

In my case I needed to execute the command “python /usr/bin/HDSurferWave/hdsurferwave.py start” as a different user in a screen session (so it wouldn’t terminate when the terminal session did.) To do this,

  • invoke screen with the -dm command (to begin the program in detached mode)
  • issue the bash -c argument afterward to invoke bash
  • Include your desired command after that

My one line script looks like this:

screen -dm bash -c "python /usr/bin/HDSurferWave/hdsurferwave.py start"

Run your script

I use the su command with the -c argument to change the user that will be running the script, as the startup script launches things as root by default (with pre-systemd systems, anyway.) The -s argument specifies the shell to launch, and the last argument is the user you want to run as. My launch command is:

su -c "/usr/bin/HDSurferWave/start.sh" -s /bin/sh nicholas

Configure the script to run on startup

Edit /etc/rc.local and add your script command from above, then mark that file as executable by running chmod +x /etc/rc.local. Note: This will not work with systems using systemd.
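
For reference, a minimal /etc/rc.local along these lines (the paths and username are the ones from my setup above; adjust for yours, and keep the exit 0 at the end):

#!/bin/sh -e
# launch the HDSurfer screen session as a non-root user at boot
su -c "/usr/bin/HDSurferWave/start.sh" -s /bin/sh nicholas
exit 0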

Use batch script to continually check site status

Recently my blog went down (the ISP running it had downtime.) I wanted to see when it came back up. As a result I wrote a little Windows batch script to continually poll my address in order to do just that.

The script issues a query to the default DNS server as well as pings the address of the blog. I used both since sometimes in Windows a ping will simply use the internal system cache, which may be wrong if the IP address hosting my blog changes (its address is dynamic.)

The script is below:

@ECHO OFF
:loop
 cls
 nslookup jeppson.org | findstr "Address" | findstr /V 10.97.160.160
 ping -n 1 jeppson.org
 timeout /t 3
goto loop

I use the /V argument to take out the first bit spit out from the nslookup command, namely the IP address of the nameserver being used.

A simpler version of the script only issues one ping, waits a second, and then repeats the command. This is different from doing ping -t because it forces ping to do a new lookup for the domain name, whereas ping -t only resolves the IP once, then just pings the IP address. That wouldn’t work in my case as the IP of the domain name changes when it comes back online.

@ECHO OFF
:loop
 ping -n 1 jeppson.org
 timeout /t 1
goto loop

Thanks to Stack Overflow for educating me on how to write a quick loop to emulate the Linux watch command, ping only once, and use an application similar to grep to clean up output.

Quickly create xen disk images with dd

To this point I have been creating disk images with dd – pretty standard. The command I traditionally use is this:

sudo dd if=/dev/zero of=pc-bsd-10.1.img bs=1M count=40000

This command works but it takes some time as dd copies zeroes for every single part of the image.

It turns out that there is a much faster way to create a disk image, which I found out thanks to this article. Still using dd, you can simply use the seek parameter to very quickly create your image:

sudo dd if=/dev/zero of=pc-bsd-10.1.img bs=1 count=1 seek=40G

By setting bs and count to 1 but seek to 40G I have created a 40 gigabyte empty file image without having to wait for all those pesky zeroes to be written. Awesome.
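
A nice side effect: the file created this way is sparse, so it reports its full apparent size while consuming almost no actual disk space until blocks are written to it. You can see the difference for yourself:

ls -lh pc-bsd-10.1.img
du -h pc-bsd-10.1.img

ls -lh shows the apparent ~40G size, while du -h shows the handful of blocks actually allocated.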