Category Archives: CLI

Fix Splunk lockout after exceeded quota

Recently I came across a situation with my home install of Splunk (free license) where the 500MB quota was exceeded three days in a row. I hadn’t checked Splunk for a few days so I was completely blindsided by it. The consequence of going over quota three days in a row? Losing the ability to do any searches in Splunk, which is a real downer.

The easiest, although least convenient, way to fix being locked out is to wait it out. If you go 30 days in a row without violating the license, Splunk will unlock itself. Splunk will still receive and index events during that time. The inability to search makes it really difficult to track down what the problem is, though, and I wasn’t happy waiting for 30 days before getting Splunk back.

Poking around on the Splunk forums, I discovered that there is a way to get Splunk back: perform a fresh install and then migrate your database and settings over to it. This involves backing up a few things, then copying them over the fresh install's default folders:

  • $SPLUNK_HOME/var/lib/splunk/defaultdb   #Default Splunk index, where all my data is held. If you have other indexes in here you’ll want to copy them too.
  • $SPLUNK_HOME/etc  #all your configuration files

Simply back up the above folders, install Splunk on a new machine, launch Splunk first so it will generate all the default files, then copy the files over to the new instance.
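As a rough sketch, the migration to a second machine might look something like the following (paths assume the default /opt/splunk install location, and "newhost" is just a placeholder for wherever the fresh install lives):

# on the old, locked-out instance
/opt/splunk/bin/splunk stop
tar czf splunk-backup.tar.gz /opt/splunk/etc /opt/splunk/var/lib/splunk/defaultdb
scp splunk-backup.tar.gz newhost:

# on the fresh install, after starting Splunk once so it generates its default files
/opt/splunk/bin/splunk stop
tar xzf splunk-backup.tar.gz -C /
chown -R splunk:splunk /opt/splunk
/opt/splunk/bin/splunk start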

I went a step further and planned for the future. I wrote a quick and dirty script that will do all of this for you, even on the same machine – no need to copy to another machine. The script assumes you're running a Red Hat derivative and have the correct Splunk install file in a predictable location. Update the locations of the Splunk directories and install file as needed and run as root.

#!/bin/bash

#Backup important directories
mkdir /opt/splunkbackup/
cp -al /opt/splunk/etc /opt/splunkbackup/
cp -al /opt/splunk/var/lib/splunk/defaultdb /opt/splunkbackup/

#Nuke splunk
/opt/splunk/bin/splunk stop
rm -rf /opt/splunk

#Reload from fresh start
rpm -iv --replacepkgs /home/nicholas/splunk-6.2.2-255606-linux-2.6-x86_64.rpm
/opt/splunk/bin/splunk start --accept-license

#Restore configuration files and indexes
/opt/splunk/bin/splunk stop
rm -rf /opt/splunk/etc
cp -al /opt/splunkbackup/etc /opt/splunk/
rm -rf /opt/splunk/var/lib/splunk/defaultdb
cp -al /opt/splunkbackup/defaultdb /opt/splunk/var/lib/splunk/
chown splunk:splunk -R /opt/splunk/
/opt/splunk/bin/splunk start

#Remove splunk backup
rm -rf /opt/splunkbackup

This will restore your searches, settings, and data. It won’t restore audit and other internal Splunk information, however. This script worked marvelously in getting my Splunk back.

Remove and re-install a Debian package

I wanted to completely blow away owncloud on one of my Debian servers today. I did apt-get remove owncloud and removed the owncloud directory. That should be all I need to do, right? Alas, not so.

It turns out that a simple apt-get remove leaves some of the package's files behind (configuration files, for example), and a simple apt-get install doesn't restore files that have gone missing. In order to completely blow away the package, you must use the purge command (thanks to this site for the inspiration.)
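As a quick optional check (not required for the fix below), dpkg marks packages that have been removed but still have configuration files on disk with an "rc" status:

# list packages that are removed but still have config files lying around
dpkg -l | grep '^rc'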

First, purge the package as well as any related packages:

sudo apt-get purge owncloud-*

Then, re-install the package with the --reinstall flag:

sudo apt-get install --reinstall owncloud

Done! The package has come back anew with all necessary files.

Using screen to run interactive programs at startup

Oftentimes I will encounter programs that I want to run on startup even though they weren't necessarily designed to be run automatically. Sometimes that program has interactive output that I'll want to look at later, but I still want it to launch on startup.

The solution to this particular problem is using screen in combination with su and bash. In my situation, I want to run the HDSurfer plugin on bootup as a different user. The solution I came up with is as follows (thanks to superuser.com and stackoverflow.com for the guidance I needed to set this up.)

Install screen

Screen is like having a separate X window session to keep a program running, except it is for console programs. You can attach to and detach from this screen session whenever you'd like without worrying about the program terminating.

sudo apt-get install screen
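For reference, these are the standard screen commands that come in handy for checking on a detached session later (nothing here is specific to this setup):

screen -ls        # list running screen sessions
screen -r         # reattach to the detached session (add its PID/name if there are several)
# press Ctrl-a then d to detach again without stopping the program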

Create a script to run your program with all required arguments

In my case I needed to execute the command “python /usr/bin/HDSurferWave/hdsurferwave.py start” as a different user in a screen session (so it wouldn’t terminate when the terminal session did.) To do this,

  • invoke screen with the -dm flag (to start the session detached)
  • follow it with bash -c to invoke bash
  • include your desired command after that

My one line script looks like this:

screen -dm bash -c "python /usr/bin/HDSurferWave/hdsurferwave.py start"

Run your script

I use the su command with the -c argument to change the user that will be running the script, as the startup script launches things as root by default (with pre-systemd systems, anyway.) The -s flag specifies which shell to use, and the last argument is the user to run as. My launch command is:

su -c "/usr/bin/HDSurferWave/start.sh" -s /bin/sh nicholas

Configure the script to run on startup

Edit /etc/rc.local and add your script command from above, then mark that file as executable by running chmod +x /etc/rc.local. Note: This will not work with systems using systemd.
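For reference, a minimal /etc/rc.local might end up looking something like this (the su line is the launch command from above; the shebang and exit 0 follow Debian's stock rc.local layout):

#!/bin/sh -e
#
# rc.local - executed at the end of the multi-user boot process (pre-systemd)

# launch the HDSurfer screen session as a non-root user
su -c "/usr/bin/HDSurferWave/start.sh" -s /bin/sh nicholas

exit 0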


Use batch script to continually check site status

Recently my blog went down (the ISP running it had downtime.) I wanted to see when it came back up. As a result I wrote a little Windows batch script to continually poll my address in order to do just that.

The script issues a query to the default DNS server as well as pings the address of the blog. I used both since sometimes in Windows a ping will simply use the internal DNS cache, which may be wrong if the IP address hosting my blog changes (its address is dynamic.)

The script is below:

@ECHO OFF
:loop
 cls
 nslookup jeppson.org | findstr "Address" | findstr /V 10.97.160.160
 ping -n 1 jeppson.org
 timeout /t 3
goto loop

I use the /V argument to filter out the first part of the nslookup output, namely the IP address of the nameserver being used.

A simpler version of the script only issues one ping, waits a second, and then repeats the command. This is different from doing ping -t because it forces ping to do a new lookup for the domain name, whereas ping -t only resolves the IP once, then just pings the IP address. That wouldn’t work in my case as the IP of the domain name changes when it comes back online.

@ECHO OFF
:loop
 ping -n 1 jeppson.org
 timeout /t 1
goto loop

Thanks to Stack Overflow for educating me on how to write a quick loop to emulate the Linux watch command, ping only once, and use an application similar to grep to clean up output.


Quickly create xen disk images with dd

To this point I have been creating disk images with dd – pretty standard. The command I traditionally use is this:

sudo dd if=/dev/zero of=pc-bsd-10.1.img bs=1M count=40000

This command works but it takes some time as dd copies zeroes for every single part of the image.

It turns out that there is a much faster way to create a disk image, which I found out thanks to this article. Still using dd, you can simply use the seek parameter to very quickly create your image:

sudo dd if=/dev/zero of=pc-bsd-10.1.img bs=1 count=1 seek=40G

By setting bs and count to 1 and seek to 40G, dd writes a single byte at the 40GB mark, creating a 40 gigabyte sparse image file without having to wait for all those pesky zeroes to be written. Awesome.
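If you're curious whether the resulting file is really sparse, comparing its apparent size with the space actually allocated on disk shows the trick at work (a quick sanity check, assuming GNU coreutils):

ls -lh pc-bsd-10.1.img   # apparent size: roughly 40G
du -h pc-bsd-10.1.img    # blocks actually allocated: next to nothing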

Redshift – a better flux program for Linux

F.lux is a wonderful tool for easing eye strain. People who stare at computer screens all day (like myself) can experience quite a bit of eye strain due to the harsh light screens emit. One solution is to wear yellow-tinted gamer goggles. I chose the cheaper route: installing software to adjust the color temperature of the monitor. F.lux does this beautifully... for Windows, at least.

Linux is a different story. The Linux port's GUI is pretty flaky and appears to only work for one screen. Enter Redshift, an updated fork of the Linux port of F.lux, which properly supports dual monitors. Unfortunately, it is harder to configure than F.lux: it is a command-line-only tool (with a GUI indicator component) and it requires creating a configuration file by hand.

On my Linux Mint system (Ubuntu based) I needed to install the following:

sudo apt-get install redshift gtk-redshift

I had a hard time getting day/night changes to work. Redshift allows you to specify several different location providers, but none of them appeared to work for me. I then realized that I like the softer colors of redshift all the time, so I simply set the same temperature for day and night. It now doesn't matter what the latitude / longitude is.
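If you just want to confirm that redshift can drive your monitors at all, independent of any location provider, newer versions (1.8 and later, I believe) have a one-shot mode. Treat the flags below as a quick test rather than a permanent setup:

redshift -O 4500   # set the color temperature once and exit
redshift -x        # reset the screen back to normal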

I found it odd that equivalent settings for f.lux and redshift don't appear to be the same. I tweaked my config a little bit to most closely match my Windows f.lux setup. Below is my config file, placed in ~/.config/redshift.conf.

; Global settings for redshift
[redshift]
; Set the day and night screen temperatures

temp-day=4500
temp-night=4500

; Enable/Disable a smooth transition between day and night
; 0 will cause a direct change from day to night screen temperature.
; 1 will gradually increase or decrease the screen temperature
transition=1

; Set the screen brightness. Default is 1.0
;brightness=0.9
; It is also possible to use different settings for day and night since version 1.8.
;brightness-day=0.7
;brightness-night=0.4
; Set the screen gamma (for all colors, or each color channel individually)
gamma=0.8
;gamma=0.8:0.7:0.8

; Set the location-provider: 'geoclue', 'gnome-clock', 'manual'
; type 'redshift -l list' to see possible values
; The location provider settings are in a different section.
location-provider=manual

; Set the adjustment-method: 'randr', 'vidmode'
; type 'redshift -m list' to see all possible values
; 'randr' is the preferred method, 'vidmode' is an older API
; but works in some cases when 'randr' does not.
; The adjustment method settings are in a different section.
adjustment-method=randr

; Configuration of the location-provider:
; type 'redshift -l PROVIDER:help' to see the settings
; ex: 'redshift -l manual:help'
[manual]
lat=40
lon=110

; Configuration of the adjustment-method
; type 'redshift -m METHOD:help' to see the settings
; ex: 'redshift -m randr:help'
; In this example, randr is configured to adjust screen 1.
; Note that the numbering starts from 0, so this is actually the second screen.
[randr]
screen=0

After saving the config file, you can add gtk-redshift as a startup application to have it load automatically. My eyes feel much more comfortable now.


Manipulate EXIF data with jhead

From time to time I find I need to edit the date taken metadata on pictures. I’ve discovered that jhead is a wonderful tool to accomplish this. It has many options, but the ones I use most frequently are the following:

  • -mkexif  Create EXIF data for a picture that does not contain it
  • -ft  Set the picture's filesystem modified date to match the EXIF date taken stored in the picture
  • -dsft  Set the picture's EXIF date taken to match the filesystem modified time of that picture

You can use wildcards, which is super convenient. To create metadata for all JPG files in a current directory:

jhead -mkexif *.JPG

I like to use touch in conjunction with jhead to set EXIF picture taken times for files that don't have any metadata but whose date taken I know:

jhead -mkexif *.jpg
touch -t 201410201700 *.jpg
jhead -dsft *.jpg

For pictures which have correct metadata but incorrect modified date (downloaded pictures, for example) simply do the following:

jhead -ft *.jpg

Neat.

Update 7/15/2018

I came across this helpful article which outlines how to view EXIF information from the command line using the identify command. It requires that ImageMagick be installed.

identify -verbose *.jpg | grep "exif:"

Update 12/9/2018

This is the syntax I use to set the time of the picture to a specific date:

jhead -tsYYYY:MM:DD-HH:mm:ss <filename>
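For example, to stamp a (hypothetical) photo.jpg as taken on December 9, 2018 at 3:30 PM:

# set the EXIF date taken to 2018-12-09 15:30:00
jhead -ts2018:12:09-15:30:00 photo.jpg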

Unzip multiple files into a single directory

Occasionally I have a need to unzip multiple zip files into a single directory, renaming any files with duplicate names so all zip contents end up in the same directory. I accomplish this in a lazy fashion with find and unzip.

First, put all the zip files you need to extract in the same folder. I used find with the -ctime flag to find files created today (as those are the ones I want.) I use the -exec flag to execute a command on the resulting files; in this case, the unzip program with the -B flag, which doesn’t overwrite files with duplicate names, and the -d flag to specify which folder to extract to:

find *.zip -ctime 1 -exec unzip -B {} -d example/ \;

This finds and unzips all my files, but there is a catch: files with duplicate names have been renamed with a tilde at the end, for example pic1.jpg~. I do another quick find operation to simply tack .jpg onto the end of each of these files:

find example/ -name "*~*" -exec mv {} {}.jpg \;

The result is a directory full of the files you desire. My case is very simplistic as it assumes that all files in each zip file have the same extension. You could use something like awk to parse the result of the find command and re-add the appropriate extensions, but I won't detail that here (see laziness reference above.)


Manipulate dd output with sed, tr, and awk

While improving a backup script I came across a need to modify the information output by the dd command. FreeNAS, the system the script runs on, does not have a version of dd that has the human readable option. My quest: make the output from dd human readable.

I’ve only partially fulfilled this quest at this point, but the basic functionality is there. The first step is to redirect all output of the dd command (both stdout and stderr) to a variable (this particular syntax is for bash):

DD_OUTPUT=$(dd if=/dev/zero of=test.img bs=1000000 count=1000 2>&1)

The 2>&1 redirects all output to the variable instead of the console.

The next step is to use sed to remove unwanted information (the records in / records out lines):

sed -r '/.*records /d'

Next, remove any parentheses. This is due to the parentheses from dd's output messing up the math that's going to be done later.

tr -d '()'

-d ‘()’ simply tells tr to remove any instance of the characters between the single quotes.

And now, for awk. Awk allows you to manipulate certain pieces of a text file. Each line is a record, and each piece of a record, separated by whitespace, is known as a field. Awk lets us edit some fields while keeping others intact. My expertise with awk is that of a n00b so I'm sure there is a more efficient way of doing this; nevertheless here is my solution:

awk '{print ($1/1024)/1024 " MB" " " $3 " " $4 " " $5/60 " minutes"  " (" ($7/1024)/1024 " MB/sec)" }'

Since dd outputs its information in bytes I'm having it divide by 1024 twice so the resulting number is in megabytes. I also have it divide the seconds by 60 to return the number of minutes dd took. Additionally I'm re-inserting the parentheses removed by the tr command now that the math is correctly done.

The last step is to pipe all of this together:

DD_OUTPUT=$(dd if=/dev/zero of=test.img bs=1000000 count=10000 2>&1)
DD_HUMANIZED=$(echo "$DD_OUTPUT" | sed -r '/.*records /d' | tr -d '()' | awk '{print ($1/1024)/1024 " MB" " " $3 " " $4 " " $5/60 " minutes"  " (" ($7/1024)/1024 " MB/sec)" }')

After running the above here are the results:

Original output:

echo "$DD_OUTPUT"
10000+0 records in
10000+0 records out
10000000000 bytes transferred in 103.369538 secs (96740299 bytes/sec)

Humanized output:

echo "$DD_HUMANIZED"
9536.74 MB transferred in 1.72283 minutes (92.2587 MB/sec)

The printf function of awk would allow us to round the calculations but I couldn’t quite get the syntax to work and have abandoned the effort for now.
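For what it's worth, something along these lines is probably what a rounded version would look like with awk's printf. I haven't verified it against FreeNAS's awk, so treat it as a sketch rather than a drop-in replacement:

echo "$DD_OUTPUT" | sed -r '/.*records /d' | tr -d '()' | awk '{ printf "%.2f MB transferred in %.2f minutes (%.2f MB/sec)\n", $1/1024/1024, $5/60, $7/1024/1024 }'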

Of course all this is not necessary if you have the correct dd version. The GNU version of dd defaults to human readability; certain BSD versions have the option of passing the msgfmt=human argument per here.


Update: I discovered that the awk method above will only print the one line it finds and ignore all other lines, which is less than ideal for scripting. I updated the awk syntax to do a search and replace (sub) instead so that it will print all other lines as well:

awk '{sub(/.*bytes /, $1/1024/1024" MB "); sub(/in .* secs/, "in "$5/60" mins "); sub(/mins .*/, "mins (" $7/1024/1024" MB/sec)"); print}'

My new all in one line is:

DD_HUMANIZED=$(echo "$DD_OUTPUT" | sed -r '/.*records /d' | tr -d '()' | awk '{sub(/.*bytes /, $1/1024/1024" MB "); sub(/in .* secs/, "in "$5/60" mins "); sub(/mins .*/, "mins (" $7/1024/1024" MB/sec)"); print}')

Verify backup integrity with rsync, sed, cat, and tee

Recently it became apparent that a large data transfer I did might have had some errors. I wanted to find an easy way to compare the source and destination to make sure that they were identical. My solution: rsync, sed, cat and tee

I have used rsync quite a bit but did not know about the --checksum flag until recently. When you run rsync with --checksum, it takes much longer, but it effectively does something similar to what a ZFS scrub does – it runs a checksum of every source file and compares it with the checksum of each destination file. If there is a mismatch, rsync will overwrite the destination file with the source file to correct it.

In my situation I performed a large data migration from my old mdadm-based RAID array to my brand new ZFS array. During the transfer the disks were acting very strange, and at one point one of the disks even popped out of the array. The culprit turned out to be a faulty SATA controller. I bought a cheap 4 port SATA controller from Amazon for my new ZFS array. Do not do this! Spring for a better controller. The cheap ones, this one at least, only caused headaches. I removed it and used the on-board SATA ports on my motherboard and the issues went away.

All of those shenanigans made me wonder if there was corrupt data on my new ZFS array. A ZFS scrub repaired 15.5G of data! While I'm sure that fixed a lot of the issues, I realized there was probably still some corruption. This is how I verified it:

rsync -Pahn --checksum /path/to/source /path/to/destination | tee migration.txt

-P shows progress, -a means archive, -h is for human readable measurements, and -n means dry run (don’t actually copy anything)

Tee is a cool utility that allows you to redirect output of a command both to a file and to standard output. This is useful if you want to see the verification take place in real time but also want to analyze it later.

After the comparison (which took a while!) I wanted to see the discrepancies. The -P flag lists each directory rsync checks as well as each file it found to differ. You can use sed in conjunction with cat to weed out the unwanted lines (directory listings) so that only the files with discrepancies are left.

cat migration.txt | sed '/\/$/d' | tee migration-truncated.txt

The sed regex simply looks for any line ending in a / (a directory listing) and removes that line. What is left is the files in question. You can combine the entire thing into one line like so:

rsync -Pahn --checksum /path/to/source /path/to/destination | sed '/\/$/d' | tee migration.txt

In my case I wanted to review the discrepancies rsync found and decide whether I actually wanted to fix each one. If you are 100% sure the source is good and you want rsync to simply make the destination match it (overwriting mismatched files and deleting anything that exists only on the destination), you can run:

rsync -Pah --checksum --delete /path/to/source /path/to/destination