Tag Archives: BASH

Rename files to reflect modified time while preserving extension

I’m in the middle of a big project of scanning old family scrapbooks. I have two scanners involved, each with its own naming scheme, dumping files of various types (pdf, jpg) into the same folder. I need an accurate chronological view of when things were scanned (nothing was scanned at the exact same moment.) At this point the only way to get that information is to sort the files by modified time, but hardly any program other than a file manager looks at files this way. I needed a way to rename all the files to reflect their modified time.

I found a couple different ways to do this, but settled on a bash for loop utilizing the stat, sed, and mv commands.

Challenge 1

Capture the extension of the files.

Using sed:

ls <filename> | sed 's/.*\(\..*\)$/\1/'
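If you’d rather not pipe through sed for every file, bash parameter expansion can grab the extension natively – an alternative I didn’t use here, but worth knowing (the variable name file is just for illustration):

file="scan001.jpg"
ext=".${file##*.}"    #strips everything up to the last dot; ext is now ".jpg"
#note: like the sed version, this misbehaves on filenames with no dot at all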

Challenge 2

Obtain the modified time of your file in an acceptable format. I do this using the stat command.

stat -c %y <filename>

%y is the best option here – it leaves no room for ambiguity. You could also choose %Y (seconds since the epoch) so the filenames aren’t so large and don’t contain colons (some systems struggle with those.) The downside to %Y is you only get to-the-second precision. In my case I had a few files with the same epoch timestamp, which caused problems. More on how to format this can be found here and here.
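For reference, here’s roughly what the two formats look like with GNU stat (photo.jpg is a placeholder; the fractional seconds and timezone will vary by system):

stat -c %y photo.jpg    #2016-08-30 14:22:01.123456789 -0600
stat -c %Y photo.jpg    #1472588521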

Stringing it all together

I wrapped it all up in a bash for loop with appropriate variables. This was my final command, which I ran inside the directory I wanted to modify:

for file in *; do name=$(stat -c %y "$file"); ext=$(echo "$file" | sed 's/.*\(\..*\)$/\1/'); mv -n "$file" "$name$ext"; done

The for loop goes through each file one at a time and assigns it to the $file variable.  I then create the name variable which uses the stat command to obtain a precise date modified timestamp of the file. The ext variable is derived from the filename but only keeps the extension (using sed.) The last step uses mv -n (no clobber mode – don’t overwrite anything) to rename the original file to its date modified timestamp. The result: a directory where each file is named precisely when it was modified – a true chronology of what was scanned irrespective of file extension or which scanner created the file. Success.
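One caveat on the loop above: if a filename contains no dot, the sed expression passes the whole name through as the “extension.” A slightly more defensive version of the same loop might look like this (a sketch, not what I actually ran):

for file in *; do
  name=$(stat -c %y "$file")
  ext=""
  #only treat the suffix as an extension if the name actually contains a dot
  case "$file" in *.*) ext=".${file##*.}" ;; esac
  mv -n "$file" "$name$ext"
done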

Append users to PowerBroker Open RequireMembershipOf

The title isn’t very descriptive. I recently came across a need to script adding users & groups to the “RequireMembershipOf” directive of PowerBroker Open. PowerBroker is a handy tool that really facilitates joining a Linux machine to a Windows domain. It has a lot of configurable options but the one I was interested in was RequireMembershipOf – which as you might expect requires that the person signing into the Linux machine be a member of that list.

The problem with RequireMembershipOf is that, as far as I can tell, it has no append function. It has an add function which frustratingly erases everything that was there before and keeps only what you just added to the list. I needed a way to append a member to the already existing RequireMembershipOf list. My solution involves the usage of bash, sed, and a lot of regex. It boils down to two lines of code:

#take the output of --show RequireMembershipOf, remove the words multistring & local policy, replace spaces with carets (pbis’ space representation) and put the result into a variable (which automatically collapses it onto a single line)

add=$(/opt/pbis/bin/config --show RequireMembershipOf | sed 's/\(multistring\)\|\(local policy\)//g' | sed 's/ /^/g')

#run RequireMembershipOf command with previous output and any added users

sudo /opt/pbis/bin/config RequireMembershipOf "$add" "<USER_OR_GROUP_TO_ADD>"

That did the trick.
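To make this repeatable you could wrap both lines in a small script that takes the user or group to add as an argument – a quick sketch (add-membership.sh is a hypothetical name; the pbis paths are the same ones used above):

#!/bin/bash
#usage: ./add-membership.sh "DOMAIN\groupname"
#appends one user or group to the existing RequireMembershipOf list
add=$(/opt/pbis/bin/config --show RequireMembershipOf | sed 's/\(multistring\)\|\(local policy\)//g' | sed 's/ /^/g')
sudo /opt/pbis/bin/config RequireMembershipOf "$add" "$1"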

Automatically extract rar files downloaded with transmission

My latest project has been configuring sonarr to work with transmission. The challenge was getting these two pieces of software to properly interface with each other. Sonarr would successfully instruct transmission to download the requested show, but once the download completed it would not import the show into its library. The reason behind this was my torrent tracker – most of its torrents are packaged as multi-part rar files. Sonarr has no mechanism for processing rar files so I had to get creative.

The solution was to write a simple script and have transmission execute it after finishing the download. The script uses the find command to look for rar files in the directory transmission created for that particular torrent. If any rar files are found it will extract them into that same directory. This was important because sonarr will only look in the torrent download directory for the completed video file.

After some tweaking I got it to work pretty well for me. Here is the code I used (thanks to this site for the direction.)

#!/bin/bash
#A simple script to extract a rar file inside a directory downloaded by Transmission.
#It uses environment variables passed by the transmission client to find and extract any rar files from a downloaded torrent into the folder they were found in.
find "$TR_TORRENT_DIR/$TR_TORRENT_NAME" -name "*.rar" -execdir unrar e -o- "{}" \;

Save the above script into a file your transmission client can read and make it executable. Lastly, configure transmission to run this script on torrent completion by modifying your settings.json file (mine was located at /var/lib/transmission/.config/transmission-daemon/settings.json.) Modify the following variables (be sure to stop your transmission client first before making any changes.)

"script-torrent-done-enabled": true, 
"script-torrent-done-filename": "/path/to/where/you/saved/the/script",

That’s it! Sonarr will now properly import shows that were downloaded via multipart rar torrent.
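As a final tip, you can test the script by hand before involving transmission by supplying the two environment variables it expects (TR_TORRENT_DIR and TR_TORRENT_NAME are the names transmission actually exports; the values below are placeholders):

TR_TORRENT_DIR="/var/lib/transmission/downloads" TR_TORRENT_NAME="example-torrent" /path/to/where/you/saved/the/script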

Script to change WordPress URL

I wrote up a little script to run when you migrate a wordpress installation from one host to another (hostname change.) Once this script is run you can access the site via the hostname of the server it’s running on, and then change the URL to whatever you like. This comes in handy when you want to migrate from one internal host to another, then specify an external hostname once things are looking how you like them.

Change SQL_COMMAND to reflect the name of the wordpress database on the destination server. Thanks to this site for the guidance in writing the script.


#!/bin/bash
#A simple script to update the wordpress database to reflect a change in hostname
#Run this after changing the hostname / IP of a wordpress server

#Prompt for mysql root password
read -s -p "Enter mysql root password: " SQL_PASSWORD

SQL_COMMAND="mysql -u root -p$SQL_PASSWORD wordpress -e"

#Determine what the old URL was and save to variable
OLD_URL=$(mysql -u root -p$SQL_PASSWORD wordpress -e 'select option_value from wp_options where option_id = 1;' | grep http)
#Get current hostname
HOST=$(hostname)

#SQL statements to update database to new hostname
$SQL_COMMAND "UPDATE wp_options SET option_value = replace(option_value, '$OLD_URL', 'http://$HOST') WHERE option_name = 'home' OR option_name = 'siteurl';"
$SQL_COMMAND "UPDATE wp_posts SET guid = replace(guid, '$OLD_URL','http://$HOST');"
$SQL_COMMAND "UPDATE wp_posts SET post_content = replace(post_content, '$OLD_URL', 'http://$HOST');"
$SQL_COMMAND "UPDATE wp_postmeta SET meta_value = replace(meta_value,'$OLD_URL','http://$HOST');"
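Once the script finishes it’s worth confirming the change took – something like this (same credentials and database name as above) will show the updated values:

mysql -u root -p wordpress -e "SELECT option_name, option_value FROM wp_options WHERE option_name IN ('home', 'siteurl');"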

Rename files for proper sorting in Linux

I often come across files that are named 1..9 and then go on to 10…99. The problem is many Linux programs sort these as 1, 10, 11, … 19, 2, 20, and so on. The sorting is wrong. Fortunately the rename command comes to our rescue:

rename 's/\d+/sprintf("%05d", $&)/e' *.jpg

The above command looks for numbers in the names of JPG files (in the current directory) and renames each file to ensure the number is 5 digits long. Now, instead of 1.jpg, your file will be named 00001.jpg. Handy.
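Note this is the perl flavor of rename (packaged as prename or file-rename on some distros, while others ship a different util-linux rename.) To illustrate with some hypothetical filenames:

#before: 1.jpg  10.jpg  11.jpg  2.jpg  3.jpg
rename 's/\d+/sprintf("%05d", $&)/e' *.jpg
#after:  00001.jpg  00002.jpg  00003.jpg  00010.jpg  00011.jpg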

Thanks to this forum for the information.

Mountpoint check script

I wrote a simple script to check whether a specific mountpoint on a Linux system is still live. It does this by trying to read a specific file on the share; if it cannot, it writes the event to a log, unmounts, and then re-mounts the folder. The need arose for instances where a file server has been rebooted and the Linux system loses its connection to the share. This way it will automatically re-mount.

Modify the variables section as needed and then have a cron job run the script as root at whatever interval you want. Enjoy.

#!/bin/bash
#Script to monitor mount directories to ensure they are properly mounted
#Place a file with the word "mounted" in it inside all mounted directories
#The script will try to read the file and attempt to unmount and remount the folder if it fails to read the file
#Updated 8/30/2016 by Nicholas Jeppson

#---------Variable section------------#

#Place mount folder locations here, separated by space 
#Paths containing spaces need to have quotes around them
LOCATIONS=(/home/njeppson /home/njeppson/Desktop)

#Name of file to try to read (any filename works – it just needs to exist in each mounted folder and contain the word "mounted")
TEST_FILENAME=mountfile

#---------End Variable Section--------#
#-----Do not edit below this line-----#

#Read file, if contents don't contain "mounted" then attempt to unmount and re-mount the folder, output attempt to /var/log/mountcheck

for FOLDER in "${LOCATIONS[@]}"; do
 if [[ $(cat "$FOLDER/$TEST_FILENAME" 2>/dev/null) != "mounted" ]]; then
  echo "$(date "+%b %d %T") $(hostname) $FOLDER Not mounted, remounting." >> /var/log/mountcheck
  umount "$FOLDER"
  mount "$FOLDER"
 fi
done
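For the cron piece, an entry along these lines in root’s crontab (sudo crontab -e) runs the check every five minutes – /usr/local/bin/mountcheck.sh is just a placeholder for wherever you saved the script:

*/5 * * * * /usr/local/bin/mountcheck.sh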

Embed commands in if statements in bash

I’ve recently had to do some bash-fu and thought I’d document it in case I come across the need again. It involved an if statement inside a for loop. The if statement looked at the result of an external command and acted if conditions were met.

The scenario: An application created folders beginning with a series of digits.  Later it was decided to add a prefix to new folders. A problem occurred where there were folders with the same numeric sequence – corresponding to the same user – but the program was saving things in both prefixed and non-prefixed folders at random. We needed a way to copy information from the numeric only folders into the prefix folders, then backup and delete the numeric-only folders. We also wanted to be warned about any file overwrites in the process.

After a bunch of research and experimentation I came up with the following one-line bash script:

for d in [0-9]*; do BN=$(basename "$d"); if [[ $(find . -maxdepth 1 -type d -name "*$d" | grep -o "$d" | wc -l) -eq 2 ]]; then cp -i -p -r "$d" ../archive/"$d"; cp -i -p -r "$d"/* "PREFIX_$BN"; rm -rf "$d"; fi; done

It does the following:

  • Scans the current directory for files (or folders) beginning with numbers
  • Saves the basename of each discovered folder to a variable (basename was required to remove the ./ that showed up in the results) and uses that variable for the copy commands
  • Scans the current directory to see if there is another folder with the same string of numbers in its name (the same name, only with a prefix attached)
  • If there is a folder with the same string of numbers in its name, copies the non-prefixed folder to an archive location, then copies its contents into the prefixed folder, prompting before overwriting anything
  • Once the copy is complete, deletes the original non-prefixed folder

The big learning moment for me was embedding a bash command into an if statement. The if statement runs the find command, pipes to wc -l to count the number of results, and then compares that result to something else. Pretty handy.
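Distilled to its essence, the pattern is just command substitution inside the test brackets – for example (1234 is a made-up folder number):

if [[ $(find . -maxdepth 1 -type d -name "*1234" | wc -l) -eq 2 ]]; then
 echo "both the numeric and the prefixed folder exist"
fi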

Thanks to these sites for helping me in my journey:

If statement inside for loop: https://unix.stackexchange.com/questions/52800/how-to-do-an-if-statement-from-the-result-of-a-executed-command

Find results only in current directory:  https://unix.stackexchange.com/questions/162411/find-maxdepth-0-not-returning-me-any-output

Count results from find command: https://stackoverflow.com/questions/6181324/counting-regex-pattern-matches-in-one-line-using-sed-or-grep

Warn before overwriting files: https://askubuntu.com/questions/236478/how-do-i-make-bash-warn-me-when-overwriting-an-existing-file

Add prefix to filenames in bash

A quick handy little way to add a prefix to files in bash (taken from here)

for f in * ; do mv "$f" "PRE_$f" ; done
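If you want to preview what the loop will do before committing, stick an echo in front of the mv and it will print the commands instead of running them:

for f in * ; do echo mv "$f" "PRE_$f" ; done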

In my case I wanted to rename all sub-100 filenames to have extra leading zeros so sorting played nicely with filenames beginning with 100+. To accomplish this I found out about the rename command (thanks to this site.) The command I used to enforce natural sorting was the following:

rename 's/\d+/sprintf("%03d", $&)/e' *

The command looked for the first run of digits in each name, then used sprintf to pad that number to 3 digits. The asterisk instructed the rename command to work on every file. Success.


Xenserver NFS SR from FreeNAS VM hack

I have a Citrix XenServer 6.5 host which runs a FreeNAS VM that exports an NFS share. I then have that same XenServer host use that NFS export as an SR (storage repository) for other VMs on the same server. It’s unusual, but it saves me from buying a separate server for VM storage.

The problem is if you reboot the hypervisor it will fail to connect to the NFS export (because the VM hosting it hasn’t booted yet.) Additionally, XenServer does not appear to play well at all with hung NFS mounts. If you try to shut down or reboot your FreeNAS VM while XenServer is still using its NFS export, things start to freeze. You will be unable to do anything to any of your VMs thanks to the hung NFS share. It’s a problem!

My hack around this mess is to have FreeNAS, not Xenserver, control starting and stopping these VMs.

First, create a public/private key pair for ssh into the XenServer host:

ssh-keygen -t rsa

This will generate two files, a private key file and a public (.pub) file. Copy the contents of the .pub file into the XenServer’s authorized_keys file:

echo "PUT_RSA_PUBLIC_KEY_HERE" >> /root/.ssh/authorized_keys

Copy the private key file (same name but without .pub extension) somewhere on your FreeNAS VM.

Next, create NFS startup and shutdown scripts. Thanks to linuxcommando for some guidance with this.  Replace the -i argument with the path to your SSH private key file generated earlier. You will also need to know the PBD UUID of the NFS store. Discover this by issuing

xe pbd-list

Copy the UUID for use in the scripts.

vi nfs-startup.sh
#!/bin/bash
#NFS FreeNAS VM startup script

SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i <PRIVATE_KEY_LOCATION> -l root <ADDRESS_OF_XENSERVER>"

#Attach the NFS SR first, then start up NFS-reliant VMs
$SSH_COMMAND xe pbd-plug uuid=<UUID_OF_NFS_SR>

sleep 10

#Issue startup commands for each of your NFS-based VMs, repeat for each VM you have
$SSH_COMMAND xe vm-start vm="VM_NAME"
vi nfs-shutdown.sh
#!/bin/bash
#NFS FreeNAS VM shutdown script
#Shut down NFS-reliant VMs, detach NFS SR

#Re-establish networking to work around the fact that networking goes down before this script is executed within FreeNAS
/sbin/ifconfig -l | /usr/bin/xargs -n 1 -J % /sbin/ifconfig % up

SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i <PRIVATE_KEY_LOCATION> -l root <ADDRESS_OF_XENSERVER>"

#Issue shutdown commands for each of your VMs
$SSH_COMMAND xe vm-shutdown vm="VM_NAME"

sleep 60

$SSH_COMMAND xe pbd-unplug uuid=<UUID_OF_NFS_SR>

#Take the networking interfaces back down for shutdown
/sbin/ifconfig -l | /usr/bin/xargs -n 1 -J % /sbin/ifconfig % down

Don’t forget to mark them executable:

chmod +x nfs-startup.sh
chmod +x nfs-shutdown.sh

Now add the scripts as a startup task and a shutdown task respectively in FreeNAS by going to System / Init/Shutdown Scripts. For startup, select Type: Script, When: Post Init and point it to your nfs-startup.sh script. For shutdown, select Type: Script and When: Shutdown.

Success! Now whenever your FreeNAS VM is shut down or rebooted, things will be handled properly which will prevent your hypervisor from freezing.


Randomize files in a folder

I wanted to make a simple slideshow for a cheap Kindle turned photo frame in my office. Windows Movie Maker (a free, already-installed program) does not have a randomize function when importing photos. I had a lot of photos I wanted imported and I wanted them randomized. Movie Maker doesn’t include subfolders for some dumb reason, so I also needed a way to grab pictures from various directories and put them into a single directory.

My solution (not movie-maker specific) was to use bash combined with find, ln, and mv to get the files exactly how I want them. The process goes as follows:

  1. Create a temporary folder
  2. Use the find command to find files you want
    1. Use -type f argument to find only files (don’t replicate directory structure)
    2. Use the -exec argument to call the ln command to create links to each file found
    3. Use the -s argument of ln to create symbolic links
    4. Use the -b argument of ln to ensure duplicate filenames are not overwritten
  3. Invoke a one line bash command to randomize the filenames of those symbolic links

It worked beautifully. The commands I ended up using were as follows:

mkdir temp
cd temp
find /Pictures/2013/ -type f -exec ln -s -b {} . \;
#repeat for each subfolder as needed, unless you want all folders, in which case you can just specify the parent directory
find /Pictures/2014/ -type f -exec ln -s -b {} . \;
find /Pictures/2015/ -type f -exec ln -s -b {} . \;

for i in *.JPG; do mv "$i" "$RANDOM.jpg"; done
#repeat for all permutations. The -b argument of ln creates files with tildes in the extension - don't forget about them.
for i in *.jpg; do mv "$i" "$RANDOM.jpg"; done
for i in *.JPG~; do mv "$i" "$RANDOM.jpg"; done
for i in *.jpg~; do mv "$i" "$RANDOM.jpg"; done
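One caveat: $RANDOM only spans 0–32767, so with enough pictures two files can draw the same number and mv will silently clobber one. A collision-free alternative (a sketch using GNU shuf – it numbers the files in shuffled order instead of rolling random names):

i=0
find . -maxdepth 1 -name '*.jpg' -print0 | shuf -z | while IFS= read -r -d '' f; do
 mv -n "$f" "$(printf '%05d' "$i").jpg"
 i=$((i+1))
done
#repeat for *.JPG and the tilde variants as above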

The end result was a directory full of pictures with random filenames, ready to be dropped into any crappy slideshow making software of your choosing 🙂