Tag Archives: for loop

Quick way to make HDD light blink

I needed a quick and dirty way to make a specific hard drive LED blink so that I could identify which drive to replace. I stumbled across this post that worked well for me.

The simple trick is to run smartctl in a while loop. Something about smartctl makes the drive light blink differently than dd (which is what I had been using previously), and smartctl is what finally allowed me to identify the drive. Here is the command:

while true; do smartctl -a /dev/<device>; done

The command will run forever until you Ctrl + C. It makes the LED blink rather obviously, which made things much easier.
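If the console output gets distracting, a slight variation on the same loop (here assuming the suspect drive is /dev/sdb, so swap in your own device) throws the text away and keeps only the disk activity:

#/dev/sdb is just an example device name
while true; do smartctl -a /dev/sdb > /dev/null 2>&1; done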

Batch crop images with imagemagick

My scanner adds annoying borders on everything it scans. I wanted to find a way to fix this with the command line. Enter ImageMagick (thanks to this site for the help.)

I found one picture and selected the area I wanted to crop it down to. I used IrfanView to tell me the dimensions of the desired crop, then passed that info on to the command line. I used a bash for loop to get the job done on the entire directory:

for file in *.jpg; do convert "$file" -crop 4907x6561+53+75 "$file"; done

It worked beautifully.
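If you would rather not overwrite the originals, a variation along these lines (assuming a cropped/ subdirectory that you create first) writes the cropped copies there instead; the +repage drops the leftover canvas offset that -crop otherwise keeps:

#Keep originals intact; write cropped copies into cropped/
mkdir -p cropped
for file in *.jpg; do convert "$file" -crop 4907x6561+53+75 +repage "cropped/$file"; done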

Flatten nested AD group memberships with powershell

Several applications at my job do not know how to read nested security groups. This is annoying because we grant everything through security groups instead of individual entitlements.

I’ve recently finished writing a powershell script that will “flatten” a security group that has nested security groups. This script reads a security group’s membership, compares the individually assigned users with the nested security group membership, and then reconciles them so only members of the nested security group are individually added to the main group. It allows me to simply add a security group to another security group, and still be able to use the group to grant access to applications that don’t support nested groups. It also ensures that nobody has rogue access they shouldn’t have. Everything managed through groups like God intended.

I consulted a ton of different sites to accomplish this. Here are just a few:

https://www.reddit.com/r/PowerShell/comments/3f7iki/flatten_out_active_directory_groups_containing/

https://stackoverflow.com/questions/11526285/how-to-count-objects-in-powershell

https://stackoverflow.com/questions/41658770/determining-object-type

https://docs.microsoft.com/en-us/powershell/module/activedirectory/

https://ss64.com/ps/syntax-compare.html

https://ss64.com/ps/compare-object.html

#Nested Security Group flattener script
#Written by Nicholas Jeppson, 10/6/2018

#This script scans nested security groups and compares their membership to that of the base security group.
#It then reconciles membership so that the only members of this group are those who are members of the nested security groups.
#This is required for applications that cannot read nested security groups, such as mattermost.
#No more manually adding people to a group after you've already added their role to that same group!

#=============Variables section=============#

#Enter the groups to reconcile here, quoted and separated by commas:
$groups_to_flatten = @("group1","group2")

#==========End Variables Section=============#

#Loop through each group to flatten
foreach ($group in $groups_to_flatten) {

    Write-Host "`nProcessing group ""$group"""

    #Read current individually added users
    $individually_added_users = Get-ADGroupMember -Identity $group | Where-Object {$_.objectClass -eq 'user'}

    #Read group membership of nested groups - Ignore specific user (optional)
    $nested_group_members = Get-ADGroupMember -Identity $group | Where-Object {$_.objectClass -eq 'group'} | Get-ADGroupMember -Recursive | Where-Object {$_.name -ne 'USER_TO_IGNORE'}

    #Compare current individually added users with that of nested security groups
    $users_to_add = Compare-Object -ReferenceObject $individually_added_users -DifferenceObject $nested_group_members -PassThru | Where-Object {$_.SideIndicator -eq "=>"}
    $users_to_remove = Compare-Object -ReferenceObject $individually_added_users -DifferenceObject $nested_group_members -PassThru | Where-Object {$_.SideIndicator -eq "<="}
    
    #loop through each user to remove and remove them
    foreach ($user in $users_to_remove) {
        Remove-ADGroupMember -Identity $group -Members $user -Confirm:$false
        Write-Host "Removed: $user"
    }
    
    #loop through each user to add and add them
    foreach ($user in $users_to_add) {
        #Add nested group membership individually back to the parent group
        #Write-Host "Adding individual members to ""$group""`n`n"
        Add-ADGroupMember -Identity $group -Members $user -Confirm:$false 
        Write-Host "Added: $user"   
    }
}

Powershell equivalent of “find -exec”

I recently found myself on a Windows 10 system needing to do the equivalent of “find . -name *.mdi -exec mdiconvert -source {} -log log.txt \;”. I knew what to do instantly on a Unix system; not so on a Windows system.

I finally figured it out and am now writing it down because I know I’ll forget! Thanks to these several sites for pointing me in the right direction.

  • Get-ChildItem is the find equivalent
    • -Filter is the -name equivalent
    • -Recurse must be specified, otherwise it only looks in the one directory
  • % is an alias for “ForEach-Object”
  • Put the command you want run inside curly braces {}
  • Put an ampersand (the call operator) in front of the command you wish to run; it is needed to execute a quoted path and lets you properly pass arguments containing dashes
  • $_.FullName gives the full path of the item found by Get-ChildItem as a plain text string (which my command required)
    • $_ is the rough equivalent of find’s {} (the item that was found)

The command I ended up using is below (find any .mdi files and use Microsoft’s mdi2tif utility to convert the result to .tif files)

Get-ChildItem "C:\Users" -Recurse -Filter *.mdi | % { & 'C:\Program
Files (x86)\modiconv\MDI2TIF.EXE' -source $_.FullName -log log.txt }

Backup and restore docker container configurations

I came across a need to start afresh with my docker setup. I didn’t want to re-create all the port and volume mappings for my various containers. Fortunately I found a way around this by using docker-autocompose to create .yml files with all my settings and docker-compose to restore them to my new docker host.

Backup

Docker-autocompose source: https://github.com/Red5d/docker-autocompose

git clone https://github.com/Red5d/docker-autocompose.git
cd docker-autocompose
docker build -t red5d/docker-autocompose .

With docker-autocompose created you can then use it to create .yml files for each of your running containers by utilizing a simple BASH for loop:

for image in $(docker ps --format '{{.Names}}'); do docker run -v /var/run/docker.sock:/var/run/docker.sock red5d/docker-autocompose $image > $image.yml; done

Simple.

Restore

To restore, install and use docker-compose:

sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Next we use another simple for loop to go through each .yml file and recreate its container with docker-compose. The sed piece escapes any $ characters in the .yml files so they will import properly.

for file in *.yml; do sed 's/\$/\$\$/g' -i "$file";
docker-compose -f "$file" up --force-recreate -d; done

You can safely ignore the warnings about orphans.
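Once the loop finishes, a quick docker ps is an easy sanity check that everything came back up:

docker ps --format 'table {{.Names}}\t{{.Status}}'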

That’s it!

Troubleshooting

ERROR: Invalid interpolation format for “environment” option in service “Transmission”: “PS1=$(whoami)@$(hostname):$(pwd)$ “

This is due to .yml files which contain unescaped $ characters.

Escape any $ with another $ using sed

sed 's/\$/\$\$/g' -i <filename>.yml

ERROR: The Compose file ‘./MariaDB.yml’ is invalid because:
MariaDB.user contains an invalid type, it should be a string

My MariaDB docker .yml file had a user: value that was a bare number, which docker-compose interpreted as a number instead of a string. I had to modify that particular .yml file and add quotes around the value of the user entry.
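If you would rather script that fix than edit the file by hand, a sed one-liner along these lines (assuming the offending line is simply user: followed by a bare number) wraps the value in quotes:

#Turn "user: 1000" into user: "1000" so docker-compose treats it as a string
sed -i 's/^\( *user: *\)\([0-9][0-9]*\) *$/\1"\2"/' MariaDB.yml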

Accept multiple SSH RSA keys with ssh-keyscan

I came across a new machine that needed to connect to many SSH hosts via ansible. I had a problem where ansible was prompting me for each host whether I wanted to accept its RSA key. As I had dozens of hosts I didn’t want to type yes for every single one; furthermore, the yes command didn’t appear to work. I needed a way to automatically accept all SSH RSA keys from a list of server names. I know you can disable RSA key checking, but I didn’t want to do that.

I eventually found this site which suggested a small for loop, which did the trick beautifully. I modified it to suit my needs.

This little two-liner takes a file (in my case, my ansible hosts file), runs ssh-keyscan against each host listed in it, and adds the results to the .ssh/known_hosts file. The end result is an automated way to accept many SSH keys.

SERVER_LIST=$(cat /etc/ansible/hosts)
for host in $SERVER_LIST; do ssh-keyscan -H $host >> ~/.ssh/known_hosts; done
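One refinement worth considering: an ansible inventory usually also contains [group] headers, comments, and per-host variables, which ssh-keyscan will choke on (harmlessly, but noisily). Assuming an INI-style inventory, something like this feeds it only the hostnames:

#Strip [group] headers, comments, and blank lines, then keep only the first field of each line
for host in $(grep -vE '^\[|^#|^$' /etc/ansible/hosts | awk '{print $1}'); do ssh-keyscan -H "$host" >> ~/.ssh/known_hosts; done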

Rename files to reflect modified time while preserving extension

I’m undergoing a big project of scanning old family scrapbooks. I have two scanners involved, each with its own naming scheme, dumping files of various types (pdf, jpg) into the same folder. I need an accurate chronological view of when things were scanned (nothing was scanned at the exact same moment.) At this point the only way to get this information is to sort all the files by modified time, but few programs other than a file manager look at files that way. I needed a way to rename all the files to reflect their modified time.

I found a couple different ways to do this, but settled on a bash for loop utilizing the stat, sed, and mv commands.

Challenge 1

Capture the extension of the files.

Using sed:

ls <filename> | sed 's/.*\(\..*\)$/\1/'
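For example, run against a hypothetical scan001.jpg it returns just the extension:

echo "scan001.jpg" | sed 's/.*\(\..*\)$/\1/'   # prints .jpg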

Challenge 2

Obtain the modified time of your file in an acceptable format. I do this using the stat command.

stat -c %y <filename>

%y is the best option here – it leaves no room for ambiguity. You could also choose %Y so the filenames aren’t so large and don’t contain colons (some systems struggle with those.) The downside to this is you only have to-the-second precision. In my case I had a few files that had the same epoch timestamp, which caused problems. More on how to format this can be found here and here.
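To make the difference concrete, here is roughly what the two formats look like for a hypothetical scan001.jpg (your values will obviously differ):

stat -c %y scan001.jpg   # e.g. 2018-10-06 14:23:41.000000000 -0600
stat -c %Y scan001.jpg   # e.g. 1538857421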

Stringing it all together

I wrapped it all up in a bash for loop with appropriate variables. This was my final command, which I ran inside the directory I wanted to modify:

for file in *; do name=$(stat -c %y "$file"); ext=$(echo "$file" | sed 's/.*\(\..*\)$/\1/'); mv -n "$file" "$name$ext"; done

The for loop goes through each file one at a time and assigns it to the $file variable. I then create the name variable, which uses the stat command to obtain a precise date-modified timestamp of the file. The ext variable is derived from the filename but only keeps the extension (using sed.) The last step uses mv -n (no-clobber mode – don’t overwrite anything) to rename the original file to its date-modified timestamp. The result: a directory where each file is named precisely when it was modified – a true chronology of what was scanned irrespective of file extension or which scanner created the file. Success.
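Before running it for real, swapping mv for echo gives a harmless dry run that just prints what each file would be renamed to:

for file in *; do name=$(stat -c %y "$file"); ext=$(echo "$file" | sed 's/.*\(\..*\)$/\1/'); echo "$file -> $name$ext"; done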

Use BASH to append an incremental suffix to directories

In migrating Splunk indexes I came across the need to add an incrementing suffix of _(numbers) to a bunch of directories. For example, renaming directories

test
test1
test2
test3

to

test_100
test1_101
test2_102
test3_103

After a bunch of searching I settled on this approach of using a simple bash loop:

I=100; for F in db_*; do mv "$F" "${F}_${I}"; I=$((I+1)); done

There is an initial variable, I, which is set to 100. The for loop goes through each directory whose name begins with db_ and renames it, appending the current value as a suffix (_###). The last step increments the value of I.

Replace I with whatever you want your suffix to begin with, and change db_* to match whatever you want to rename. Simple, but it works.
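As with any bulk rename, it is worth previewing first; the same loop with echo in place of mv shows what would happen without touching anything:

I=100; for F in db_*; do echo "$F -> ${F}_${I}"; I=$((I+1)); done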