All posts by nicholas

Append users to PowerBroker Open RequireMembershipOf

The title isn’t very descriptive. I recently came across a need to script adding users & groups to the “RequireMembershipOf” directive of PowerBroker Open. PowerBroker is a handy tool that makes joining a Linux machine to a Windows domain easy. It has a lot of configurable options, but the one I was interested in was RequireMembershipOf, which, as you might expect, requires that the person signing into the Linux machine be a member of that list.

The problem with RequireMembershipOf is that, as far as I can tell, it has no append function. It has an add function which, frustratingly, erases everything that was there before and keeps only what you just added. I needed a way to append a member to the existing RequireMembershipOf list. My solution uses bash, sed, and a bit of regex. It boils down to two lines of code:

#take output of show RequireMembershipOf, remove the words multistring & local policy, replace spaces with caret (pbis space representation) and put results into a variable (which automatically puts results onto a single line)

add=$(/opt/pbis/bin/config --show RequireMembershipOf | sed 's/\(multistring\)\|\(local policy\)//g' | sed 's/ /^/g')

#run RequireMembershipOf command with previous output and any added users

sudo /opt/pbis/bin/config RequireMembershipOf "$add" "<USER_OR_GROUP_TO_ADD>"

That did the trick.
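
For repeated use, those two lines wrap neatly into a tiny script. This is just a sketch built from the commands above; the script name and argument handling are my own:

#!/bin/bash
#Hypothetical wrapper: append each argument to the existing RequireMembershipOf list
#Usage: ./pbis-append.sh "DOMAIN\linuxadmins" "DOMAIN\jdoe"

#Grab the current list, strip the multistring / local policy labels, and
#convert spaces to the caret separator pbis expects
current=$(/opt/pbis/bin/config --show RequireMembershipOf | sed 's/\(multistring\)\|\(local policy\)//g' | sed 's/ /^/g')

#Re-apply the current list plus every argument passed to this script
sudo /opt/pbis/bin/config RequireMembershipOf "$current" "$@"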

Change ZFS based NFS SR address in Xenserver

I recently acquired a shiny new set of SSDs to host my VMs. The problem was that they required a new ZFS array, so I needed to figure out a way to migrate my VMs to the new array and then instruct Xenserver to use it instead of the old one.

Fortunately, with a bit of research, I learned this is fairly painless. Thanks to this discussion on the Citrix forums for pointing me in the right direction. To change the server / IP address of an existing NFS storage repository in Xenserver you must do the following:

  • Shut down any VMs using the NFS SRs
  • Copy the NFS SRs (the directories containing the .vhd files) to the new NFS server
  • xe pbd-unplug uuid=<uuid of pbd pointing to the NFS SR>
  • xe pbd-destroy uuid=<uuid of pbd pointing to the NFS SR>
  • xe pbd-create host-uuid=<uuid of Xen Host> sr-uuid=<uuid of the NFS SR> device-config-server=<New NFS server name> device-config-serverpath=<NFS Share Name>
  • xe pbd-plug uuid=<uuid of the pbd created above>
  • Reboot the VMs using NFS SRs
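
If you need to track down the UUIDs these commands ask for, the standard xe list filters should surface them (the SR name is a placeholder):

xe sr-list name-label=<SR name> params=uuid
xe pbd-list sr-uuid=<uuid of the NFS SR> params=uuid,host-uuid
xe host-list params=uuid,name-label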

In my case since my VMs were on an existing ZFS volume with snapshots I wanted to preserve, I used ZFS send and receive to transfer data from my old array to my SSD array. Bonus: I was able to do this while the VMs were still running to ensure minimal downtime. My ZFS copy procedure was as follows:

  • Create recursive snapshot of my VM dataset
    zfs snapshot -r storage/VMs@migrate
  • Start the initial data transfer (this took quite some time to finish)
    zfs send -R storage/VMs@migrate | zfs recv ssd/VMs
  • Do another incremental snapshot and transfer after initial huge transfer is complete (this took much less time to do)
    zfs snapshot -r storage/VMs@migrate2
    zfs send -R -i storage/VMs@migrate storage/VMs@migrate2 | zfs recv ssd/VMs
  • Shutdown all affected VMs and do one more ZFS snapshot & transfer to ensure consistent data:
    zfs snapshot -r storage/VMs@migrate3
    zfs send -R -i storage/VMs@migrate2 storage/VMs@migrate3 | zfs recv ssd/VMs

In the above examples my source dataset was storage/VMs and the destination dataset was ssd/VMs.

Once the data was all transferred to the new location it was time to tell Xenserver about it. I had enough VMs that it was worth my time to write a little script to do it. It’s quick and dirty but it did the job. Behold:

#!/bin/bash
#Author: Nicholas Jeppson
#A simple script to change a xenserver NFS storage repository address to a new location
#Modify NFS_SERVER, NFS_PATH and/or NFS_VERSION to match your environment. 
#Run this script on each xenserver host in your pool. Empty output means the transfer was successful.
#This script takes one argument - the name of the SR to be transferred.

SR_NAME="$1"

NFS_SERVER=10.0.0.1
NFS_PATH=/mnt/ssd/VMs/$SR_NAME
NFS_VERSION=4

#Use sed and awk to grab necessary UUIDs
HOST_UUID=$(xe host-list|egrep -B3 `hostname`$ | grep uuid | awk '{print $5}')
PBD_UUID=$(xe pbd-list|grep -A4 -B4 $SR_NAME | grep -B2 $HOST_UUID |grep -w '^uuid ( RO)' | awk '{print $5}')
SR_UUID=$(xe pbd-list|grep -A4 -B4 $SR_NAME | grep -A2 $HOST_UUID | grep 'sr-uuid' | awk '{print $4}')

#Unplug & destroy old NFS location, create new NFS location
xe pbd-unplug uuid=$PBD_UUID
xe pbd-destroy uuid=$PBD_UUID
NEW_PBD_UUID=$(xe pbd-create host-uuid=$HOST_UUID sr-uuid=$SR_UUID device-config-server=$NFS_SERVER device-config-serverpath=$NFS_PATH device-config-nfsversion=$NFS_VERSION)
xe pbd-plug uuid=$NEW_PBD_UUID

Download the script here (right click / save as)

You can run this script in a simple for loop with something like this:

for SR in <list of SR names separated by a space>; do bash <name of script saved from above> $SR; done

If you named the above script nfs-migrate.sh, and you had three SRs to change (blog1, blog2, blog3) then it would be:

for SR in blog1 blog2 blog3; do bash nfs-migrate.sh $SR; done

After I migrated the data and ran that script, my VMs booted up using the new SSD array. Success.
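
With everything confirmed working, the migration snapshots can be cleaned up. A possible cleanup using the snapshot names from above (double-check before destroying anything):

zfs destroy -r storage/VMs@migrate
zfs destroy -r storage/VMs@migrate2
zfs destroy -r storage/VMs@migrate3
zfs destroy -r ssd/VMs@migrate
zfs destroy -r ssd/VMs@migrate2
zfs destroy -r ssd/VMs@migrate3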

Rename LVM group in CentOS7

I recently made the discovery that all my VMs have the same volume group name – the default given when CentOS is installed. My OCD got the best of me and I set out to change these names to reflect each hostname. The problem is that if you rename the volume group containing the root partition, the system will not boot.

The solution is to run a series of commands to get things updated. Thanks to the CentOS forums for the information. In my case I had already made the mistake of renaming the group, ending up with an unbootable system. This is what you have to do to get it working again:

Boot into a Linux rescue shell (from the installer DVD, for example) and activate the volume groups (if not activated by default)

vgchange -ay

Mount the root and boot volumes (replace VG_NAME with name of your volume group and BOOT_PARTITION with the device name of your boot partition, typically sda1)

mount /dev/VG_NAME/root /mnt
mount /dev/BOOT_PARTITION /mnt/boot

Mount necessary system devices into our chroot:

mount --bind /proc /mnt/proc
mount --bind /dev /mnt/dev
mount --bind /sys /mnt/sys

Chroot into our broken system:

chroot /mnt/ /bin/bash

Modify fstab and grub files to reflect new volume group name:

sed -i /etc/fstab -e 's/OLD_VG_NAME/NEW_VG_NAME/g'
sed -i /etc/default/grub -e 's/OLD_VG_NAME/NEW_VG_NAME/g'
sed -i /boot/grub2/grub.cfg -e 's/OLD_VG_NAME/NEW_VG_NAME/g'

Run dracut to modify boot images:

dracut -f

Remove your recovery boot CD and reboot into your newly fixed VM

exit
reboot

If you want to avoid having to boot into a recovery environment, do the following steps on the machine whose VG you want to rename:

Rename the volume group:

vgrename OLD_VG_NAME NEW_VG_NAME

Modify necessary boot files to reflect the new name:

sed -i /etc/fstab -e 's/OLD_VG_NAME/NEW_VG_NAME/g'
sed -i /etc/default/grub -e 's/OLD_VG_NAME/NEW_VG_NAME/g'
sed -i /boot/grub2/grub.cfg -e 's/OLD_VG_NAME/NEW_VG_NAME/g'

Rebuild boot images:

dracut -f
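
The whole in-place procedure condenses into a few lines. Here is a minimal sketch, assuming the stock "centos" volume group name and a grub2 config at /boot/grub2/grub.cfg; adjust both for your machine:

#!/bin/bash
#Rename the root volume group to match the hostname and update boot files
OLD_VG_NAME=centos
NEW_VG_NAME=$(hostname -s)

vgrename $OLD_VG_NAME $NEW_VG_NAME
sed -i /etc/fstab -e "s/$OLD_VG_NAME/$NEW_VG_NAME/g"
sed -i /etc/default/grub -e "s/$OLD_VG_NAME/$NEW_VG_NAME/g"
sed -i /boot/grub2/grub.cfg -e "s/$OLD_VG_NAME/$NEW_VG_NAME/g"
dracut -f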

Create a RaidZ array with missing drive in FreeNAS

I came across a need to create a ZFS RAID-Z array with a missing drive. This is easy to do with mdadm but not as easy with ZFS. It is possible, though. The trick is to create an image file with dd, then map that image file as a loopback device. Once that’s done you can treat it as if it were a regular hard drive and add it to the array. After the array is created you can take the loopback device offline, remove it from the array, and add an actual HDD later.

Create loopback device

Thanks to this site for the information.

First, use dd to create an image file. Change the seek parameter to whatever size disk you wish to emulate.

dd if=/dev/zero of=temp.img bs=1 count=1 seek=1024G

Next, create the loopback device backed by the image file (md0 in my case). On FreeBSD, which FreeNAS is built on, this is done with mdconfig:

sudo mdconfig -a -t vnode -f temp.img -u 0

List your loopback devices with the following command to verify your new loopback device:

sudo mdconfig -l

Create array using loopback device

You can now partition your loopback device and add it to a pool as if it were a regular hard drive. Give it a partition table first, then create the array. Change the volume name, array name, and device names as necessary for your environment.

sudo gpart create -s gpt md0
sudo gpart add -t freebsd-zfs -l <volume name> md0
sudo zpool create <array name> raidz ada7p1 ada8p1 ada9p1 md0p1

Fail & Remove loopback device

Now that our new array is up and running properly we can fail out the loopback device. Make sure to modify the command to use your array name and loopback device/partition number.

sudo zpool offline <array_name> md0p1
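
With the device offline, the md device and its backing file can be reclaimed (assuming unit 0 from earlier):

sudo mdconfig -d -u 0
rm temp.img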

Import new array into FreeNAS GUI

To get our new array into FreeNAS we must export the array from the command line, then import it from the GUI.

sudo zpool export <array name>

Once the array has been exported, navigate to the FreeNAS GUI and go to Storage / Volumes / Import Volume.

You should now have your new array minus one drive ready to go in FreeNAS. You can now add a physical HDD when it becomes available (in my case, when it returns from RMA.)
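
When the physical drive does show up, it should slot into the hole the loopback device left behind. A hedged example, with ada10 standing in for the new disk:

sudo gpart create -s gpt ada10
sudo gpart add -t freebsd-zfs -l <volume name> ada10
sudo zpool replace <array_name> md0p1 ada10p1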

Install notepadqq on Debian 8 (jessie)

Notepadqq is a version of notepad++ adapted for Linux. I love notepad++ for its powerful features and also because it’s free. Notepadqq has a PPA for easy installation for Ubuntu users, but everyone else must resort to compiling it. Fortunately it’s not too complicated. Thanks to linuxbabe for the guidance.

For Debian 8 users this is what you have to do:

sudo apt-get install qt5-qmake libqt5webkit5 libqt5svg5 coreutils libqt5webkit5-dev libqt5svg5-dev qttools5-dev-tools git
git clone https://github.com/notepadqq/notepadqq.git
cd notepadqq
./configure
make
sudo make install

After a bit of time notepadqq will be compiled and installed. That’s it!

If you are running gnome-shell or a derivative (such as cinnamon) then you can get your newly installed program to show up in the menu by pressing alt+f2, hitting r, and then hitting enter. This causes the shell to reload so it will pick up your newly installed program.

Automatically extract rar files downloaded with transmission

My latest project has been to configure sonarr to work with transmission. The challenge was getting these two pieces of software to properly interface with each other. Sonarr would successfully instruct transmission to download the requested show, but once the download completed it would not import the show into its library. The reason behind this was my torrent tracker – most torrents are downloaded as multi-part rar files. Sonarr has no mechanism for processing rar files so I had to get creative.

The solution was to write a simple script and have transmission execute it after finishing the download. The script uses the find command to look for rar files in the directory transmission created for that particular torrent. If any rar files are found it will extract them into that same directory. This was important because sonarr will only look in the torrent download directory for the completed video file.

After some tweaking I got it to work pretty well for me. Here is the code I used (thanks to this site for the direction.)

#!/bin/bash
#A simple script to extract a rar file inside a directory downloaded by Transmission.
#It uses environment variables passed by the transmission client to find and extract any rar files from a downloaded torrent into the folder they were found in.
#Quote the variables so torrent names containing spaces don't break find
find "$TR_TORRENT_DIR/$TR_TORRENT_NAME" -name "*.rar" -execdir unrar e -o- "{}" \;

Save the above script into a file your transmission client can read and make it executable. Lastly, configure transmission to run this script on torrent completion by modifying your settings.json file (mine was located at /var/lib/transmission/.config/transmission-daemon/settings.json). Be sure to stop your transmission client before making any changes, then modify the following variables:

"script-torrent-done-enabled": true, 
"script-torrent-done-filename": "/path/to/where/you/saved/the/script",

That’s it! Sonarr will now properly import shows that were downloaded via multipart rar torrent.
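
If you want to test the script without waiting for a download to finish, you can hand it the two environment variables transmission would normally set (the path and torrent name here are made up):

TR_TORRENT_DIR="/var/lib/transmission/Downloads" TR_TORRENT_NAME="Some.Show.S01E01.720p" bash /path/to/the/script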

Find top 10 requests returning 404 errors

I had a website where I was curious what the top 10 URLs returning 404s were, along with how many hits those URLs got. This was after a huge site redesign, so I was curious which old links were still being accessed.

Getting a report on this can be accomplished with nothing more than the Linux command line and the log file you’re interested in. It involves combining grep, sed, awk, sort, uniq, and head commands. I enjoyed how well these tools work together so I thought I’d share. Thanks to this site for giving me the inspiration to do this.

This is the command I used to get the information I wanted:

grep '404' _log_file_ | sed 's/, /,/g' | awk '{print $7}' | sort | uniq -c | sort -n -r | head -10

Here is a rundown of each command and why it was used:

  • grep '404' _log_file_ (replace _log_file_ with the filename of your apache, tomcat, or varnish access log.) grep reads a file and returns every line containing what you’re looking for, in this case the number 404 (the HTTP “page not found” error)
  • sed 's/, /,/g' sed edits a stream of text in any way that you specify. The expression s/, /,/g tells sed to look for instances of commas followed by spaces and replace them with just commas (eliminating the space after any comma it sees.) This was necessary in my case because the source IP address field sometimes contains multiple IP addresses, which threw off the column count. It may be optional if your server isn’t sitting behind any type of reverse proxy.
  • awk '{print $7}' awk, like sed, lets you do all sorts of things to text. In this case we’re telling awk to display only the 7th column of information (the requested URL is the 7th column in apache and varnish logs)
  • sort This command (absent of arguments) sorts our results alphabetically, which is necessary for the next command to work properly.
  • uniq -c This command eliminates any duplicates in the results. The -c argument prefixes each unique line with a count of how many times it was found.
  • sort -n -r Sorts the results in descending numerical order. The -n argument sorts numerically so that 2 follows 1 instead of 10; -r reverses the order so the highest count is at the top instead of the default lowest-first.
  • head -10 Outputs the top 10 results. This command is optional if you want to see all the results instead of the top 10; its counterpart tail shows the last results instead.

This was my output – exactly what I was looking for. Perfect.

2186 http://<sitename>/source/quicken/index.ini
2171 http://<sitename>/img/_sig.png
1947 http://<sitename>/img/email/email1.aspx
1133 http://<sitename>/source/quicken/index.ini
830 http://<sitename>/img/_sig1.png
709 https://<sitename>/img/email/email1.aspx
370 http://<sitename>/apple-touch-icon.png
204 http://<sitename>/apple-touch-icon-precomposed.png
193 http://<sitename>/About-/Plan.aspx
191 http://<sitename>/Contact-Us.aspx
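
One caveat: grep '404' matches the string anywhere in the line, so a URL or byte count containing 404 would sneak into the results. If your log follows the common apache format (status code in the 9th field), a stricter variant would be:

sed 's/, /,/g' _log_file_ | awk '$9 == 404 {print $7}' | sort | uniq -c | sort -n -r | head -10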

Website load testing with wget and siege

In an effort to see how well my varnish configuration stood up to real world tests I came across a really cool piece of software: siege. You can use siege to benchmark / load test your site by having it hit your site repeatedly in a configurable fashion.

I played around with the siege configuration until I found one I liked. It involves using wget to spider your target site, varnishncsa and tee to capture the access log (if you use varnish), awk to parse that log into a usable URL list, and then passing that list to siege to test the results.

varnishncsa

For my purposes I wanted to test how well varnish performed. Varnish is a caching server that sits in front of your webserver. By default it doesn’t log anything – this is where varnishncsa comes in. Run the following command to monitor the varnish log while simultaneously outputting the results to a file:

varnishncsa | tee varnish.log

If you don’t use varnish and rather serve pages up directly through nginx, apache, or something else, then you can skip this step.

wget

The next step is to spider your site to get a list of URLs to send to siege; wget fits in nicely for this task. This is the configuration I used (replace aoeu1234.com with the domain of your web server)

wget -r -l4 --spider -D aoeu1234.com http://www.aoeu1234.com

-r tells wget to be recursive (follow links)
-l is the number of levels you want to go down when following links
--spider tells wget to not actually download anything (no need to save the results, we just want to hit the site to generate log entries)
-D specifies a comma separated list of domains you want wget to crawl (useful if you don’t want wget to follow links outside your site)

awk

Once the spidering is finished, we want to use awk to parse the resulting server logfile to extract the URLs that were accessed and pipe the results into a file named crawl.txt.

awk '{ print $7 }' varnish.log | sort | uniq > crawl.txt

varnish.log is what I named the output from the varnishncsa command above; if you use apache/nginx, then you would use your respective access log file instead.

siege

This is where the fun comes in. You can now use siege to stress test your website to see how well it does.

siege -c 550 -i -t 3M -f crawl.txt -d 10

There are many siege options so you’ll definitely want to read up on the man page. I picked the following based on this site:

-c concurrent connections. I had to tweak the siege configuration to allow a number that high (see the siege configuration section below).
-i tells siege to access the URLs in a random fashion
-t specifies how long to run siege for in seconds, minutes, or hours
-f specifies a list of URLs for siege to use
-d specifies the maximum delay between user requests (it will pick a random number between 0 and this specified number to make requests more random)
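
Tying the steps together, the whole crawl-and-test workflow could look something like this (a sketch reusing the file names and options from above; if you use varnish, run varnishncsa | tee varnish.log in another terminal first):

#!/bin/bash
#Spider the site to populate the access log, build a URL list, then run siege
SITE=www.aoeu1234.com
LOG=varnish.log

wget -r -l4 --spider -D aoeu1234.com http://$SITE
awk '{ print $7 }' $LOG | sort | uniq > crawl.txt
siege -c 550 -i -t 3M -f crawl.txt -d 10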

siege configuration

I had to tweak siege a little bit to allow more than 256 connections at a time. First, run

siege.config

to generate an initial siege configuration. Then modify ~/.siege/siege.conf to increase the limit of concurrent connections:

sed -i ~/.siege/siege.conf -e 's/limit = 256/limit = 1000/g'

Note: the config says the following:
DO NOT INCREASE THIS NUMBER UNLESS YOU CONFIGURED APACHE TO HANDLE MORE THAN 256 SIMULTANEOUS REQUESTS

This was fine in my case because it’s hitting varnish, not apache, so we can really ramp up the simultaneous user count.

sysctl configuration

When playing with siege I ran into a few different error messages. Siege is a complicated enough program that it requires you to tweak system settings for it to work fully. I followed some advice on this site to tweak my sysctl configuration to make it work better. Your mileage may vary.

### IMPROVE SYSTEM MEMORY MANAGEMENT ###

# Increase size of file handles and inode cache
fs.file-max = 2097152

# Do less swapping
vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2

### GENERAL NETWORK SECURITY OPTIONS ###

# Number of times to retry SYNACKs for a passive TCP connection.
net.ipv4.tcp_synack_retries = 2

# Allowed local port range
net.ipv4.ip_local_port_range = 2000 65535

# Protect against TCP time-wait assassination (RFC 1337)
net.ipv4.tcp_rfc1337 = 1

# Decrease the default value for tcp_fin_timeout
net.ipv4.tcp_fin_timeout = 15

# Decrease the default timeout values for TCP keepalive
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15

### TUNING NETWORK PERFORMANCE ###

# Default Socket Receive Buffer
net.core.rmem_default = 31457280

# Maximum Socket Receive Buffer
net.core.rmem_max = 12582912

# Default Socket Send Buffer
net.core.wmem_default = 31457280

# Maximum Socket Send Buffer
net.core.wmem_max = 12582912

# Increase number of incoming connections
net.core.somaxconn = 4096

# Increase number of incoming connections backlog
net.core.netdev_max_backlog = 65536

# Increase the maximum amount of option memory buffers
net.core.optmem_max = 25165824

# Increase the maximum total buffer-space allocatable
# This is measured in units of pages (4096 bytes)
net.ipv4.tcp_mem = 65536 131072 262144
net.ipv4.udp_mem = 65536 131072 262144

# Increase the read-buffer space allocatable
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.udp_rmem_min = 16384

# Increase the write-buffer-space allocatable
net.ipv4.tcp_wmem = 8192 65536 16777216
net.ipv4.udp_wmem_min = 16384

# Increase the tcp-time-wait buckets pool size to prevent simple DOS attacks
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1

Compile 32bit WINE in 64bit CentOS 7

I was dismayed to find out that you can’t run 32-bit Windows programs on 64-bit CentOS 7, because the wine it comes with will only execute 64-bit Windows programs. In order to get around this you must compile the 32-bit version of WINE. The easiest way to do this (in my opinion) is to download the 32-bit version of CentOS 7 and throw it on a VM, build wine there, then copy the RPMs over and install them on your 64-bit host.

I was able to accomplish this thanks to the wonderful guide on the Scientific Linux forum. In my case I used a virtualbox VM of 32-bit CentOS 7 (you can download the ISO here.) You could also use Docker or even a chroot, but virtualbox was the easiest for me to set up.

In the 32bit Centos:

Install necessary packages

Basics

Make sure your system is up to date before you begin your compilation journey.

sudo yum -y update

Install the EPEL repository

sudo yum -y install wget
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum -y localinstall epel-release-latest-7.noarch.rpm

Install development packages

sudo yum -y groupinstall "Development Tools" 
sudo yum -y install glibc-devel.i686 dbus-devel.i686 freetype-devel.i686 pulseaudio-libs-devel.i686 libX11-devel.i686 mesa-libGLU-devel.i686 libICE-devel.i686 libXext-devel.i686 libXcursor-devel.i686 libXi-devel.i686 libXxf86vm-devel.i686 libXrender-devel.i686 libXinerama-devel.i686 libXcomposite-devel.i686 libXrandr-devel.i686 mesa-libGL-devel.i686 mesa-libOSMesa-devel.i686 libxml2-devel.i686 libxslt-devel.i686 zlib-devel.i686 gnutls-devel.i686 ncurses-devel.i686 sane-backends-devel.i686 libv4l-devel.i686 libgphoto2-devel.i686 libexif-devel.i686 lcms2-devel.i686 gettext-devel.i686 isdn4k-utils-devel.i686 cups-devel.i686 fontconfig-devel.i686 gsm-devel.i686 libjpeg-turbo-devel.i686 pkgconfig.i686 libtiff-devel.i686 unixODBC.i686 openldap-devel.i686 alsa-lib-devel.i686 audiofile-devel.i686 freeglut-devel.i686 giflib-devel.i686 gstreamer-devel.i686 gstreamer-plugins-base-devel.i686 libXmu-devel.i686 libXxf86dga-devel.i686 libieee1284-devel.i686 libpng-devel.i686 librsvg2-devel.i686 libstdc++-devel.i686 libusb-devel.i686 unixODBC-devel.i686 qt-devel.i686 cmake desktop-file-utils fontforge libpcap-devel fontpackages-devel ImageMagick-devel icoutils

Prepare the build environment

Create working directory

mkdir wine && cd wine

Download gcomes’ rpmbuild script

wget 'https://www.centos.org/forums/download/file.php?id=405' -O './rpmrebuild.gz' -c
gunzip ./rpmrebuild.gz ; chmod a+x rpmrebuild

Build chrpath

wget http://vault.centos.org/7.0.1406/os/Source/SPackages/chrpath-0.13-14.el7.src.rpm
./rpmrebuild chrpath-0.13-14.el7.src.rpm

Build openal-soft

wget http://dl.fedoraproject.org/pub/epel/7/SRPMS/o/openal-soft-1.16.0-3.el7.src.rpm
./rpmrebuild -e openal-soft-1.16.0-3.el7.src.rpm

Comment out BuildRequires: portaudio-devel, then save changes (Esc, then ZZ in vi)

Save and install the openal-soft 32-bit rpms (do not skip this step; rpmrebuild erases and restarts each time it is run):

cp rpmbuild/RPMS/i686/openal-soft{,-devel}-1.16.0-3.el7.centos.i686.rpm .
#install them so the wine build can find its openal build requirement
sudo yum -y localinstall openal-soft{,-devel}-1.16.0-3.el7.centos.i686.rpm

Build nss-mdns

wget http://dl.fedoraproject.org/pub/epel/7/SRPMS/n/nss-mdns-0.10-12.el7.src.rpm
./rpmrebuild nss-mdns-0.10-12.el7.src.rpm
cp rpmbuild/RPMS/i686/nss-mdns-0.10-12.el7.centos.i686.rpm .

Build WINE

With our prerequisites installed we now need to compile 32bit WINE.

wget https://dl.fedoraproject.org/pub/epel/7/SRPMS/w/wine-1.8.4-1.el7.src.rpm
./rpmrebuild -e wine-1.8.4-1.el7.src.rpm
#ZZ to exit, no changes required 

Copy RPMs

After the lengthy build process completes be sure to copy the RPMs that were generated. These are the RPMs you will need to copy over to your 64bit Centos 7 for installation.

cp rpmbuild/RPMS/*/* .

In the 64bit CentOS

Install the resulting RPMs by copying them to your 64bit system and using yum localinstall

sudo yum -y localinstall *.rpm

Install winetricks (optional)

wget  https://raw.githubusercontent.com/Winetricks/winetricks/master/src/winetricks
chmod +x winetricks
sudo mv winetricks /usr/local/bin
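
As a quick sanity check, confirm the build reports a version and then point it at a 32-bit binary (the executable path is a placeholder):

wine --version
wine /path/to/some/32bit-program.exe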

Success

Now that you’ve made it all the way through the tutorial I will provide a link for the lazy who don’t want to compile their own wine and instead just want to download the RPMs (assuming they want to trust my build). Download the RPMs here.

Script to change WordPress URL

I wrote up a little script to run when you migrate a wordpress installation from one host to another (hostname change). Once this script is run you can access the site via the hostname of the server it’s running on, and then change the URL to whatever you like. This comes in handy when you want to migrate one internal host to another, then specify an external hostname once things are looking how you like them.

Change SQL_COMMAND to reflect the name of the wordpress database on the destination server. Thanks to this site for the guidance in writing the script.

#!/bin/bash

#A simple script to update the wordpress database to reflect a change in hostname
#Run this after changing the hostname / IP of a wordpress server

#Prompt for mysql root password
read -s -p "Enter mysql root password: " SQL_PASSWORD

SQL_COMMAND="mysql -u root -p$SQL_PASSWORD wordpress -e"

#Determine what the old URL was and save to variable
OLD_URL=$(mysql -u root -p$SQL_PASSWORD wordpress -e 'select option_value from wp_options where option_id = 1;' | grep http)
#Get current hostname
HOST=$(hostname)

#SQL statements to update database to new hostname
$SQL_COMMAND "UPDATE wp_options SET option_value = replace(option_value, '$OLD_URL', 'http://$HOST') WHERE option_name = 'home' OR option_name = 'siteurl';"
$SQL_COMMAND "UPDATE wp_posts SET guid = replace(guid, '$OLD_URL','http://$HOST');"
$SQL_COMMAND "UPDATE wp_posts SET post_content = replace(post_content, '$OLD_URL', 'http://$HOST');"
$SQL_COMMAND "UPDATE wp_postmeta SET meta_value = replace(meta_value,'$OLD_URL','http://$HOST');"
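
To confirm the change took, you can query the same options the script updates:

mysql -u root -p wordpress -e "SELECT option_name, option_value FROM wp_options WHERE option_name IN ('home','siteurl');"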