Tag Archives: linux

Install OTRW2 on DD-WRT

Optware done the right way 2 is a set of scripts designed to enhance the functionality of your DD-WRT router. I’ve recently installed it on my parents’ router so I can more or less have a full Linux box running in their house (it makes my life easier.)

The tutorial for installing it is pretty comprehensive. These are my notes on the experience.

  • The USB device needs to be formatted ext2 (FAT won’t do.) This is because the script makes a bunch of symlinks on that device.
  • Mount the ext2-formatted drive as /opt (Services / USB / Disk Mount Point.)
  • Reboot the router if you made any changes to mountpoints.
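
Before SSHing in, it’s worth a quick sanity check that the drive actually mounted at /opt (not part of the official tutorial, just a habit of mine):

mount | grep /opt
df -h /opt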

SSH into the router and run the following:

wget -O /tmp/prep_optware http://dd-ware.googlecode.com/svn/otrw2/prep_optware
sh /tmp/prep_optware

Installation takes some time. Wait one minute after the installation complete message appears, then reboot the router.

Once rebooted, use the service command to see which services are available. Green means the service is enabled.

service mypage on

This enables the mypage service, but it won’t take effect until you reboot. Reboot the router after enabling or disabling any services.

A small overview of the services, as taken from the dd-wrt forums:

  • rotate_log – moves the log file to /opt/var/log/
  • pixelserv – ad blocking on your network
  • unfsd – NFS server
  • zabbix – Zabbix client (redundant if your build, such as Kong’s, already includes it)
  • pound – reverse proxy, useful if you host multiple sites
  • sshhack – blocks IPs hammering SSH with incorrect credentials
  • stophack – blocks IPs that are trying to hack the server (only in combination with pound or vlighttpd)
  • stophammer – blocks IPs that are hammering ports
  • nzbget & transmission – for downloading
  • fixtables – rearranges the firewall entries

A full explanation of how OTRW2 enhances your router and what each package does is located here.

All in all, it’s pretty straightforward once you get the right filesystem on your media and have it mounted on the right mountpoint. OTRW2 makes your router a whole lot more useful and gives you the ability to install a wide range of packages on it.


Slow Linux VM performance in VMware vSphere

Recently I’ve been scratching my head over a particular performance issue with Linux VMs hosted on VMware vSphere. Everything seemed to move at a glacial pace.

vmstat gave a few clues as to what was happening, although depending on what I read it still wasn’t clear:

[vmstat output screenshot]

It became apparent that I was suffering from some kind of queuing problem, though I wasn’t sure if it was CPU or disk related. I came across this post, which has a lot of good performance tuning guides. This tip caught my eye:


7. Set your disk scheduling algorithm to ‘noop’

The Linux kernel has different ways to schedule disk I/O, using schedulers like deadline, cfq, and noop. The ‘noop’ — No Op — scheduler does nothing to optimize disk I/O. So why is this a good thing? Because ESX is also doing I/O optimization and queuing! It’s better for a guest OS to just hand over all the I/O requests to the hypervisor to sort out than to try optimizing them itself and potentially defeating the more global optimizations.

You can change the kernel’s disk scheduler at boot time by appending:

elevator=noop

to the kernel parameters in /etc/grub.conf.


Sure enough, I modified /boot/grub/grub.conf on my CentOS 6 boxes, appended elevator=noop to the kernel line, and rebooted. It helped a lot! Performance was no longer pitiful. I’m not nearly as familiar with VMware as I am with XenServer, so this was a good hint.
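
For reference, the kernel line in grub.conf ends up looking something like this (kernel version and root device are from my setup; yours will differ):

kernel /vmlinuz-2.6.32-504.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root rhgb quiet elevator=noop

You can also check which scheduler is active at runtime; the one in brackets is in use. On a stock CentOS 6 kernel the output looks something like:

cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]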

Fix Splunk lockout after exceeded quota

Recently I came across a situation with my home install of Splunk (free license) where the 500MB quota was exceeded three days in a row. I hadn’t checked Splunk for a few days so I was completely blindsided by it. The consequence of going over quota three days in a row? Losing the ability to do any searches in Splunk, which is a real downer.

The easiest, although least convenient, way to fix being locked out is to wait it out. If you go 30 days in a row without violating the license, Splunk will unlock itself. Splunk will still receive and index events during that time. The inability to search makes it really difficult to track down what the problem is, though, and I wasn’t happy waiting for 30 days before getting Splunk back.

Poking around on the Splunk forums, I discovered that there is a way to get Splunk back – perform a fresh install and then migrate your database and settings over to it. This involves backing up a few things, then copying them over the fresh install’s default folders:

  • $SPLUNK_HOME/var/lib/splunk/defaultdb   #Default Splunk index, where all my data is held. If you have other indexes in here you’ll want to copy them too.
  • $SPLUNK_HOME/etc  #all your configuration files

Simply back up the above folders, install Splunk on a new machine, launch Splunk first so it will generate all the default files, then copy the files over to the new instance.

I went a step further and planned for the future. I wrote a quick and dirty script that does all of this for you, even on the same machine – no need to copy anything to another box. The script assumes you’re running a Red Hat derivative and have the correct Splunk install file in a predictable location. Update the locations of the Splunk directories and install file as needed and run as root.

#!/bin/bash

#Backup important directories (cp -al makes hard-linked copies, so the backup is fast and uses almost no extra space)
mkdir /opt/splunkbackup/
cp -al /opt/splunk/etc /opt/splunkbackup/
cp -al /opt/splunk/var/lib/splunk/defaultdb /opt/splunkbackup/

#Nuke splunk
/opt/splunk/bin/splunk stop
rm -rf /opt/splunk

#Reload from fresh start
rpm -iv --replacepkgs /home/nicholas/splunk-6.2.2-255606-linux-2.6-x86_64.rpm
/opt/splunk/bin/splunk start --accept-license

#Restore configuration files and indexes
/opt/splunk/bin/splunk stop
rm -rf /opt/splunk/etc
cp -al /opt/splunkbackup/etc /opt/splunk/
rm -rf /opt/splunk/var/lib/splunk/defaultdb
cp -al /opt/splunkbackup/defaultdb /opt/splunk/var/lib/splunk/
chown splunk:splunk -R /opt/splunk/
/opt/splunk/bin/splunk start

#Remove splunk backup
rm -rf /opt/splunkbackup

This will restore your searches, settings, and data. It won’t restore audit and other internal Splunk information, however. This script worked marvelously in getting my Splunk back.

Join a CentOS machine to an AD domain

I ran into enough snags when attempting to join a CentOS 6.6 machine to a Microsoft domain that I thought I would document them here. Hopefully it is of use to someone. The majority of the experience is thanks to this site.

Update 03/16/2015: I came across this site which makes things a little easier when it comes to initial configuration – messing with other config files is no longer necessary. The authconfig command to do this is below:

authconfig --disablecache --enablelocauthorize --enablewinbind \
  --enablewinbindusedefaultdomain --enablewinbindauth --smbsecurity=ads \
  --enablekrb5 --enablekrb5kdcdns --enablekrb5realmdns --enablemkhomedir \
  --enablepamaccess --updateall --smbidmapuid=100000-1000000 \
  --smbidmapgid=100000-1000000 --disablewinbindoffline --winbindjoin=Admin_account \
  --winbindtemplateshell=/bin/bash --smbworkgroup=DOMAIN --smbrealm=FQDN --krb5realm=FQDN

Replace DOMAIN with short domain name, FQDN with your fully qualified domain name, and Admin_account with an account with domain admin privileges, then skip to the Reboot section, as it covers everything before that.

Install the necessary packages

yum -y install authconfig krb5-workstation pam_krb5 samba-common oddjob-mkhomedir

Configure kerberos auth with authconfig

There is a curses-based GUI you can use for this, but I opted for the command line.

authconfig --disablecache --enablewinbind --enablewinbindauth --smbsecurity=ads \
  --smbworkgroup=DOMAIN --smbrealm=DOMAIN.COM.AU --enablewinbindusedefaultdomain \
  --winbindtemplatehomedir=/home/DOMAIN/%U --winbindtemplateshell=/bin/bash \
  --enablekrb5 --krb5realm=DOMAIN.COM.AU --enablekrb5kdcdns --enablekrb5realmdns \
  --enablelocauthorize --enablemkhomedir --enablepamaccess --updateall

Add your domain to kerberos configuration

Kerberos information is stored in /etc/krb5.conf. Append your domain to the [realms] and [domain_realm] sections, like below:

vi /etc/krb5.conf

[realms]
EXAMPLE.COM = {
  kdc = kerberos.example.com
  admin_server = kerberos.example.com
}

DOMAIN.COM.AU = {
  admin_server = DOMAIN.COM.AU
  kdc = DC1.DOMAIN.COM.AU
  kdc = DC2.DOMAIN.COM.AU
}

[domain_realm]
.example.com = EXAMPLE.COM
example.com = EXAMPLE.COM
domain.com.au = DOMAIN.COM.AU
.domain.com.au = DOMAIN.COM.AU

Test your configuration

Use the kinit command with a valid AD user to ensure a good connection with the domain controllers:

kinit <AD user account>

It should return you to the prompt with no error messages. You can further verify it worked by issuing the klist command to show open Kerberos tickets:

klist

Ticket cache: FILE:/tmp/krb5cc_0
Default principal: someaduser@DOMAIN.COM.AU
Valid starting       Expires              Service principal
02/27/14 12:23:21    02/27/14 22:23:21    krbtgt/DOMAIN.COM.AU@DOMAIN.COM.AU
        renew until 03/06/14 12:23:19

When I first tried the kinit command it returned an error:

kinit: KDC reply did not match expectations while getting initial credentials

After scratching my head for a while I came across this site, which explains that krb5.conf is case sensitive – realm names must be all upper case. Fixing my krb5.conf to use all caps for my domain resolved the issue.

Join the domain

net ads join domain.com.au -U someadadmin

When I tried to join the domain I received this lovely message:

Our netbios name can be at most 15 chars long, "EXAMPLEMACHINE01" is 16 chars long
Invalid configuration. Exiting....
Failed to join domain: The format of the specified computer name is invalid.

Thanks to the Ubuntu forums I learned I needed to edit my Samba configuration to give the machine an abbreviated NetBIOS name:

vi /etc/samba/smb.conf

Uncomment the “netbios name =” line and fill it in with a shorter (max 15 characters) NetBIOS name:

netbios name = EXAMPLE01

You can test that the join was successful with this command:

net ads testjoin

Configure home directories

The authconfig command above included a switch for home directories. Make sure you create a matching directory and set appropriate permissions for it.

mkdir /home/DOMAIN
setfacl -m group:"Domain Users":rwx /home/DOMAIN  #the article calls for this; the command didn't work for me, but home directories still appeared to be created properly

Reboot

The best way to really test everything is to reboot the machine. When it comes back up, log in with Active Directory credentials. It should work!

Account lockout issues

I ran into a very frustrating problem where everything worked dandy if you got the password correct on the first try, but a single mistyped password locked out the Active Directory account. Each login, even a successful one, had this in the logs:

winbind pam_unix(sshd:auth): authentication failure

This problem took a few days to solve. Ultimately it involved modifying two files:

vi /etc/pam.d/system-auth
vi /etc/pam.d/password-auth

As far as I can tell, the problem was a combination of pam_unix being listed first (it always failed for AD logins) and having both winbind and kerberos enabled. The fix was to move each mention of pam_unix below any mention of pam_winbind. The other fix was to comment out all mentions of pam_krb5 completely:

#auth        sufficient    pam_krb5.so use_first_pass
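
For illustration, here is roughly what the auth section of /etc/pam.d/system-auth looked like on my system after the changes (a sketch based on a stock CentOS 6 file; your modules and arguments may differ):

auth        required      pam_env.so
auth        sufficient    pam_winbind.so use_first_pass
auth        sufficient    pam_unix.so nullok try_first_pass
#auth        sufficient    pam_krb5.so use_first_pass
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        required      pam_deny.so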

Restrict logins

The current configuration allows any domain account to log into the machine. You will probably want to restrict who can log in to the machine to certain security groups. The problem: many Active Directory security groups contain spaces in their name, which Linux doesn’t like.

How do you add a security group that contains a space? Escape characters don’t seem to work in the pam config files.  I found out thanks to this site that it is easier to just not use spaces at all. Get the SID of the group instead.

Use wbinfo -n to query the group in question, using a backslash to escape the space. It will return the SID we desire:

wbinfo -n Domain\ Users
S-1-5-21-464601995-1902203606-794563710-513 Domain Group (2)

Next, modify /etc/pam.d/password-auth and add the require_membership_of argument to pam_winbind.so:

auth        sufficient    pam_winbind.so require_membership_of=S-1-5-21-464601995-1902203606-794563710-513

That’s it! Logins are now restricted to the security group listed.

Configure sudo access

Sudo uses a different list for authorization which, amusingly, handles escaped spaces just fine. Simply add the Active Directory group in sudoers as you would a local one, i.e. a % followed by the group name, escaping spaces with a backslash:

%Domain\ Users ALL=(ALL) ALL

Rejoice

You’ve just gone through a long and painful battle. Hopefully this article helped you to achieve victory.

Configure iSCSI initiator in CentOS

Below are my notes for configuring a CentOS box to connect to an iSCSI target. This assumes you have already configured an iSCSI target on another machine / NAS. Much of this information comes thanks to this very helpful website.

Install the software package

yum -y install iscsi-initiator-utils

Configure the iqn name for the initiator

vi /etc/iscsi/initiatorname.iscsi

InitiatorName=iqn.2012-10.net.cpd:san.initiator01
InitiatorAlias=initiator01

Edit the iSCSI initiator configuration

vi /etc/iscsi/iscsid.conf

node.startup = automatic
node.session.auth.authmethod = CHAP
node.session.auth.username = initiator_user
node.session.auth.password = initiator_pass
#The next two lines are for mutual CHAP authentication
node.session.auth.username_in = target_user
node.session.auth.password_in = target_password

Start iSCSI initiator daemon

/etc/init.d/iscsid start
chkconfig --level 235 iscsid on

Discover targets on the iSCSI server (the --portal argument is the iSCSI server’s IP address):

iscsiadm --mode discovery -t sendtargets --portal 172.16.201.200
172.16.201.200:3260,1 iqn.2012-10.net.cpd:san.target01

Log in to the iSCSI target:

iscsiadm --mode node --targetname iqn.2012-10.net.cpd:san.target01 --portal 172.16.201.200 --login
Logging in to [iface: default, target: iqn.2012-10.net.cpd:san.target01, portal: 172.16.201.200,3260] (multiple)
Login to [iface: default, target: iqn.2012-10.net.cpd:san.target01, portal: 172.16.201.200,3260] successful.
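
For completeness, the same command with --logout in place of --login disconnects the session again:

iscsiadm --mode node --targetname iqn.2012-10.net.cpd:san.target01 --portal 172.16.201.200 --logout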

Verify configuration

This command shows what is put into the iSCSI targets database (the files located in /var/lib/iscsi/):

cat /var/lib/iscsi/send_targets/172.16.201.200,3260/st_config
discovery.startup = manual
discovery.type = sendtargets
discovery.sendtargets.address = 172.16.201.200
discovery.sendtargets.port = 3260
discovery.sendtargets.auth.authmethod = None
discovery.sendtargets.timeo.login_timeout = 15
discovery.sendtargets.use_discoveryd = No
discovery.sendtargets.discoveryd_poll_inval = 30
discovery.sendtargets.reopen_max = 5
discovery.sendtargets.timeo.auth_timeout = 45
discovery.sendtargets.timeo.active_timeout = 30
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768

Verify session is established

iscsiadm --mode session --op show
tcp: [2] 172.16.201.200:3260,1 iqn.2012-10.net.cpd:san.target01

Create LVM volume and mount

Add our iSCSI disk to a new LVM physical volume, volume group, and logical volume

fdisk -l
Disk /dev/sdb: 17.2 GB, 17171480576 bytes
64 heads, 32 sectors/track, 16376 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn't contain a valid partition table

pvcreate /dev/sdb
vgcreate iSCSI /dev/sdb
lvcreate -n volume_name -l 100%FREE iSCSI
mkfs.ext4 /dev/iSCSI/volume_name

Add the logical volume to fstab

Make sure to use the mount option _netdev.  Without this option, Linux will try to mount this device before it loads network support.

vi /etc/fstab
/dev/mapper/iSCSI-volume_name    /mnt   ext4   _netdev  0 0
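
With the entry in place you can verify it mounts cleanly without rebooting:

mount -a
df -h /mnt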

Success.

Fix Apache Permission Denied errors

The other day I ran the rsync command to migrate files from an old webserver to a new one. What I didn’t notice right away was that the rsync changed the permissions of the folder I was copying into.

The problem presented itself with a very lovely 403 forbidden error message when trying to access any website that server hosted. Checking the logs (/var/log/apache2/error.log on my Debian system) revealed this curious message:

[error] [client 192.168.22.22] (13)Permission denied: access to / denied

This made it look like Apache was denying access for some reason. I verified the Apache config and confirmed it shouldn’t be denying anything. After some head scratching I came across this site, which explained that Apache throws that error when it encounters a filesystem permission denied error.

I was confused because /var/www, where the websites live, had the appropriate permissions. After some digging I found that the culprit in my case was not /var/www, but its parent directory /var. For some reason the rsync had stripped the execute permissions from /var (execute is necessary for traversing a folder.) A simple

chmod o+rx /var/

resolved my problem. Next time you get a 403 it could be an underlying filesystem issue and not Apache at all.
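
A quick way to spot this kind of problem is to check the permissions of every directory along the path at once. The namei utility (part of util-linux) does exactly that; adjust the path to wherever your sites live:

namei -m /var/www

It prints the mode of each component (/, /var, /var/www), so a directory missing its execute bit stands out immediately.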

Using screen to run interactive programs at startup

Oftentimes I encounter programs that I want to run on startup but that weren’t necessarily designed to be run automatically. Sometimes such a program displays interactive information that you’ll want to see later, but you still want it to run on startup.

The solution to this particular problem is to use screen in combination with su and bash. In my situation, I want to run the HDSurfer plugin on bootup as a different user. The solution I came up with is as follows (thanks to superuser.com and stackoverflow.com for the guidance I needed to set this up.)

Install screen

Screen is like having a separate X window session to keep a program running, except it is for console programs. You can attach to and detach from this screen whenever you’d like without worrying about the program terminating.

sudo apt-get install screen

Create a script to run your program with all required arguments

In my case I needed to execute the command “python /usr/bin/HDSurferWave/hdsurferwave.py start” as a different user in a screen session (so it wouldn’t terminate when the terminal session did.) To do this,

  • invoke screen with the -dm flag (to start the session in detached mode)
  • follow it with bash -c to invoke bash
  • include your desired command after that

My one line script looks like this:

screen -dm bash -c "python /usr/bin/HDSurferWave/hdsurferwave.py start"
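
Later, you can list running screen sessions and reattach to watch the program’s output (detach again with Ctrl-a d):

screen -ls
screen -r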

Run your script

I use the su command with the -c argument to change the user that runs the script, since the startup script launches things as root by default (on pre-systemd systems, anyway.) The -s argument specifies the shell to use, and the last argument is the user to run as. My launch command is:

su -c "/usr/bin/HDSurferWave/start.sh" -s /bin/sh nicholas

Configure the script to run on startup

Edit /etc/rc.local and add your launch command from above, then mark the file as executable by running chmod +x /etc/rc.local. Note: this will not work on systems using systemd.
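
Putting it all together, my /etc/rc.local ends up looking something like this (the paths and username are from my setup; substitute your own):

#!/bin/sh
# Launch HDSurfer in a detached screen session as user nicholas
su -c "/usr/bin/HDSurferWave/start.sh" -s /bin/sh nicholas
exit 0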


Fix sudo being slow after changing hostname

Recently I changed the hostname of one of my machines. Ever since, there was a five second pause between when I entered a command and when it actually executed. I was perplexed about this until I came across this post explaining that the /etc/hosts file was probably still pointing to the old hostname. It turns out it was!

So, to recap, if you want to change the hostname of your machine you have to make sure you do these three things:

  • issue the hostname command to change the hostname while running
  • update /etc/hostname with your new hostname
  • update /etc/hosts to reflect your new hostname after 127.0.0.1, getting rid of the old hostname (see the example below.)
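
For example, after renaming a machine from oldhost to newhost (placeholder names), the relevant line of /etc/hosts would look like:

127.0.0.1   localhost newhost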

Update 2/24/2015: If you happen to have a Splunk forwarder installed on the machine, make sure you update its config to reflect the new hostname. Thanks to Splunk Answers for the information. To do this, update $SPLUNK_HOME/etc/system/local/server.conf and change the serverName= field to your new hostname.


Redshift – a better flux program for Linux

F.lux is a wonderful tool for reducing eye strain. People who stare at computer screens all day (like myself) can experience quite a bit of eye strain due to the harsh light screens emit. One solution is to wear yellow-tinted gamer goggles. I chose the cheaper route: installing software to adjust the color temperature of the monitor. F.lux does this beautifully... for Windows, at least.

Linux is a different story. F.lux’s Linux GUI is pretty flaky and appears to only work for one screen. Enter Redshift, an updated fork of the Linux port of F.lux, which properly supports dual monitors. Unfortunately, it is harder to configure than F.lux: it is a command-line-only tool (with a GUI indicator component) and it requires a manually created configuration file.

On my Linux Mint system (Ubuntu based) I needed to install the following:

sudo apt-get install redshift gtk-redshift

I had a hard time getting day/night changes to work. Redshift allows you to specify several different location providers, but none of them appeared to work for me. Then I realized that I like the softer colors of Redshift all the time, so I simply set the same temperature for day and night. Now it doesn’t matter what the latitude / longitude is.

I found it odd that the settings for f.lux and Redshift don’t appear to be equivalent. I tweaked my config a little bit to most closely match my Windows f.lux setup. Below is my config file, placed in ~/.config/redshift.conf.

; Global settings for redshift
[redshift]
; Set the day and night screen temperatures

temp-day=4500
temp-night=4500

; Enable/Disable a smooth transition between day and night
; 0 will cause a direct change from day to night screen temperature.
; 1 will gradually increase or decrease the screen temperature
transition=1

; Set the screen brightness. Default is 1.0
;brightness=0.9
; It is also possible to use different settings for day and night since version 1.8.
;brightness-day=0.7
;brightness-night=0.4
; Set the screen gamma (for all colors, or each color channel individually)
gamma=0.8
;gamma=0.8:0.7:0.8

; Set the location-provider: 'geoclue', 'gnome-clock', 'manual'
; type 'redshift -l list' to see possible values
; The location provider settings are in a different section.
location-provider=manual

; Set the adjustment-method: 'randr', 'vidmode'
; type 'redshift -m list' to see all possible values
; 'randr' is the preferred method, 'vidmode' is an older API
; but works in some cases when 'randr' does not.
; The adjustment method settings are in a different section.
adjustment-method=randr

; Configuration of the location-provider:
; type 'redshift -l PROVIDER:help' to see the settings
; ex: 'redshift -l manual:help'
[manual]
lat=40
lon=110

; Configuration of the adjustment-method
; type 'redshift -m METHOD:help' to see the settings
; ex: 'redshift -m randr:help'
; In this example, randr is configured to adjust screen 1.
; Note that the numbering starts from 0, so this is actually the second screen.
[randr]
screen=0

After saving the config file you can add gtk-redshift as a startup application to have it automatically load on login. My eyes feel much more comfortable now.


Manipulate EXIF data with jhead

From time to time I find I need to edit the date taken metadata on pictures. I’ve discovered that jhead is a wonderful tool to accomplish this. It has many options, but the ones I use most frequently are the following:

  • -mkexif Create EXIF data for a picture that does not contain it
  • -ft Set the picture’s filesystem modified date to match the EXIF date taken stored in the picture
  • -dsft Set the picture’s EXIF date taken to match the filesystem modified time of the picture

You can use wildcards, which is super convenient. To create metadata for all JPG files in the current directory:

jhead -mkexif *.JPG

I like to use touch in conjunction with jhead to set EXIF picture taken times for files that don’t have any metadata but whose date I know:

jhead -mkexif *.jpg
touch -t 201410201700 *.jpg
jhead -dsft *.jpg

For pictures which have correct metadata but incorrect modified date (downloaded pictures, for example) simply do the following:

jhead -ft *.jpg

Neat.

Update 7/15/2018

I came across this helpful article which outlines how to view exif information from the command line using the identify command. It requires that imagemagick be installed.

identify -verbose *.jpg | grep "exif:"

Update 12/9/2018

This is the syntax I use to set the time of the picture to a specific date:

jhead -tsYYYY:MM:DD-HH:mm:ss <filename>
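
For example, to stamp a photo as taken at 5:00 PM on October 20, 2014 (the filename is a placeholder):

jhead -ts2014:10:20-17:00:00 mypicture.jpg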