Install Owncloud 8 on CentOS 7

I recently needed to re-install my Owncloud VM. I’ve been on a CentOS kick lately so I decided to see if I could install OwnCloud 8 on a CentOS 7 base install. It turned out not to be as easy as I thought it would be.

When I tried to install owncloud on my CentOS 7 system, I kept getting a 404 error message even though I followed the documentation outlined here.

It turns out that they changed where the RPM is hosted and apparently forgot to update the documentation. I discovered this by manually navigating to download.opensuse.org/repositories/isv:ownCloud:community and browsing the directories. The documentation has you grab a repo file from the CentOS_CentOS-7 folder, which is broken. It looks like the proper directory is just CentOS_7.
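If you want to double-check which directories the repository actually offers before grabbing a .repo file, you can list them from the command line. This is just a quick sketch; it assumes the server still publishes a browsable index at that URL:

curl -s http://download.opensuse.org/repositories/isv:ownCloud:community/ | grep -oE 'CentOS[^"/]*' | sort -u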

I had to remove the old repo, purge the cache, and try again. To do so, remove the .repo file and purge via yum:

cd /etc/yum.repos.d/
rm isv\:ownCloud\:community.repo
yum --enablerepo=isv_ownCloud_community clean metadata
wget http://download.opensuse.org/repositories/isv:ownCloud:community/CentOS_7/isv:ownCloud:community.repo
yum install owncloud

The above procedure is what you should run if you’ve already tried the broken link from the documentation and failed. If you haven’t installed owncloud yet, do the following:

cd /etc/yum.repos.d/
wget http://download.opensuse.org/repositories/isv:ownCloud:community/CentOS_7/isv:ownCloud:community.repo
yum install owncloud

Success.

Fix raid-check error on Xenserver 6.5

After configuring a software RAID to host my VMs on my Xenserver 6.5 instance I began receiving odd e-mails once a week. The e-mails simply said:

/usr/sbin/raid-check: line 62: declare: -A: invalid option 
declare: usage: declare [-afFirtx] [-p] [name[=value] ...]

It turns out /usr/sbin/raid-check is a bash script called from the file /etc/cron.d/raid-check. It’s a weekly cron job designed to “scrub” the RAID array. I was getting these e-mails because I had configured my Xenserver to e-mail me anything sent to root, which includes messages encountered during cron jobs.

There appears to be a typo in the raid-check script. Line 62 of raid-check reads:

declare -A check

After reading up on the declare builtin, I believe the issue is that -A (which declares an associative array) isn’t supported by the older version of Bash that ships with Xenserver 6.5; note that the usage message above doesn’t list a capital A among the valid options. I commented out that line and replaced it with

declare -a check

That seemed to work. No more weird errors coming from my xenserver.
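If you’d rather not edit the file by hand, a sed one-liner should apply the same change. This is a sketch that assumes line 62 reads exactly as shown above; the -i.bak flag keeps a backup copy of the original script:

sed -i.bak 's/declare -A check/declare -a check/' /usr/sbin/raid-check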

Install Splunk Universal Forwarder on Linux

I do this infrequently enough that I decided I should really write this down. Below is the quick and dirty way to get the Splunk universal forwarder installed on a new Linux  system. Thanks to byteschef for the information used to create this guide.

Download the latest Splunk universal forwarder .rpm from their site and install it via rpm -i <filename> (if Red Hat based) or dpkg -i <filename> (if Debian based).

Run the following commands as root:

cd /opt/splunkforwarder/bin
./splunk start --accept-license
./splunk enable boot-start
./splunk add forward-server <IP/hostname of splunk server>:9997 -auth admin:changeme
./splunk add monitor /var/log
./splunk edit user admin -password NEW_PASSWORD -auth admin:changeme
./splunk restart

If you want to monitor any directories besides /var/log (application logs, for example), issue:

./splunk add monitor <directory to monitor>
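For example, to watch a hypothetical application log directory and then double-check what the forwarder is monitoring (the path is only an illustration, and I believe list monitor will ask for the admin credentials):

./splunk add monitor /opt/myapp/logs
./splunk list monitor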

Done.

Fix Owncloud 8.1.1 samba shares not working

It never seems to go smoothly, does it? I just upgraded my version of Owncloud from 8.0.4 to 8.1.1 on my Ubuntu Trusty Tahr 14.04 VM. After the upgrade I noticed that all my samba (SMB) shares were gone. The logs were not very helpful, full of things like these:

Exception: {"Exception":"Icewind\\SMB\\Exception\\InvalidHostException","Message":"","Code":0,"Trace":"#0 \/var\/www\/owncloud\/apps\/files_external\/3rdparty\/icewind\/smb\/src\/Connection.php(37): Icewind\\SMB\\Connection

Additionally errors like this were showing up:

Your web server is not yet set up properly to allow file synchronization because the WebDAV interface seems to be broken.

After much digging I discovered this post which had a suggestion to install libsmbclient-php. In Ubuntu 14.04 it involves this command:

sudo apt-get install php5-libsmbclient

That did the trick! After installing php5-libsmbclient my samba shares worked once more.
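If the shares don’t reappear right away, it may be worth confirming the module is actually loaded and then restarting the web server. This is a sketch that assumes the stock PHP 5 CLI and Apache on Ubuntu 14.04:

php5 -m | grep -i smbclient
sudo service apache2 restart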

Xenserver SSH backup script

Citrix Xenserver is an amazing hypervisor, with pretty much every feature available to you for free. One thing it does not handle, however, is automated backups.

I have hacked together a backup script for myself that seems to work fairly well. It is my own mix of this and this script along with some logic for e-mail reporting that I came up with myself. It does not require any modification of the xenserver host at all (no need to mount anything!) with the exception of adding a public key to the xenserver’s authorized_keys file.

This script can be run on anything with BASH and the appropriate UNIX tools (even other xenservers if you want) and uses SSH to initiate and transfer the backups to a location of your choosing.

Place this script on the machine that will initiate and store the backups. It requires that you generate an SSH public/private key pair, which can be done with this command:

ssh-keygen

Add the resulting .pub file’s contents to your xenserver’s /root/.ssh/authorized_keys file (create it if it doesn’t exist). Take note of the location of the private key file that was generated with that command and put that path in the script.
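As an example, here is what the key setup might look like using the paths and address from the script’s config section below; adjust them for your environment:

# Create a dedicated, passphrase-less key pair for backups
mkdir -p /home/backup/.ssh
ssh-keygen -t rsa -f /home/backup/.ssh/backup -N ""

# Append the public key to the xenserver's authorized_keys file
cat /home/backup/.ssh/backup.pub | ssh root@192.168.1.1 "mkdir -p /root/.ssh && cat >> /root/.ssh/authorized_keys"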

You can download the script here or view it below. This script has been working pretty well for me. Note it will not work with any VMs that have spaces in their names. I was too lazy to debug this so I just renamed the problem VMs to remove the spaces. Enjoy!

#!/bin/bash

# Modified August 30, 2015 by Nicholas Jeppson
# Taken from http://discussions.citrix.com/topic/345960-xenserver-automated-snapshots-script/ and modified to allow for ssh backups
# Additional insight taken from https://github.com/cepa/xen-backup

# [Usage & Config]
# This script involves two computers: a xenserver machine and a backup machine.
# Put this script on the backup server and run with any account that has privileges to the desired export directory.
#
# This script assumes you have already created a private and public key pair on the backup server
# as well as adding the respective public key to the xenserver authorized_keys file.
#
# [How it works]
# Step1: Snapshots each VM on the xen server
# Step2: Backs up the snapshots to specified location
# Step3: Deletes temporary snapshot created in step 1
# Step4: Deletes old VM backups as defined later in this file
#
# [Note]
# This script will only work with VMs that don't have spaces in their names
# Please make sure you have enough disk space for BACKUP_PATH, or backup will fail
#
# Tested on xenserver 6.5
#
# Modify the variables in the config section below to suit your particular environment's needs.
#

############### Config section ###############

#Location where you want backups to go
BACKUP_PATH="/mnt/backup/OS/"

#SSH configuration
SSH_CIPHER="arcfour128"

#Number of backups to keep
NUM_BACKUPS=2

#Xenserver ssh configuration
#This dictates the address and location of keyfiles as they reside on the xenserver
XEN_ADDRESS="192.168.1.1"
XEN_KEY_LOCATION="/home/backup/.ssh/backup"
XEN_USER="root"

#E-mail configuration
EMAIL_ADDRESS="youremail@provider.here"
EMAIL_SUBJECT="`hostname -s | awk '{print "["toupper($1)"]"}'` VM Backup Report: `date +"%A %b %d %Y"`"

########## End of Config section ###############

ret_code=0

#Replace any spaces found with backslashes because dd doesn't like them
BACKUP_PATH_ESCAPED="`echo $BACKUP_PATH | sed 's/ /\\\ /g'`"

# SSH command
remote_exec() {
    chmod 0600 $XEN_KEY_LOCATION
    ssh -i $XEN_KEY_LOCATION -o "StrictHostKeyChecking no" -c $SSH_CIPHER $XEN_USER@$XEN_ADDRESS "$1"
}

backup() {
    echo "======================================================"
    echo "VM Backup started: `date`"
    begin="$(date +%s)"
    echo "Backup location: ${BACKUP_PATH}"
    echo

    #add a slash to the end of the backup path if it doesn't exist
    if [[ "$BACKUP_PATH" != */ ]]; then
        BACKUP_PATH="$BACKUP_PATH/"
    fi

    #Build array of VM names
    VMNAMES=$(remote_exec "xe vm-list is-control-domain=false | grep name-label | cut -d ':' -f 2 | tr -d ' '")

    for VMNAME in $VMNAMES
    do
        echo "======================================================"
        echo "$VMNAME backup started `date`"
        echo
        before="$(date +%s)"

        # create snapshot
        TIMESTAMP=`date '+%Y%m%d-%H%M%S'`
        SNAPNAME="$VMNAME-$TIMESTAMP"
        SNAPUUID=$(remote_exec "xe vm-snapshot vm=\"$VMNAME\" new-name-label=\"$SNAPNAME\"")

        # export snapshot
        # remote_exec "xe snapshot-export-to-template snapshot-uuid=$SNAPUUID filename= | gzip" | gunzip | dd of="$BACKUP_PATH/$SNAPNAME.xva"
        remote_exec "xe snapshot-export-to-template snapshot-uuid=$SNAPUUID filename=" | dd of="$BACKUP_PATH/$SNAPNAME.xva"

        #if export was unsuccessful, return error
        if [ $? -ne 0 ]; then
            echo "Failed to export snapshot $SNAPNAME.xva"
            ret_code=1

        else
            #calculate backup time, print results
            after="$(date +%s)"
            elapsed=`bc -l <<< "$after-$before"`
            elapsedMin=`bc -l <<< "$elapsed/60"`
            echo "Snapshot of $VMNAME saved to $SNAPNAME.xva"
            echo "Backup completed in `echo $(printf %.2f $elapsedMin)` minutes"

            # destroy snapshot
            remote_exec "xe snapshot-uninstall force=true snapshot-uuid=$SNAPUUID"

            #remove old backups (uses NUM_BACKUPS variable from top)
            BACKUP_COUNT=$(find $BACKUP_PATH -name "$VMNAME*.xva" | wc -l)

            if [[ "$BACKUP_COUNT" -gt "$NUM_BACKUPS" ]]; then
                OLDEST_BACKUP=$(find $BACKUP_PATH -name "$VMNAME*" -print0 | xargs -0 ls -tr | head -n 1)
                echo
                echo "Removing oldest backup: $OLDEST_BACKUP"
                rm $OLDEST_BACKUP
                if [ $? -ne 0 ]; then
                    echo "Failed to remove $OLDEST_BACKUP"
                fi
            fi
            echo "======================================================"
        fi
    done

    end="$(date +%s)"
    total_time=`bc -l <<< "$end-$begin"`
    total_time_min=`bc -l <<< "$total_time/60"`
    echo "Backup completed: `date`"
    echo "VM Backup completed in `echo $(printf %.2f $total_time_min)` minutes"
    echo
}

#Run the backup function and save all output to a variable, including stderr
BACKUP_OUTPUT=$(backup 2>&1)

#Clean up the output of the backup function
#Remove records count from dd, do some basic math to make dd's numbers more human readable
BACKUP_OUTPUT_HUMANIZED=$(echo "$BACKUP_OUTPUT" | sed -r '/.*records /d' | tr -d '()' \
| awk '{sub(/.*bytes /, $1/1024/1024/1024" GB "); sub(/in .* secs/, "in "$5/60" mins "); sub(/mins .*/, "mins (" $7/1024/1024" MB/sec)"); print}')

#Send a report e-mail with the backup results
echo "$BACKUP_OUTPUT_HUMANIZED" | mail -s "$EMAIL_SUBJECT" "$EMAIL_ADDRESS"

exit $ret_code
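To actually automate the backups, you can schedule the script with cron on the backup machine. The path and schedule below are just examples; adjust them to taste:

# Run the Xenserver backup every Sunday at 2am (script path is hypothetical)
0 2 * * 0 /usr/local/bin/xen-backup.sh > /dev/null 2>&1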

Avoid prompts when installing FreeBSD ports

The FreeBSD ports tree is wonderful for installing software but sometimes it can be a real pain. Recently I was trying to install Emby in FreeBSD because why not? The instructions were easy enough except for when I ran

make install clean

I was constantly barraged with configuration prompts. I wanted to simply accept the defaults for all of them and not have to answer a bunch of questions.

Thanks to stack exchange I learned it’s relatively easy to bypass all these questions. Simply add:

BATCH=yes

to the end of your make install clean statement to accept the defaults for all of the package’s questions. The Emby guide is pretty comprehensive, but I would add this command at the bottom:

make install clean BATCH=yes
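If you build ports often and always want the defaults, you can set this once in /etc/make.conf instead of adding it to every command. As root:

echo 'BATCH=yes' >> /etc/make.conf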

Handy.

Measure SSH transfer speeds

SSH is a beautiful thing. In addition to remotely administering machines you can use it to transfer files. To do this one simply pipes the cat command on both ends. For example, to copy hello.txt from the remote host into hi.txt on the local machine, the command would be:

ssh remote_host cat hello.txt | cat > hi.txt

The command runs cat hello.txt on the remote host and pipes its contents back over the SSH connection; the local cat takes that stream as input, and the > sign redirects its output into hi.txt.

A great way to measure transfer speeds using ssh between two hosts is to take /dev/zero on the source host and output it to /dev/null on the destination host. This bypasses any disk speed bottlenecks and only measures network throughput. Combine this with the pv command to get a nice graphical view of how fast the transfer is going.

ssh remote_host cat /dev/zero | pv | cat > /dev/null

The default options between my machines result in a transfer speed of about 65 megabytes per second.


It turns out that the encryption cipher used makes a big difference in transfer speeds. Use the -c option to specify which cipher to use and see how much of a difference this makes. -o Compression=no can also help with transfer speeds.

The fastest cipher I’ve found is arcfour. It’s touted as less secure, but for my local network I can accept the risk (thanks to slashdot for the discussion).

ssh -c arcfour -o Compression=no remote_host cat /dev/zero | pv | cat > /dev/null


Using arcfour more than doubles the speed for me! Amazing.
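If you want to compare several ciphers in one shot, a small loop works nicely. This is a rough sketch: remote_host is a placeholder, the 1 GB test size is arbitrary, and your SSH build may not offer every cipher listed:

for cipher in arcfour arcfour128 aes128-ctr aes128-cbc; do
    echo "== $cipher =="
    ssh -c "$cipher" -o Compression=no remote_host "dd if=/dev/zero bs=1M count=1024 2>/dev/null" | pv > /dev/null
done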

Allow non-root users to mount disks

I came across a need today to allow a regular (non-root) user to mount disks in Ubuntu 14.04 Trusty Tahr. I usually use sudo but in this case I needed to be able to run photorec as a regular user.

The way to accomplish this is to add the regular user to the disk group, which you can do with this command:

sudo usermod -a -G disk <username>

If you are logged in as that user, you will have to log out and log back in for the new group membership to take effect. Once this is done you should be able to mount disks without using sudo or being root.
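To confirm the group change took effect after logging back in, something like this works (replace <username> as before):

id -nG <username> | grep -w disk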

Disable IPv6 on an interface in Linux

After tethering my phone to my laptop and googling “what is my ip” I was surprised to find an IPv6 address. Apparently my mobile carrier has implemented IPv6. Bravo to them.

Unfortunately, when I initiated my VPN, which is supposed to tunnel all traffic through it, my IP address didn’t change. This is because my VPN is IPv4 only. My system prioritizes IPv6 traffic, so if I happen to go to any IPv6 enabled site such as google, my VPN tunnel is bypassed entirely.

I don’t like the security implications of this. The long term solution is to implement IPv6 with my VPN; however while traveling I won’t be able to do that. The short term solution is to simply disable IPv6 for the interface that has it, in my case usb0 as that is what is tethered to my phone.

This simple command will do the trick:

sudo sh -c 'echo 1 > /proc/sys/net/ipv6/conf/usb0/disable_ipv6'

Change usb0 to whatever interface you would like (or all of them) and you’re done! Thanks to this site for the information.
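If you would rather disable IPv6 on every interface, or make the change survive a reboot, sysctl can handle both. The usb0 entry below is just my interface name; swap in your own:

# Disable IPv6 on all interfaces for the current boot
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1

# Persist the setting for a single interface across reboots
echo "net.ipv6.conf.usb0.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf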

Install Cinnamon on a Chromebook with Crouton

I really love using Crouton on my Chromebook Pixel LS 2015. I was sad to see that there is no cinnamon desktop environment target with the latest versions of crouton. Below is what I did to get Cinnamon on my chromebook. Much of what I did was taken from https://gist.github.com/sohjsolwin/5939948

  1. Create a base chroot
  2. Enter your chroot and run the following (see the sketch after these commands for one way to do steps 1 and 2):
sudo apt-get update
sudo apt-get install software-properties-common python-software-properties
sudo add-apt-repository ppa:tsvetko.tsvetkov/cinnamon
sudo apt-get update
sudo apt-get install cinnamon
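For reference, steps 1 and 2 might look something like the following. This is only a sketch: the release, target, and chroot name are assumptions, so adjust them for your setup:

sudo sh ~/Downloads/crouton -r trusty -t x11 -n cinnamon
sudo enter-chroot -n cinnamon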

Once Cinnamon was installed I needed to know how to start it manually. Thanks to the Arch Linux forums for explaining it. You have to create a .xinitrc file in your home directory within your chroot.

echo "exec cinnamon-session" > ~/.xinitrc

Trying to manually start cinnamon by typing startx didn’t work – I got a blank screen and had to hard reset to get anything to come back. Thanks to github I learned you need to use xinit instead of startx.

Lastly, we need to create a suitable startcinnamon script.

wget https://gist.github.com/sohjsolwin/5934362/raw/f68fc0942798902a0bd48f40c17dc0cd5cf585ea/startcinnamon

Modify the file to replace the startx command with xinit. Also remove everything after xinit. My file is as follows:

APPLICATION="${0##*/}"

USAGE="$APPLICATION [options]

Wraps enter-chroot to start a Mint session.
By default, it will log into the primary user on the first chroot found.

Options are directly passed to enter-chroot; run enter-chroot to list them."

exec sh -e "`dirname "$0"`/enter-chroot" "$@" xinit

Make this file executable (chmod +x startcinnamon) and move it to the /usr/local/bin directory of your chromebook (not your chroot). Now all you need to do is enter

sudo startcinnamon

and your cinnamon desktop should come up!

Update 2016-01-04

These two scripts seem to work a little bit better. Place this one within your chroot under /usr/local/bin/startcinnamon:

#!/bin/sh -e
# Copyright (c) 2015 The crouton Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

# Launches Cinnamon via crouton's session wrapper

exec crouton-noroot gnome-session-wrapper cinnamon

Place this one in /usr/local/bin outside your chroot (on your chromebook itself.)

#!/bin/sh -e
# Copyright (c) 2015 The crouton Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

set -e

APPLICATION="${0##*/}"

USAGE="$APPLICATION [options]

Wraps enter-chroot to start a Cinnamon session.
By default, it will log into the primary user on the first chroot found.

Options are directly passed to enter-chroot; run enter-chroot to list them."

exec sh -e "`dirname "\`readlink -f "$0"\`"`/enter-chroot" -t cinnamon "$@" "" \
    exec startcinnamon