Tag Archives: linux

CentOS 7 Enterprise desktop setup

These are my notes for standing up a CentOS 7 desktop in an enterprise environment.

Packages

Install the EPEL repository for a better experience:

sudo yum -y install epel-release

Desktop experience packages:

sudo yum -y install vlc libreoffice java gstreamer gstreamer1 gstreamer-ffmpeg gstreamer-plugins-good gstreamer-plugins-ugly gstreamer1-plugins-bad-freeworld gstreamer1-libav pidgin rhythmbox ffmpeg keepass xdotool ntfs-3g gvfs-fuse gvfs-smb fuse sshfs redshift-gtk stoken-gui stoken-cli

Additional packages that may come in handy:

sudo yum -y install http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm
sudo yum -y install libdvdcss gstreamer{,1}-plugins-ugly gstreamer-plugins-bad-nonfree gstreamer1-plugins-bad-freeworld libde265 x265

Enable ssh:

sudo systemctl enable sshd
sudo systemctl start sshd

Google Chrome

Paste into /etc/yum.repos.d/google-chrome.repo:

[google64]
name=Google - x86_64
baseurl=http://dl.google.com/linux/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub

Then install:

sudo yum -y install google-chrome-stable

Domain

It’s just easier to use PowerBroker Open from BeyondTrust.

sudo wget -O /etc/yum.repos.d/pbiso.repo http://repo.pbis.beyondtrust.com/yum/pbiso.repo
sudo yum -y install pbis-open

Cliff notes for joining the domain:

domainname=<your_domain_name>
domain_prefix=<your_domain_netbios_name>
domainaccount=<your_domain_admin_account>

sudo domainjoin-cli join $domainname $domainaccount 
<enter password>

sudo /opt/pbis/bin/config UserDomainPrefix $domain_prefix
sudo /opt/pbis/bin/config AssumeDefaultDomain true
sudo /opt/pbis/bin/config LoginShellTemplate /bin/bash
sudo /opt/pbis/bin/config HomeDirTemplate %H/%U

Add domain admins to sudo, escaping spaces with a backslash and replacing DOMAIN with your domain:

sudo visudo
%DOMAIN\\Domain\ Administrators ALL=(ALL) ALL

Reboot for all changes to take effect.

Certificate

You might need to copy your domain’s CA certificate to your certificate trust store:

sudo cp <CA CERT FILENAME> /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust

Drive mapping

I use a simple script that calls gvfs-mount to mount network drives. Change suffix to match your domain and the MOUNTS list to suit your needs.

#!/bin/bash
#Simple script to mount network drives on login

suffix=<DOMAIN_SUFFIX>
MOUNTS=(
	server1$suffix/folder1
	server2$suffix/folder2
	server3$suffix/folder3
)

for i in "${MOUNTS[@]}" 
do
	gvfs-mount "smb://$i"
done

Configure in gnome to run on startup:

Add the following to ~/.config/autostart/mount-drives.desktop, changing Exec= to the path of the above script.

[Desktop Entry]
Name=Mount network drives
GenericName=Mount network drives
Comment=Script to mount network drives
Exec=<location of mount script>
Terminal=false
Type=Application
X-GNOME-Autostart-enabled=true

Network Config

If you wish to set a static IP and configure your DNS suffix (search domain), run:

nm-connection-editor

The other GUI for network configuration doesn’t have an option for search domains for some reason.
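
Alternatively, nmcli can set a static address and search domain directly from the shell. A hedged sketch with a hypothetical connection name and addresses – adjust to your environment:

nmcli con mod "System enp4s0" ipv4.method manual ipv4.addresses 192.168.1.50/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1 ipv4.dns-search example.com
nmcli con up "System enp4s0"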

Smartcard

sudo yum -y install opensc pcsc-tools pcsc-lite

Be sure to install the drivers for your particular card reader. Mine came from here and here.

After installing, you can test by starting pcscd and running pcsc_scan:

sudo systemctl start pcscd
pcsc_scan

VMware Horizon View

Smartcard support

There is a problem with how VMware Horizon View interacts with the opensc smartcard drivers shipped in popular Linux distributions such as CentOS and Ubuntu. View cannot load the drivers in the default configuration, so to get View working with smartcards you need to manually patch and compile the opensc package (thanks to this site for the information needed to do so).

First, install the necessary development packages

sudo yum -y groupinstall "Development Tools"
sudo yum -y install openssl-devel pcsc-lite-devel

Next, download and extract opensc-0.13.0 from SourceForge:

wget http://downloads.sourceforge.net/project/opensc/OpenSC/opensc-0.13.0/opensc-0.13.0.tar.gz
tar zxvf opensc-0.13.0.tar.gz
cd opensc-0.13.0

Now we have to patch two specific files in the source before compiling:

echo "--- ./src/pkcs11/opensc-pkcs11.exports
 +++ ./src/pkcs11/opensc-pkcs11.exports
 @@ -1 +1,3 @@
  C_GetFunctionList
 +C_Initialize
 +C_Finalize
 --- ./src/pkcs11/pkcs11-spy.exports
 +++ ./src/pkcs11/pkcs11-spy.exports
 @@ -1 +1,3 @@
  C_GetFunctionList
 +C_Initialize
 +C_Finalize" > opensc.patch

patch -p1 -i opensc.patch

Next, compile and install:

./bootstrap
./configure
make
sudo make install

Assuming there were no errors, you can now link the compiled driver to the location VMware View expects. Note: you must rename the library from opensc-pkcs11.so to libopensc-pkcs11.so for this to work (another lovely VMware bug).

sudo mkdir -p /usr/lib/vmware/view/pkcs11/
sudo ln -s /usr/local/lib/pkcs11/opensc-pkcs11.so /usr/lib/vmware/view/pkcs11/libopensc-pkcs11.so

Lync

Install the pidgin-sipe plugin as detailed here

sudo yum -y install pidgin pidgin-sipe

Choose “Office Communicator” as the protocol. Enter your e-mail address for the username, then go to the Advanced tab and check “Use single sign-on.”

On first run all contact names were missing. Per here, simply close and restart the application.

Gnome 3

Disable audible bell

Taken from here

Disable audible bell and enable visual bell with:

gsettings set org.gnome.desktop.wm.preferences audible-bell false
gsettings set org.gnome.desktop.wm.preferences visual-bell true

and change the type of the visual bell if you don’t need the fullscreen flash:

gsettings set org.gnome.desktop.wm.preferences visual-bell-type frame-flash

Extensions

If you can find your extension via yum, it tends to work better than installing from the GNOME extensions site. If you do use the site, make sure you grab the version matching your shell:

gnome-shell --version
sudo yum -y install gnome-shell-extension-top-icons gnome-shell-extension-dash-to-dock

Other useful extensions:

backslide, multi monitors add-on, No topleft hot corner, Dropdown terminal, Media player indicator, Focus my window, Workspace indicator, Native window placement, Openweather, Panel osd, Dash to dock, Gpaste
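
GNOME tracks enabled extensions in a gsettings key, so you can also toggle them from the shell. A hedged sketch – the UUID is an example; check your own with the first command:

gsettings get org.gnome.shell enabled-extensions
gsettings set org.gnome.shell enabled-extensions "['dash-to-dock@micxgx.gmail.com']"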

RSA

If you have the misfortune of being in an environment that uses RSA SecurID for two-factor authentication, here is the official guide.

Necessary packages to be installed:

sudo yum -y install selinux-policy-devel policycoreutils-devel
  1. Download & extract the PAM agent, then cd to the extracted directory
    tar -xvf PAM-Agent*.tar
  2. Create /var/ace directory and place necessary files inside. Create sdopts.rec and add the IP address of the desktop.
    mkdir /var/ace
    cp sdconf.rec /var/ace
    vi /var/ace/sdopts.rec
    CLIENT_IP=<IP ADDRESS OF DESKTOP>
  3. Run the install_pam script and specify UDP authentication
    ./install_pam.sh
  4. Modify /etc/pam.d/password-auth to add the RSA authentication agent. Insert the following above the pam_lsass.so smartcard_prompt try_first_pass line, then comment that line out:
    auth required pam_securid.so
    auth required pam_env.so
    auth sufficient pam_lsass.so
  5. Add new system in RSA console: Access / Authentication Agents / Add new
  6. Test to make sure everything works:
    /opt/pam/bin/64bit/acetest

Managing Windows hosts with Ansible

I spun my wheels for a while trying to get Ansible to manage Windows hosts. Here are my notes on how I finally got Ansible (on a Linux host) to use an HTTPS WinRM connection with Kerberos authentication to connect to a Windows host. This article was of great help.

Ansible Hosts file

[all:vars]
ansible_user=<user>
ansible_password=<password>
ansible_connection=winrm
ansible_winrm_transport=kerberos
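
The vars above apply to the whole inventory; the Windows machines themselves go in an ordinary group. A minimal sketch with hypothetical hostnames:

[windows]
winhost1.example.com
winhost2.example.com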

Packages to install (CentOS 7)

sudo yum install gcc python2-pip
sudo pip install kerberos requests_kerberos pywinrm certifi

Playbook syntax

Modules involving Windows hosts have a win_ prefix.
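
For example, a quick connectivity test from the shell, assuming the hypothetical [windows] inventory group above:

ansible windows -i hosts -m win_ping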

Troubleshooting

Code 500

WinRMTransportError: (u'http', u'Bad HTTP response returned from server. Code 500')

I was using -m ping for testing instead of -m win_ping. Make sure you’re using the win_ping module, not the regular ping module.

Certificate validation failed

"msg": "kerberos: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)"

I had a self-signed CA certificate on the box Ansible was trying to connect to. Python doesn’t appear to trust the system’s certificate trust chain by default. Ansible has a configuration directive,

ansible_winrm_ca_trust_path

but even with that pointing to my system trust it wouldn’t work. I then found this gem on the winrm page for ansible:

The CA chain can contain a single or multiple issuer certificates and each entry is contained on a new line. To then use the custom CA chain as part of the validation process, set ansible_winrm_ca_trust_path to the path of the file. If this variable is not set, the default CA chain is used instead which is located in the install path of the Python package certifi.

Challenge #1: I didn’t have certifi installed.

sudo pip install certifi

Challenge #2: I needed to know where certifi’s default trust store was located, which I discovered after reading the project github page

python
import certifi
certifi.where()

In my case the location was /usr/lib/python2.7/site-packages/certifi/cacert.pem. I then symlinked my system trust to that location (backing up the existing trust first):

sudo mv /usr/lib/python2.7/site-packages/certifi/cacert.pem /usr/lib/python2.7/site-packages/certifi/cacert.pem.old
sudo ln -s /etc/pki/tls/cert.pem /usr/lib/python2.7/site-packages/certifi/cacert.pem

Et voila! No more trust issues.

Ansible Tower

Note: If you’re running Ansible Tower, you have to work with its bundled version of Python instead of the system version. For version 3.2 it was located here:

/var/lib/awx/venv/ansible/lib/python2.7/site-packages/requests/cacert.pem

I fixed it by doing this:

sudo mv /var/lib/awx/venv/ansible/lib/python2.7/site-packages/requests/cacert.pem /var/lib/awx/venv/ansible/lib/python2.7/site-packages/requests/cacert.pem.old
sudo ln -s /etc/pki/tls/cert.pem /var/lib/awx/venv/ansible/lib/python2.7/site-packages/requests/cacert.pem

This resolved the trust issues.

Run ProxMox in local mode when part of a quorum

When moving my desktop (which I had joined to my ProxMox cluster) to a new environment, I found I couldn’t start any VMs because there was no quorum. I couldn’t even run “pvecm expected 1” because corosync wouldn’t start. I would see these errors in the log:

Totem is unable to form a cluster because of an operating system or network fault. The most common cause of this message is that the local firewall is configured improperly.

I found in this forum post that you can tell ProxMox to run in local mode (without trying to form a quorum):

sudo systemctl stop pve-cluster
sudo /usr/bin/pmxcfs -l

Success! This caused my node to run in local mode, which let me run my Windows gaming VM in the new location.

Update /etc/hosts with current IP for ProxMox

ProxMox Virtual Environment is a really nice package for managing KVM and container virtualization. One quirk is that it needs an entry in /etc/hosts that points to your system’s IP address, not 127.0.0.1 or 127.0.1.1. I wrote a little script to grab the IP of a specified interface and add it to /etc/hosts automatically. You may download it here or see below:

#!/bin/bash
#A simple script to update /etc/hosts with your current IP address for use with ProxMox virtual environment
#Author: Nicholas Jeppson
#Date: 4/25/2018

###Edit these variables to your environment###
INTERFACE="enp4s0" #the interface that has the IP you want to update hosts for
DNS_SUFFIX=""
###End variables section###

#Variables you shouldn't have to change
IP=$(ip addr show $INTERFACE |egrep 'inet '| awk '{print $2}'| cut -d '/' -f1)
HOSTNAME=$(hostname)

#Use sed to add IP to first line in /etc/hosts
sed -i "1s/^/$IP $HOSTNAME $HOSTNAME$DNS_SUFFIX\n/" /etc/hosts
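
To run it automatically at boot you could wire it into cron; a hedged example /etc/crontab entry (the script path is hypothetical):

@reboot root /usr/local/bin/update-proxmox-hosts.sh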

Use grep, awk, and cut to display only your IP address

I needed a quick way to determine my IP address for a script. If you run the ip addr show command it outputs a lot of information I don’t need, so I settled on using grep, awk, and cut to pull out just the address:

ip addr show <interface name> |egrep 'inet '| awk '{print $2}'| cut -d '/' -f1

The result is a clean IP address. Beautiful. Thanks to this site for insight into how to use cut.

Windows VM with GTX 1070 GPU passthrough in ProxMox 5

I started this blog four years ago to document my highly technical adventures – mainly so I could reproduce them later. One of my first articles dealt with GPU passthrough / virtualization. It was a complicated ordeal with Xen. Now that I’ve switched to KVM (ProxMox) I thought I’d give it another go. It’s still complicated but not nearly as much this time.

To get my Nvidia GTX 1070 GPU properly passed through to a Windows VM hosted by ProxMox 5 I simply followed this excellent guide written by sshaikh. I will summarize what I took from his guide to get my setup to work.

  1. Ensure VT-d is supported and enabled in the BIOS
  2. Enable IOMMU on the host
    1. append the following to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub
      intel_iommu=on
    2. Save your changes by running
      update-grub
  3. Blacklist NVIDIA & Nouveau kernel modules so they don’t get loaded at boot
    1. echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
      echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf
    2. Save your changes by running
      update-initramfs -u
  4. Add the following lines to /etc/modules
    vfio
    vfio_iommu_type1
    vfio_pci
    vfio_virqfd
  5. Determine the PCI address of your GPU
    1. Run
      lspci -v

      and look for your card. Mine was 01:00.0 & 01:00.1. You can omit the part after the decimal to include them both in one go – so in that case it would be 01:00

    2. Run lspci -n -s <PCI address> to obtain vendor IDs. Example :
      lspci -n -s 01:00
      01:00.0 0300: 10de:1b81 (rev a1)
      01:00.1 0403: 10de:10f0 (rev a1)
  6. Assign your GPU to vfio driver using the IDs obtained above. Example:
    echo "options vfio-pci ids=10de:1b81,10de:10f0" > /etc/modprobe.d/vfio.conf
  7. Reboot the host
  8. Create your Windows VM using the UEFI bios hardware option (not the default SeaBIOS), but do not start it yet. Modify /etc/pve/qemu-server/<vmid>.conf and ensure the following are in the file. Create / modify existing entries as necessary.
    bios: ovmf
    machine: q35
    cpu: host,hidden=1
    numa: 1
  9. Install Windows, including VirtIO drivers. Be sure to enable Remote desktop.
  10. Pass through the GPU.
    1. Modify /etc/pve/qemu-server/<vmid>.conf and add
      hostpci0: <device address>,x-vga=on,pcie=1. Example

      hostpci0: 01:00,x-vga=on,pcie=1
  11. Profit.

Troubleshooting

Code 43

I received the dreaded code 43 error after installing CUDA drivers. The workaround was to add hidden=1 to the CPU option of the VM:

cpu: host,hidden=1

Blue screening when launching certain games

Heroes of the Storm and Starcraft II would consistently blue screen on me with the following error:

kmode_exception_not_handled

The fix as outlined here was to create /etc/modprobe.d/kvm.conf and add the parameter “options kvm ignore_msrs=1”

echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf

Update 4/9/18: Blue screening happens on Windows 10 1803 as well, with the error

System Thread Exception Not Handled

The fix for this is the same – ignore_msrs=1

GPU optimization:

Give the VM as many CPUs as the host has (in my case 8) and then enable NUMA for the CPU. This appeared to make my GTX 1070 perform better in the VM – near native performance.
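
In the VM config those settings correspond to entries like these in /etc/pve/qemu-server/<vmid>.conf (a sketch based on my 8-CPU host; adjust cores to match yours):

cores: 8
numa: 1
cpu: host,hidden=1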

Fix WordPress “PHP change was reverted” error

Since WordPress 4.9 I’ve had a peculiar issue when trying to edit theme files using the web GUI. Whenever I tried to save changes I would get this error message:

Unable to communicate back with site to check for fatal errors, so the PHP change was reverted. You will need to upload your PHP file change by some other means, such as by using SFTP.

After following this long thread I saw the suggestion to install and use the Health Check plugin to get more information into why this is happening. In my case I kept getting this error message:

The loopback request to your site failed, this may prevent WP_Cron from working, along with theme and plugin editors.
Error encountered: (0) cURL error 28: Connection timed out after 10001 milliseconds

I researched what a loopback request is in this case: it’s the webserver reaching out to its own site’s URL to talk to itself. My webserver was being denied internet access, which included its own URL, so it couldn’t complete the loopback request.
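
You can reproduce the loopback check by hand from a shell on the webserver itself (substitute your site’s URL):

curl -I https://www.example.com/

If that hangs or times out, WordPress’s loopback requests will fail the same way.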

One solution, mentioned here, is to edit the hosts file on your webserver to point your site’s URL at 127.0.0.1. My solution was to open up the firewall to allow my server to connect to its own URL. I then ran into a different problem:

The loopback request to your site failed, this may prevent WP_Cron from working, along with theme and plugin editors.
Error encountered: (0) cURL error 60: Peer's Certificate issuer is not recognized.

After digging for a while I found this site, which explains how to edit php.ini to point at an acceptable certificate list. To fix this on my CentOS 7 machine I edited /etc/php.ini and added this line (you could also add it to /etc/php.d/curl.ini):

curl.cainfo="/etc/pki/tls/cert.pem"

This caused php’s curl module to use the same certificate trust store that the underlying OS uses.

Then restart php-fpm if you’re using it:

sudo systemctl restart php-fpm

Success! Loopback connections now work properly.


Update 7/16/2018: I still had a WordPress site that was giving me certificate grief despite the above fix. After MUCH frustration I finally found this post, where André Gayle points out that WordPress ships with its own certificate bundle, independent of even curl’s CA bundle! It’s located in the wp-includes/certificates folder of your WordPress directory.

My solution to this extremely frustrating problem was to move their bundle aside and symlink in my own (CentOS 7 box – adjust the paths to match your WordPress install and certificate trust store):

sudo mv /var/www/html/wordpress/wp-includes/certificates/ca-bundle.crt /var/www/html/wordpress/wp-includes/certificates/ca-bundle.crt.old
sudo ln -s /etc/pki/tls/cert.pem /var/www/html/wordpress/wp-includes/certificates/ca-bundle.crt

FINALLY no more loopback errors in the Health Check plugin, and thus the ability to edit theme files in the editor.

Migrate from Xenserver to Proxmox

I was dismayed to see Citrix’s recent announcement about Xenserver 7.3 removing several key features from the free version. Xenserver’s free features are the reason I switched to it in the first place back in 2014, and it has been rock solid; I haven’t had any complaints until now. The removal of XenMotion and migration from the free version forced me to look elsewhere for my virtualization needs.

I’ve settled on ProxMox, which is KVM-based. Its documentation is excellent and it has all the features I need – for free. I’m also in love with its web-based management – no more Windows fat client!

Below are my notes on how I successfully migrated all my Xenserver VMs over to the ProxMox Virtual Environment (PVE).

  • Any changes to network interfaces, such as bringing them up, require a reboot of the host
  • If you have an existing ISO share, you can create a directory called “template” in your ISO repository folder, then inside it symlink “iso” back to your ISO folder. Proxmox looks inside template/iso for ISO images for whatever storage you configure.
  • Do not create your ProxMox host with ZFS unless you have tons of RAM. If you don’t have enough RAM you will run into huge CPU load, making the system unresponsive under heavy disk load such as VM copies / backups. More reading here.

Cluster of two:

ProxMox’s clustering is a bit different – better, in my opinion. There’s no more master/slave dynamic – every node is a master. Important reading: https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster

If you have a two-node cluster, like I do, it creates some problems, though. If one node goes down, the other can’t do anything to the pool (create a VM, run a backup) until it comes back up. In my situation I have one primary host that is up all the time, and I bring the secondary host up only when I want to do maintenance on the first.

In that specific situation you can still designate a “master” of sorts by increasing the number of quorum votes it gets from 1 to 2. That way when the secondary node is down, the primary node can still do cluster operations because the default number of votes to stay quorate is 2. See here for more reading on the subject.

On either host (they must both be up and in the cluster for this to work)

vi /etc/pve/corosync.conf

Find your primary server in the nodelist settings and change

quorum_votes: 2

Also find the quorum section and add expected_votes: 2

Make sure to increment the config_version number (at the bottom of the file). Now if your secondary is down you can still operate the primary.
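
For reference, after these edits the relevant pieces of /etc/pve/corosync.conf look something like this (node names and IDs are hypothetical; only quorum_votes, expected_votes, and config_version actually change):

nodelist {
  node {
    name: primary
    nodeid: 1
    quorum_votes: 2
  }
  node {
    name: secondary
    nodeid: 2
    quorum_votes: 1
  }
}

quorum {
  provider: corosync_votequorum
  expected_votes: 2
}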

Migrating VMs

I migrated my Xen VMs to KVM by creating VMs with identical specs in PVE, copying the VHD files from the Xen host to the new PVE host, running qemu-img to convert them to RAW format, and then using dd to copy the raw data onto the corresponding empty VM disks. Depending on the OS of the VM there was some after-copy tweaking I also had to do.

From shared storage

Grab the VHD file of each Xen VM (quiesce any snapshots away first) and convert it to raw format:

qemu-img convert <VHD_FILE_NAME>.vhd -O raw <RAW_FILE_NAME>.raw

Create a new VM with identical configuration, especially disk size. Go to the hardware tab and take note of the name of the disk. For example, one of mine was:

local-zfs:vm-100-disk-1,discard=on,size=40G

The interesting part is between local-zfs and discard=on, namely vm-100-disk-1. This is the name of the disk we want to overwrite with data from our Xenserver VM’s disk.

Next figure out the full path of this disk on your proxmox host

find / -name vm-100-disk-1*

The result in my case was /dev/zvol/rpool/data/vm-100-disk-1

Take the name and put it in the following command to complete the process:

dd if=<RAW_FILE_NAME>.raw of=/dev/zvol/rpool/data/vm-100-disk-1 bs=16M

Once that’s done you can delete your .vhd and .raw files.

From local / LVM storage

In case your Xen VMs are stored as LVM devices instead of VHD files, get the UUID of the storage by running xe vdi-list and finding the name of the hard disk from the VM you want. It’s helpful to rename the hard disks to something easy to spot; I chose the word migrate.

xe vdi-list|grep -B3 migrate
uuid ( RO) : a466ae1b-80c7-4ef2-91a3-5c1ba1f6fc2f
 name-label ( RW):  migrate

Once you have the UUID of the drive, you can use lvscan to find the full LVM device path of that disk:

lvscan|grep a466ae1b-80c7-4ef2-91a3-5c1ba1f6fc2f
 inactive '/dev/VG_XenStorage-1ada0a08-7e6d-a5b6-d0b4-515e251c0c75/VHD-a466ae1b-80c7-4ef2-91a3-5c1ba1f6fc2f' [10.03 GiB] inherit

Shut down the corresponding VM and reactivate its logical volume (Xen deactivates LVM volumes when the VM is shut off):

lvchange -ay <full /dev/VG_XenStorage path discovered above>

Now that we have the full LVM path and the volume is active, we can use dd over SSH to transfer the image to our proxmox server:

sudo dd if=<full /dev/VG/Xenstorage path discovered above> | ssh <IP_OF_PROXMOX_SERVER> dd of=<LOCATION_ON_PROXMOX_THAT_HAS_ENOUGH_SPACE>/<NAME_OF_VDI_FILE>.vhd

Then follow the vhd -> raw -> dd-to-proxmox-drive process described in the From shared storage section.

Post-Migration tweaks

For the most part Debian-based systems moved over perfectly without any needed tweaks; some VMs changed interface names due to the network device change – eth0 turned into ens8. I had to modify /etc/network/interfaces to change eth0 to ens8 to get virtio networking working.
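
The edit itself is a one-word change per stanza; a minimal /etc/network/interfaces sketch after the rename, assuming DHCP:

auto ens8
iface ens8 inet dhcp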

CentOS

All my CentOS VMs failed to boot after migration due to a lack of virtio disk drivers in the initial RAM disk. The fix is to change the disk hardware to IDE mode (they boot fine this way) and then modify the initrd of each affected host:

sudo dracut --add-drivers "virtio_pci virtio_blk virtio_scsi virtio_net virtio_ring virtio" -f -v /boot/initramfs-`uname -r`.img `uname -r`
sudo sh -c "echo 'add_drivers+=\" virtio_pci virtio_blk virtio_scsi virtio_net virtio_ring virtio \"' >> /etc/dracut.conf"
sudo shutdown -h now

Once that’s done you can detach the hard disk and re-attach it as SCSI (virtio). Don’t forget to modify the options and change the boot order from ide0 to scsi0.

Arch Linux

One of my Arch VMs had UUIDs configured, which complicated things: the root device UUID changes between KVM virtio and IDE mode. The easiest way to fix it is to boot the VM from an Arch install CD, mount the root partition, and run arch-chroot /mnt. Once in the chroot, reinstall the kernel with pacman to regenerate the appropriate kernel modules:

mount /dev/sda1 /mnt
arch-chroot /mnt
pacman -Sy linux

Also make sure to modify /etc/fstab to reflect the appropriate device ID or UUID (Xen used /dev/xvda1, KVM /dev/sda1).
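
A hedged example of the fstab line after the change (filesystem type and options are placeholders; use your own UUID or device):

# was: /dev/xvda1 / ext4 defaults 0 1
/dev/sda1 / ext4 defaults 0 1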

Windows

Create your Windows VM using non-virtio drivers (the default settings in PVE). Obtain the latest Windows virtio drivers here and extract them somewhere memorable. Switch everything but the disk over to virtio in the VM’s hardware config and reboot the VM. Go into Device Manager and point each unknown device at the extracted driver location.

To get the virtio disk to work, add a second disk of any size to the VM with type SCSI (virtio). Boot the Windows VM and install the drivers for that drive. Then shut down, remove the second drive, detach the primary drive, and re-attach it as virtio SCSI. It should then come up with full virtio drivers.

All hosts

KVM has a guest agent like Xenserver does, called qemu-guest-agent. Turn it on in the VM options and install qemu-guest-agent in your guest. This gives KVM a bit more insight into your guests.

Determine which VMs need guest agent installed:

qm agent $id ping

If nothing is returned, it means qemu-agent is working. You can test all your VMs at once with this one-liner (change the starting and ending VM IDs as appropriate):

for id in {100..114}; do echo $id; qm agent $id ping; done

This little one-liner will output the VM ID it’s trying to ping and will return any errors it finds. No errors means everything is working.

Disable support nag

PVE has a support model and will nag you at each login. If you don’t like this you can change it like so (the line number might be different depending on which version you’re running):

vi +850 /usr/share/pve-manager/js/pvemanagerlib.js

Modify the line if (data.status !== 'Active'), changing it to

if (false)

Troubleshooting

Remove a failed node

See here: https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster#Remove_a_cluster_node

systemctl stop pvestatd.service
systemctl stop pvedaemon.service
systemctl stop pve-cluster.service
rm -r /etc/corosync/*
rm -r /var/lib/pve-cluster/*
reboot

Quorum never establishes / takes forever

I had a really strange issue where I was able to establish quorum with a second node, but after a reboot quorum never happened again. I re-installed and re-joined that second node several times but never got past the “waiting for quorum….” stage.

After much research I came across this article, which explained what was happening. Corosync uses multicast to establish cluster quorum, and many switches (including mine) have a feature called IGMP snooping which, without an IGMP querier, essentially means multicast never happens. Sure enough, after logging into my switches and disabling IGMP snooping, quorum was instantly established. The article above says disabling it is not recommended, but in my small home lab it hasn’t produced any ill effects; your mileage may vary. You can also configure your cluster to use unicast instead, as sketched below.
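
If you go the unicast route, corosync supports a UDP unicast transport. A hedged sketch of the totem section in /etc/pve/corosync.conf (keep your existing settings and remember to bump config_version):

totem {
  # existing cluster_name, version, and interface settings stay as-is; add:
  transport: udpu
}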

USB Passthrough not working properly

With Xenserver I was able to pass through the USB controller of my host to the guest (a JMicron USB-to-ATA/ATAPI bridge holding a 4-disk bay). I ran into issues with PVE, though. Using the GUI to pass the USB device did not work, and manually adding PCI passthrough directives (hostpci0: 00:14.1) didn’t work either. I finally found a little nugget on the PCI Passthrough page about passing the entire device rather than a single function like I had in Xenserver. So instead of hostpci0: 00:14.1, I simply did hostpci0: 00:14. That helped a little, but I was still unable to fully use those drives simultaneously.

My solution was eventually to abandon PCI passthrough altogether in favor of just passing individual disks to the guest as outlined here.

Find the IDs of the desired disks by issuing ls -l /dev/disk/by-id. You only need the IDs of the disks, not the partitions. Then modify the KVM config of your desired host (mine was located at /etc/pve/qemu-server/101.conf) and add a new line for each disk, adjusting the SCSI device numbers and IDs to match:

scsi5: /dev/disk/by-id/scsi-SATA_ST5000VN000-1H4_Z111111

With that direct disk access everything is working splendidly in my FreeNAS VM.

Fix icedtea Cannot grant permissions to unsigned jars error

I banged my head against a wall for a while before I finally found a fix for this one. OpenJDK 8 has new security features that break compatibility with the IPMI interfaces of my older servers. The problem in my case stemmed from the fact that the Java applet is signed, just with an algorithm that JDK 8 blacklists. So I had to remove MD5 from the blacklisted algorithms to get this to work. Thanks to this site for guidance on how to do this.

Per that site, this is what I did to fix the issue:

Find the java.security file. In my case it is located in /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/security/java.security

Then find the row:

jdk.jar.disabledAlgorithms=MD2, MD5, RSA keySize < 1024

Comment it out, copy it, and delete the MD5 entry from the copy:

#jdk.jar.disabledAlgorithms=MD2, MD5, RSA keySize < 1024
jdk.jar.disabledAlgorithms=MD2, RSA keySize < 1024