Category Archives: OS

Fix Proxmox swapping issue

I recently had an issue with one of my Proxmox hosts where it would max out all swap and slow down to a crawl despite having plenty of physical memory free. After digging and tweaking, I found this post which directed me to set the kernel swappiness setting to 0. More reading suggested I should set it to 1, which is what I did.

Append to /etc/sysctl.conf:

#Fix excessive swap usage
vm.swappiness = 1 

Apply settings with:

sysctl --system
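
To confirm the new value is active and keep an eye on swap usage:

#Confirm the running swappiness value
cat /proc/sys/vm/swappiness
#Check how much swap is in use
free -h

If swap is already maxed out, you can force everything back into RAM once enough physical memory is free:

sudo swapoff -a && sudo swapon -a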

This did the trick for me.

Raspberry Pi as a dashboard computer

Here are my raw, unpolished notes on how I set up a Raspberry Pi to serve as a dashboard display:

Use Raspbian OS

Autostart Chrome in kiosk mode

Eliminate Chrome crash bubble thanks to this post

mkdir -p ~/.config/lxsession/LXDE-pi/
nano ~/.config/lxsession/LXDE-pi/autostart

Add this line:
@chromium-browser --kiosk --app=<URL>
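
The linked post's exact approach may differ, but the common trick for the crash bubble (assuming the default Chromium profile path on Raspbian) is to reset Chromium's exit flags before launch by adding these lines above the @chromium-browser entry:

sed -i 's/"exited_cleanly":false/"exited_cleanly":true/' ~/.config/chromium/Default/Preferences
sed -i 's/"exit_type":"Crashed"/"exit_type":"Normal"/' ~/.config/chromium/Default/Preferences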

Mouse removal

sudo apt-get install unclutter

in ~/.config/lxsession/LXDE-pi/autostart add

@unclutter -idle 5

Disable screen blank:

in /etc/lightdm/lightdm.conf add

[SeatDefaults]
xserver-command=X -s 0 -dpms

Open up SSH & VNC

Pi / Preferences / Raspberry Pi Configuration: Interfaces tab

SSH: Enable
VNC: Enable

Increase swap file

sudo nano /etc/dphys-swapfile
CONF_SWAPSIZE=2048 #value is in MB
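
The new size doesn't take effect until the swap file is rebuilt:

sudo dphys-swapfile swapoff
sudo dphys-swapfile setup
sudo dphys-swapfile swapon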

Configure NTP

sudo apt-get install openntpd ntpdate
sudo systemctl enable openntpd
sudo ntpdate <IP of NTP server>

Edit /etc/openntpd/ntpd.conf and modify the server lines to point at your NTP server.
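
For example, pointing at a hypothetical local server:

#/etc/openntpd/ntpd.conf
server 192.168.1.1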

Disable overscan

Pi / Preferences / Raspberry Pi Configuration: System tab
Overscan: Disable

Recover files from a ZFS snapshot of a Proxmox VM

I recently needed to restore a specific file that had been deleted from one of my Proxmox VMs. I didn't want to roll back the entire VM to a previous snapshot – I just wanted a single file from it. My snapshots are handled via ZFS on FreeNAS.

Since my VM was CentOS 7 it uses XFS, which made things a bit more difficult. I couldn't find a way to mount a read-only XFS snapshot – it simply refused to mount – so I had to make everything read/write. Below is the process I used to recover my file:

On the FreeNAS server, find the snapshot you wish to clone:

sudo zfs list -t snapshot -o name -s creation -r DATASET_NAME

Next, clone the snapshot:

sudo zfs clone SNAPSHOT_NAME CLONED_SNAPSHOT_NAME
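
For example, with a hypothetical dataset and snapshot name:

sudo zfs clone tank/vms/centos7@auto-20190101 tank/centos7-restore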

Next, on a Linux box, use SSHFS to mount the snapshot:

mkdir Snapshot
sshfs -o allow_other user@freenas:/mnt/CLONED_SNAPSHOT_NAME Snapshot/

Now create a read/write loopback device following instructions found here:

sudo -i #easy lazy way to get past permissions issues
cd /path/to/Snapshot/folder/created/above
losetup -P -f VM_DISK_FILENAME.raw
losetup 
#Take note of output, it's likely set to /dev/loop0 unless you have other loopbacks

Note: if your VM files are not in raw format, extra steps will be needed to convert them to raw format.
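
For example, a qcow2 disk can be converted with qemu-img (filenames are hypothetical):

qemu-img convert -O raw VM_DISK_FILENAME.qcow2 VM_DISK_FILENAME.raw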

Now we have an SSH-mounted loopback device ready for mounting. Things are complicated if your VM uses LVM, which mine does (CentOS 7). Once the loopback device is set, lvscan should see the image's logical volumes. Make the desired volume active:

sudo lvscan
sudo lvchange -ay /dev/VG_NAME/LV_NAME

Now you can mount your volume:

mkdir Restore
mount /dev/VG_NAME/LV_NAME Restore/

Note: for XFS you must have read/write capability on the loopback device for this to work.
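
If the mount fails complaining about a duplicate UUID (an XFS quirk when a filesystem with the same UUID is already mounted), the nouuid option gets around it:

mount -o nouuid /dev/VG_NAME/LV_NAME Restore/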

When you're done, do your steps in reverse to unmount the snapshot:

#Unmount snapshot
umount Restore
#Deactivate LVM
lvchange -an /dev/VG_NAME/LV_NAME
#Remove loopback device
losetup -d /dev/loop0 #or whatever the loopback device was
#Unmount SSHfs mount to ZFS server
umount Snapshot

Finally, on the ZFS server, delete the snapshot:

sudo zfs destroy CLONED_SNAPSHOT_NAME

Troubleshooting

When I tried to mount the LVM partition at this point I got this error message:

mount: /dev/mapper/centos_plexlocal-root: can't read superblock

It ended up being because I was accidentally creating a read-only loopback device. I destroyed the loopback device and re-created it with write support, and all was well.

Upgrading AWX

AWX is the open source version of Ansible Tower. It's a powerful tool, but unfortunately AWX has no in-place upgrade capability. If you want to upgrade AWX to the latest version it takes a bit of trickery (the easy way out being simply to pay for Ansible Tower).

Essentially to upgrade AWX you need to spin up a completely new instance and then migrate your data over to it. Fortunately there is a script out there that makes doing this a bit easier.

Below are my notes for how I upgraded my instance of AWX from version 1.0.6 to 2.1.0.

Create temporary AWX migration server

Spin up new server with ansible installed, then clone AWX

git clone https://github.com/ansible/awx.git 
cd awx 
git clone https://github.com/ansible/awx-logos.git

Modify AWX install to expose 5432 externally:

edit installer/roles/local_docker/tasks/standalone.yml and add

    ports:
      - "5432:5432" 

right above the when: pg_hostname is not defined or pg_hostname == '' line. The complete stanza looks like this:

- name: Activate postgres container
  docker_container:
    name: postgres
    state: started
    restart_policy: unless-stopped
    image: "{{ postgresql_image }}"
    volumes:
      - "{{ postgres_data_dir }}:/var/lib/postgresql/data:Z"
    env:
      POSTGRES_USER: "{{ pg_username }}"
      POSTGRES_PASSWORD: "{{ pg_password }}"
      POSTGRES_DB: "{{ pg_database }}"
      PGDATA: "/var/lib/postgresql/data/pgdata"
    ports:
      - "5432:5432"
  when: pg_hostname is not defined or pg_hostname == ''
  register: postgres_container_activate

Make sure you have port 5432 open on your host-based firewall.
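
For example, on a host running firewalld:

sudo firewall-cmd --add-port=5432/tcp --permanent
sudo firewall-cmd --reload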

Install AWX on the new host. Verify you can log into the empty instance and that it’s the version you want to upgrade to.

Prepare original AWX server to send

Kill the AWX postgres container on the source machine, and re-run the AWX installer after modifying it to expose its postgres port as described above.

Install tower-cli (this can be on either source or destination servers)

sudo pip install ansible-tower-cli

Configure tower-cli

tower-cli config username SRC_AWX_USERNAME
tower-cli config password SRC_AWX_PASSWORD
tower-cli config host SRC_AWX_HOST

Make sure to use the full AWX URL as accessed from a browser for both source and destination.

Install awx-migrate:

git clone https://github.com/autops/awx-migrate.git

Update awx-migrate/awx-migrate-wrapper with correct source and destination info

Run awx-migrate-wrapper. It will generate json files with your configuration.

Migrate database to temporary server

Modify tower-cli config, set host, username and password to that of the destination AWX instance

tower-cli config username DEST_AWX_USERNAME
tower-cli config password DEST_AWX_PASSWORD
tower-cli config host DEST_AWX_HOST

Send JSON info to destination:

tower-cli send awx-data.json

You will now have a fresh, updated AWX instance running on the destination host with your imported database. Confirm you can log into it with the admin account you set it up with.

Prepare original AWX server to receive

Now, on the source, remove the old AWX docker containers:

sudo docker rm -f postgres awx_task awx_web memcached rabbitmq

Move or delete the database folder the postgres docker container was using (as defined in the AWX installer inventory). In my case:

/var/lib/awx
/var/db/pgsqldocker
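
Moving them out of the way rather than deleting them outright looks something like this:

sudo mv /var/lib/awx /var/lib/awx.old
sudo mv /var/db/pgsqldocker /var/db/pgsqldocker.old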

Remove and re-install AWX folder with a fresh git checkout

rm -rf awx
git clone https://github.com/ansible/awx.git
cd awx
git clone https://github.com/ansible/awx-logos.git

Re-run the AWX installer to re-create a blank database on the source host, modify the new awx/installer/inventory as needed. Also modify installer/roles/local_docker/tasks/standalone.yml as outlined above.

cd awx/installer
sudo ansible-playbook -i inventory install.yml

Migrate from temporary AWX server back to source AWX server

Once a new, empty version of AWX is running on the source host, start the awx-migrate process in reverse to migrate the database on the destination instance back to the source. Modify awx-migrate-wrapper and tower-cli to swap source and destination (the destination has become the source and vice versa).

Use awx-migrate-wrapper to generate new JSON files (don't confuse them with the old JSON files – best to delete or move all JSON files before running awx-migrate-wrapper).

Modify tower-cli to point to original AWX URL

Run tower-cli send awx-data.json

Once completed, log in as the admin account. Input LDAP BIND password under settings, then delete any imported LDAP users.

Cleanup

You may want to remove the exposed postgres database ports. Simply undo the changes you made in awx/installer/roles/local_docker/tasks/standalone.yml to remove the ports section from the first play, then remove your postgres container and re-install AWX with install.yml.

Also remember to delete the JSON files generated with awx-migrate as they contain all your credentials in plaintext.

Success.

Export multiple resolutions in Lightroom

I needed to one-click export multiple resolutions of pictures from Lightroom. Unfortunately there isn't any kind of plugin available to do this. Fortunately I was able to find this guide on how to get an AppleScript to do it for me (Mac only, sadly).

The trick is to write a few bits of AppleScript and save it as an application. Then, when exporting the pictures in Lightroom, make sure the word "fullsize" is in the filename, and configure Lightroom to run your AppleScript after export.

I tweaked the script a bit to move the full size version to a different folder, then open Finder to the folder the other resolutions were created in (thanks to this site for the guidance).

Here is my script below. It works!

on open of myFiles
	-- Resolutions (long edge, in pixels) to generate
	set newSizes to {5000}
	
	repeat with aFile in myFiles
		set filePath to aFile's POSIX path
		set bPath to (do shell script "dirname " & quoted form of filePath)
		tell application "System Events" to set fileName to aFile's name
		-- Create a resized copy for each target resolution
		repeat with newSize in newSizes
			do shell script "sips " & quoted form of aFile's POSIX path & " -Z " & newSize & " --out " & quoted form of (bPath & "/" & rename(fileName, newSize) as text)
		end repeat
		-- Move the original full size export out of the way
		do shell script "mv " & quoted form of aFile's POSIX path & " /path/to/folder/for/fullres/images"
	end repeat
	-- Show the resized images in Finder
	do shell script "open /path/to/folder/of/resized/images"
end open

-- Replace "fullsize" in the filename with the target resolution
on rename(fName, fSize)
	do shell script "sed 's/fullsize/" & fSize & "/' <<< " & quoted form of fName
end rename

Configure ACLs in Linux

I came across a need to make files in a folder inherit certain permissions no matter who creates them. Thanks to Stack Overflow for help in figuring this out.

You first set the setgid bit on the parent folder (so new files inherit its group), then use the setfacl command to set a default ACL:

chmod g+s -R <folder>
setfacl -d -m "g:<group name>:<permissions>" -R <directory>

Example: grant all members of group testgrouprw read, write, and execute (on directories) permissions to /var/www/html/wordpress:

setfacl -d -m "g:testgrouprw:rwX" -R /var/www/html/wordpress/
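
You can verify the resulting default ACL with getfacl:

getfacl /var/www/html/wordpress/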

LDAP nested group membership query

I have a lot of applications at work which do not support Active Directory but instead rely on LDAP queries for granting user access. A problem we have is that much of our access is granted via a security group (known as a ROLE), and users are added to that single security group to get access to things. This allows easier access granting for new hires and transfers. The problem is it makes LDAP queries much more difficult. Things are further complicated by the fact that sometimes users are directly granted access to resources instead of going through their ROLE security group.

Nested LDAP group search

I spent a lot of time researching LDAP nested group queries. I now have a functional way of doing semi-nested LDAP group searches. The scenario: a user could be directly added to a security group granting access to a resource, or could be a member of a security group which has access to the resource. I want the LDAP group search string to account for both. I accomplish this by combining these two queries:

Nested group membership query

Search groups beginning with the name ROLE for a specific member, then return what that ROLE group has access to

(&(objectClass=group)(DisplayName=ROLE*)(member=FQDN_OF_USER_IN_QUESTION)(memberOf=*))

Individually added group query

Search for all groups a specified member is a member of

(&(objectClass=user)(sAMAccountName=USERNAME_OF_USER_IN_QUESTION)(memberOf=*))

I combine these two queries by separating them out with an OR operator (|)

Combined query

Return the group membership of the user in question, as well as the group membership of the group beginning with the name ROLE that the user is a member of

(|(&(objectClass=group)(DisplayName=ROLE*)(member=FQDN_OF_USER_IN_QUESTION)(memberOf=*))(&(objectClass=user)(sAMAccountName=USERNAME_OF_USER_IN_QUESTION)(memberOf=*)))

It has three main parts:

  • Begin with an OR operator: (|
  • Add the first group with an AND operator: (&
    • This requires everything in that sub-query to be true
  • Add the second group, also with an AND operator

This works for our organization because ROLE groups are not nested within themselves and each user can only have one ROLE group assigned to them.

This combined query allows me to not have to “flatten” security groups for LDAP-bound applications. It makes me so happy.
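
If you want to sanity-check the combined filter from the command line, ldapsearch works well (bind account, base DN, and server below are hypothetical):

ldapsearch -x -D "BIND_ACCOUNT_DN" -W -b "BASE_DN" -H ldaps://ldap.example.com "(|(&(objectClass=group)(DisplayName=ROLE*)(member=FQDN_OF_USER_IN_QUESTION)(memberOf=*))(&(objectClass=user)(sAMAccountName=USERNAME_OF_USER_IN_QUESTION)(memberOf=*)))" memberOf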

This was made possible by a flurry of stack overflow posts:

https://stackoverflow.com/questions/32829104/ldap-query-with-wildcard

https://stackoverflow.com/questions/9564120/using-wildcards-in-ldap-search-filters-queries

https://stackoverflow.com/questions/6195812/ldap-nested-group-membership

https://stackoverflow.com/questions/1032351/how-to-write-ldap-query-to-test-if-user-is-member-of-a-group/1032426

https://www.novell.com/coolsolutions/feature/16671.html

Active Directory / LDAP integration with WordPress

I struggled for a while to get WordPress to use Active Directory credentials on CentOS 7. Below is how I finally got it to work.

First, install necessary packages:

sudo yum -y install openldap-clients php-ldap

If you use a self-signed certificate for LDAPS, you'll need to modify /etc/openldap/ldap.conf:

HOST <HOSTNAME_OF_LDAP_SERVER>
PORT 636
TLS_CACERT <PATH_TO_CA_CERT>
TLS_REQCERT demand

With the above settings you can test your LDAP string with ldapsearch:

ldapsearch -x -D "<BIND USERNAME>" -b "<BASE_DN>" -H ldaps://<LDAP_SERVER_HOSTNAME> -W sAMAccountName=<USER_TO_QUERY>

Once ldapsearch works properly, install your AD integration plugin. I use AuthLDAP by Andreas Heigl.

I struggled with which LDAP strings and filters to use. This is what finally got everything working with our Active Directory environment:

LDAP URI: ldaps://<BIND_USERNAME>:<BIND_PASSWORD>@<AD_SERVER_ADDRESS>:636/<BASE DN>

Filter: (sAMAccountName=%s)

Name-Attribute: givenName

User-ID Attribute: sAMAccountName

Second Name Attribute: sn

Group-Attribute: memberOf

Group-Separator: _

Group-Filter: (&(objectClass=user)(sAMAccountName=%s)(memberOf=*))

Role – group mapping

I had to change Group-Separator to _ above because in Role – group mapping for Active Directory, you must put the FQDN, which includes commas. Put an underscore-separated list of FQDNs in each of these fields you want.
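
For example, a hypothetical mapping of two AD groups to a single WordPress role would be entered as one underscore-separated string:

CN=ROLE_WebAdmins,OU=Groups,DC=example,DC=com_CN=WebEditors,OU=Groups,DC=example,DC=com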

Using expect with the Ansible shell module

In one of my Ansible playbooks I need to obtain a file from a Windows share. I can't find a module that handles this, so I'm using the shell module to call the smbclient command to do what I need. The problem with this solution is that smbclient prompts for a password (and I don't want to supply it on the command itself for security reasons).

I tried using Ansible's built-in expect module, but frustratingly it only works on systems that have pexpect >= 3.3, which CentOS 7 & Ubuntu 14.04 do not have.

My solution to this is to install the expect command on the host and then use the Ansible shell module to call it, following the example given on Ansible's shell module page.

Part of the process in my playbook is registering stdout from that command for later use. I then ran into a problem where I would run smbclient -c "ls <filename>" but Ansible would register nothing. After some digging I found I also needed to include the interact command after sending the password. Without it, anything after sending the password is not registered to stdout. Thanks to rostyslav-fridman on Stack Overflow for the answer.

My final problem was that I was sending a password with a ] character in it. It was causing this error on run:

missing close-bracket\n while executing\n\"send

I found here (thanks glenn-jackman) that it was due to the fact that expect syntax is Tcl, which treats those brackets as special characters. To get around this I had to use an Ansible filter, specifically regex_escape().

Lastly, I ran into an issue specifically with how I was spawning smbclient. I kept getting this message:

"stderr": "send: spawn id exp4 not open\n while executing\n\"send

It boiled down to single vs double quotes. If I put my -c arguments in single quotes it failed; with double quotes, it worked.

My completed play is below. Finally, success!

- name: Get RSA filename
  #The shell body below is an expect script (note the executable arg)
  shell: |
    set timeout 300
    spawn smbclient {{standards_location}} -W DOMAIN -U {{username}} -c "ls {{file_location}}*.tar"
    expect "password:"
    send {{ password | regex_escape() }}\r
    interact
    exit 0
  args:
    executable: /usr/bin/expect
  changed_when: false
  no_log: true #keep the password out of logs
  register: RSA_filename_raw

Gaming VM with graphics passthrough in Arch Linux

At one point I had KVM with GPU passthrough running in Arch Linux. I have since moved away from it, back to Proxmox. Here are the notes I jotted down when I did this in Arch. Sorry these are just rough notes; I didn't end up using Arch for long enough to turn this into a polished article.


pacman -Sy qemu netctl ovmf virt-manager

When creating VM, make sure chipset is Q35

CPU model host-passthrough (write it in)

Create VirtIO SCSI controller and attach drives to it

NIC device model: virtio

---- networking ----

Create bridge:

https://wiki.archlinux.org/index.php/Bridge_with_netctl

Copy /etc/netctl/examples/bridge to /etc/netctl/bridge

/etc/netctl/bridge
Description="Example Bridge connection"
Interface=br0
Connection=bridge
BindsToInterfaces=(enp4s0)
IP=dhcp

#Optional - give your system another IP for host-only networking
ExecUpPost="ip addr add 192.168.2.1/24 dev br0"
sudo netctl reenable bridge
sudo netctl restart bridge
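
Verify the bridge came up and got an address:

ip addr show br0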

In the VM add another network interface, also assign to br0. Manually specify IP in guest VM to match subnet specified above in ExecUpPost

https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF

Allow UEFI bios: https://wiki.archlinux.org/index.php/libvirt#UEFI_Support

sudo vim /etc/libvirt/qemu.conf

/etc/libvirt/qemu.conf
nvram = [
    "/usr/share/ovmf/x64/OVMF_CODE.fd:/usr/share/ovmf/x64/OVMF_VARS.fd"
]

sudo systemctl restart libvirtd

Edit VM hardware:

CLI: sudo virsh edit <vm name>

GUI: double-click on the VM, then click the second icon from the left (little i bubble). Add the GPU this way.

Nvidia GPU: you need to add the following to the domain XML, otherwise the driver throws error code 43:

<features>
	<hyperv>
		...
		<vendor_id state='on' value='whatever'/>
		...
	</hyperv>
	...
	<kvm>
	<hidden state='on'/>
	</kvm>
</features>

Hot add CD:

sudo virsh attach-disk <VM_NAME> <ISO LOCATION> hdb --type cdrom

Add second NIC: https://jamielinux.com/docs/libvirt-networking-handbook/bridged-network.html

sudo virsh edit win10

<interface type="bridge">
   <source bridge="br1"/>
</interface>

CPU configuration

Current Allocation 16

Topology / manually set CPU topology

1 socket, 16 cores, 1 thread

<cputune>
<vcpupin vcpu='0' cpuset='16'/>
<vcpupin vcpu='1' cpuset='17'/>
<vcpupin vcpu='2' cpuset='18'/>
<vcpupin vcpu='3' cpuset='19'/>
<vcpupin vcpu='4' cpuset='20'/>
<vcpupin vcpu='5' cpuset='21'/>
<vcpupin vcpu='6' cpuset='22'/>
<vcpupin vcpu='7' cpuset='23'/>
<vcpupin vcpu='8' cpuset='24'/>
<vcpupin vcpu='9' cpuset='25'/>
<vcpupin vcpu='10' cpuset='26'/>
<vcpupin vcpu='11' cpuset='27'/>
<vcpupin vcpu='12' cpuset='28'/>
<vcpupin vcpu='13' cpuset='29'/>
<vcpupin vcpu='14' cpuset='30'/>
<vcpupin vcpu='15' cpuset='31'/>
</cputune>

Running Windows 10 on Linux using KVM with VGA Passthrough

--machine q35 \
--host-device 4b:00.0 --host-device 4b:00.1 \

https://medium.com/@calerogers/gpu-virtualization-with-kvm-qemu-63ca98a6a172

Add USB ports. Doesn't seem to work if nothing is plugged into the port?

virsh edit win10

<hostdev mode='subsystem' type='usb' managed='yes'>
<source>
<address bus='3' device='2'/>
</source>
<address type='usb' bus='0' port='2'/>
</hostdev>

Remove Tablet input device to get 4th USB passthrough option

Troubleshooting

internal error: Unknown PCI header type '127'

https://forum.level1techs.com/t/trouble-passing-though-an-rx-580-to-an-ubuntu-desktop-vm/123376/3

Threadripper PCI Reset bug: https://www.reddit.com/r/Amd/comments/7gp1z7/threadripper_kvm_gpu_passthru_testers_needed/

Error 43: apply the <hyperv> vendor_id / <kvm> hidden state tweak shown in the Nvidia GPU section above.

Audio cuts out whenever microphone is used

I had a very odd issue where all sound disappeared in my Windows VM if the microphone was used. Even simply opening the audio properties and going to the Recording tab triggered this issue. Disabling and re-enabling Special Effects for the playback device brought sound back until the microphone was accessed again.

I'm using a USB sound card passed through to the VM for audio. The problem stems from the VM's USB controller: when I had it set to USB3 the issue occurred; when set to USB2 it went away. Bizarre.
I’m using USB sound card passed through to the VM for audio. It stems from the VM’s USB controller. When I had it set to USB3 the issue would occur. When set to USB2 the issue went away. Bizarre.