Accept multiple SSH RSA keys with ssh-keyscan

I came across a new machine that needed to connect to many SSH hosts via Ansible. The problem was that Ansible prompted me for each host, asking whether I wanted to accept its RSA key. As I had dozens of hosts I didn’t want to type yes for every single one; furthermore, the yes command didn’t appear to work. I needed a way to automatically accept all SSH RSA keys from a list of server names. I know you can disable RSA key checking entirely, but I didn’t want to do that.

I eventually found this site which suggested a small for loop, which did the trick beautifully. I modified it to suit my needs.

This little two-liner takes a file (in my case, my ansible hosts file), runs ssh-keyscan against each host in it, and appends the results to the ~/.ssh/known_hosts file. The end result is an automated way to accept many SSH keys.

SERVER_LIST=$(cat /etc/ansible/hosts)
for host in $SERVER_LIST; do ssh-keyscan -H $host >> ~/.ssh/known_hosts; done
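
Note that an Ansible inventory usually contains more than bare hostnames (group headers like [webservers], comments, blank lines, host variables). Here is a minimal sketch of a variant that filters those out first, assuming hosts are listed one per line:

# Skip group headers, comments, and blank lines; keep only the first word of each line
SERVER_LIST=$(grep -vE '^(\[|#|$)' /etc/ansible/hosts | awk '{print $1}')
for host in $SERVER_LIST; do ssh-keyscan -H "$host" >> ~/.ssh/known_hosts; done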

Using a Bus Pirate to fix Seagate drives

I wrote these notes almost three years ago but never published them. Since I’ve now referenced them again I’ll publish them albeit in a crude state.

7200.11 BSY bug

I needed to fix a Seagate 7200.11 drive affected by the BSY firmware bug, which involved connecting to the drive’s RS-232 serial port and issuing a few commands to clear SMART data. Details here:

http://www.arvydas.co.uk/2012/07/fixing-a-seagate-7200-11-hard-drive-with-arduino/

http://hackaday.com/2012/07/30/recovering-from-a-seagate-hdd-firmware-bug/

https://plus.google.com/u/0/+BillFarrow/posts/ir1xnfu46TE

http://webcache.googleusercontent.com/search?q=cache:F1J2P5E3mrIJ:haquesprojects.com/embedded-device-hacking/using-a-bus-pirate-as-a-usb-ttl-serial-converter/+&cd=4&hl=en&ct=clnk&gl=us

http://fillwithcoolblogname.blogspot.com/2011/02/fixing-seagate-720011-bsy-0-lba-fw-bug.html

Using a Bus Pirate:

Find out what device the bus pirate is given:

dmesg | tail

usb 1-1.6.3: FTDI USB Serial Device converter now attached to ttyUSB0

Next, add your user to the dialout group (thanks to here for the hint):

sudo usermod -a -G dialout $USER

You may need to log out and log back in after issuing the above command for it to take effect.

Fire up a terminal emulator (I used screen after learning about my options from here):

screen /dev/ttyUSB0 115200 8N1

Press Enter and you should be greeted with the Bus Pirate’s HiZ> prompt. Next, enter the following:

1. m – to change the mode
2. 3 – for UART mode
3. 7 – for 38400 bps
4. 1 – for 8 bits of data, no parity control
5. 1 – for 1 stop bit
6. 1 – for Idle 1 receive polarity
7. 2 – for Normal output type

At the “UART>” prompt, enter “(0)” to show available macros:

UART>(0)
0.Macro menu
1.Transparent bridge
2.Live monitor
3.Bridge with flow control

Now enter “(3)” (don’t forget the parentheses – this burned me) to enter bridge mode with flow control, and hit “y” at the “Are you sure?” prompt. The terminal will now receive input from your device.

UART>(3)
UART bridge
Reset to exit
Are you sure?

Now plug the pins into the hard drive. Use this site as a guide for which pins to use. The drive should be upside down to expose the controller board.
BP GND (top left) to GND on drive (second pin from the left)
BP MISO (UART RX – bottom right) to TX on drive (far right pin)
BP MOSI (UART TX) to RX on drive (second pin from the right)

I only ended up needing MISO & MOSI; ground wasn’t required.

1. Unscrew the hard drive’s controller board and add a shim to prevent electrical contact
2. Power on the drive
3. CTRL+Z (to get the drive’s diagnostic terminal prompt)
4. /2 (switch to diagnostic level 2)
5. Wait 30 seconds
6. Z (spin down the motor)
7. Remove the shim and screw the controller board back down
8. U (spin the motor back up)
9. /1 (switch to level 1)
10. N1 (clear the SMART data)
11. Power down the drive, wait a few seconds, power it back up
12. CTRL+Z
13. m0,2,2,0,0,0,0,22 and press Enter (regenerate the partition)

Clear SMART data

A couple of years later I came across some old NAS drives that I wanted to use. I ran a full battery of burn-in tests using badblocks and the drives passed with flying colors. The only problem was their SMART data showed Reallocated_Sector_Ct past the threshold. Barely. I decided to roll the dice with these drives anyway, given how well they had performed in testing and over the years.

The problem is that FreeNAS will spam you with e-mail about that SMART attribute. I couldn’t find a good way to suppress those alerts while still being alerted if the number gets worse, so I decided to cheat and clear all SMART data from those drives. That keeps FreeNAS happy with me, yet it will still alert me if the reallocated sector count increases in the future.
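
For reference, you can spot-check that attribute with smartctl before and after clearing (the device name below is just an example; adjust for your system):

# Show SMART attributes for the drive and filter for the reallocated sector count
smartctl -A /dev/ada0 | grep -i reallocated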

I read a few sources to accomplish this with my bus pirate.

https://blog.zencoffee.org/2011/07/bus-pirate-as-ftdi-cable/

https://forum.hddguru.com/viewtopic.php?f=1&t=33886&start=20&mobile=mobile

https://forum.hddguru.com/viewtopic.php?f=1&t=33886&start=20

Use the same instructions as above for hooking up the Bus Pirate to the drive’s RS-232 pins (to the right of the SATA port).

Once you’ve connected to the drive over serial, it takes just three simple commands to clear the SMART data:

CTRL + Z
/1
N1

Docker – run a cron job for a container from the host

I installed Tiny Tiny RSS as a replacement for Feedly after they started inserting ads that looked like articles. Deceptive advertising. I’m not a fan.

I’ve spun up linuxserver’s version of it in Docker and it works pretty well, except for updating articles. I couldn’t find a great guide on configuring article updates specifically within a Docker container, so here is mine. My solution was to have a cron job on the Docker host run the feed update script within the Docker container, inspired by this post.

The trick is to use the docker exec command to run a command from the docker host but execute it within the running container.

docker exec -u 1001 -it TinyTinyRSS /usr/bin/php /config/www/tt-rss/update.php --feeds --quiet

The -u flag specifies which user ID to run the command as. TinyTinyRSS is the name of my container. I’ve set this to run every 15 minutes with the following crontab entry:

*/15 * * * * /usr/bin/docker exec -u 1001 -d TinyTinyRSS /usr/bin/php /config/www/tt-rss/update.php --feeds --quiet

edit: Modified the crontab entry to make it work properly per this post.
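
If you need to confirm which UID the application runs as inside the container (so the -u value matches), a couple of ways to check – this assumes the linuxserver convention of an “abc” application user:

# Show the UID of the application user inside the container
docker exec TinyTinyRSS id abc
# List the processes running in the container along with their users
docker top TinyTinyRSS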


CentOS 7 Enterprise desktop setup

These are my notes for standing up a CentOS 7 desktop in an enterprise environment.

Packages

Install the EPEL repository for a better experience:

sudo yum -y install epel-release

Desktop experience packages:

sudo yum -y install vlc libreoffice java gstreamer gstreamer1 gstreamer-ffmpeg gstreamer-plugins-good gstreamer-plugins-ugly gstreamer1-plugins-bad-freeworld gstreamer1-libav pidgin rhythmbox ffmpeg keepass xdotool ntfs-3g gvfs-fuse gvfs-smb fuse sshfs redshift-gtk stoken-gui stoken-cli

Additional packages that may come in handy

sudo yum -y install http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm
sudo yum -y install libdvdcss gstreamer{,1}-plugins-ugly gstreamer-plugins-bad-nonfree gstreamer1-plugins-bad-freeworld libde265 x265

Enable ssh:

sudo systemctl enable sshd
sudo systemctl start sshd

Google Chrome

Paste into /etc/yum.repos.d/google-chrome.repo:

[google64]
name=Google - x86_64
baseurl=http://dl.google.com/linux/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub

Then install Chrome:

sudo yum -y install google-chrome-stable

Domain

It’s just easier to use PowerBroker Open from BeyondTrust:

sudo wget -O /etc/yum.repos.d/pbiso.repo http://repo.pbis.beyondtrust.com/yum/pbiso.repo
sudo yum -y install pbis-open

Cliff notes for joining the domain:

domainname=<your_domain_name>
domain_prefix=<your_domain_netbios_name>
domainaccount=<your_domain_admin_account>

sudo domainjoin-cli join $domainname $domainaccount 
<enter password>

sudo /opt/pbis/bin/config UserDomainPrefix $domain_prefix
sudo /opt/pbis/bin/config AssumeDefaultDomain true
sudo /opt/pbis/bin/config LoginShellTemplate /bin/bash
sudo /opt/pbis/bin/config HomeDirTemplate %H/%U
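
As a quick sanity check that the join worked and domain account lookups resolve (the account name here is just a placeholder):

# Look up a known domain account through the system's name service
id <some_domain_user>
getent passwd <some_domain_user>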

Add domain admins to sudo, escaping spaces with a backslash and replacing DOMAIN with your domain:

sudo visudo
%DOMAIN\\Domain\ Administrators ALL=(ALL) ALL

Reboot to make all changes go into effect.

Certificate

You might need to copy your domain’s CA certificate to your certificate trust store:

sudo cp <CA CERT FILENAME> /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust

Drive mapping

I use a simple script that calls gvfs-mount to mount network drives. Change the suffix to match your domain and the mounts to suit your needs.

#!/bin/bash
# Simple script to mount network drives on login

suffix=<DOMAIN_SUFFIX>
MOUNTS=(
	server1$suffix/folder1
	server2$suffix/folder2
	server3$suffix/folder3
)

for i in "${MOUNTS[@]}"; do
	gvfs-mount "smb://$i"
done
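
If you save this as a script, remember to make it executable (the path here is just an example):

chmod +x ~/bin/mount-drives.sh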

Configure in gnome to run on startup:

Add the following to ~/.config/autostart/mount-drives.desktop, changing Exec= to the path of the above script.

[Desktop Entry]
Name=Mount network drives
GenericName=Mount network drives
Comment=Script to mount network drives
Exec=<location of mount script>
Terminal=false
Type=Application
X-GNOME-Autostart-enabled=true

Network Config

If you wish to add a static IP and configure your DNS suffix (search domain), run:

nm-connection-editor

The other GUI for network configuration doesn’t have an option for search domains for some reason.

Smartcard

sudo yum -y install opensc pcsc-tools pcsc-lite

Be sure to install the drivers for your particular card reader. Mine came from here and here.

After installing, you can test by starting pcscd and running pcsc_scan:

sudo systemctl start pcscd
pcsc_scan

VMware Horizon View

Smartcard support

There is a problem with how VMware View interacts with the OpenSC smartcard drivers shipped in popular Linux distributions such as CentOS and Ubuntu. View cannot load the drivers in their default configuration, so in order to get VMware View working with smartcards you need to manually patch and compile the OpenSC package (thanks to this site for the information needed to do so).

First, install the necessary development packages

sudo yum -y groupinstall "Development Tools"
sudo yum -y install openssl-devel pcsc-lite-devel

Next, download and extract opensc-0.13 from sourceforge:

wget http://downloads.sourceforge.net/project/opensc/OpenSC/opensc-0.13.0/opensc-0.13.0.tar.gz
tar zxvf opensc-0.13.0.tar.gz
cd opensc-0.13.0

Now we have to patch two specific files in the source before compiling. Create the patch file (note that the diff header lines must not be indented, and the context lines begin with a single space):

cat > opensc.patch << 'EOF'
--- ./src/pkcs11/opensc-pkcs11.exports
+++ ./src/pkcs11/opensc-pkcs11.exports
@@ -1 +1,3 @@
 C_GetFunctionList
+C_Initialize
+C_Finalize
--- ./src/pkcs11/pkcs11-spy.exports
+++ ./src/pkcs11/pkcs11-spy.exports
@@ -1 +1,3 @@
 C_GetFunctionList
+C_Initialize
+C_Finalize
EOF

patch -p1 -i opensc.patch

Next, compile and install:

./bootstrap
./configure
make
sudo make install

Assuming there were no errors, you can now link the compiled driver to the location VMware View expects it to be. Note: you must rename the library from opensc-pkcs11.so to libopensc-pkcs11.so for this to work (another lovely VMware bug):

sudo mkdir -p /usr/lib/vmware/view/pkcs11/
sudo ln -s /usr/local/lib/pkcs11/opensc-pkcs11.so /usr/lib/vmware/view/pkcs11/libopensc-pkcs11.so

Lync

Install the pidgin-sipe plugin as detailed here

sudo yum -y install pidgin pidgin-sipe

Choose “Office Communicator” as the protocol. Enter your e-mail address for the username, then go to the Advanced tab and check “Use single sign-on.”

On first run all contact names were missing. Per here, simply close and restart the application.

Gnome 3

Disable audible bell

Taken from here

Disable audible bell and enable visual bell with:

gsettings set org.gnome.desktop.wm.preferences audible-bell false
gsettings set org.gnome.desktop.wm.preferences visual-bell true

and change the type of the visual bell if you don’t need the fullscreen flash:

gsettings set org.gnome.desktop.wm.preferences visual-bell-type frame-flash

Extensions

If you can find your extension via yum it tends to work better than installing from the GNOME extensions site. If you do use the site, make sure you pick the version matching your shell:

gnome-shell --version
sudo yum -y install gnome-shell-extension-top-icons gnome-shell-extension-dash-to-dock

Other useful extensions:

backslide, multi monitors add-on, No topleft hot corner, Dropdown terminal, Media player indicator, Focus my window, Workspace indicator, Native window placement, Openweather, Panel osd, Dash to dock, Gpaste

RSA

If you have the misfortune of being in an environment that uses RSA SecurID for two-factor authentication, here is the official guide.

Necessary packages to be installed:

sudo yum -y install selinux-policy-devel policycoreutils-devel
  1. Download & extract the PAM agent, then cd into the extracted directory:
    tar -xvf PAM-Agent*.tar
  2. Create the /var/ace directory and place the necessary files inside. Create sdopts.rec and add the IP address of the desktop:
    mkdir /var/ace
    cp sdconf.rec /var/ace
    vi /var/ace/sdopts.rec
    CLIENT_IP=<IP ADDRESS OF DESKTOP>
  3. Run the install_pam script and specify UDP authentication:
    ./install_pam.sh
  4. Modify /etc/pam.d/password-auth to add the RSA authentication agent. Insert the following above the "pam_lsass.so smartcard_prompt try_first_pass" line, then comment that line out:
    auth required pam_securid.so
    auth required pam_env.so
    auth sufficient pam_lsass.so
  5. Add the new system in the RSA console: Access / Authentication Agents / Add New
  6. Test to make sure everything works:
    /opt/pam/bin/64bit/acetest

Managing Windows hosts with Ansible

I spun my wheels for a while trying to get Ansible to manage Windows hosts. Here are my notes on how I finally got Ansible (on a Linux host) to use an HTTPS WinRM connection to a Windows host, with Kerberos for authentication. This article was of great help.

Ansible Hosts file

[all:vars]
ansible_user=<user>
ansible_password=<password>
ansible_connection=winrm
ansible_winrm_transport=kerberos

Packages to install (CentOS 7)

sudo yum install gcc python2-pip
sudo pip install kerberos requests_kerberos pywinrm certifi

Playbook syntax

Modules involving Windows hosts have a win_ prefix.
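
For example, a quick connectivity test from the command line might look like this (the group name "windows" is an assumption – use whatever group your Windows hosts are in):

# Confirm WinRM connectivity with the Windows-specific ping module
ansible windows -i /etc/ansible/hosts -m win_ping
# Run a simple command on the remote Windows host
ansible windows -i /etc/ansible/hosts -m win_command -a "whoami"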

Troubleshooting

Code 500

WinRMTransportError: (u'http', u'Bad HTTP response returned from server. Code 500')

I was using -m ping for testing instead of -m win_ping. Make sure you’re using the win_ping module and not the regular ping module.

Certificate validation failed

"msg": "kerberos: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)"

I had a self-signed CA certificate on the box Ansible was trying to connect to. Python doesn’t appear to trust the system’s certificate trust chain by default. Ansible has a configuration directive,

ansible_winrm_ca_trust_path

but even with that pointing to my system trust it wouldn’t work. I then found this gem on the winrm page for ansible:

The CA chain can contain a single or multiple issuer certificates and each entry is contained on a new line. To then use the custom CA chain as part of the validation process, set ansible_winrm_ca_trust_path to the path of the file. If this variable is not set, the default CA chain is used instead which is located in the install path of the Python package certifi.

Challenge #1: I didn’t have certifi installed.

sudo pip install certifi

Challenge #2: I needed to know where certifi’s default trust store was located, which I discovered after reading the project github page

python
import certifi
certifi.where()
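
Or, as a one-liner:

python -c 'import certifi; print(certifi.where())'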

In my case the location was /usr/lib/python2.7/site-packages/certifi/cacert.pem. I then symlinked my system trust to that location (backing up the existing trust first):

sudo mv /usr/lib/python2.7/site-packages/certifi/cacert.pem /usr/lib/python2.7/site-packages/certifi/cacert.pem.old
sudo ln -s /etc/pki/tls/cert.pem /usr/lib/python2.7/site-packages/certifi/cacert.pem

Et voila! No more trust issues.

Ansible Tower

Note: If you’re running Ansible Tower, you have to work with its own bundled version of Python instead of the system version. For version 3.2 it was located here:

/var/lib/awx/venv/ansible/lib/python2.7/site-packages/requests/cacert.pem

I fixed it by doing this:

sudo mv /var/lib/awx/venv/ansible/lib/python2.7/site-packages/requests/cacert.pem /var/lib/awx/venv/ansible/lib/python2.7/site-packages/requests/cacert.pem.old
sudo ln -s /etc/pki/tls/cert.pem /var/lib/awx/venv/ansible/lib/python2.7/site-packages/requests/cacert.pem

This resolved the trust issues.

Run ProxMox in local mode when part of a quorum

When moving my desktop (which I had joined to my ProxMox cluster) to a new environment, I found I couldn’t start any VMs because there was no quorum. I couldn’t even run “pvecm expected 1” because corosync wouldn’t start. I saw these errors in the log:

Totem is unable to form a cluster because of an operating system or network fault. The most common cause of this message is that the local firewall is configured improperly.

I found in this forum post that you can tell ProxMox to run in local mode (not trying to be part of a quorum):

sudo systemctl stop pve-cluster
sudo /usr/bin/pmxcfs -l

Success! This caused my node to run in local mode, which let me run my Windows gaming VM in another location.
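
To return to normal clustered operation later, something along these lines should work (stop the locally started pmxcfs and bring the regular service back up):

# Stop the local-mode pmxcfs instance and start the clustered service again
sudo killall pmxcfs
sudo systemctl start pve-cluster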

VGA Passthrough with Threadripper

An unfortunate bug exists for the AMD Threadripper family of CPUs which causes VGA passthrough not to work properly. Fortunately some very clever people have implemented a workaround to allow proper VGA passthrough until a proper Linux kernel patch can be accepted and implemented. See here for the whole story.

Right now my Threadripper 1950X successfully has GPU passthrough thanks to HyenaCheeseHeads’ “java hack” program. I went this route because I really didn’t want to try to recompile my ProxMox kernel to get passthrough to work. Per the description: “It is a small program that runs as any user with read/write access to sysfs (this small guide assumes “root”). The program monitors any PCIe device that is connected to VFIO-PCI when the program starts; if the device disconnects due to the issues described in this post then the program tries to re-connect the device by rewriting the bridge configuration.” Instructions taken from the above Reddit post.

  • Go to https://pastebin.com/iYg3Dngs and hit “Download” (the MD5 sum is supposed to be 91914b021b890d778f4055bcc5f41002)
  • Rename the downloaded file to “ZenBridgeBaconRecovery.java” and put it in a new folder somewhere
  • Go to the folder in a terminal and type “javac ZenBridgeBaconRecovery.java”, this should take a short while and then complete with no errors. You may need to install the Java 8 JDK to get the javac command (use your distribution’s software manager)
  • In the same folder type “sudo java ZenBridgeBaconRecovery”
  • Make sure that the PCIe device that you intend to passthru is listed as monitored with a bridge
  • Now start your VM

In my case (Debian Stretch, ProxMox) I needed to install openjdk-8-jdk-headless

sudo apt install openjdk-8-jdk-headless
javac ZenBridgeBaconRecovery.java

Next, I have a little script that runs on startup to spawn this as root in a detached tmux session, so I don’t have to remember to run it. (If you try to start your VM before running this, it will hose passthrough on your system until you reboot.) Be sure to change the script to point to wherever you compiled ZenBridgeBaconRecovery.

#!/bin/bash
cd /home/nicholas  #change me to suit your needs
sudo java ZenBridgeBaconRecovery

And here is the command I use to run on startup:

tmux new -d '/home/nicholas/passthrough.sh'

Again, be sure to modify the above to point to the path of wherever you saved the above script.

So far this works pretty well for me. I hate having to run a java process as sudo, but it’s better than recompiling my kernel.


Update 6/27/2018: I’ve created a systemd service for the ZenBridgeBaconRecovery program so it runs at boot. Here is my unit file, placed in
/etc/systemd/system/zenbridge.service (change WorkingDirectory to match the location of the compiled ZenBridgeBaconRecovery file; don’t forget to run systemctl daemon-reload):

[Unit]
Description=Zen Bridge Bacon Recovery
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/home/nicholas
ExecStart=/usr/bin/java ZenBridgeBaconRecovery
# Restart can also be always, on-abort, etc.
Restart=on-failure

[Install]
WantedBy=multi-user.target
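Then reload systemd and enable the unit so it starts at boot:

sudo systemctl daemon-reload
sudo systemctl enable zenbridge.service
sudo systemctl start zenbridge.service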

Update 8/18/2018 Finally solved for everyone!

Per an update on the Reddit thread, motherboard manufacturers have finally put out BIOS updates that resolve the PCI passthrough problems. I updated my X399 Taichi to the latest version of its UEFI BIOS (3.20) and indeed PCI passthrough worked without any more wonky workarounds!

Update /etc/hosts with current IP for ProxMox

ProxMox Virtual Environment is a really nice package for managing KVM and container virtualization. One quirk is that you need an entry in /etc/hosts that points to your system’s actual IP address, not 127.0.0.1 or 127.0.1.1. I wrote a little script to grab the IP of a specified interface and add it to /etc/hosts automatically for you. You may download it here or see below:

#!/bin/bash
#A simple script to update /etc/hosts with your current IP address for use with ProxMox virtual environment
#Author: Nicholas Jeppson
#Date: 4/25/2018

###Edit these variables to your environment###
INTERFACE="enp4s0" #the interface that has the IP you want to update hosts for
DNS_SUFFIX=""
###End variables section###

#Variables you shouldn't have to change
IP=$(ip addr show $INTERFACE |egrep 'inet '| awk '{print $2}'| cut -d '/' -f1)
HOSTNAME=$(hostname)

#Use sed to add IP to first line in /etc/hosts
sed -i "1s/^/$IP $HOSTNAME $HOSTNAME$DNS_SUFFIX\n/" /etc/hosts
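
To use it, make the script executable and run it as root (the filename is just an example):

chmod +x update-proxmox-hosts.sh
sudo ./update-proxmox-hosts.sh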

Use grep, awk, and cut to display only your IP address

I needed a quick way to determine my IP address for a script. If you run the ip addr show command it outputs a lot of information I don’t need. I settled on using grep, awk, and cut to get just the information I want:

ip addr show <interface name> |egrep 'inet '| awk '{print $2}'| cut -d '/' -f1

The result is a clean IP address. Beautiful. Thanks to this site for insight into how to use cut.
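
For reference, the same result can be had with a single awk invocation (the interface name here is just an example):

# Print the IPv4 address of eth0 by splitting off the CIDR suffix
ip addr show eth0 | awk '/inet / {split($2, a, "/"); print a[1]}'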