Tag Archives: CentOS 7

Recover files from ZFS snapshot of ProxMox VM

I recently needed to restore a specific file from one of my ProxMox VMs that had been deleted. I didn’t want to roll back the entire VM from a previous snapshot – I just wanted a single file from the snapshot. My snapshots are handled via ZFS using FreeNAS.

Since my VM runs CentOS 7 it uses XFS, which made things a bit more difficult. I couldn’t find a way to mount the crash-consistent XFS snapshot read-only – it simply refused to mount, so I had to make everything read/write. Below is the process I used to recover my file:

On the FreeNAS server, find the snapshot you wish to clone:

sudo zfs list -t snapshot -o name -s creation -r DATASET_NAME

Next, clone the snapshot:

sudo zfs clone SNAPSHOT_NAME CLONED_SNAPSHOT_NAME

Next, on a Linux box, use SSHFS to mount the snapshot:

mkdir Snapshot
sshfs -o allow_other user@freenas:/mnt/CLONED_SNAPSHOT_NAME Snapshot/

Now create a read/write loopback device following instructions found here:

sudo -i #easy lazy way to get past permissions issues
cd /path/to/Snapshot/folder/created/above
losetup -P -f VM_DISK_FILENAME.raw
losetup 
#Take note of output, it's likely set to /dev/loop0 unless you have other loopbacks

Note: if your VM disk files are not in RAW format, you’ll need to take extra steps to convert them to RAW first.
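
For example, qemu-img can convert a qcow2 image to RAW – a sketch with placeholder filenames:

#Convert a qcow2 disk image to RAW (example filenames)
qemu-img convert -f qcow2 -O raw VM_DISK_FILENAME.qcow2 VM_DISK_FILENAME.raw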

Now we have an SSH-mounted loopback device ready for mounting. Things are more complicated if your VM uses LVM, as mine does (CentOS 7). Once the loopback device is set, lvscan should see the image’s logical volumes. Make the desired volume active:

sudo lvscan
sudo lvchange -ay /dev/VG_NAME/LV_NAME

Now you can mount your volume:

mkdir Restore
mount /dev/VG_NAME/LV_NAME Restore/

Note: for XFS you must have read/write capability on the loopback device for this to work.

When you’re done, reverse the steps to unmount the snapshot:

#Unmount snapshot
umount Restore
#Deactivate LVM
lvchange -an /dev/VG_NAME/LV_NAME
#Remove loopback device
losetup -d /dev/loop0 #or whatever the loopback device was
#Unmount SSHfs mount to ZFS server
umount Snapshot

Finally, on the ZFS server, delete the snapshot:

sudo zfs destroy CLONED_SNAPSHOT_NAME

Troubleshooting

When I tried to mount the LVM partition at this point I got this error message:

mount: /dev/mapper/centos_plexlocal-root: can't read superblock

It ended up being because I was accidentally creating a read-only loopback device. I destroyed the loopback device and re-created it with write support, and all was well.

Upgrading AWX

AWX is the open source version of Ansible Tower. It’s a powerful tool, but unfortunately AWX has no in-place upgrade capability. If you want to upgrade your AWX to the latest version it takes a bit of trickery (the easy way out is to just pay for Ansible Tower).

Essentially to upgrade AWX you need to spin up a completely new instance and then migrate your data over to it. Fortunately there is a script out there that makes doing this a bit easier.

Below are my notes for how I upgraded my instance of AWX from version 1.0.6 to 2.1.0.

Create temporary AWX migration server

Spin up new server with ansible installed, then clone AWX

git clone https://github.com/ansible/awx.git 
cd awx 
git clone https://github.com/ansible/awx-logos.git

Modify AWX install to expose 5432 externally:

Edit installer/roles/local_docker/tasks/standalone.yml and add

    ports:
      - "5432:5432" 

right above the when: pg_hostname is not defined or pg_hostname == '' line. The complete stanza looks like this:

- name: Activate postgres container
  docker_container:
    name: postgres
    state: started
    restart_policy: unless-stopped
    image: "{{ postgresql_image }}"
    volumes:
      - "{{ postgres_data_dir }}:/var/lib/postgresql/data:Z"
    env:
      POSTGRES_USER: "{{ pg_username }}"
      POSTGRES_PASSWORD: "{{ pg_password }}"
      POSTGRES_DB: "{{ pg_database }}"
      PGDATA: "/var/lib/postgresql/data/pgdata"
    ports:
      - "5432:5432"
  when: pg_hostname is not defined or pg_hostname == ''
  register: postgres_container_activate

Make sure you have port 5432 open on your host-based firewall.
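
On CentOS 7 the host firewall is typically firewalld; something like this should open the port (an assumption – adjust for whatever firewall you run):

sudo firewall-cmd --permanent --add-port=5432/tcp
sudo firewall-cmd --reload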

Install AWX on the new host. Verify you can log into the empty instance and that it’s the version you want to upgrade to.

Prepare original AWX server to send

Kill the AWX postgres container on the source machine, then re-run the AWX installer after modifying it to expose its postgres port as described above.
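
In my case that boiled down to something like this (assuming the default container name and the installer invocation from earlier):

#Remove the old postgres container, then re-run the installer with the ports change in place
sudo docker rm -f postgres
cd awx/installer
sudo ansible-playbook -i inventory install.yml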

Install tower-cli (this can be on either source or destination servers)

sudo pip install ansible-tower-cli

Configure tower-cli

tower-cli config username SRC_AWX_USERNAME
tower-cli config password SRC_AWX_PASSWORD
tower-cli config host SRC_AWX_HOST

Make sure to use the full AWX URL as accessed from a browser, for both source and destination.
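
For example (hypothetical URL):

tower-cli config host https://awx.example.com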

Install awx-migrate:

git clone https://github.com/autops/awx-migrate.git

Update awx-migrate/awx-migrate-wrapper with the correct source and destination info.

Run awx-migrate-wrapper. It will generate JSON files containing your configuration.

Migrate database to temporary server

Modify the tower-cli config, setting host, username and password to those of the destination AWX instance:

tower-cli config username DEST_AWX_USERNAME
tower-cli config password DEST_AWX_PASSWORD
tower-cli config host DEST_AWX_HOST

Send JSON info to destination:

tower-cli send awx-data.json

You will now have a fresh, updated AWX instance with your imported database running on the destination host. Confirm you can log into it with the admin account you set it up with.

Prepare original AWX server to receive

Now, on the source, remove the old AWX docker containers:

sudo docker rm -f postgres awx_task awx_web memcached rabbitmq

Move or delete the database folder the postgres docker container was using (as defined in the AWX installer inventory). In my case these were:

/var/lib/awx
/var/db/pgsqldocker
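
For example, to move them aside while keeping a backup:

sudo mv /var/lib/awx /var/lib/awx.old
sudo mv /var/db/pgsqldocker /var/db/pgsqldocker.old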

Remove and re-install the AWX folder with a fresh git checkout:

rm -rf awx
git clone https://github.com/ansible/awx.git
cd awx
git clone https://github.com/ansible/awx-logos.git

Re-run the AWX installer to re-create a blank database on the source host, modifying the new awx/installer/inventory as needed. Also modify installer/roles/local_docker/tasks/standalone.yml as outlined above:

cd awx/installer
sudo ansible-playbook -i inventory install.yml

Migrate from temporary AWX server back to source AWX server

Once a new, empty version of AWX is running on the source host, run the awx-migrate process in reverse to migrate the database on the temporary instance back to the source. Modify awx-migrate-wrapper and tower-cli to swap source and destination (the destination has become the source and vice versa).

Use awx-migrate-wrapper to generate new JSON export files (don’t confuse them with the old JSON files – it’s best to delete or move all JSON files before running awx-migrate-wrapper).

Modify tower-cli to point to the original AWX URL.

Run tower-cli send awx-data.json

Once completed, log in as the admin account. Enter the LDAP BIND password under settings, then delete any imported LDAP users.

Cleanup

You may want to remove the exposed postgres database port. Simply undo the changes you made in awx/installer/roles/local_docker/tasks/standalone.yml to remove the ports section from the postgres task, then remove your postgres container and re-install AWX with install.yml.

Also remember to delete the JSON files generated by awx-migrate, as they contain all your credentials in plaintext.
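
For example (the filename glob is an assumption – match it to whatever awx-migrate generated):

#Securely delete the exported credential files
shred -u awx*.json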

Success.


Active Directory / LDAP integration with WordPress

I struggled for a while to get WordPress to use Active Directory credentials on CentOS 7. Below is how I finally got it to work.

First, install necessary packages:

sudo yum -y install openldap-clients php-ldap

If you use a self-signed certificate for ldaps, you’ll need to modify /etc/openldap/ldap.conf:

HOST <HOSTNAME_OF_LDAP_SERVER>
PORT 636
TLS_CACERT <PATH_TO_CA_CERT>
TLS_REQCERT demand

With the above settings you can test your LDAP connection with ldapsearch:

ldapsearch -x -D "<BIND USERNAME>" -b "<BASE_DN>" -H ldaps://<LDAP_SERVER_HOSTNAME> -W sAMAccountName=<USER_TO_QUERY>

Once ldapsearch works properly, install your AD integration plugin. I use AuthLDAP by Andreas Heigl.

I struggled with which LDAP strings and filters to use. This is what finally got everything working with our Active Directory environment:

LDAP URI: ldaps://<BIND_USERNAME>:<BIND_PASSWORD>@<AD_SERVER_ADDRESS>:636/<BASE DN>

Filter: (sAMAccountName=%s)

Name-Attribute: givenName

User-ID Attribute: sAMAccountName

Second Name Attribute: sn

Group-Attribute: memberOf

Group-Separator: _

Group-Filter: (&(objectClass=user)(sAMAccountName=%s)(memberOf=*))

Role – group mapping

I had to change the Group-Separator to _ above because the Role – group mapping fields for Active Directory require each group’s full DN, which contains commas. Put an underscore-separated list of group DNs in each of the fields you want to map, as in the example below.
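
For example, a mapping entry for two groups might look like this (hypothetical DNs):

CN=WordPress Admins,OU=Groups,DC=example,DC=com_CN=WordPress Editors,OU=Groups,DC=example,DC=com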

Track and log unclean shutdowns in CentOS 7

I needed to find a way to track if my CentOS 7 systems reboot unexpectedly. I was surprised that this isn’t something the OS does by default. I found this article from Red Hat, which outlines how you basically have to write a couple of systemd units yourself if you want this functionality. So, I did.

I ended up with three separate systemd services that accomplish what I want:

  • set_graceful_shutdown: Runs just before shutdown. Creates the file /root/graceful_shutdown
  • log_ungraceful_shutdown: Runs on startup. Checks whether /root/graceful_shutdown is missing and, if it is, logs that fact to /var/log/shutdown.log
  • reset_shutdown_flag: Runs after log_ungraceful_shutdown. Checks for the presence of the flag file and removes it if it exists

I placed these three files into /etc/systemd/system, then ran systemctl daemon-reload and systemctl enable for each one:
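
sudo systemctl daemon-reload
sudo systemctl enable set_graceful_shutdown.service
sudo systemctl enable log_ungraceful_shutdown.service
sudo systemctl enable reset_shutdown_flag.service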

set_graceful_shutdown.service

[Unit]
Description=Set flag for graceful shutdown
DefaultDependencies=no
RefuseManualStart=true
Before=shutdown.target

[Service]
Type=oneshot
ExecStart=/bin/touch /root/graceful_shutdown

[Install]
WantedBy=shutdown.target

log_ungraceful_shutdown.service

[Unit]
Description=Log ungraceful shutdown
ConditionPathExists=!/root/graceful_shutdown
RefuseManualStart=true
RefuseManualStop=true
Before=reset_shutdown_flag.service

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/sh -c "echo $$(date): Improper shutdown detected >> /var/log/shutdown.log"

[Install]
WantedBy=multi-user.target

reset_shutdown_flag.service

[Unit]
Description=Check if previous system shutdown was graceful
ConditionPathExists=/root/graceful_shutdown
RefuseManualStart=true
RefuseManualStop=true

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/rm /root/graceful_shutdown

[Install]
WantedBy=multi-user.target

It feels like a kludge, but it works pretty well. The result is that I get an entry in a log file if the system wasn’t shut down properly.

Linux Samba shares using Kerberos / AD credentials

I had a hell of a time trying to figure out why the Samba shares quit working after upgrading the CentOS Samba package. Every time someone tried to access a share, the smb service would crash. I had this system configured to use Active Directory credentials; it worked well for a time, but no longer.

After much digging I found my problem to be the lack of a krb5.keytab file. This was due to my using PowerBroker Open instead of Kerberos for authentication.

The solution was to add this line to my samba config:

kerberos method = system keytab

That one bit made all the difference. My current samba config, which no longer crashes, is as follows (updated 8/29 to add the workgroup name):

[global]
     security = ADS
     passdb backend = tdbsam
     realm = DOMAIN
     workgroup = NETBIOS_DOMAIN_NAME
     encrypt passwords = yes
     lanman auth = no
     ntlm auth = no
     kerberos method = system keytab
     obey pam restrictions = yes
     winbind enum users = yes
     winbind enum groups = yes

Update 8/29/2018: After updating and rebooting my smb service refused to start. It kept giving this very unhelpful message:

 ../source3/auth/auth_util.c:1399(make_new_session_info_guest)
create_local_token failed: NT_STATUS_NO_MEMORY
../source3/smbd/server.c:2011(main)
ERROR: failed to setup guest info.
smb.service: main process exited, code=exited, status=255/n/a
Failed to start Samba SMB Daemon.

I couldn’t find any documentation on this and eventually resorted to just messing around with my smb.conf file. What fixed it was adding this to my configuration:

workgroup = NETBIOS_DOMAIN_NAME

Replace NETBIOS_DOMAIN_NAME with your company’s old NetBIOS-style domain name (what you would put in the domain part of domain\username when logging in). It worked!

Fix USB bluetooth in KDE Plasma on CentOS 7

I spent too many hours trying to figure this stupid thing out.. but FINALLY! I have my bluetooth headset working in CentOS 7 with the KDE 4 Plasma environment. Read on if you dare…

First, you must configure dbus to allow your user to use the bluetooth dongle. Add the following to /etc/dbus-1/system.d/bluetooth.conf, above the closing </busconfig> tag. Be sure to replace USERNAME with your user account:

sudo nano /etc/dbus-1/system.d/bluetooth.conf
  <policy user="USERNAME">
    <allow send_destination="org.bluez"/>
    <allow send_interface="org.bluez.Agent1"/>
    <allow send_interface="org.bluez.GattCharacteristic1"/>
    <allow send_interface="org.bluez.GattDescriptor1"/>
    <allow send_interface="org.freedesktop.DBus.ObjectManager"/>
    <allow send_interface="org.freedesktop.DBus.Properties"/>
  </policy>

Remove the adapter and plug it back in.

Next, follow Arch Linux’s excellent guide on how to pair a bluetooth device using bluetoothctl:


bluetoothctl
[bluetooth]# power on
[bluetooth]# agent on
[bluetooth]# default-agent
[bluetooth]# scan on

Now make sure that your headset is in pairing mode. It should be discovered shortly. For example,

[NEW] Device 00:1D:43:6D:03:26 Lasmex LBT10

shows a device that calls itself “Lasmex LBT10” and has MAC address “00:1D:43:6D:03:26”. We will now use that MAC address to initiate the pairing:

[bluetooth]# pair 00:1D:43:6D:03:26

After pairing, you also need to explicitly connect the device (every time?):

[bluetooth]# connect 00:1D:43:6D:03:26

If you’re getting the connection error org.bluez.Error.Failed, retry after killing the existing PulseAudio daemon:

$ pulseaudio -k
[bluetooth]# connect 00:1D:43:6D:03:26

Finally, configure pulseaudio to automatically switch all audio to your headset by adding the following line to the bottom of /etc/pulse/default.pa:

nano /etc/pulse/default.pa

# automatically switch to newly-connected devices
load-module module-switch-on-connect

Update 7/27: I rebooted my machine and, to my dismay, lost my bluetooth. I discovered that my user needs to be a member of the audio group. Since I’m in an Active Directory environment, I think the local audio group membership got removed at reboot. To restore it, as root I had to run this:

usermod -aG audio <user>

After doing that, you can pick up the new group membership without logging out and back in:

su - <USERNAME>

Once that’s done all the bluetoothctl commands worked again.

CentOS 7 Enterprise desktop setup

These are my notes for standing up a CentOS 7 desktop in an enterprise environment.

Packages

Install the EPEL repository for a better experience:

sudo yum -y install epel-release

Desktop experience packages:

sudo yum -y install vlc libreoffice java gstreamer gstreamer1 gstreamer-ffmpeg gstreamer-plugins-good gstreamer-plugins-ugly gstreamer1-plugins-bad-freeworld gstreamer1-libav pidgin rhythmbox ffmpeg keepass xdotool ntfs-3g gvfs-fuse gvfs-smb fuse sshfs redshift-gtk stoken-gui stoken-cli

Additional packages that may come in handy

sudo yum -y install http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm
sudo yum -y install libdvdcss gstreamer{,1}-plugins-ugly gstreamer-plugins-bad-nonfree gstreamer1-plugins-bad-freeworld libde265 x265

Enable ssh:

sudo systemctl enable sshd
sudo systemctl start sshd

Google Chrome

Paste into /etc/yum.repos.d/google-chrome.repo:

[google64]
name=Google - x86_64
baseurl=http://dl.google.com/linux/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub

Then install:

sudo yum -y install google-chrome-stable

Domain

It’s just easier to use PowerBroker Open from BeyondTrust:

sudo wget -O /etc/yum.repos.d/pbiso.repo http://repo.pbis.beyondtrust.com/yum/pbiso.repo
sudo yum -y install pbis-open

Cliff notes for joining the domain:

domainname=<your_domain_name>
domain_prefix=<your_domain_netbios_name>
domainaccount=<your_domain_admin_account>

sudo domainjoin-cli join $domainname $domainaccount 
<enter password>

sudo /opt/pbis/bin/config UserDomainPrefix $domain_prefix
sudo /opt/pbis/bin/config AssumeDefaultDomain true
sudo /opt/pbis/bin/config LoginShellTemplate /bin/bash
sudo /opt/pbis/bin/config HomeDirTemplate %H/%U

Add domain admins to sudo, escaping spaces with a backslash and replacing DOMAIN with your domain:

sudo visudo
%DOMAIN\\Domain\ Administrators ALL=(ALL) ALL

Reboot to make all changes go into effect.

Certificate

You might need to copy your domain’s CA certificate to your certificate trust store:

sudo cp <CA CERT FILENAME> /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust

Drive mapping

I use a simple script with gvfs-mount to mount network drives. Change the suffix to match your domain and the mounts to suit your needs.

#!/bin/bash
#Simple script to mount network drives on login

suffix=<DOMAIN_SUFFIX>
MOUNTS=(
	server1$suffix/folder1
	server2$suffix/folder2
	server3$suffix/folder3
)

for i in "${MOUNTS[@]}" 
do
	gvfs-mount "smb://$i"
done

Configure gnome to run it on startup:

Add the following to ~/.config/autostart/mount-drives.desktop, changing Exec= to the path of the above script.

[Desktop Entry]
Name=Mount network drives
GenericName=Mount network drives
Comment=Script to mount network drives
Exec=<location of mount script>
Terminal=false
Type=Application
X-GNOME-Autostart-enabled=true

Network Config

If you wish to set a static IP and configure your DNS suffix (search domain), run:

nm-connection-editor

The other network configuration GUI doesn’t have an option for search domains for some reason.
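
Alternatively, nmcli can set both from the command line – a sketch assuming a connection named eth0 and example addresses:

#Set a static address, DNS server and search domain on connection eth0 (example values)
nmcli con mod eth0 ipv4.method manual ipv4.addresses 192.168.1.50/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.10 ipv4.dns-search example.com
nmcli con up eth0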

Smartcard

sudo yum -y install opensc pcsc-tools pcsc-lite

Be sure to install the drivers for your particular card reader. Mine came from here and here.

After installing you can test by starting pcscd and using pcsc_scan

sudo systemctl start pcscd
pcsc_scan

VMware Horizon View

Smartcard support

There is a problem with how VMware View interacts with the opensc smartcard drivers shipped in popular Linux distributions such as CentOS and Ubuntu. View cannot load the drivers in the default configuration; therefore, in order to get VMware View working with smartcards, you need to manually patch and compile the opensc package (thanks to this site for the information needed to do so).

First, install the necessary development packages

sudo yum -y groupinstall "Development Tools"
sudo yum -y install openssl-devel pcsc-lite-devel

Next, download and extract opensc-0.13 from sourceforge:

wget http://downloads.sourceforge.net/project/opensc/OpenSC/opensc-0.13.0/opensc-0.13.0.tar.gz
tar zxvf opensc-0.13.0.tar.gz
cd opensc-0.13.0

Now we have to patch two specific files in the source before compiling:

echo "--- ./src/pkcs11/opensc-pkcs11.exports
 +++ ./src/pkcs11/opensc-pkcs11.exports
 @@ -1 +1,3 @@
  C_GetFunctionList
 +C_Initialize
 +C_Finalize
 --- ./src/pkcs11/pkcs11-spy.exports
 +++ ./src/pkcs11/pkcs11-spy.exports
 @@ -1 +1,3 @@
  C_GetFunctionList
 +C_Initialize
 +C_Finalize" > opensc.patch

patch -p1 -i opensc.patch

Next, compile and install:

./bootstrap
./configure
make
sudo make install

Assuming there were no errors, you can now link the compiled driver to the location VMware View expects. Note: you must rename the library from opensc-pkcs11.so to libopensc-pkcs11.so for this to work (another lovely VMware bug):

sudo mkdir -p /usr/lib/vmware/view/pkcs11/
sudo ln -s /usr/local/lib/pkcs11/opensc-pkcs11.so /usr/lib/vmware/view/pkcs11/libopensc-pkcs11.so

Lync

Install the pidgin-sipe plugin as detailed here

sudo yum -y install pidgin pidgin-sipe

Choose “Office Communicator” as the protocol. Enter your e-mail address for the username, then go to the Advanced tab and check “Use single sign-on.”

On first run all contact names were missing. Per here, simply close and restart the application.

Gnome 3

Disable audible bell

Taken from here

Disable audible bell and enable visual bell with:

gsettings set org.gnome.desktop.wm.preferences audible-bell false
gsettings set org.gnome.desktop.wm.preferences visual-bell true

and change the type of the visual bell if you don’t need the fullscreen flash:

gsettings set org.gnome.desktop.wm.preferences visual-bell-type frame-flash

Extensions

If you can find your extension via yum, it tends to work better than the gnome extensions site. If you do install from the site, make sure you grab the version matching your shell:

gnome-shell --version
sudo yum -y install gnome-shell-extension-top-icons gnome-shell-extension-dash-to-dock

Other useful extensions:

backslide, multi monitors add-on, No topleft hot corner, Dropdown terminal, Media player indicator, Focus my window, Workspace indicator, Native window placement, Openweather, Panel osd, Dash to dock, Gpaste

RSA

If you have the misfortune of being in an environment that uses RSA SecurID for two-factor authentication, here is the official guide.

Necessary packages to be installed:

sudo yum -y install selinux-policy-devel policycoreutils-devel
  1.  Download & extract PAM agent, cd to extracted directory
    tar -xvf PAM-Agent*.tar
  2. Create /var/ace directory and place necessary files inside. Create sdopts.rec and add the IP address of the desktop.
    mkdir /var/ace
    cp sdconf.rec /var/ace
    vi /var/ace/sdopts.rec
    CLIENT_IP=<IP ADDRESS OF DESKTOP>
  3. Run the install_pam script and specify UDP authentication
    ./install_pam.sh
  4.  Modify /etc/pam.d/password-auth to add the RSA authentication agent. Insert above pam_lsass.so smartcard_prompt try_first_pass line, then comment out pam_lsass.so smartcard_prompt try_first_pass line
    auth required pam_securid.so
    auth required pam_env.so
    auth sufficient pam_lsass.so
  5. Add new system in RSA console: Access / Authentication Agents / Add new
  6. Test to make sure everything works:
    /opt/pam/bin/64bit/acetest

Managing Windows hosts with Ansible

I spun my wheels for a while trying to get Ansible to manage Windows hosts. Here are my notes on how I finally got ansible (on a Linux host) to connect to a Windows host over an HTTPS WinRM connection, using Kerberos for authentication. This article was of great help.

Ansible Hosts file

[all:vars]
ansible_user=<user>
ansible_password=<password>
ansible_connection=winrm
ansible_winrm_transport=kerberos

Packages to install (CentOS 7)

sudo yum install gcc python2-pip
sudo pip install kerberos requests_kerberos pywinrm certifi

Playbook syntax

Modules involving Windows hosts have a win_ prefix.
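
For example, a quick ad-hoc connectivity test (assuming your Windows machines are in an inventory group named windows):

ansible windows -i hosts -m win_ping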

Troubleshooting

Code 500

WinRMTransportError: (u'http', u'Bad
HTTP response returned from server. Code 500')

I was using -m ping for testing instead of -m win_ping. Make sure you’re using the win_ping module, not the regular ping module.

Certificate validation failed

"msg": "kerberos: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)"

I had a self-signed CA certificate on the box Ansible was trying to connect to. Python doesn’t appear to trust the system’s certificate trust chain by default. Ansible has a configuration directive

ansible_winrm_ca_trust_path

but even with that pointing to my system trust it wouldn’t work. I then found this gem on Ansible’s WinRM documentation page:

The CA chain can contain a single or multiple issuer certificates and each entry is contained on a new line. To then use the custom CA chain as part of the validation process, set ansible_winrm_ca_trust_path to the path of the file. If this variable is not set, the default CA chain is used instead which is located in the install path of the Python package certifi.

Challenge #1: I didn’t have certifi installed.

sudo pip install certifi

Challenge #2: I needed to know where certifi’s default trust store was located, which I discovered after reading the project’s GitHub page:

python
import certifi
certifi.where()

In my case the location was /usr/lib/python2.7/site-packages/certifi/cacert.pem. I then symlinked my system trust to that location (backing up the existing trust first):

sudo mv /usr/lib/python2.7/site-packages/certifi/cacert.pem /usr/lib/python2.7/site-packages/certifi/cacert.pem.old
sudo ln -s /etc/pki/tls/cert.pem /usr/lib/python2.7/site-packages/certifi/cacert.pem

Et voila! No more trust issues.

Ansible Tower

Note: if you’re running Ansible Tower, you have to work with its bundled version of Python instead of the system version. For version 3.2 it was located here:

/var/lib/awx/venv/ansible/lib/python2.7/site-packages/requests/cacert.pem

I fixed it by doing this:

sudo mv /var/lib/awx/venv/ansible/lib/python2.7/site-packages/requests/cacert.pem /var/lib/awx/venv/ansible/lib/python2.7/site-packages/requests/cacert.pem.old
sudo ln -s /etc/pki/tls/cert.pem /var/lib/awx/venv/ansible/lib/python2.7/site-packages/requests/cacert.pem

This resolved the trust issues.

Fix WordPress “PHP change was reverted” error

Since WordPress 4.9 I’ve had a peculiar issue when trying to edit theme files using the web GUI. Whenever I tried to save changes I would get this error message:

Unable to communicate back with site to check for fatal errors, so the PHP change was reverted. You will need to upload your PHP file change by some other means, such as by using SFTP.

After following this long thread I saw the suggestion to install the Health Check plugin to get more information about why this was happening. In my case I kept getting this error message:

The loopback request to your site failed, this may prevent WP_Cron from working, along with theme and plugin editors.<br>Error encountered: (0) cURL error 28: Connection timed out after 10001 milliseconds

I researched what a loopback request is in this case: it’s the webserver reaching out to its own site’s URL to talk to itself. My webserver was being denied internet access, which included its own URL, so it couldn’t complete the loopback request.
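
A quick way to confirm whether loopback requests work is to run curl from the webserver against its own site URL (example URL – substitute your own):

curl -I https://www.example.com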

One solution, mentioned here, is to edit the hosts file on your webserver to point your site’s URL at 127.0.0.1. My solution was to open up the firewall to allow my server to connect to its own URL. I then ran into a different problem:

The loopback request to your site failed, this may prevent WP_Cron from working, along with theme and plugin editors.<br>Error encountered: (0) cURL error 60: Peer's Certificate issuer is not recognized.

After digging for a while I found this site, which explains how to edit php.ini to point to an acceptable certificate list. To fix this on my CentOS 7 machine I edited /etc/php.ini and added this line (you could also add it to /etc/php.d/curl.ini):

curl.cainfo="/etc/pki/tls/cert.pem"

This causes PHP’s curl module to use the same certificate trust store as the underlying OS.

Then restart php-fpm if you’re using it:

sudo systemctl restart php-fpm

Success! Loopback connections now work properly.


Update 7/16/2018: I still had a WordPress site that was giving me certificate grief despite the above fix. After MUCH frustration I finally found this post, where André Gayle points out that WordPress ships with its own certificate bundle, independent of even curl’s CA bundle! It’s located in the wp-includes/certificates folder of your WordPress directory.

My solution to this extremely frustrating problem was to move their bundle aside and symlink in my own (CentOS 7 box – adjust the paths to match your WordPress install and certificate trust store locations):

sudo mv /var/www/html/wordpress/wp-includes/certificates/ca-bundle.crt /var/www/html/wordpress/wp-includes/certificates/ca-bundle.crt.old
sudo ln -s /etc/pki/tls/cert.pem /var/www/html/wordpress/wp-includes/certificates/ca-bundle.crt

FINALLY no more loopback errors in the Health Check plugin, and thus the ability to edit theme files in the editor.

Nextcloud External Files SMB not working – Empty Response from Server

I beat my head against the wall for hours trying to figure out why external storage wasn’t working with my Nextcloud instance after I migrated it to CentOS 7. All I kept getting was a very unhelpful:

There was an error with message: Empty response from the server.

I installed all the libraries multiple times. I tried different versions of smbclient and php-smbclient, but the error kept happening! Eventually I decided to check the logs on my samba server. Sure enough:

 create_connection_session_info failed: NT_STATUS_ACCESS_DENIED

The username and password I was using were correct. I read on some forum (sorry, no link) to put something in the Workgroup field. Voila! As soon as I populated the workgroup field in Nextcloud for my SMB shares, they all worked!
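
In hindsight, a quick smbclient test with an explicit workgroup would have surfaced this much sooner – a sketch with placeholder values:

#Connect to the share specifying the workgroup; prompts for the password
smbclient //SERVER/SHARE -U username -W WORKGROUP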