
Add multiple search domains in CentOS 7

I needed to add multiple DNS search domains on my Cent7 box. It turns out there are two ways to do it: Cent7 uses NetworkManager, so you can either use its CLI tool to add what you want, or edit the interface config file directly.

Using nmcli:

sudo nmcli con mod eth0 ipv4.dns-search "domain1.org,domain2.org,domain3.org"

This causes nmcli to add this line to your network interface config file (/etc/sysconfig/network-scripts/ifcfg-eth0 in my case):

DOMAIN="domain1.org domain2.org domain3.org"

After either using nmcli or manually editing your file, simply apply the change and your search domains now work!
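On CentOS 7 that means either re-activating the connection with nmcli or restarting the legacy network service:

sudo nmcli con up eth0
sudo systemctl restart network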

Create a local yum repository

I needed to copy some specific RPM files locally to my machine, but have the general yum database recognize them (not using yum localinstall.) I found this lovely howto that explains how to do it.

In my case, I created a folder for one RPM I wanted in the local yum repository. I then installed the createrepo package, used it on my new directory containing my RPMs, then added a repository file pointing to the new local repository.

mkdir yumlocal
cp <DESIRED RPM FILES> yumlocal
sudo yum -y install createrepo
cd yumlocal
createrepo .

The last piece was to create a yum repo file local.repo

[local]
name=CentOS-$releasever - local packages for $basearch
baseurl=file:///path/to/yumlocal/
enabled=1
gpgcheck=0
protect=1

That was it! Now I could use yum install <NAME OF PACKAGE IN LOCAL REPO FILE> and it works!
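If you later drop more RPMs into the folder, refresh the repository metadata and yum's cache so the new packages are picked up:

cd yumlocal
createrepo --update .
sudo yum clean metadata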

Podman no internet in container fix

I’ve started experimenting with CentOS 8 & Podman (a daemonless, Docker-compatible container engine.) I ran into an issue where one of my containers needed internet access, but could not connect. After some digging I found this site, which explains why:

I had to configure the firewall on the podman host to allow for IP masquerade:

sudo firewall-cmd --zone=public --add-masquerade --permanent
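Note that --permanent only writes the rule to the saved configuration; reload the firewall (or run the same command again without --permanent) to apply it to the running state:

sudo firewall-cmd --reload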

After reloading the firewall, my container had internet access!

Add static route in CentOS7

I recently began a project of segmenting my LAN into various VLANs. One issue that cropped up had me banging my head against the wall for days. I had a particular VM that would use OpenVPN to a private VPN provider. I had that same system sending things to a file share via transmission-daemon.

Everything worked before the subnet move, but once I moved my file server to a different subnet this VM could suddenly no longer access it while on the VPN. Transmission would hang for some time before finally saying:

transmission-daemon.service: Failed with result 'timeout'.

The problem was that since my file server was now on a different subnet, traffic to it was being routed via the default gateway, which in this case was the VPN provider. I had to add a specific route telling the server to use my LAN gateway instead of the VPN in order to restore connectivity to the file server (thanks to this site for the primer.)

I had to create a file /etc/sysconfig/network-scripts/route-eth0 and give it the following line:

192.168.2.0/24 via 192.168.1.1 dev eth0

This instructed my VM to get to the 192.168.2.0/24 network via the 192.168.1.1 gateway on eth0. Restart the network service (or reboot) and success!
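You can verify the route took effect without rebooting by asking the kernel how it would reach a host on the far subnet (192.168.2.10 is a hypothetical address there):

ip route get 192.168.2.10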

Create local CentOS 7 Repo

I’ve recently needed to create a local mirror of Cent7 packages. I followed the guide posted on Tecmint but also made a few tweaks to get it to work to my liking.

Create local repo mirror

  • Install necessary packages
    • sudo yum -y install epel-release nginx createrepo yum-utils moreutils
  • Create directories that will host your repo
    • sudo mkdir -p /usr/share/nginx/html/repos/{base,centosplus,extras,updates,epel}
  • Use the reposync tool to synchronize to those local directories (repeat for each directory, changing repoid= value to match)
    • sudo reposync -g -l -d -m --repoid=base --newest-only --download-metadata --download_path=/usr/share/nginx/html/repos/
  • Use the createrepo tool to create repodata
    • base & epel have a group file, other repos do not.
    • For base & epel:
      • createrepo -g comps.xml /usr/share/nginx/html/repos/<FOLDER>
    • For the rest:
      • createrepo /usr/share/nginx/html/repos/<FOLDER>
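
Once nginx is serving the repos directory over HTTP, clients can be pointed at the mirror with a drop-in repo file. A minimal sketch, assuming a hypothetical mirror hostname of yourmirror.example.com (add one section per synced repo and adjust baseurl accordingly):

[local-base]
name=CentOS-$releasever - Base (local mirror)
baseurl=http://yourmirror.example.com/repos/base/
gpgcheck=0
enabled=1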

Configure daily synchronization via cron

Copy this script to /etc/cron.daily/ and give it execute rights

#!/bin/bash
##specify all local repositories in a single variable
LOCAL_REPOS="base extras updates epel centosplus"
##a loop to update repos one at a time
for REPO in ${LOCAL_REPOS}; do
        reposync -g -l -d -m --repoid=$REPO --newest-only --download-metadata --download_path=/usr/share/nginx/html/repos/Cent7/
        if [[ $REPO = 'base' || $REPO = 'epel' ]]; then
                createrepo -g comps.xml /usr/share/nginx/html/repos/Cent7/$REPO/
        else
                createrepo /usr/share/nginx/html/repos/Cent7/$REPO/
        fi
done

chmod 755 /etc/cron.daily/<script_name>

E-mails from cron became annoying. I wanted to only get e-mailed on error. The solution is to use chronic, part of the moreutils package installed earlier.

Modify /etc/anacrontab to add “chronic” between nice and run-parts

1 5 cron.daily nice chronic run-parts /etc/cron.daily
7 25 cron.weekly nice chronic run-parts /etc/cron.weekly
@monthly 45 cron.monthly nice chronic run-parts /etc/cron.monthly
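
You can sanity-check chronic's behavior by hand: it swallows all output when the wrapped command exits 0 and replays it when the command fails, which is exactly what keeps the success e-mails away:

chronic sh -c 'echo all good'            # prints nothing
chronic sh -c 'echo went wrong; exit 1'  # prints "went wrong"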

Success.


Update 4/25/19: I encountered an issue while trying to use reposync to mirror the remi repo.

warning: /usr/share/nginx/html/repos/Cent7/remi/remi/aspell-nl-0.50-1.el7.remi.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 00f97f56: NOKEY

I found out from here that it means you need to manually import the package’s key into the RPM database like so:

sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-remi

CentOS 7 Enterprise desktop setup

These are my notes for standing up a CentOS 7 desktop in an enterprise environment.

Packages

Install the EPEL repository for a better experience:

sudo yum -y install epel-release

Desktop experience packages:

sudo yum -y install vlc libreoffice java gstreamer gstreamer1 gstreamer-ffmpeg gstreamer-plugins-good gstreamer-plugins-ugly gstreamer1-plugins-bad-freeworld gstreamer1-libav pidgin rhythmbox ffmpeg keepass xdotool ntfs-3g gvfs-fuse gvfs-smb fuse sshfs redshift-gtk stoken-gui stoken-cli

Additional packages that may come in handy

sudo yum -y install http://li.nux.ro/download/nux/dextop/el7/x86_64/nux-dextop-release-0-5.el7.nux.noarch.rpm
sudo yum -y install libdvdcss gstreamer{,1}-plugins-ugly gstreamer-plugins-bad-nonfree gstreamer1-plugins-bad-freeworld libde265 x265

Enable ssh:

sudo systemctl enable sshd
sudo systemctl start sshd

Google Chrome

Paste into /etc/yum.repos.d/google-chrome.repo:

[google64]
name=Google - x86_64
baseurl=http://dl.google.com/linux/rpm/stable/x86_64
enabled=1
gpgcheck=1
gpgkey=https://dl-ssl.google.com/linux/linux_signing_key.pub
Then install Chrome:

sudo yum -y install google-chrome-stable

Domain

It’s just easier to use PowerBroker Identity Services Open (PBIS Open) from BeyondTrust:

sudo wget -O /etc/yum.repos.d/pbiso.repo http://repo.pbis.beyondtrust.com/yum/pbiso.repo
sudo yum -y install pbis-open

Cliff notes for joining the domain:

domainname=<your_domain_name>
domain_prefix=<your_domain_netbios_name>
domainaccount=<your_domain_admin_account>

sudo domainjoin-cli join $domainname $domainaccount 
<enter password>

sudo /opt/pbis/bin/config UserDomainPrefix $domain_prefix
sudo /opt/pbis/bin/config AssumeDefaultDomain true
sudo /opt/pbis/bin/config LoginShellTemplate /bin/bash
sudo /opt/pbis/bin/config HomeDirTemplate %H/%U

Add domain admins to sudo, escaping spaces with a backslash and replacing DOMAIN with your domain:

sudo visudo
%DOMAIN\\Domain\ Administrators ALL=(ALL) ALL

Reboot to make all changes go into effect.
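After the reboot you can confirm the join took by resolving a domain account (jdoe is a hypothetical domain user here):

id jdoe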

Certificate

You might need to copy your domain’s CA certificate to your certificate trust store:

sudo cp <CA CERT FILENAME> /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust
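
To confirm the CA actually landed in the trust store, you can search the consolidated list (substitute part of your CA's name):

trust list | grep -i <CA NAME>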

Drive mapping

I use a simple script that calls gvfs-mount to mount network drives. Change suffix to match your domain and MOUNTS to suit your needs.

#!/bin/bash
#Simple script to mount network drives on login

suffix=<DOMAIN_SUFFIX>
MOUNTS=(
	server1$suffix/folder1
	server2$suffix/folder2
	server3$suffix/folder3
)

for i in "${MOUNTS[@]}"
do
	gvfs-mount "smb://$i"
done

Configure in gnome to run on startup:

Add the following to ~/.config/autostart/mount-drives.desktop, changing Exec= to the path of the above script.

[Desktop Entry]
Name=Mount network drives
GenericName=Mount network drives
Comment=Script to mount network drives
Exec=<location of mount script>
Terminal=false
Type=Application
X-GNOME-Autostart-enabled=true
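
One gotcha: make sure the mount script itself is executable, otherwise the autostart entry will silently do nothing (path below is hypothetical):

chmod +x ~/bin/mount-drives.sh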

Network Config

If you wish to add static IP and configure your DNS suffix (search domain) then run

nm-connection-editor

The other GUI for network configuration doesn’t have an option for search domains for some reason.

Smartcard

sudo yum -y install opensc pcsc-tools pcsc-lite

Be sure to install the drivers for your particular card reader. Mine came from here and here.

After installing you can test by starting pcscd and running pcsc_scan:

sudo systemctl start pcscd
pcsc_scan

VMware Horizon View

Smartcard support

There is a problem with how VMware Horizon View interacts with the opensc smartcard drivers shipped in popular Linux distributions such as CentOS and Ubuntu. View cannot load the drivers in their default configuration; therefore, in order to get VMware View working with smartcards, you need to manually patch and compile the opensc package (thanks to this site for the information needed to do so.)

First, install the necessary development packages

sudo yum -y groupinstall "Development Tools"
sudo yum -y install openssl-devel pcsc-lite-devel

Next, download and extract opensc-0.13 from sourceforge:

wget http://downloads.sourceforge.net/project/opensc/OpenSC/opensc-0.13.0/opensc-0.13.0.tar.gz
tar zxvf opensc-0.13.0.tar.gz
cd opensc-0.13.0

Now we have to patch two specific files in the source before compiling:

echo "--- ./src/pkcs11/opensc-pkcs11.exports
+++ ./src/pkcs11/opensc-pkcs11.exports
@@ -1 +1,3 @@
 C_GetFunctionList
+C_Initialize
+C_Finalize
--- ./src/pkcs11/pkcs11-spy.exports
+++ ./src/pkcs11/pkcs11-spy.exports
@@ -1 +1,3 @@
 C_GetFunctionList
+C_Initialize
+C_Finalize" > opensc.patch

patch -p1 -i opensc.patch

Next, compiling and installing:

./bootstrap
./configure
make
sudo make install

Assuming there were no errors, you can now link the compiled driver to the location VMware View expects it. Note: you must rename the library from opensc-pkcs11.so to libopensc-pkcs11.so for this to work (another lovely VMware bug):

sudo mkdir -p /usr/lib/vmware/view/pkcs11/
sudo ln -s /usr/local/lib/pkcs11/opensc-pkcs11.so /usr/lib/vmware/view/pkcs11/libopensc-pkcs11.so

Lync

Install the pidgin-sipe plugin as detailed here

sudo yum -y install pidgin pidgin-sipe

Choose “Office Communicator” as the protocol. Enter your e-mail address for the username, then go to the Advanced tab and check “Use single sign-on.”

On first run all contact names were missing. Per here, simply close and restart the application.

Gnome 3

Disable audible bell

Taken from here

Disable audible bell and enable visual bell with:

gsettings set org.gnome.desktop.wm.preferences audible-bell false
gsettings set org.gnome.desktop.wm.preferences visual-bell true

and change the type of the visual bell if you don’t need the fullscreen flash:

gsettings set org.gnome.desktop.wm.preferences visual-bell-type frame-flash
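
You can read the keys back to confirm the changes stuck:

gsettings get org.gnome.desktop.wm.preferences audible-bell
gsettings get org.gnome.desktop.wm.preferences visual-bell-type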

Extensions

If you can find your extension via yum it tends to work better than the GNOME extensions site. If you do use the site, make sure you grab the build matching your shell version:

gnome-shell --version
sudo yum -y install gnome-shell-extension-top-icons gnome-shell-extension-dash-to-dock

Other useful extensions:

backslide, multi monitors add-on , No topleft hot corner, Dropdown terminal, Media player indicator, Focus my window, Workspace indicator, Native window placement, Openweather, Panel osd, Dash to dock, Gpaste

RSA

If you have the misfortune of being in an environment that uses RSA SecurID for two-factor authentication, here is the official guide.

Necessary packages to be installed:

sudo yum -y install selinux-policy-devel policycoreutils-devel

  1. Download & extract the PAM agent, then cd to the extracted directory
    tar -xvf PAM-Agent*.tar
  2. Create /var/ace directory and place necessary files inside. Create sdopts.rec and add the IP address of the desktop.
    mkdir /var/ace
    cp sdconf.rec /var/ace
    vi /var/ace/sdopts.rec
    CLIENT_IP=<IP ADDRESS OF DESKTOP>
  3. Run the install_pam script and specify UDP authentication
    ./install_pam.sh
  4. Modify /etc/pam.d/password-auth to add the RSA authentication agent: insert the lines below above the pam_lsass.so smartcard_prompt try_first_pass line, then comment that line out
    auth required pam_securid.so
    auth required pam_env.so
    auth sufficient pam_lsass.so
  5. Add new system in RSA console: Access / Authentication Agents / Add new
  6. Test to make sure everything works:
    /opt/pam/bin/64bit/acetest

Fix wordpress PHP change was reverted error

Since WordPress 4.9 I’ve had a peculiar issue when trying to edit theme files using the web GUI. Whenever I tried to save changes I would get this error message:

Unable to communicate back with site to check for fatal errors, so the PHP change was reverted. You will need to upload your PHP file change by some other means, such as by using SFTP.

After following this long thread I saw the suggestion to install and use the Health Check plugin to get more information into why this is happening. In my case I kept getting this error message:

The loopback request to your site failed, this may prevent WP_Cron from working, along with theme and plugin editors. Error encountered: (0) cURL error 28: Connection timed out after 10001 milliseconds

I researched what a loopback request is in this case: it’s the webserver reaching out to its own site’s URL to talk to itself. My webserver was being denied internet access, which included its own URL, so it couldn’t complete the loopback request.

One solution, mentioned here, is to edit the hosts file on your webserver to point to 127.0.0.1 for the URL of your site. My solution was to open up the firewall to allow my server to connect to its URL. I then ran into a different problem:

The loopback request to your site failed, this may prevent WP_Cron from working, along with theme and plugin editors. Error encountered: (0) cURL error 60: Peer's Certificate issuer is not recognized.

After digging for a while I found this site, which explains how to edit php.ini to point to an acceptable certificate list. To fix this on my Cent7 machine I edited /etc/php.ini and added this line (you could also add it to /etc/php.d/curl.ini):

curl.cainfo="/etc/pki/tls/cert.pem"

This caused php’s curl module to use the same certificate trust store that the underlying OS uses.
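A quick way to confirm the setting is active (on CentOS 7 the CLI reads the same /etc/php.ini):

php -i | grep curl.cainfo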

Then restart php-fpm if you’re using it:

sudo systemctl restart php-fpm

Success! Loopback connections now work properly.


Update 7/16/2018: I still had a wordpress site that was giving me certificate grief despite the above fix. After MUCH frustration I finally found this post where André Gayle points out that wordpress ships with its own certificate bundle, independent of even curl’s ca bundle! It’s located in your wordpress directory/wp-includes/certificates folder.

My solution to this extremely frustrating problem was to remove their bundle and symlink to my own (Cent 7 box – adjust your path to match where your wordpress install and certificate trust store is located)

sudo mv /var/www/html/wordpress/wp-includes/certificates/ca-bundle.crt /var/www/html/wordpress/wp-includes/certificates/ca-bundle.crt.old
sudo ln -s /etc/pki/tls/cert.pem /var/www/html/wordpress/wp-includes/certificates/ca-bundle.crt
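
With the bundle swapped out, you can spot-check the loopback from the server itself before retrying the theme editor; substitute your site's URL (a 200 means the request completed cleanly):

curl -sS -o /dev/null -w '%{http_code}\n' https://your.site/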

FINALLY no more loopback errors in the Health Check plugin, and thus the ability to edit theme files in the editor.

Backup your systems with urBackup

In addition to my ZFS snapshots I decided to implement a secondary backup system. I landed on urbackup for its ease of use and, more importantly, its ease of setup.

Server Install

Assuming a Cent-based system:

cd /etc/yum.repos.d/
sudo wget http://download.opensuse.org/repositories/home:uroni/CentOS_7/home:uroni.repo
sudo yum -y install urbackup-server
sudo systemctl enable urbackup-server
sudo systemctl start urbackup-server

Open up necessary ports for the server:

sudo firewall-cmd --add-port=55413-55415/tcp --permanent
sudo systemctl reload firewalld

By default urbackup listens on port 55414 for connections. You can change this to port 80 and/or 443 for HTTPS by installing nginx and having it proxy the connections for you.

sudo yum -y install nginx
sudo systemctl enable nginx
sudo setsebool -P httpd_can_network_connect 1 #if you're using selinux

Copy the following into /etc/nginx/conf.d/urbackup.conf (make sure to change server_name to suit your needs)

server {
        server_name backup;

        location / {
                proxy_pass http://localhost:55414/;
        }
}

Then start nginx:

sudo systemctl start nginx

You should then be able to access the urbackup console by navigating to the IP / hostname of your backup server in a browser.

Client Install:

Urbackup can use a snapshot system known as dattobd. Use it if you can in order to get more consistent backups; otherwise urbackup will simply copy files from the live host, which isn’t always desirable (databases, for example.)

Install dattobd (optional):

sudo yum -y update
# reboot if your kernel ends up being updated
sudo yum -y localinstall https://cpkg.datto.com/datto-rpm/repoconfig/datto-el-rpm-release-$(rpm -E %rhel)-latest.noarch.rpm
sudo yum -y install dkms-dattobd dattobd-utils

Install urbackup client:

TF=`mktemp` && wget "https://hndl.urbackup.org/Client/2.1.15/UrBackup%20Client%20Linux%202.1.15.sh" -O $TF && sudo sh $TF; rm $TF
#Select dattobd when prompted if desired

Configure Firewall:

sudo firewall-cmd --add-port=35621-35623/tcp --permanent
sudo systemctl reload firewalld

Once a client is installed, assuming they’re on the same network as the backup server, they will automatically add themselves and begin backing up. If they don’t show up it’s usually a firewall issue.

Restore

Restoration of individual files is easily done through the web console. If you have a windows system, restoring from an image backup is also easy.

Linux hosts

Recovery is trickier if you want to restore a Linux system. Install an empty system of the same distribution, give it the same hostname, install the client as outlined above, then run:

sudo /usr/local/bin/urbackupclientctl restore-start -b last

Troubleshooting

If for some reason a client is not showing up after being removed from the GUI, uninstall & re-install the client software:

sudo /usr/local/sbin/uninstall_urbackupclient
TF=`mktemp` && wget "https://hndl.urbackup.org/Client/2.1.15/UrBackup%20Client%20Linux%202.1.15.sh" -O $TF && sudo sh $TF; rm $TF

Install wordpress on CentOS7 with nginx & caching

Lately I’ve become interested in the LEMP stack (as opposed to the LAMP stack.) As such I’ve decided to set up a wordpress site running CentOS 7, nginx, mariadb, php-fpm, zend-opcache, apc, and varnish. This writeup will borrow heavily from two of my other writeups, Install WordPress on CentOS 7 with SELinux and Speed up WordPress in CentOS7 using caching. This will be a mashup of those two with an nginx twist, with guidance from digitalocean. Let’s begin.

Repositories

To install the required addons we will need to have the epel repository enabled:

sudo yum -y install epel-release

nginx

Install necessary packages:

sudo yum -y install nginx
sudo systemctl enable nginx

Optional: symlink /usr/share/nginx/ to /var/www/ (for those of us who are used to apache)

sudo ln -s /usr/share/nginx/ /var/www

Open necessary firewall ports:

sudo firewall-cmd --add-service=http --permanent
sudo systemctl restart firewalld

start nginx:

sudo systemctl start nginx

Navigate to your new site to make sure it brings up the default page properly.

MariaDB

Install:

sudo yum -y install mariadb-server mariadb
sudo systemctl enable mariadb

Run initial mysql configuration to set database root password

sudo systemctl start mariadb
sudo mysql_secure_installation

Configure:

Create a wordpress database and user:

mysql -u root -p 
#enter your mysql root password here
create user wordpress;
create database wordpress;
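#replace 'password' in the GRANT line below with a strong password of your own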
GRANT ALL PRIVILEGES ON wordpress.* To 'wordpress'@'localhost' IDENTIFIED BY 'password';
quit;

php-fpm

Install:

sudo yum -y install php-fpm php-mysql php-pclzip
sudo systemctl enable php-fpm

Configure:

Uncomment cgi.fix_pathinfo and change value to 0:

sudo sed -i 's/\;\(cgi.fix_pathinfo=\)1/\10/g' /etc/php.ini

Modify the listen= parameter to listen to UNIX socket instead of TCP:

sudo sed -i 's/\(listen =\).*/\1 \/var\/run\/php-fpm\/php-fpm.sock/g' /etc/php-fpm.d/www.conf

Change listen.owner and listen.group to nobody:

sudo sed -i 's/\(listen.owner = \).*/\1nobody/g; s/\(listen.group = \).*/\1nobody/g' /etc/php-fpm.d/www.conf

Change running user & group from apache to nginx:

sudo sed -i 's/\(^user = \).*/\1nginx/g; s/\(^group = \).*/\1nginx/g' /etc/php-fpm.d/www.conf
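
Before starting the service you can have php-fpm validate the modified pool configuration:

sudo php-fpm -t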

Start php-fpm:

sudo systemctl start php-fpm

Caching

To speed up wordpress further we need to install a few bits of caching software. Accept defaults when prompted.

zend-opcache & apc:

Install necessary packages:

sudo yum -y install php-pecl-zendopcache php-pecl-apcu php-devel gcc
sudo pecl install apc

Add apc extension to php configuration:

sudo sh -c "echo '
#Add apc extension
extension=apc.so' >> /etc/php.ini"

Restart php-fpm:

sudo systemctl restart php-fpm

Varnish

Install:

sudo yum -y install varnish
sudo systemctl enable varnish

Configure nginx to listen on port 8080 instead of port 80:

sudo sed -i /etc/nginx/nginx.conf -e 's/listen.*80/&80 /'

Change varnish to listen on port 80 instead of port 6081:

sudo sed -i /etc/varnish/varnish.params -e 's/\(VARNISH_LISTEN_PORT=\).*/\180/g'

Optional: change varnish to cache files in memory with a limit of 256M (caching to memory is much faster than caching to disk)

sudo sed -i /etc/varnish/varnish.params -e 's/\(VARNISH_STORAGE=\).*/\1\"malloc,256M\"/g'

Add varnish configuration to work with caching wordpress sites:

Update 2/25/2018: added a section to allow Facebook to properly scrape varnish-cached sites.

sudo sh -c 'echo "/* SET THE HOST AND PORT OF WORDPRESS
 * *********************************************************/
vcl 4.0;
import std;

backend default {
 .host = \"127.0.0.1\";
 .port = \"8080\";
 .first_byte_timeout = 60s;
 .connect_timeout = 300s;
}
 
# SET THE ALLOWED IP OF PURGE REQUESTS
# ##########################################################
acl purge {
 \"localhost\";
 \"127.0.0.1\";
}

#THE RECV FUNCTION
# ##########################################################
sub vcl_recv {

#Facebook workaround to allow proper shares
if (req.http.user-agent ~ \"facebookexternalhit\")
        {
        return(pipe);
        }

# set realIP by trimming CloudFlare IP which will be used for various checks
set req.http.X-Actual-IP = regsub(req.http.X-Forwarded-For, \"[, ].*$\", \"\"); 

 # FORWARD THE IP OF THE REQUEST
 if (req.restarts == 0) {
 if (req.http.x-forwarded-for) {
 set req.http.X-Forwarded-For =
 req.http.X-Forwarded-For + \", \" + client.ip;
 } else {
 set req.http.X-Forwarded-For = client.ip;
 }
 }

 # Purge request check sections for hash_always_miss, purge and ban
 # BLOCK IF NOT IP is not in purge acl
 # ##########################################################

 # Enable smart refreshing using hash_always_miss
if (req.http.Cache-Control ~ \"no-cache\") {
 if (client.ip ~ purge) {
 set req.hash_always_miss = true;
 }
}

if (req.method == \"PURGE\") {
 if (!client.ip ~ purge) {
 return(synth(405,\"Not allowed.\"));
 }

 ban (\"req.url ~ \"+req.url);
 return(synth(200,\"Purged.\"));

}

if (req.method == \"BAN\") {
 # Same ACL check as above:
 if (!client.ip ~ purge) {
 return(synth(403, \"Not allowed.\"));
 }
 ban(\"req.http.host == \" + req.http.host +
 \" && req.url == \" + req.url);

 # Throw a synthetic page so the
 # request wont go to the backend.
 return(synth(200, \"Ban added\"));
}

# Unset cloudflare cookies
# Remove has_js and CloudFlare/Google Analytics __* cookies.
 set req.http.Cookie = regsuball(req.http.Cookie, \"(^|;\s*)(_[_a-z]+|has_js)=[^;]*\", \"\");
 # Remove a \";\" prefix, if present.
 set req.http.Cookie = regsub(req.http.Cookie, \"^;\s*\", \"\");

 # For Testing: If you want to test with Varnish passing (not caching) uncomment
 # return( pass );

# DO NOT CACHE RSS FEED
 if (req.url ~ \"/feed(/)?\") {
 return ( pass ); 
}

#Pass wp-cron

if (req.url ~ \"wp-cron\.php.*\") {
 return ( pass );
}

## Do not cache search results, comment these 3 lines if you do want to cache them

if (req.url ~ \"/\?s\=\") {
 return ( pass ); 
}

# CLEAN UP THE ENCODING HEADER.
 # SET TO GZIP, DEFLATE, OR REMOVE ENTIRELY. WITH VARY ACCEPT-ENCODING
 # VARNISH WILL CREATE SEPARATE CACHES FOR EACH
 # DO NOT ACCEPT-ENCODING IMAGES, ZIPPED FILES, AUDIO, ETC.
 # ##########################################################
 if (req.http.Accept-Encoding) {
 if (req.url ~ \"\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$\") {
 # No point in compressing these
 unset req.http.Accept-Encoding;
 } elsif (req.http.Accept-Encoding ~ \"gzip\") {
 set req.http.Accept-Encoding = \"gzip\";
 } elsif (req.http.Accept-Encoding ~ \"deflate\") {
 set req.http.Accept-Encoding = \"deflate\";
 } else {
 # unknown algorithm
 unset req.http.Accept-Encoding;
 }
 }

 # PIPE ALL NON-STANDARD REQUESTS
 # ##########################################################
 if (req.method != \"GET\" &&
 req.method != \"HEAD\" &&
 req.method != \"PUT\" && 
 req.method != \"POST\" &&
 req.method != \"TRACE\" &&
 req.method != \"OPTIONS\" &&
 req.method != \"DELETE\") {
 return (pipe);
 }
 
 # ONLY CACHE GET AND HEAD REQUESTS
 # ##########################################################
 if (req.method != \"GET\" && req.method != \"HEAD\") {
 return (pass);
 }
 
 # OPTIONAL: DO NOT CACHE LOGGED IN USERS (THIS OCCURS IN FETCH TOO, EITHER
 # COMMENT OR UNCOMMENT BOTH
 # ##########################################################
 if ( req.http.cookie ~ \"wordpress_logged_in|resetpass\" ) {
 return( pass );
 }

 #fix CloudFlare Mixed Content with Flexible SSL
 if (req.http.X-Forwarded-Proto) {
 return(hash);
 }

 # IF THE REQUEST IS NOT FOR A PREVIEW, WP-ADMIN OR WP-LOGIN
 # THEN UNSET THE COOKIES
 # ##########################################################
 if (!(req.url ~ \"wp-(login|admin)\") 
 && !(req.url ~ \"&preview=true\" ) 
 ){
 unset req.http.cookie;
 }

 # IF BASIC AUTH IS ON THEN DO NOT CACHE
 # ##########################################################
 if (req.http.Authorization || req.http.Cookie) {
 return (pass);
 }
 
 # This is for phpmyadmin (must come before the final return to have any effect)
 if (req.http.Host == \"pmadomain.com\") {
 return (pass);
 }

 # IF YOU GET HERE THEN THIS REQUEST SHOULD BE CACHED
 # ##########################################################
 return (hash);
}

sub vcl_hash {

if (req.http.X-Forwarded-Proto) {
 hash_data(req.http.X-Forwarded-Proto);
 }
}


# HIT FUNCTION
# ##########################################################
sub vcl_hit {
 return (deliver);
}

# MISS FUNCTION
# ##########################################################
sub vcl_miss {
 return (fetch);
}

# FETCH FUNCTION
# ##########################################################
sub vcl_backend_response {
 # I SET THE VARY TO ACCEPT-ENCODING, THIS OVERRIDES W3TC 
 # TENDANCY TO SET VARY USER-AGENT. YOU MAY OR MAY NOT WANT
 # TO DO THIS
 # ##########################################################
 set beresp.http.Vary = \"Accept-Encoding\";

 # IF NOT WP-ADMIN THEN UNSET COOKIES AND SET THE AMOUNT OF 
 # TIME THIS PAGE WILL STAY CACHED (TTL), add other locations or subdomains you do not want to cache here in case they set cookies
 # ##########################################################
 if (!(bereq.url ~ \"wp-(login|admin)\") && !(bereq.http.cookie ~ \"wordpress_logged_in|resetpass\")) {
 unset beresp.http.set-cookie;
 set beresp.ttl = 1w;
 set beresp.grace = 3d;
 }

 if (beresp.ttl <= 0s ||
 beresp.http.Set-Cookie ||
 beresp.http.Vary == \"*\") {
 set beresp.ttl = 120s;
 set beresp.uncacheable = true;
 return (deliver);
 }

 return (deliver);
}

# DELIVER FUNCTION
# ##########################################################
sub vcl_deliver {
 # IF THIS PAGE IS ALREADY CACHED THEN RETURN A HIT TEXT 
 # IN THE HEADER (GREAT FOR DEBUGGING)
 # ##########################################################
 if (obj.hits > 0) {
 set resp.http.X-Cache = \"HIT\";
 # IF THIS IS A MISS RETURN THAT IN THE HEADER
 # ##########################################################
 } else {
 set resp.http.X-Cache = \"MISS\";
 }
}" > /etc/varnish/default.vcl'

Restart varnish & nginx:

sudo systemctl restart nginx varnish
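
Since the VCL above stamps an X-Cache header in vcl_deliver, you can verify caching from the shell; the second request for the same page should report a HIT:

curl -sI http://localhost/ | grep X-Cache
curl -sI http://localhost/ | grep X-Cache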

Logging

By default varnish does not log its traffic. This means that your nginx access log will only contain requests varnish passes to the backend, not cache hits. We have to configure varnish to log traffic so you don’t lose insight into who is visiting your site.

Update 2/14/2017:  I’ve discovered a better way to do this. The old way is still included below, but you really should use this other way.

New way:

CentOS ships with some systemd scripts for you. You can use them out of the box by simply issuing

systemctl start varnishncsa
systemctl enable varnishncsa

If you are behind a reverse proxy then you will want to tweak the varnishncsa output a bit to reflect x-forwarded-for header values (thanks to this github discussion for the guidance.) Accomplish this by appending a modified log output format string to /lib/systemd/system/varnishncsa.service:

sudo sed -i /lib/systemd/system/varnishncsa.service -e "s/ExecStart.*/& -F \'%%{X-Forwarded-For}i %%l %%u %%t \"%%r\" %%s %%b \"%%{Referer}i\" \"%%{User-agent}i\"\' /g"

Lastly, reload systemd configuration, enable, and start the varnishncsa service:

sudo systemctl daemon-reload
sudo systemctl enable varnishncsa
sudo systemctl start varnishncsa

Old way:

First, enable rc.local

sudo chmod +x /etc/rc.local
sudo systemctl enable rc-local

Next, add this entry to the rc.local file:

sudo sh -c 'echo "varnishncsa -a -w /var/log/varnish/access.log -D -P /var/run/varnishncsa.pid" >> /etc/rc.local'

If your varnish server is behind a reverse proxy (like a web application firewall) then modify the above code slightly (thanks to this site for the information on how to do so)

sudo sh -c "echo varnishncsa -a -F \'%{X-Forwarded-For}i %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-agent}i\"\' -w /var/log/varnish/access.log -D -P /var/run/varnishncsa.pid >> /etc/rc.local"

Once configuration is in place, start the rc-local service

sudo systemctl start rc-local

WordPress

Download, extract, and set permissions for your wordpress installation (this assumes your wordpress site is the only site on the server)

wget https://wordpress.org/latest.zip
sudo unzip latest.zip -d /usr/share/nginx/html/
sudo chown nginx:nginx -R /usr/share/nginx/html/

Configure nginx

Follow best practice for nginx by creating a new configuration file for your wordpress site. In this example I’ve created a file wordpress.conf inside the /etc/nginx/conf.d directory.

Create the file:

sudo vim /etc/nginx/conf.d/wordpress.conf

Insert the following, making sure to modify the server_name directive:

server {
    listen 8080;
    listen [::]:8080;
    server_name wordpress;
    root /usr/share/nginx/html/wordpress;
    port_in_redirect off;


    location / {
        index index.php;
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    error_page 404 /404.html;
    location = /404.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
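
It's worth validating the new configuration before restarting nginx:

sudo nginx -t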

Restart nginx

sudo systemctl restart nginx

Configure upload directory

If you want users to upload content, then you will want to assign the httpd_sys_rw_content_t selinux security context for the wp-uploads directory (create it if it doesn’t exist)

sudo mkdir /usr/share/nginx/html/wordpress/wp-content/uploads
sudo chown nginx:nginx /usr/share/nginx/html/wordpress/wp-content/uploads
sudo semanage fcontext -a -t httpd_sys_rw_content_t "/usr/share/nginx/html/wordpress/wp-content/uploads(/.*)?"
sudo restorecon -Rv /usr/share/nginx/html/wordpress/wp-content/uploads
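
You can confirm the new context was applied:

ls -Zd /usr/share/nginx/html/wordpress/wp-content/uploads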

Run the wizard

In order for the wizard to run properly we need to temporarily give the wordpress directory httpd_sys_rw_content_t selinux context

sudo chcon -t httpd_sys_rw_content_t /usr/share/nginx/html/wordpress/

Now navigate to your new website in a browser and follow the wizard, which will create a wp-config.php file inside the wordpress directory.

W3 Total Cache

Once your wordpress site is set up you will want to configure it to communicate with varnish. This will make sure the cache is always up to date when changes are made.

Install the W3 Total Cache wordpress plugin and configure it as follows:


Opcode cache: Opcode:Zend Opcache

Database cache: Check enable, select Opcode: Alternative PHP Cache (APC / APCu)

Object cache: Check enable, select Opcode: Alternative PHP Cache (APC / APCu)

Fragment cache: Opcode: Alternative PHP cache (APC / APCu)


Reverse Proxy: Check “Enable reverse proxy caching via varnish”
Specify 127.0.0.1 in the varnish servers box. Click save all settings.

Wrapping up

Once your site is properly set up, restore the original security context for the wordpress directory:

sudo restorecon -v /usr/share/nginx/html/wordpress/

Lastly restart nginx and varnish:

sudo systemctl restart nginx varnish

Success! Everything is working within the proper SELinux contexts and caching configuration.

Troubleshooting

403 forbidden

I received this error after setting everything up. After some digging I came across this site which explained what could be happening.

For me this meant that nginx couldn’t find an index file and was trying to default to a directory listing, which is not allowed by default. This is fixed by inserting a proper directive to find index files, in my case, index.php. Make sure you have “index index.php” in your nginx.conf inside the location / block:

    location / {
        index index.php;
    }

Accessing wp-admin redirects you to port 8080, times out

If you find going to /wp-admin redirects you to the wrong port and times out, it’s because nginx is including the port number in its redirects. We want to turn that off (thanks to this site for the help.) Add this to your nginx.conf:

  port_in_redirect off;

Rename LVM group in CentOS7

I recently made the discovery that all my VMs have the same volume group name – the default that is given when CentOS is installed. My OCD got the best of me and I set out to change these names to reflect the hostname. The problem is if you rename the volume group containing the root partition, the system will not boot.

The solution is to run a series of commands to get things updated. Thanks to the CentOS forums for the information. In my case I had already made the mistake of renaming the group, ending up with an unbootable system. This is what you have to do to get it working again:

Boot into a Linux rescue shell (from the installer DVD, for example) and activate the volume groups (if not activated by default)

vgchange -ay

Mount the root and boot volumes (replace VG_NAME with the name of your volume group and BOOT_PARTITION with the device name of your boot partition, typically sda1)

mount /dev/VG_NAME/root /mnt
mount /dev/BOOT_PARTITION /mnt/boot

Mount necessary system devices into our chroot:

mount --bind /proc /mnt/proc
mount --bind /dev /mnt/dev
mount --bind /sys /mnt/sys

Chroot into our broken system:

chroot /mnt/ /bin/bash

Modify fstab and grub files to reflect new volume group name:

sed -i /etc/fstab -e 's/OLD_VG_NAME/NEW_VG_NAME/g'
sed -i /etc/default/grub -e 's/OLD_VG_NAME/NEW_VG_NAME/g'
sed -i /boot/grub2/grub.cfg -e 's/OLD_VG_NAME/NEW_VG_NAME/g'

Run dracut to modify boot images:

dracut -f

Remove your recovery boot CD and reboot into your newly fixed VM

exit
reboot

If you want to avoid having to boot into a recovery environment, do the following steps on the machine whose VG you want to rename:

Rename the volume group:

vgrename OLD_VG_NAME NEW_VG_NAME
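
Confirm the rename took before touching the boot files:

sudo vgs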

Modify necessary boot files to reflect the new name:

sed -i /etc/fstab -e 's/OLD_VG_NAME/NEW_VG_NAME/g'
sed -i /etc/default/grub -e 's/OLD_VG_NAME/NEW_VG_NAME/g'
sed -i /boot/grub2/grub.cfg -e 's/OLD_VG_NAME/NEW_VG_NAME/g'

Rebuild boot images:

dracut -f