
Managing Windows hosts with Ansible

I spun my wheels for a while trying to get Ansible to manage Windows hosts. Here are my notes on how I finally got Ansible (running on a Linux host) to connect to a Windows host over an HTTPS WinRM connection, using Kerberos for authentication. This article was of great help.

Ansible Hosts file

[all:vars]
ansible_user=<user>
ansible_password=<password>
ansible_connection=winrm
ansible_winrm_transport=kerberos
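
For reference, those [all:vars] entries apply to every host in the inventory; adding a group for the Windows machines (the hostnames below are placeholders) makes a minimal working file:

[windows]
winhost1.example.com
winhost2.example.com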

Packages to install (CentOS 7)

sudo yum install gcc python2-pip
sudo pip install kerberos requests_kerberos pywinrm certifi

Playbook syntax

Modules involving Windows hosts have a win_ prefix.
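
As a quick sketch of what that looks like in practice (the group name, file name, and path are placeholders of mine), here's a minimal playbook exercising two win_ modules:

# playbook.yml
- hosts: windows
  tasks:
    - name: Verify connectivity over WinRM
      win_ping:

    - name: Ensure a scratch directory exists
      win_file:
        path: C:\Temp\demo
        state: directory

Run it with ansible-playbook -i hosts playbook.yml, or test ad hoc with ansible windows -i hosts -m win_ping.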

Troubleshooting

Code 500

WinRMTransportError: (u'http', u'Bad HTTP response returned from server. Code 500')

I was using -m ping for testing instead of -m win_ping. Make sure you're using the win_ping module and not the regular ping module, which targets hosts with Python installed rather than Windows hosts.

Certificate validation failed

"msg": "kerberos: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)"

I had a self-signed CA certificate on the box Ansible was trying to connect to. Python doesn't appear to trust the system's certificate trust chain by default. Ansible has a configuration directive,

ansible_winrm_ca_trust_path

but even with that pointing to my system trust it wouldn't work. I then found this gem on the WinRM page for Ansible:

The CA chain can contain a single or multiple issuer certificates and each entry is contained on a new line. To then use the custom CA chain as part of the validation process, set ansible_winrm_ca_trust_path to the path of the file. If this variable is not set, the default CA chain is used instead which is located in the install path of the Python package certifi.

Challenge #1: I didn’t have certifi installed.

sudo pip install certifi

Challenge #2: I needed to know where certifi's default trust store was located, which I discovered after reading the project's GitHub page:

python -c 'import certifi; print(certifi.where())'

In my case the location was '/usr/lib/python2.7/site-packages/certifi/cacert.pem'. I then symlinked my system trust to that location (backing up the existing trust first):

sudo mv /usr/lib/python2.7/site-packages/certifi/cacert.pem /usr/lib/python2.7/site-packages/certifi/cacert.pem.old
sudo ln -s /etc/pki/tls/cert.pem /usr/lib/python2.7/site-packages/certifi/cacert.pem

Et voila! No more trust issues.

Ansible Tower

Note: If you're running Ansible Tower, you have to work with its bundled version of Python instead of the system version. For Tower 3.2 the trust store was located here:

/var/lib/awx/venv/ansible/lib/python2.7/site-packages/requests/cacert.pem

I fixed it by doing this:

sudo mv /var/lib/awx/venv/ansible/lib/python2.7/site-packages/requests/cacert.pem /var/lib/awx/venv/ansible/lib/python2.7/site-packages/requests/cacert.pem.old
sudo ln -s /etc/pki/tls/cert.pem /var/lib/awx/venv/ansible/lib/python2.7/site-packages/requests/cacert.pem

This resolved the trust issues.

Fix WordPress "PHP change was reverted" error

Since WordPress 4.9 I’ve had a peculiar issue when trying to edit theme files using the web GUI. Whenever I tried to save changes I would get this error message:

Unable to communicate back with site to check for fatal errors, so the PHP change was reverted. You will need to upload your PHP file change by some other means, such as by using SFTP.

After following this long thread I saw the suggestion to install and use the Health Check plugin to get more information into why this is happening. In my case I kept getting this error message:

The loopback request to your site failed, this may prevent WP_Cron from working, along with theme and plugin editors. Error encountered: (0) cURL error 28: Connection timed out after 10001 milliseconds

I researched what a loopback request is in this case: it's the webserver reaching out to its own site's URL to talk to itself. My webserver was being denied internet access, which included its own URL, so it couldn't complete the loopback request.

One solution, mentioned here, is to edit the hosts file on your webserver so your site's URL resolves to 127.0.0.1.
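For example, an entry along these lines in /etc/hosts (the domain is a placeholder) keeps loopback requests on the local machine:

127.0.0.1   example.com www.example.com

My solution instead was to open up the firewall to allow my server to connect to its own URL. I then ran into a different problem: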

The loopback request to your site failed, this may prevent WP_Cron from working, along with theme and plugin editors. Error encountered: (0) cURL error 60: Peer's Certificate issuer is not recognized.

After digging for a while I found this site, which explains how to edit php.ini to point to an acceptable certificate list. To fix this on my CentOS 7 machine I edited /etc/php.ini and added this line (you could also add it to /etc/php.d/curl.ini):

curl.cainfo="/etc/pki/tls/cert.pem"

This caused php’s curl module to use the same certificate trust store that the underlying OS uses.

Then restart php-fpm if you’re using it:

sudo systemctl restart php-fpm
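
To confirm the new setting took effect, you can ask PHP directly (this assumes the CLI reads the same /etc/php.ini):

php -i | grep curl.cainfo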

Success! Loopback connections now work properly.

Nextcloud External Files SMB not working – Empty Response from Server

I beat my head against the wall for hours trying to figure out why external storage wasn't working with my Nextcloud instance after I migrated it over to CentOS 7. All I kept getting was a very unhelpful:

There was an error with message: Empty response from the server.

I installed all the libraries multiple times. I tried different versions of smbclient and php-smbclient, but the error kept happening! Eventually I decided to check the logs on my Samba server. Sure enough:

 create_connection_session_info failed: NT_STATUS_ACCESS_DENIED

The username and password I was using were correct. I read on some forum (sorry, no link) to put something in the Workgroup field. Voila! As soon as I populated the workgroup field in Nextcloud for my SMB shares, they all worked!
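
If you hit this again, the same credentials can be tested outside of Nextcloud with smbclient, where -W supplies the workgroup (the server, share, user, and workgroup below are placeholders):

smbclient -W WORKGROUP -U myuser //server/share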

Make Java run on privileged ports in CentOS 7

I recently gnashed my teeth trying to get Java to bind directly to port 443 instead of using nginx to proxy to a Java application I had to use. I was surprised at how complicated the solution was to track down, but I eventually did thanks to the following sites:

https://superuser.com/questions/710253/allow-non-root-process-to-bind-to-port-80-and-443/892391

https://github.com/kaitoy/pcap4j/issues/63

First, determine the full path of your current java install:

sudo update-alternatives --config java

In my CentOS 7 install, the java binary was located here:

/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-1.b12.el7_4.x86_64/jre/bin/java

Next, use setcap to configure java to be able to bind to port 443:

sudo setcap CAP_NET_BIND_SERVICE=+eip /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-1.b12.el7_4.x86_64/jre/bin/java
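
You can verify the capability was applied with getcap:

sudo getcap /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-1.b12.el7_4.x86_64/jre/bin/java
# the output should include cap_net_bind_service+eip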

Now, test to make sure java works:

java -version

java: error while loading shared libraries: libjli.so: cannot open shared object file: No such file or directory

The above error occurs because setting a file capability puts the dynamic linker into secure-execution mode, which breaks how java locates its own libraries. To fix this, we need to symlink the library it's looking for into /usr/lib, then run ldconfig:

sudo ln -s /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-1.b12.el7_4.x86_64/jre/lib/amd64/jli/libjli.so /usr/lib/
sudo ldconfig
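
A quick check that the linker now resolves the library:

ldconfig -p | grep libjli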

Now test Java again:

java -version

It took longer than I'd like to admit to get this working, but it does indeed work this way.

gvfs-mount doesn’t use kerberos ticket fix

I had a very frustrating issue when I switched my work desktop from Linux Mint to CentOS 7: my network drives would no longer mount using a Kerberos ticket. Before, once I had initialized a Kerberos ticket I could mount network shares without any kind of username or password prompt:

kinit
gvfs-mount smb://server/share

With CentOS 7, though, the above process produced a username and password prompt. After much searching I came across this forum post that contained the answer: use the server's fully qualified domain name. Now the process looks like this:

kinit
gvfs-mount smb://server.full.fqdn.name/share

and it worked!
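
If the prompt ever comes back, it's worth confirming a valid ticket actually exists before blaming gvfs:

klist
# should show an unexpired krbtgt/YOUR.REALM ticket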

Backup your systems with urBackup

In addition to my ZFS snapshots I decided to implement a secondary backup system. I settled on urBackup for its ease of use and, more importantly, its ease of setup.

Server Install

Assuming a CentOS-based system:

cd /etc/yum.repos.d/
sudo wget http://download.opensuse.org/repositories/home:uroni/CentOS_7/home:uroni.repo
sudo yum -y install urbackup-server
sudo systemctl enable urbackup-server
sudo systemctl start urbackup-server

Open up necessary ports for the server:

sudo firewall-cmd --add-port=55413-55415/tcp --permanent
sudo systemctl reload firewalld

By default urbackup listens on port 55414 for web connections. You can front it on port 80 (and/or 443 for HTTPS) by installing nginx and having it proxy the connections for you.

sudo yum -y install nginx
sudo systemctl enable nginx
sudo setsebool -P httpd_can_network_connect 1 #if you're using selinux

Copy the following into /etc/nginx/conf.d/urbackup.conf (make sure to change server_name to suit your needs)

server {
        server_name backup;

        location / {
                proxy_pass http://localhost:55414/;
        }
}

Then start nginx:

sudo systemctl start nginx

You should then be able to access the urbackup console by navigating to the IP / hostname of your backup server in a browser.

Client Install

Urbackup can use a snapshot system known as dattobd. Use it if you can in order to get more consistent backups; otherwise urbackup will simply copy files from the running host, which isn't always desirable (databases, for example).

Install dattobd (optional):

sudo yum -y update
# reboot if your kernel ends up being updated
sudo yum -y localinstall https://cpkg.datto.com/datto-rpm/repoconfig/datto-el-rpm-release-$(rpm -E %rhel)-latest.noarch.rpm
sudo yum -y install dkms-dattobd dattobd-utils

Install urbackup client:

TF=`mktemp` && wget "https://hndl.urbackup.org/Client/2.1.15/UrBackup%20Client%20Linux%202.1.15.sh" -O $TF && sudo sh $TF; rm $TF
#Select dattobd when prompted if desired

Configure Firewall:

sudo firewall-cmd --add-port=35621-35623/tcp --permanent
sudo systemctl reload firewalld

Once a client is installed, assuming they’re on the same network as the backup server, they will automatically add themselves and begin backing up. If they don’t show up it’s usually a firewall issue.
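
A quick way to rule out the firewall is to probe the relevant ports from each side with nc (hostnames are placeholders): the client connects to the server on 55413-55415, and the server connects back to the client on 35621-35623.

nc -zv backupserver 55414   # run from the client
nc -zv backupclient 35623   # run from the server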

Restore

Restoration of individual files is easily done through the web console. If you have a windows system, restoring from an image backup is also easy.

Linux hosts

Recovery is trickier if you want to restore a Linux system. Install a fresh system of the same distribution and give it the same hostname. Install the client as outlined above, then run:

sudo /usr/local/bin/urbackupclientctl restore-start -b last

Troubleshooting

If for some reason the client does not show up after removing it from the GUI, uninstall and re-install the client software:

sudo /usr/local/sbin/uninstall_urbackupclient
TF=`mktemp` && wget "https://hndl.urbackup.org/Client/2.1.15/UrBackup%20Client%20Linux%202.1.15.sh" -O $TF && sudo sh $TF; rm $TF

CentOS 7, nginx, reverse proxy, & Let's Encrypt

With the loss of trust in StartCom certificates I found myself needing a new way to obtain free SSL certificates. Let's Encrypt is perfect for this. Unfortunately Sophos UTM does not support Let's Encrypt, so it became time to replace Sophos as my reverse proxy. Enter nginx.

The majority of the information I used to get this up and running came from DigitalOcean, with help from HowtoForge. My solution involves CentOS 7, nginx, and the Let's Encrypt client.

Install necessary packages

sudo yum install nginx letsencrypt
sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --add-service=https --permanent
sudo systemctl enable nginx

Inform selinux to allow nginx to make http network connections:

sudo setsebool -P httpd_can_network_connect 1

Generate certificates

Generate your SSL certificates with the letsencrypt command. This relies on your site being reachable over the internet on port 80 via public DNS. Replace the arguments below to reflect your setup:

sudo letsencrypt certonly -a webroot --webroot-path=/usr/share/nginx/html -d example.com -d www.example.com

The above command places the certs in /etc/letsencrypt/live/<domain_name>

Sophos UTM certificates

In my case I had a few paid SSL certificates I wanted to copy over from Sophos UTM to nginx. In order to do this I had to massage them a little bit, as outlined here.

Download the PKCS#12 (.p12) file from Sophos along with the certificate authority file, then use openssl to convert the p12 into a certificate and key that nginx will accept:

openssl pkcs12 -nokeys -in server-cert-key-bundle.p12 -out server.pem
openssl pkcs12 -nocerts -nodes -in server-cert-key-bundle.p12 -out server.key
cat server.pem Downloaded_CA_file.pem > server-ca-bundle.pem

Once you have your keyfiles you can copy them wherever you like and use them in your site-specific SSL configuration file.
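
Assuming you copied the files to /etc/nginx/ssl (an arbitrary location of my choosing), the corresponding directives in your SSL configuration would look something like:

ssl_certificate /etc/nginx/ssl/server-ca-bundle.pem;
ssl_certificate_key /etc/nginx/ssl/server.key;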

Auto renewal

First make sure that the renew command works successfully:

sudo letsencrypt renew

If the output indicates success (a message saying the certificates are not yet due for renewal), add cron jobs to check monthly for renewal:

sudo crontab -e
30 2 1 * * /usr/bin/letsencrypt renew >> /var/log/le-renew.log
35 2 1 * * /bin/systemctl reload nginx

Configure nginx

Uncomment the https settings block in /etc/nginx/nginx.conf to allow for HTTPS connections.

Generate a strong DH group:

sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

Create SSL conf snippets in /etc/nginx/conf.d/ssl-<sitename>.conf. Make sure to include the proper location of your SSL certificate files as generated with the letsencrypt command.

ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_dhparam /etc/ssl/certs/dhparam.pem;

Here is a sample ssl.conf file:

server {
        listen 443 ssl;

        ssl_certificate /etc/letsencrypt/live/<HOSTNAME>/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/<HOSTNAME>/privkey.pem;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;

        access_log /var/log/<HOSTNAME>.log;

        server_name <HOSTNAME>;

        location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;

                proxy_pass http://<BACKEND_HOSTNAME>/;
        }
}


Redirect http to https by creating a redirect configuration file (optional)

sudo vim /etc/nginx/conf.d/redirect.conf
server {
        server_name
                <DOMAIN_1>
                ...
                <DOMAIN_N>;

        location /.well-known {
                alias /usr/share/nginx/html/.well-known;
                allow all;
        }

        location / {
                return 301 https://$host$request_uri;
        }
}


Restart nginx:

sudo systemctl restart nginx

Troubleshooting

HTTPS redirects always go to the host at the top of the list

Solution found here: use the $host variable instead of the $server_name variable in your configuration.

Websockets HTTP 400 error

Websockets require a bit more massaging in the configuration file, as outlined here. Modify your site-specific configuration to add these lines:

# we're in the http context here
map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}

server {
        proxy_http_version 1.1; # WebSocket proxying requires HTTP/1.1
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
}


Install WordPress on CentOS 7 with nginx & caching

Lately I've become interested in the LEMP stack (as opposed to the LAMP stack). As such I've decided to set up a WordPress site running CentOS 7, nginx, MariaDB, php-fpm, zend-opcache, APC, and varnish. This writeup borrows heavily from two of my other writeups, Install WordPress on CentOS 7 with SELinux and Speed up WordPress in CentOS 7 using caching. It's a mashup of those two with an nginx twist, with guidance from DigitalOcean. Let's begin.

Repositories

To install the required addons we will need to have the epel repository enabled:

sudo yum -y install epel-release

nginx

Install necessary packages:

sudo yum -y install nginx
sudo systemctl enable nginx

Optional: symlink /usr/share/nginx/ to /var/www/ (for those of us who are used to apache)

sudo ln -s /usr/share/nginx/ /var/www

Open necessary firewall ports:

sudo firewall-cmd --add-service=http --permanent
sudo systemctl restart firewalld

start nginx:

sudo systemctl start nginx

Navigate to your new site to make sure it brings up the default page properly.

MariaDB

Install:

sudo yum -y install mariadb-server mariadb
sudo systemctl enable mariadb

Run the initial mysql configuration to set the database root password:

sudo systemctl start mariadb
sudo mysql_secure_installation

Configure:

Create a wordpress database and user:

mysql -u root -p
#enter your mysql root password here
create database wordpress;
#the GRANT below also creates the wordpress user; replace 'password' with a strong password
GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'localhost' IDENTIFIED BY 'password';
quit;
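
Before moving on, you can verify the new account works; this should drop you straight into the wordpress database:

mysql -u wordpress -p wordpress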

php-fpm

Install:

sudo yum -y install php-fpm php-mysql php-pclzip
sudo systemctl enable php-fpm

Configure:

Uncomment cgi.fix_pathinfo and change its value to 0:

sudo sed -i 's/\;\(cgi.fix_pathinfo=\)1/\10/g' /etc/php.ini

Modify the listen= parameter to listen on a UNIX socket instead of TCP:

sudo sed -i 's/\(listen =\).*/\1 \/var\/run\/php-fpm\/php-fpm.sock/g' /etc/php-fpm.d/www.conf

Change listen.owner and listen.group to nobody:

sudo sed -i 's/\(listen.owner = \).*/\1nobody/g; s/\(listen.group = \).*/\1nobody/g' /etc/php-fpm.d/www.conf

Change running user & group from apache to nginx:

sudo sed -i 's/\(^user = \).*/\1nginx/g; s/\(^group = \).*/\1nginx/g' /etc/php-fpm.d/www.conf
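
A quick sanity check that the edits above landed (paths assume the stock package layout):

grep cgi.fix_pathinfo /etc/php.ini
grep -E '^(user|group|listen)' /etc/php-fpm.d/www.conf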

Start php-fpm:

sudo systemctl start php-fpm

Caching

To speed up WordPress further we need to install a few bits of caching software. Accept the defaults when prompted.

zend-opcache & apc:

Install necessary packages:

sudo yum -y install php-pecl-zendopcache php-pecl-apcu php-devel gcc
sudo pecl install apc

Add apc extension to php configuration:

sudo sh -c "echo '\

#Add apc extension 
extension=apc.so' >> /etc/php.ini"

Restart php-fpm:

sudo systemctl restart php-fpm

Varnish

Install:

sudo yum -y install varnish
sudo systemctl enable varnish

Configure nginx to listen on port 8080 instead of port 80:

sudo sed -i /etc/nginx/nginx.conf -e 's/listen.*80/&80 /'

Change varnish to listen on port 80 instead of port 6081:

sudo sed -i /etc/varnish/varnish.params -e 's/\(VARNISH_LISTEN_PORT=\).*/\180/g'

Optional: change varnish to cache files in memory with a limit of 256M (caching to memory is much faster than caching to disk)

sudo sed -i /etc/varnish/varnish.params -e 's/\(VARNISH_STORAGE=\).*/\1\"malloc,256M\"/g'

Add varnish configuration to work with caching WordPress sites:

Update 2/25/2018: added a section to allow Facebook to properly scrape varnish-cached sites.

sudo sh -c 'echo "/* SET THE HOST AND PORT OF WORDPRESS
 * *********************************************************/
vcl 4.0;
import std;

backend default {
 .host = \"127.0.0.1\";
 .port = \"8080\";
 .first_byte_timeout = 60s;
 .connect_timeout = 300s;
}
 
# SET THE ALLOWED IP OF PURGE REQUESTS
# ##########################################################
acl purge {
 \"localhost\";
 \"127.0.0.1\";
}

#THE RECV FUNCTION
# ##########################################################
sub vcl_recv {

#Facebook workaround to allow proper shares
if (req.http.user-agent ~ \"facebookexternalhit\")
        {
        return(pipe);
        }

# set realIP by trimming CloudFlare IP which will be used for various checks
set req.http.X-Actual-IP = regsub(req.http.X-Forwarded-For, \"[, ].*$\", \"\"); 

 # FORWARD THE IP OF THE REQUEST
 if (req.restarts == 0) {
 if (req.http.x-forwarded-for) {
 set req.http.X-Forwarded-For =
 req.http.X-Forwarded-For + \", \" + client.ip;
 } else {
 set req.http.X-Forwarded-For = client.ip;
 }
 }

 # Purge request check sections for hash_always_miss, purge and ban
 # BLOCK IF NOT IP is not in purge acl
 # ##########################################################

 # Enable smart refreshing using hash_always_miss
if (req.http.Cache-Control ~ \"no-cache\") {
 if (client.ip ~ purge) {
 set req.hash_always_miss = true;
 }
}

if (req.method == \"PURGE\") {
 if (!client.ip ~ purge) {
 return(synth(405,\"Not allowed.\"));
 }

 ban (\"req.url ~ \"+req.url);
 return(synth(200,\"Purged.\"));

}

if (req.method == \"BAN\") {
 # Same ACL check as above:
 if (!client.ip ~ purge) {
 return(synth(403, \"Not allowed.\"));
 }
 ban(\"req.http.host == \" + req.http.host +
 \" && req.url == \" + req.url);

 # Throw a synthetic page so the
 # request wont go to the backend.
 return(synth(200, \"Ban added\"));
}

# Unset cloudflare cookies
# Remove has_js and CloudFlare/Google Analytics __* cookies.
 set req.http.Cookie = regsuball(req.http.Cookie, \"(^|;\s*)(_[_a-z]+|has_js)=[^;]*\", \"\");
 # Remove a \";\" prefix, if present.
 set req.http.Cookie = regsub(req.http.Cookie, \"^;\s*\", \"\");

 # For Testing: If you want to test with Varnish passing (not caching) uncomment
 # return( pass );

# DO NOT CACHE RSS FEED
 if (req.url ~ \"/feed(/)?\") {
 return ( pass ); 
}

#Pass wp-cron

if (req.url ~ \"wp-cron\.php.*\") {
 return ( pass );
}

## Do not cache search results, comment these 3 lines if you do want to cache them

if (req.url ~ \"/\?s\=\") {
 return ( pass ); 
}

# CLEAN UP THE ENCODING HEADER.
 # SET TO GZIP, DEFLATE, OR REMOVE ENTIRELY. WITH VARY ACCEPT-ENCODING
 # VARNISH WILL CREATE SEPARATE CACHES FOR EACH
 # DO NOT ACCEPT-ENCODING IMAGES, ZIPPED FILES, AUDIO, ETC.
 # ##########################################################
 if (req.http.Accept-Encoding) {
 if (req.url ~ \"\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$\") {
 # No point in compressing these
 unset req.http.Accept-Encoding;
 } elsif (req.http.Accept-Encoding ~ \"gzip\") {
 set req.http.Accept-Encoding = \"gzip\";
 } elsif (req.http.Accept-Encoding ~ \"deflate\") {
 set req.http.Accept-Encoding = \"deflate\";
 } else {
 # unknown algorithm
 unset req.http.Accept-Encoding;
 }
 }

 # PIPE ALL NON-STANDARD REQUESTS
 # ##########################################################
 if (req.method != \"GET\" &&
 req.method != \"HEAD\" &&
 req.method != \"PUT\" && 
 req.method != \"POST\" &&
 req.method != \"TRACE\" &&
 req.method != \"OPTIONS\" &&
 req.method != \"DELETE\") {
 return (pipe);
 }
 
 # ONLY CACHE GET AND HEAD REQUESTS
 # ##########################################################
 if (req.method != \"GET\" && req.method != \"HEAD\") {
 return (pass);
 }
 
 # OPTIONAL: DO NOT CACHE LOGGED IN USERS (THIS OCCURS IN FETCH TOO, EITHER
 # COMMENT OR UNCOMMENT BOTH
 # ##########################################################
 if ( req.http.cookie ~ \"wordpress_logged_in|resetpass\" ) {
 return( pass );
 }

 #fix CloudFlare Mixed Content with Flexible SSL
 if (req.http.X-Forwarded-Proto) {
 return(hash);
 }

 # IF THE REQUEST IS NOT FOR A PREVIEW, WP-ADMIN OR WP-LOGIN
 # THEN UNSET THE COOKIES
 # ##########################################################
 if (!(req.url ~ \"wp-(login|admin)\") 
 && !(req.url ~ \"&preview=true\" ) 
 ){
 unset req.http.cookie;
 }

 # IF BASIC AUTH IS ON THEN DO NOT CACHE
 # ##########################################################
 if (req.http.Authorization || req.http.Cookie) {
 return (pass);
 }
 
 # This is for phpmyadmin (checked before the return, otherwise it would be unreachable)
 if (req.http.Host == \"pmadomain.com\") {
 return (pass);
 }

 # IF YOU GET HERE THEN THIS REQUEST SHOULD BE CACHED
 # ##########################################################
 return (hash);
}

sub vcl_hash {

if (req.http.X-Forwarded-Proto) {
 hash_data(req.http.X-Forwarded-Proto);
 }
}


# HIT FUNCTION
# ##########################################################
sub vcl_hit {
 return (deliver);
}

# MISS FUNCTION
# ##########################################################
sub vcl_miss {
 return (fetch);
}

# FETCH FUNCTION
# ##########################################################
sub vcl_backend_response {
 # I SET THE VARY TO ACCEPT-ENCODING, THIS OVERRIDES W3TC 
 # TENDANCY TO SET VARY USER-AGENT. YOU MAY OR MAY NOT WANT
 # TO DO THIS
 # ##########################################################
 set beresp.http.Vary = \"Accept-Encoding\";

 # IF NOT WP-ADMIN THEN UNSET COOKIES AND SET THE AMOUNT OF 
 # TIME THIS PAGE WILL STAY CACHED (TTL), add other locations or subdomains you do not want to cache here in case they set cookies
 # ##########################################################
 if (!(bereq.url ~ \"wp-(login|admin)\") && !(bereq.http.cookie ~ \"wordpress_logged_in|resetpass\")) {
 unset beresp.http.set-cookie;
 set beresp.ttl = 1w;
 set beresp.grace =3d;
 }

 if (beresp.ttl <= 0s ||
 beresp.http.Set-Cookie ||
 beresp.http.Vary == \"*\") {
 set beresp.ttl = 120s;
 set beresp.uncacheable = true;
 return (deliver);
 }

 return (deliver);
}

# DELIVER FUNCTION
# ##########################################################
sub vcl_deliver {
 # IF THIS PAGE IS ALREADY CACHED THEN RETURN A HIT TEXT 
 # IN THE HEADER (GREAT FOR DEBUGGING)
 # ##########################################################
 if (obj.hits > 0) {
 set resp.http.X-Cache = \"HIT\";
 # IF THIS IS A MISS RETURN THAT IN THE HEADER
 # ##########################################################
 } else {
 set resp.http.X-Cache = \"MISS\";
 }
}" > /etc/varnish/default.vcl'

Restart varnish & nginx:

sudo systemctl restart nginx varnish

Logging

By default varnish does not log its traffic. This means that your nginx access log will only record requests varnish does not cache. We have to configure varnish to log traffic so you don't lose insight into who is visiting your site.

Update 2/14/2017: I've discovered a better way to do this. The old way is still included below, but you really should use this other way.

New way:

CentOS ships with systemd unit files for varnishncsa. You can use them out of the box by simply issuing:

sudo systemctl start varnishncsa
sudo systemctl enable varnishncsa

If you are behind a reverse proxy then you will want to tweak the varnishncsa output a bit to reflect X-Forwarded-For header values (thanks to this GitHub discussion for the guidance). Accomplish this by appending a modified log format string to the ExecStart line in /lib/systemd/system/varnishncsa.service:

sudo sed -i /lib/systemd/system/varnishncsa.service -e "s/ExecStart.*/& -F \'%%{X-Forwarded-For}i %%l %%u %%t \"%%r\" %%s %%b \"%%{Referer}i\" \"%%{User-agent}i\"\' /g"
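
Before reloading systemd you can eyeball the modified unit to make sure the sed landed where expected:

systemctl cat varnishncsa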

Lastly, reload systemd configuration, enable, and start the varnishncsa service:

sudo systemctl daemon-reload
sudo systemctl enable varnishncsa
sudo systemctl start varnishncsa



Old way:

First, enable rc.local

sudo chmod +x /etc/rc.local
sudo systemctl enable rc-local

Next, add this entry to the rc.local file:

sudo sh -c 'echo "varnishncsa -a -w /var/log/varnish/access.log -D -P /var/run/varnishncsa.pid" >> /etc/rc.local'

If your varnish server is behind a reverse proxy (like a web application firewall) then modify the above code slightly (thanks to this site for the information on how to do so)

sudo sh -c "echo varnishncsa -a -F \'%{X-Forwarded-For}i %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-agent}i\"\' -w /var/log/varnish/access.log -D -P /var/run/varnishncsa.pid >> /etc/rc.local"

Once configuration is in place, start the rc-local service

sudo systemctl start rc-local

WordPress

Download, extract, and set permissions for your WordPress installation (this assumes your WordPress site is the only site on the server):

wget https://wordpress.org/latest.zip
sudo unzip latest.zip -d /usr/share/nginx/html/
sudo chown nginx:nginx -R /usr/share/nginx/html/

Configure nginx

Follow best practice for nginx by creating a separate configuration file for your WordPress site. In this example I've created the file wordpress.conf inside the /etc/nginx/conf.d directory.

Create the file:

sudo vim /etc/nginx/conf.d/wordpress.conf

Insert the following, making sure to modify the server_name directive:

server {
    listen 8080;
    listen [::]:8080;
    server_name wordpress;
    root /usr/share/nginx/html/wordpress;
    port_in_redirect off;


    location / {
        index index.php;
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    error_page 404 /404.html;
    location = /404.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}

Restart nginx

sudo systemctl restart nginx

Configure upload directory

If you want users to upload content, then you will want to assign the httpd_sys_rw_content_t SELinux security context to the wp-uploads directory (create it if it doesn't exist):

sudo mkdir /usr/share/nginx/html/wordpress/wp-content/uploads
sudo chown nginx:nginx /usr/share/nginx/html/wordpress/wp-content/uploads
sudo semanage fcontext -a -t httpd_sys_rw_content_t "/usr/share/nginx/html/wordpress/wp-content/uploads(/.*)?"
sudo restorecon -Rv /usr/share/nginx/html/wordpress/wp-content/uploads
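
To confirm the context stuck:

ls -Zd /usr/share/nginx/html/wordpress/wp-content/uploads
# the output should show httpd_sys_rw_content_t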

Run the wizard

In order for the wizard to run properly we need to temporarily give the wordpress directory the httpd_sys_rw_content_t SELinux context:

sudo chcon -t httpd_sys_rw_content_t /usr/share/nginx/html/wordpress/

Now navigate to your new website in a browser and follow the wizard, which will create a wp-config.php file inside the wordpress directory.

W3 Total Cache

Once your WordPress site is set up you will want to configure it to communicate with varnish. This will make sure the cache is always up to date when changes are made.

Install the W3 Total Cache WordPress plugin and configure it as follows:


Opcode cache: Opcode:Zend Opcache

Database cache: Check enable, select Opcode: Alternative PHP Cache (APC / APCu)

Object cache: Check enable, select Opcode: Alternative PHP Cache (APC / APCu)

Fragment cache: Opcode: Alternative PHP cache (APC / APCu)


Reverse Proxy: Check “Enable reverse proxy caching via varnish”
Specify 127.0.0.1 in the varnish servers box. Click save all settings.

Wrapping up

Once your site is properly set up, restore the original security context for the wordpress directory:

sudo restorecon -v /usr/share/nginx/html/wordpress/

Lastly restart nginx and varnish:

sudo systemctl restart nginx varnish

Success! Everything is working within the proper SELinux contexts and caching configuration.

Troubleshooting

403 forbidden

I received this error after setting everything up. After some digging I came across this site which explained what could be happening.

For me this meant that nginx couldn’t find an index file and was trying to default to a directory listing, which is not allowed by default. This is fixed by inserting a proper directive to find index files, in my case, index.php. Make sure you have “index index.php” in your nginx.conf inside the location / block:

    location / {
        index index.php;
    }

Accessing wp-admin redirects you to port 8080, times out

If going to /wp-admin redirects you to the wrong port and times out, it's because nginx is forwarding the port number. We want to turn that off (thanks to this site for the help). Add this to your nginx.conf:

  port_in_redirect off;

Rename LVM group in CentOS7

I recently made the discovery that all my VMs have the same volume group name: the default given when CentOS is installed. My OCD got the best of me and I set out to change these names to reflect each hostname. The problem is that if you rename the volume group containing the root partition, the system will not boot.

The solution is to run a series of commands to get things updated. Thanks to the CentOS forums for the information. In my case I had already made the mistake of renaming the group and ended up with an unbootable system. This is what you have to do to get it working again:

Boot into a Linux rescue shell (from the installer DVD, for example) and activate the volume groups (if they're not activated by default):

vgchange -ay

Mount the root and boot volumes (replace VG_NAME with the name of your volume group and BOOT_PARTITION with the device name of your boot partition, typically sda1):

mount /dev/VG_NAME/root /mnt
mount /dev/BOOT_PARTITION /mnt/boot

Mount necessary system devices into our chroot:

mount --bind /proc /mnt/proc
mount --bind /dev /mnt/dev
mount --bind /sys /mnt/sys

Chroot into our broken system:

chroot /mnt/ /bin/bash

Modify fstab and grub files to reflect new volume group name:

sed -i /etc/fstab -e 's/OLD_VG_NAME/NEW_VG_NAME/g'
sed -i /etc/default/grub -e 's/OLD_VG_NAME/NEW_VG_NAME/g'
sed -i /boot/grub2/grub.cfg -e 's/OLD_VG_NAME/NEW_VG_NAME/g'
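
Before rebuilding the boot images, confirm no references to the old name remain; no output means all three files were updated:

grep OLD_VG_NAME /etc/fstab /etc/default/grub /boot/grub2/grub.cfg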

Run dracut to modify boot images:

dracut -f

Remove your recovery boot CD and reboot into your newly fixed VM

exit
reboot

If you want to avoid having to boot into a recovery environment, do the following steps on the machine whose VG you want to rename:

Rename the volume group:

vgrename OLD_VG_NAME NEW_VG_NAME

Modify necessary boot files to reflect the new name:

sed -i /etc/fstab -e 's/OLD_VG_NAME/NEW_VG_NAME/g'
sed -i /etc/default/grub -e 's/OLD_VG_NAME/NEW_VG_NAME/g'
sed -i /boot/grub2/grub.cfg -e 's/OLD_VG_NAME/NEW_VG_NAME/g'

Rebuild boot images:

dracut -f