Category Archives: Web

Fix wordpress PHP change was reverted error

Since WordPress 4.9 I’ve had a peculiar issue when trying to edit theme files using the web GUI. Whenever I tried to save changes I would get this error message:

Unable to communicate back with site to check for fatal errors, so the PHP change was reverted. You will need to upload your PHP file change by some other means, such as by using SFTP.

After following this long thread I saw the suggestion to install and use the Health Check plugin to get more information about why this was happening. In my case I kept getting this error message:

The loopback request to your site failed, this may prevent WP_Cron from working, along with theme and plugin editors.
Error encountered: (0) cURL error 28: Connection timed out after 10001 milliseconds

I researched what a loopback request is in this case: it’s the webserver reaching out to its own site’s URL to talk to itself. My webserver was being denied internet access, which included its own URL, so it couldn’t complete the loopback request.

One solution, mentioned here, is to edit the hosts file on your webserver so that your site’s URL resolves to 127.0.0.1. My solution was to open up the firewall to allow my server to connect to its own URL. I then ran into a different problem:

The loopback request to your site failed, this may prevent WP_Cron from working, along with theme and plugin editors.
Error encountered: (0) cURL error 60: Peer's Certificate issuer is not recognized.

After digging for a while I found this site which explains how to edit php.ini to point to an acceptable certificate list. To fix this on my CentOS 7 machine I edited /etc/php.ini and added this line (you could also add it to /etc/php.d/curl.ini):

curl.cainfo="/etc/pki/tls/cert.pem"

This causes PHP’s curl module to use the same certificate trust store that the underlying OS uses.

Then restart php-fpm if you’re using it:

sudo systemctl restart php-fpm
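
To double-check the fix from the shell, you can ask PHP’s curl to fetch your site. This is a minimal sketch; www.example.com is a placeholder for your site’s URL, and the PHP CLI reads the same /etc/php.ini:

# www.example.com is a placeholder; substitute your site's URL
php -r '$c = curl_init("https://www.example.com/"); curl_setopt($c, CURLOPT_RETURNTRANSFER, true); curl_exec($c); echo curl_error($c) ?: "certificate OK", PHP_EOL;'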

Success! Loopback connections now work properly.

Make Java run on privileged ports in CentOS 7

I recently gnashed my teeth trying to get java to bind directly to port 443 instead of using nginx to proxy to a java application I had to use. I was surprised at how complicated the solution was to find, but I eventually did thanks to the following sites:

https://superuser.com/questions/710253/allow-non-root-process-to-bind-to-port-80-and-443/892391

https://github.com/kaitoy/pcap4j/issues/63

First, determine the full path of your current java install:

sudo update-alternatives --config java

In my CentOS 7 install, the java binary was located here:

/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-1.b12.el7_4.x86_64/jre/bin/java

Next, use setcap to configure java to be able to bind to port 443:

sudo setcap CAP_NET_BIND_SERVICE=+eip /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-1.b12.el7_4.x86_64/jre/bin/java
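
You can verify the capability took hold with getcap (part of the libcap package); the exact output format varies a bit between versions:

getcap /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-1.b12.el7_4.x86_64/jre/bin/java
# expect something like: .../jre/bin/java = cap_net_bind_service+eip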

Now, test to make sure java works:

java -version

java: error while loading shared libraries: libjli.so: cannot open shared object file: No such file or directory

The above error means that setting the capability broke how java finds its libraries: the dynamic linker becomes more restrictive for binaries with file capabilities and no longer resolves java’s relative library path to libjli.so. To fix this, we need to symlink the library it’s looking for into /usr/lib, then run ldconfig:

sudo ln -s /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-1.b12.el7_4.x86_64/jre/lib/amd64/jli/libjli.so /usr/lib/
sudo ldconfig

Now test Java again:

java -version

It took longer than I like to admit to get this working, but it does indeed work this way.

Whitelist IPs with varnish

I recently needed to restrict which IP addresses can access wp-login.php for a WordPress site of mine. The site sits behind a varnish cache for speed. Modifying .htaccess doesn’t work in this case, so I had to modify the varnish configuration instead.

The varnish documentation is actually quite good at telling you what you need to do. You first specify an acl, then in the vcl_recv subroutine specify what action to take when those IPs are used.

I kept running into a problem where my VCL wouldn’t compile. I kept receiving this error:

"Expected CSTR got 'admin_net'" (C String?)

It turns out my load balancer does not support the PROXY protocol, so client.ip is always the IP of the load balancer, not the IP of the person making the request.

The solution was finally found here, where it was explained that in the absence of the PROXY protocol you can use the std.ip() function to convert a string containing an IP address into the IP type varnish expects, so that it can be checked against an ACL (see here for a syntax reference.)

I then had to take it a step further to trim all the extraneous commas and quotes from X-Forwarded-For so that the std.ip() function would work:

# set realIP by trimming CloudFlare IP which will be used for various checks
set req.http.X-Actual-IP = regsub(req.http.X-Forwarded-For, "[, ].*$", "");

With these three bits combined I was able to properly restrict access to wp-login.php to a specified whitelist of IP addresses:

acl admin {
 "10.0.0.0"/24;
 "10.0.1.0"/24;
 "10.0.2.0"/24;
}

...

sub vcl_recv {
...

 # set realIP by trimming CloudFlare IP which will be used for various checks
 set req.http.X-Actual-IP = regsub(req.http.X-Forwarded-For, "[, ].*$", "");

 #Deny wp-login.php access if not in admin ACL
 if ((std.ip(req.http.X-Actual-IP, "0.0.0.0") !~ admin) && req.url ~ "^/wp-login.php") {
  return(synth(403, "Access denied."));
 }
...
}
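
Before reloading, it’s worth making sure the VCL actually compiles. This sketch assumes the stock CentOS varnish package, whose unit file supports reload; varnishd -C dumps the generated C source on success and prints an error otherwise:

sudo varnishd -C -f /etc/varnish/default.vcl > /dev/null && echo "VCL compiles cleanly"
sudo systemctl reload varnish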

Success.

Fix WordPress “Sorry, you are not allowed to access this page.”

I recently came across an issue with my WordPress installation. It’s situated behind a load balancer where SSL is terminated. The load balancer takes HTTPS traffic, then forwards it as HTTP on port 80 to the wordpress server.

I was running into issues with a redirect loop after installing wordpress. The solution was to add this bit of code to wp-config.php:

define('FORCE_SSL_ADMIN', true);
if ($_SERVER['HTTP_X_FORWARDED_PROTO'] == 'https')
 $_SERVER['HTTPS']='on';

This solves the redirect loop issue but then I ran into a different problem. When I tried to sign into wp-admin I would get this message:

Sorry, you are not allowed to access this page.

After much digging I found this post which emphasizes that you must place that code BEFORE anything else in wp-config.php (except for the opening PHP tag.) Success!

CentOS7, nginx, reverse proxy, & let’s encrypt

With StartCom certificates losing browser trust I found myself needing a new way to obtain free SSL certificates. Let’s Encrypt is perfect for this. Unfortunately Sophos UTM does not support Let’s Encrypt, so it became time to replace Sophos as my reverse proxy. Enter nginx.

The majority of the information I used to get this up and running came from digitalocean with help from howtoforge. My solution involves CentOS7, nginx, and the let’s encrypt software.

Install necessary packages

sudo yum install nginx letsencrypt
sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --add-service=https --permanent
sudo firewall-cmd --reload
sudo systemctl enable nginx

Inform selinux to allow nginx to make http network connections:

sudo setsebool -P httpd_can_network_connect 1

Generate certificates

Generate your SSL certificates with the letsencrypt command. This command relies on being able to reach your site over the internet using port 80 and public DNS. Replace the arguments below to reflect your setup:

sudo letsencrypt certonly -a webroot --webroot-path=/usr/share/nginx/html -d example.com -d www.example.com

The above command places the certs in /etc/letsencrypt/live/<domain_name>
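
Inside that directory you should find the certificate files nginx will reference later (example.com is a placeholder, as above):

sudo ls /etc/letsencrypt/live/example.com
# cert.pem  chain.pem  fullchain.pem  privkey.pem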

Sophos UTM certificates

In my case I had a few paid SSL certificates I wanted to copy over from Sophos UTM to nginx. In order to do this I had to massage them a little bit as outlined here.

Download the p12 from Sophos along with the certificate authority file, then use openssl to convert the p12 into the certificate and key files nginx will accept.

openssl pkcs12 -nokeys -in server-cert-key-bundle.p12 -out server.pem
openssl pkcs12 -nocerts -nodes -in server-cert-key-bundle.p12 -out server.key
cat server.pem Downloaded_CA_file.pem > server-ca-bundle.pem

Once you have your keyfiles you can copy them wherever you like and use them in your site-specific SSL configuration file.
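
Before pointing nginx at them, it’s worth confirming the certificate and key actually belong together. A quick check against the files generated above compares their moduli:

openssl x509 -noout -modulus -in server.pem | openssl md5
openssl rsa -noout -modulus -in server.key | openssl md5
# the two hashes should be identical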

Auto renewal

First make sure that the renew command works successfully:

sudo letsencrypt renew

If the output indicates success (a message saying the certificates are not yet due for renewal) then add these entries to root’s crontab to check monthly for renewal:

sudo crontab -e
30 2 1 * * /usr/bin/letsencrypt renew >> /var/log/le-renew.log
35 2 1 * * /bin/systemctl reload nginx

Configure nginx

Uncomment the https settings block in /etc/nginx/nginx.conf to allow for HTTPS connections.

Generate a strong DH group:

sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

Create SSL conf snippets in /etc/nginx/conf.d/ssl-<sitename>.conf. Make sure to include the proper location of your SSL certificate files as generated with the letsencrypt command.

ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_dhparam /etc/ssl/certs/dhparam.pem;

Here is a sample ssl.conf file:

server {
        listen 443 ssl;

        ssl_certificate /etc/letsencrypt/live/<HOSTNAME>/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/<HOSTNAME>/privkey.pem;
        ssl_dhparam /etc/ssl/certs/dhparam.pem;

        access_log /var/log/<HOSTNAME>.log;

        server_name <HOSTNAME>;

        location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;

                proxy_pass http://<BACKEND_HOSTNAME>/;
        }
}

 

Redirect http to https by creating a redirect configuration file (optional)

sudo vim /etc/nginx/conf.d/redirect.conf
server {
        server_name
                <DOMAIN_1>
                ...
                <DOMAIN_N>;

        location /.well-known {
                alias /usr/share/nginx/html/.well-known;
                allow all;
        }

        location / {
                return 301 https://$host$request_uri;
        }
}
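
Before restarting, have nginx validate the combined configuration:

sudo nginx -t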

 

Restart nginx:

sudo systemctl restart nginx

Troubleshooting

HTTPS redirects always go to the host at the top of the list

Solution found here:  use the $host variable instead of the $server_name variable in your configuration.

Websockets HTTP 400 error

Websockets require a bit more massaging in the configuration file as outlined here. Modify your site-specific configuration to add these lines:

# we're in the http context here
map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}

server {
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}

 

Monitor your servers with phpservermonitor

I have a handful of servers and for years I’ve been wanting to get some sort of monitoring in place. Today I tried out php server monitor and found it was pretty easy to set up and use.

Download

The installation process was pretty straightforward.

  • Install PHP, mysql, and apache
  • Create database, user, password, and access rights for mysql
  • Download .tar.gz and extract to /var/www
  • Configure Apache site file to point to phpservermonitor directory
  • Navigate to the IP / URL of your apache server and run the installation script

The above process is documented fairly well on their website. I configured this to run on my Raspberry Pi 2. This was my process:

Install dependencies:

sudo apt-get install php5 php5-curl php5-mysql mysql-server

Configure mysql:

sudo mysql_secure_installation

Create database:

mysql -u root -p
create database phpservermon;
create user 'phpservermon'@'localhost' IDENTIFIED BY 'password';
grant all privileges on phpservermon.* TO 'phpservermon'@'localhost';
flush privileges;

Extract phpservermon to /var/www and grant permissions

tar zxvf <phpservermon_gzip_filename> -C /var/www
sudo chown -R www-data /var/www/*

Configure php:

sudo vim /etc/php5/apache2/php.ini
#uncomment date.timezone and set your timezone
date.timezone = "America/Boise"

Configure apache:

sudo cp /etc/apache2/sites-available/default /etc/apache2/sites-available/phpservermon

#Modify /etc/apache2/sites-available/phpservermon so its DocumentRoot points to the directory above, also add a ServerName if desired

sudo a2ensite phpservermon
sudo service apache2 reload

Configure cron (I have it check every minute but you can configure whatever you like)

*/1 * * * * /usr/bin/php /var/www/phpservermon/cron/status.cron.php
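
It’s worth running the status script once by hand to confirm it works before relying on cron (a quick sketch, assuming the paths above and that apache runs as www-data):

sudo -u www-data /usr/bin/php /var/www/phpservermon/cron/status.cron.php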

Navigate to the web address you’ve configured in apache and follow the wizard.

It’s pretty simple but it works! A nice php application to monitor websites and services.

 

Generate static html files from any website with httrack

I came across a need to take a cakePHP dynamically generated site and turn it into a collection of HTML files. After some trial and error I came across httrack which fit the need beautifully (thanks to this site for pointing me there.)

To install httrack run the following (for ubuntu-based distros)

sudo add-apt-repository ppa:upubuntu-com/web
sudo apt-get update
sudo apt-get install webhttrack httrack

Once httrack is installed simply launch it from the command line:

httrack

Follow the prompts and in no time you’ll have a folder with static HTML files for your entire website. Easy!
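
httrack can also be driven non-interactively by passing the URL and an output directory on the command line (example values shown):

# mirror the site into /tmp/example-mirror; substitute your own URL and path
httrack "https://www.example.com/" -O /tmp/example-mirror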

Purge Varnish cache by visiting URL

I came across a need to allow web developers to purge Varnish cache. The problem is the developers aren’t allowed access to the production machine and our web application firewall blocks purge requests. I needed a way for them to simply access a page hosted on the webserver that causes it to purge its own varnish cache.

I was able to accomplish this by placing a PHP file in the webserver’s directory and controlling access to it via .htaccess. Thanks to this site for the php script and this one for the .htaccess guidance.

Place this PHP file where the web devs can access it, making sure to modify the $hostname variable to suit your needs and to rename the file to have a .php extension.

<?php

#Simple script to purge varnish cache
#Adapted from the script from http://felipeferreira.net/index.php/2013/08/script-purge-varnish-cache/
#This script runs locally on the webserver and will purge all cached pages for the site specified in $hostname
#Modify the $hostname parameter below to match the hostname of your site

$hostname = $_SERVER['SERVER_NAME'];

header( 'Cache-Control: max-age=0' );

$port     = 80;
$debug    = false;
$URL      =  "/*";

purgeURL( $hostname, $port, $URL, $debug );

function purgeURL( $hostname, $port, $purgeURL, $debug )
{
    $finalURL = sprintf(
        "http://%s:%d%s", $hostname, $port, $purgeURL
    );

    print( "<br> Purging ${finalURL} <br><br>" );

    $curlOptionList = array(
        CURLOPT_RETURNTRANSFER    => true,
        CURLOPT_CUSTOMREQUEST     => 'PURGE',
        CURLOPT_HEADER            => true ,
        CURLOPT_NOBODY            => true,
        CURLOPT_URL               => $finalURL,
        CURLOPT_CONNECTTIMEOUT_MS => 2000
    );

    $fd = false;
    if( $debug == true ) {
        print "<br>---- Curl debug -----<br>";
        $fd = fopen("php://output", 'w+');
        $curlOptionList[CURLOPT_VERBOSE] = true;
        $curlOptionList[CURLOPT_STDERR]  = $fd;
    }

    $curlHandler = curl_init();
    curl_setopt_array( $curlHandler, $curlOptionList );
    $return = curl_exec($curlHandler);

    if( curl_error($curlHandler) ) {
        print "<br><hr><br>CRITICAL - Error connecting to $hostname port $port - Error: " . curl_error($curlHandler) . " <br>";
        exit(2);
    }

    curl_close( $curlHandler );
    if( $fd !== false ) {
        fclose( $fd );
    }
    if( $debug == true ) {
        print "<br> Output: <br><br> $return <br><br><hr>";
    }
}
?>

<title>Purge cache</title>
Press submit to purge the cache
<form method="post" action="<?php echo htmlspecialchars($_SERVER['PHP_SELF']); ?>">
<input type="submit" value="submit" name="submit">
</form>

Place (or append) the following .htaccess code in the same directory you placed the php file:

#Only allow internal IPs to access cache purge page
<Files "purge.php">
    Order deny,allow
    Deny from all
    SetEnvIf X-Forwarded-For ^10\. env_allow_1
    Allow from env=env_allow_1
    Satisfy Any
</Files>

The above code only allows access to the purge.php page from IP addresses beginning with “10.” (internal IPs)
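
From a machine on the internal network the developers can then trigger a purge with a single request (the hostname and filename here are the examples used above):

curl -s "https://www.example.com/purge.php"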

This PHP / .htaccess combo allowed the web devs to purge cache without any system access or firewall rule changes. Hooray!

Install wordpress on CentOS7 with nginx & caching

Lately I’ve become interested in the LEMP stack (as opposed to the LAMP stack). As such I’ve decided to set up a wordpress site running CentOS 7, nginx, mariadb, php-fpm, zend-opcache, apc, and varnish. This writeup will borrow heavily from two of my other writeups, Install Wordpress on CentOS 7 with SELinux and Speed up WordPress in CentOS7 using caching. This will be a mashup of those two with an nginx twist, with guidance from digitalocean. Let’s begin.

Repositories

To install the required addons we will need to have the epel repository enabled:

sudo yum -y install epel-release

nginx

Install necessary packages:

sudo yum -y install nginx
sudo systemctl enable nginx

Optional: symlink /usr/share/nginx/ to /var/www/ (for those of us who are used to apache)

sudo ln -s /usr/share/nginx/ /var/www

Open necessary firewall ports:

sudo firewall-cmd --add-service=http --permanent
sudo systemctl restart firewalld

start nginx:

sudo systemctl start nginx

Navigate to your new site to make sure it brings up the default page properly.

MariaDB

Install:

sudo yum -y install mariadb-server mariadb
sudo systemctl enable mariadb

Run initial mysql configuration to set database root password

sudo systemctl start mariadb
sudo mysql_secure_installation

Configure:

Create a wordpress database and user:

mysql -u root -p 
#enter your mysql root password here
create user wordpress;
create database wordpress;
GRANT ALL PRIVILEGES ON wordpress.* To 'wordpress'@'localhost' IDENTIFIED BY 'password';
quit;
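
You can confirm the grant works by connecting as the new user (it will prompt for the password set above):

mysql -u wordpress -p -e 'show databases;'
# the wordpress database should appear in the list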

php-fpm

Install:

sudo yum -y install php-fpm php-mysql php-pclzip
sudo systemctl enable php-fpm

Configure:

Uncomment cgi.fix_pathinfo and change value to 0:

sudo sed -i 's/\;\(cgi.fix_pathinfo=\)1/\10/g' /etc/php.ini

Modify the listen= parameter to listen to UNIX socket instead of TCP:

sudo sed -i 's/\(listen =\).*/\1 \/var\/run\/php-fpm\/php-fpm.sock/g' /etc/php-fpm.d/www.conf

Change listen.owner and listen.group to nobody:

sudo sed -i 's/\(listen.owner = \).*/\1nobody/g; s/\(listen.group = \).*/\1nobody/g' /etc/php-fpm.d/www.conf

Change running user & group from apache to nginx:

sudo sed -i 's/\(^user = \).*/\1nginx/g; s/\(^group = \).*/\1nginx/g' /etc/php-fpm.d/www.conf

Start php-fpm:

sudo systemctl start php-fpm
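
A quick check that php-fpm came up and is listening on the UNIX socket configured above:

sudo systemctl status php-fpm
ls -l /var/run/php-fpm/php-fpm.sock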

Caching

To speed up wordpress further we need to install a few bits of caching software. Accept defaults when prompted.

zend-opcache & apc:

Install necessary packages:

sudo yum -y install php-pecl-zendopcache php-pecl-apcu php-devel gcc
sudo pecl install apc

Add apc extension to php configuration:

sudo sh -c "echo '
#Add apc extension
extension=apc.so' >> /etc/php.ini"

Restart php-fpm:

sudo systemctl restart php-fpm
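
To confirm the extension actually loaded, list PHP’s modules (the CLI reads the same /etc/php.ini):

php -m | grep -i apc
# expect apc and/or apcu in the output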

Varnish

Install:

sudo yum -y install varnish
sudo systemctl enable varnish

Configure nginx to listen on port 8080 instead of port 80:

sudo sed -i /etc/nginx/nginx.conf -e 's/listen.*80/&80 /'

Change varnish to listen on port 80 instead of port 6081:

sudo sed -i /etc/varnish/varnish.params -e 's/\(VARNISH_LISTEN_PORT=\).*/\180/g'

Optional: change varnish to cache files in memory with a limit of 256M (caching to memory is much faster than caching to disk)

sudo sed -i /etc/varnish/varnish.params -e 's/\(VARNISH_STORAGE=\).*/\1\"malloc,256M\"/g'

Add varnish configuration to work with caching wordpress sites:

Update 2/25/2018: added a section to allow Facebook to properly scrape varnish-cached sites.

sudo sh -c 'echo "/* SET THE HOST AND PORT OF WORDPRESS
 * *********************************************************/
vcl 4.0;
import std;

backend default {
 .host = \"127.0.0.1\";
 .port = \"8080\";
 .first_byte_timeout = 60s;
 .connect_timeout = 300s;
}
 
# SET THE ALLOWED IP OF PURGE REQUESTS
# ##########################################################
acl purge {
 \"localhost\";
 \"127.0.0.1\";
}

#THE RECV FUNCTION
# ##########################################################
sub vcl_recv {

#Facebook workaround to allow proper shares
if (req.http.user-agent ~ \"facebookexternalhit\")
        {
        return(pipe);
        }

# set realIP by trimming CloudFlare IP which will be used for various checks
set req.http.X-Actual-IP = regsub(req.http.X-Forwarded-For, \"[, ].*$\", \"\"); 

 # FORWARD THE IP OF THE REQUEST
 if (req.restarts == 0) {
 if (req.http.x-forwarded-for) {
 set req.http.X-Forwarded-For =
 req.http.X-Forwarded-For + \", \" + client.ip;
 } else {
 set req.http.X-Forwarded-For = client.ip;
 }
 }

 # Purge request check sections for hash_always_miss, purge and ban
 # BLOCK IF NOT IP is not in purge acl
 # ##########################################################

 # Enable smart refreshing using hash_always_miss
if (req.http.Cache-Control ~ \"no-cache\") {
 if (client.ip ~ purge) {
 set req.hash_always_miss = true;
 }
}

if (req.method == \"PURGE\") {
 if (!client.ip ~ purge) {
 return(synth(405,\"Not allowed.\"));
 }

 ban (\"req.url ~ \"+req.url);
 return(synth(200,\"Purged.\"));

}

if (req.method == \"BAN\") {
 # Same ACL check as above:
 if (!client.ip ~ purge) {
 return(synth(403, \"Not allowed.\"));
 }
 ban(\"req.http.host == \" + req.http.host +
 \" && req.url == \" + req.url);

 # Throw a synthetic page so the
 # request wont go to the backend.
 return(synth(200, \"Ban added\"));
}

# Unset cloudflare cookies
# Remove has_js and CloudFlare/Google Analytics __* cookies.
 set req.http.Cookie = regsuball(req.http.Cookie, \"(^|;\s*)(_[_a-z]+|has_js)=[^;]*\", \"\");
 # Remove a \";\" prefix, if present.
 set req.http.Cookie = regsub(req.http.Cookie, \"^;\s*\", \"\");

 # For Testing: If you want to test with Varnish passing (not caching) uncomment
 # return( pass );

# DO NOT CACHE RSS FEED
 if (req.url ~ \"/feed(/)?\") {
 return ( pass ); 
}

#Pass wp-cron

if (req.url ~ \"wp-cron\.php.*\") {
 return ( pass );
}

## Do not cache search results, comment these 3 lines if you do want to cache them

if (req.url ~ \"/\?s\=\") {
 return ( pass ); 
}

# CLEAN UP THE ENCODING HEADER.
 # SET TO GZIP, DEFLATE, OR REMOVE ENTIRELY. WITH VARY ACCEPT-ENCODING
 # VARNISH WILL CREATE SEPARATE CACHES FOR EACH
 # DO NOT ACCEPT-ENCODING IMAGES, ZIPPED FILES, AUDIO, ETC.
 # ##########################################################
 if (req.http.Accept-Encoding) {
 if (req.url ~ \"\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$\") {
 # No point in compressing these
 unset req.http.Accept-Encoding;
 } elsif (req.http.Accept-Encoding ~ \"gzip\") {
 set req.http.Accept-Encoding = \"gzip\";
 } elsif (req.http.Accept-Encoding ~ \"deflate\") {
 set req.http.Accept-Encoding = \"deflate\";
 } else {
 # unknown algorithm
 unset req.http.Accept-Encoding;
 }
 }

 # PIPE ALL NON-STANDARD REQUESTS
 # ##########################################################
 if (req.method != \"GET\" &&
 req.method != \"HEAD\" &&
 req.method != \"PUT\" && 
 req.method != \"POST\" &&
 req.method != \"TRACE\" &&
 req.method != \"OPTIONS\" &&
 req.method != \"DELETE\") {
 return (pipe);
 }
 
 # ONLY CACHE GET AND HEAD REQUESTS
 # ##########################################################
 if (req.method != \"GET\" && req.method != \"HEAD\") {
 return (pass);
 }
 
 # OPTIONAL: DO NOT CACHE LOGGED IN USERS (THIS OCCURS IN FETCH TOO, EITHER
 # COMMENT OR UNCOMMENT BOTH
 # ##########################################################
 if ( req.http.cookie ~ \"wordpress_logged_in|resetpass\" ) {
 return( pass );
 }

 #fix CloudFlare Mixed Content with Flexible SSL
 if (req.http.X-Forwarded-Proto) {
 return(hash);
 }

 # IF THE REQUEST IS NOT FOR A PREVIEW, WP-ADMIN OR WP-LOGIN
 # THEN UNSET THE COOKIES
 # ##########################################################
 if (!(req.url ~ \"wp-(login|admin)\") 
 && !(req.url ~ \"&preview=true\" ) 
 ){
 unset req.http.cookie;
 }

 # IF BASIC AUTH IS ON THEN DO NOT CACHE
 # ##########################################################
 if (req.http.Authorization || req.http.Cookie) {
 return (pass);
 }
 
 # IF YOU GET HERE THEN THIS REQUEST SHOULD BE CACHED
 # ##########################################################
 return (hash);
 # This is for phpmyadmin
if (req.http.Host == \"pmadomain.com\") {
return (pass);
}
}

sub vcl_hash {

if (req.http.X-Forwarded-Proto) {
 hash_data(req.http.X-Forwarded-Proto);
 }
}


# HIT FUNCTION
# ##########################################################
sub vcl_hit {
 return (deliver);
}

# MISS FUNCTION
# ##########################################################
sub vcl_miss {
 return (fetch);
}

# FETCH FUNCTION
# ##########################################################
sub vcl_backend_response {
 # I SET THE VARY TO ACCEPT-ENCODING, THIS OVERRIDES W3TC 
 # TENDANCY TO SET VARY USER-AGENT. YOU MAY OR MAY NOT WANT
 # TO DO THIS
 # ##########################################################
 set beresp.http.Vary = \"Accept-Encoding\";

 # IF NOT WP-ADMIN THEN UNSET COOKIES AND SET THE AMOUNT OF 
 # TIME THIS PAGE WILL STAY CACHED (TTL), add other locations or subdomains you do not want to cache here in case they set cookies
 # ##########################################################
 if (!(bereq.url ~ \"wp-(login|admin)\") && !bereq.http.cookie ~ \"wordpress_logged_in|resetpass\" ) {
 unset beresp.http.set-cookie;
 set beresp.ttl = 1w;
 set beresp.grace =3d;
 }

 if (beresp.ttl <= 0s ||
 beresp.http.Set-Cookie ||
 beresp.http.Vary == \"*\") {
 set beresp.ttl = 120 s;
 # set beresp.ttl = 120s;
 set beresp.uncacheable = true;
 return (deliver);
 }

 return (deliver);
}

# DELIVER FUNCTION
# ##########################################################
sub vcl_deliver {
 # IF THIS PAGE IS ALREADY CACHED THEN RETURN A HIT TEXT 
 # IN THE HEADER (GREAT FOR DEBUGGING)
 # ##########################################################
 if (obj.hits > 0) {
 set resp.http.X-Cache = \"HIT\";
 # IF THIS IS A MISS RETURN THAT IN THE HEADER
 # ##########################################################
 } else {
 set resp.http.X-Cache = \"MISS\";
 }
}" > /etc/varnish/default.vcl'

Restart varnish & nginx:

sudo systemctl restart nginx varnish
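
At this point you can verify varnish is answering on port 80 and caching. The VCL above sets an X-Cache header, so two requests in a row should show MISS and then HIT. This is a quick sketch; replace www.example.com with your site and run it from a machine that can reach the server:

curl -sI http://www.example.com/ | grep -i x-cache
curl -sI http://www.example.com/ | grep -i x-cache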

Logging

By default varnish does not log its traffic. This means that your nginx access log will only show requests that varnish does not cache. We have to configure varnish to log traffic so you don’t lose insight into who is visiting your site.

Update 2/14/2017:  I’ve discovered a better way to do this. The old way is still included below, but you really should use this other way.

New way:

The varnish package ships with a systemd unit for varnishncsa. You can use it out of the box by simply issuing:

sudo systemctl start varnishncsa
sudo systemctl enable varnishncsa

If you are behind a reverse proxy then you will want to tweak the varnishncsa output a bit to reflect x-forwarded-for header values (thanks to this github discussion for the guidance.) Accomplish this by appending a modified log output format string to /lib/systemd/system/varnishncsa.service:

sudo sed -i /lib/systemd/system/varnishncsa.service -e "s/ExecStart.*/& -F \'%%{X-Forwarded-For}i %%l %%u %%t \"%%r\" %%s %%b \"%%{Referer}i\" \"%%{User-agent}i\"\' /g"

Lastly, reload systemd configuration, enable, and start the varnishncsa service:

sudo systemctl daemon-reload
sudo systemctl enable varnishncsa
sudo systemctl start varnishncsa
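
Requests should now show up in varnishncsa’s log. The stock unit typically writes to /var/log/varnish/varnishncsa.log; check the ExecStart line of the unit file if yours differs:

# log path assumes the stock varnishncsa.service
sudo tail /var/log/varnish/varnishncsa.log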

 


Old way:

First, enable rc.local

sudo chmod +x /etc/rc.local
sudo systemctl enable rc-local

Next, add this entry to the rc.local file:

sudo sh -c 'echo "varnishncsa -a -w /var/log/varnish/access.log -D -P /var/run/varnishncsa.pid" >> /etc/rc.local'

If your varnish server is behind a reverse proxy (like a web application firewall) then modify the above code slightly (thanks to this site for the information on how to do so)

sudo sh -c "echo varnishncsa -a -F \'%{X-Forwarded-For}i %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-agent}i\"\' -w /var/log/varnish/access.log -D -P /var/run/varnishncsa.pid >> /etc/rc.local"

Once configuration is in place, start the rc-local service

sudo systemctl start rc-local

WordPress

Download, extract, and set permissions for your wordpress installation (this assumes your wordpress site is the only site on the server)

wget https://wordpress.org/latest.zip
sudo unzip latest.zip -d /usr/share/nginx/html/
sudo chown nginx:nginx -R /usr/share/nginx/html/

Configure nginx

Follow best practice for nginx by creating a new configuration file for your wordpress site. In this example I’ve created a file wordpress.conf inside the /etc/nginx/conf.d directory.

Create the file:

sudo vim /etc/nginx/conf.d/wordpress.conf

Insert the following, making sure to modify the server_name directive:

server {
    listen 8080;
    listen [::]:8080;
    server_name wordpress;
    root /usr/share/nginx/html/wordpress;
    port_in_redirect off;


    location / {
        index index.php;
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    error_page 404 /404.html;     
    location = /40x.html {
     }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}

Restart nginx

sudo systemctl restart nginx

Configure upload directory

If you want users to upload content, then you will want to assign the httpd_sys_rw_content_t selinux security context to the wp-content/uploads directory (create it if it doesn’t exist)

sudo mkdir /usr/share/nginx/html/wordpress/wp-content/uploads
sudo chown nginx:nginx /usr/share/nginx/html/wordpress/wp-content/uploads
sudo semanage fcontext -a -t httpd_sys_rw_content_t "/usr/share/nginx/html/wordpress/wp-content/uploads(/.*)?"
sudo restorecon -Rv /usr/share/nginx/html/wordpress/wp-content/uploads
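
You can confirm the context stuck with ls -Z:

ls -dZ /usr/share/nginx/html/wordpress/wp-content/uploads
# the output should include httpd_sys_rw_content_t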

Run the wizard

In order for the wizard to run properly we need to temporarily give the wordpress directory httpd_sys_rw_content_t selinux context

sudo chcon -t httpd_sys_rw_content_t /usr/share/nginx/html/wordpress/

Now navigate to your new website in a browser and follow the wizard, which will create a wp-config.php file inside the wordpress directory.

W3 Total Cache

Once your wordpress site is set up you will want to configure it to communicate with varnish. This will make sure the cache is always up to date when changes are made.

Install the W3 Total Cache wordpress plugin and configure it as follows:


Opcode cache: Opcode:Zend Opcache

Database cache: Check enable, select Opcode: Alternative PHP Cache (APC / APCu)

Object cache: Check enable, select Opcode: Alternative PHP Cache (APC / APCu)

Fragment cache: Opcode: Alternative PHP cache (APC / APCu)


Reverse Proxy: Check “Enable reverse proxy caching via varnish”
Specify 127.0.0.1 in the varnish servers box. Click save all settings.

Wrapping up

Once your site is properly set up, restore the original security context for the wordpress directory:

sudo restorecon -v /usr/share/nginx/html/wordpress/

Lastly restart nginx and varnish:

sudo systemctl restart nginx varnish

Success! Everything is working within the proper SELinux contexts and caching configuration.

Troubleshooting

403 forbidden

I received this error after setting everything up. After some digging I came across this site which explained what could be happening.

For me this meant that nginx couldn’t find an index file and was trying to default to a directory listing, which is not allowed by default. This is fixed by inserting a proper directive to find index files, in my case, index.php. Make sure you have “index index.php” in your nginx.conf inside the location / block:

    location / {
        index index.php;
    }

Accessing wp-admin redirects you to port 8080, times out

If you find that going to /wp-admin redirects you to the wrong port and times out, it’s because nginx is including the port number in its redirects. We want to turn that off (thanks to this site for the help.) Add this to your nginx.conf:

  port_in_redirect off;