
Setups and Servers

SSL certificate installation on EC2 (Amazon Lightsail)

A TLDR guide for installing an SSL certificate in nginx on an Ubuntu EC2 instance, assuming you already have a site up and running under nginx.

While updating a Ghost blog, I had to install an SSL certificate. As Ghost was being served over nginx, there were a few hoops to jump through that I’d not come across before. Thankfully, it’s straightforward to install. We’ll be using a LetsEncrypt certificate.

First, we need to add CertBot.

  • sudo add-apt-repository ppa:certbot/certbot
  • sudo apt-get update
  • sudo apt-get install python-certbot-nginx
  • sudo certbot --nginx -d example.com -d www.example.com (substitute your own domain and any subdomains for the example.com placeholders)
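For reference, certbot’s nginx installer edits your existing server block roughly along these lines (example.com is a placeholder; the live/ paths are certbot’s convention):

```nginx
# Sketch of the directives certbot adds to the nginx server block.
# example.com is a placeholder -- certbot fills in your real domain.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include             /etc/letsencrypt/options-ssl-nginx.conf;
}
```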

And that’s all there is to it.

Dev Notes

If you see the following error it means that CertBot was unable to connect:

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx
Obtaining a new certificate
Performing the following challenges:
tls-sni-01 challenge for
tls-sni-01 challenge for
Waiting for verification…
Cleaning up challenges
Failed authorization procedure. (tls-sni-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: Timeout, (tls-sni-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: Timeout

This is likely due to DNS configuration issues, or the port being blocked. Check that port 443 is allowed through the firewall.
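A quick way to check reachability from another machine is bash’s built-in /dev/tcp pseudo-device. This is just a sketch; substitute your own domain for example.com:

```shell
# Probe a TCP port using bash's /dev/tcp pseudo-device.
# Succeeds if a connection can be opened within 3 seconds.
port_open() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if port_open example.com 443; then
  echo "port 443 reachable"
else
  echo "port 443 blocked or filtered"
fi
```

If this reports the port as blocked, check both the OS firewall (e.g. ufw) and the Lightsail networking tab.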

Amazon Lightsail firewall config

Visual regression testing with Wraith

I’ve had a play with this before, but new machine, new set-up, this time I’ll document it…

First, what is Visual Regression Testing?

Simply, it’s the visual comparison of two versions of a page, highlighting any differences. It’s spot-the-difference for web folks.

It’s important to note this complements other testing, catching the sort of difference easily missed in QA and by functional testing (be that in CI or elsewhere). It also still requires someone to actually look at the output. Humans are significantly better at spotting things that are wrong, but they first need to be presented with the possibility that something could be wrong.

Okay, what is Wraith?

Wraith is a tool created by the BBC for automating some aspects of visual regression testing.

Given two domains, it can compare pages between those domains. Wraith will highlight differences between pages (for example, a page on your local dev or pre-prod environment versus a production environment) and save these as screenshots for you to check manually. It will also flag an unexpected percentage of difference, indicating a potential problem.

Screenshot of Wraith output

Installing Wraith

  • Install HomeBrew. This will take all the pain away. HomeBrew is a package manager for macOS and saves you having to mess around with installation paths and version management.
  • Install ImageMagick. This is required by Wraith to generate screenshots for comparison. From a terminal:
brew install imagemagick
  • Install PhantomJS. This is a headless web browser that will load your site:
brew install phantomjs
  • Install Wraith. Wraith itself is a Ruby package, and this assumes you have Ruby installed. Ruby ships with macOS, so we’re good to go:
gem install wraith
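Under the hood, Wraith’s screenshot diffing is essentially ImageMagick’s compare. Here’s a minimal standalone sketch of the same idea (assumes ImageMagick is installed; file names are arbitrary):

```shell
# Create two test images that differ by one black square, then count
# the differing pixels with ImageMagick's AE (absolute error) metric.
convert -size 100x100 xc:white base.png
convert -size 100x100 xc:white -fill black \
        -draw "rectangle 10,10 20,20" changed.png

# compare exits non-zero when the images differ, so tolerate that;
# the pixel count is written to stderr.
compare -metric AE base.png changed.png diff.png 2> pixels.txt || true
cat pixels.txt
```

diff.png highlights the changed region, much like the “diff” images Wraith saves per page and width.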

Now we have all the bits we need to run Wraith; next we need to configure it.

You could run wraith setup, but I prefer a hand-crafted config. You can save this into a config.yaml file in your project, or wherever you’re doing CI testing. It should be pretty self-explanatory. Here’s the config for one of my sites:

browser: "phantomjs"
domains:
  current: ""
  dev: "http://localhost:3001"
paths:
  home: /
screen_widths:
  - 320
  - 600x768
  - 768
  - 1024
  - 1280
directory: 'wraith'
fuzz: '20%'
threshold: 5
gallery:
  template: 'slideshow_template'
  thumb_width: 200
  thumb_height: 200
mode: diffs_first

On my particular setup, I have a dev copy running on port 3001 of my local Mac. This could be set to wherever your test environments live (CI environment, for example).

Running Wraith

As we installed Wraith as a global Ruby gem (and its prerequisites globally via Homebrew), you can run Wraith from anywhere on your system. It doesn’t have to be in your project folder.

Note: You may actually want to avoid committing your Wraith configuration to your project as it could expose testing URLs and other setup that you don’t want public.

  • Run Wraith with our configuration
wraith capture ./configs/capture.yaml

Wraith terminal screenshot

  • Wraith will provide a summary and generate a series of screenshots. These can be found (in my case) in ./wraith/home/. They contain screenshots of each URL, a thumbnail, and a third file containing the differences between the URLs.
  • Wraith also generates a gallery at ./wraith/gallery.html

In my setup I ran Wraith from my project folder, so Wraith’s gallery was served along with the site on port 3001.

Wraith gallery screenshot


Wraith documentation

Adding a CDN to WordPress

A CDN can significantly improve site performance. I’ve home-spun a few in the past (using spare servers), but that’s not going to give the distributed performance boost of a truly localised CDN service.

Step 1

We’ll be using KeyCDN for our CDN services. There are plenty of these services to choose from, but KeyCDN offers a relatively inexpensive service and decent geolocation, and, more importantly, the ability to use on-demand SSL certificates for free via LetsEncrypt, which is critical if your site is already using SSL and you want to make use of a CDN.

First we create a Zone and point it at our primary domain. Ensure that SSL is enabled. No site should be running without SSL these days.

Now we can create a Zone Alias, which configures the subdomain we’ll be using on our site. Once created, this will provide us with the “real” CDN domain. We’ll need to add this to a CNAME record in our DNS in the next step.

Note: on KeyCDN, you can generate an SSL certificate for your domain (for free) by selecting LetsEncrypt from the SSL options.

Step 2, creating a subdomain

To avoid mixed content warnings (and future-proof our setup) we need to use SSL on our CDN because our main domain is also using SSL. We enabled SSL in step 1, but to make use of this, we’ll need to create an alias for it on our own DNS.

Via your DNS provider (in this case, Digital Ocean), create a new CNAME record for the subdomain and point it at the Zone Alias domain provided by KeyCDN.
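In zone-file terms, the record looks something like this (both names are hypothetical placeholders; KeyCDN zone URLs live under kxcdn.com):

```
; Hypothetical DNS zone fragment for the CDN subdomain.
; Replace both names with your own subdomain and zone alias target.
cdn.example.com.   3600   IN   CNAME   myzone-1234.kxcdn.com.
```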

It’ll take a while for this to propagate before the CDN is correctly associated with your domain.

Step 3, configure WordPress

We’ll be using the WordPress Super Cache plugin here to do the work for us. Basically, it can rewrite the URLs of any assets you upload to WordPress to our newly created CDN.

All you have to do is enter the subdomain in the CDN tab.

Now we can test that it’s all working. Browse to the site and take a look at the network tools. We should see our assets now being served via the CDN subdomain.

Renewing SSL with crontab failed

LetsEncrypt SSL certificates expire every 90 days.

Looks like something’s not quite working with the auto-renewal of SSL certs for my personal domains. I caught an email from LetsEncrypt warning me that various domains were about to lose their certificates, despite a cron job running on the server.

To edit the crontab (Ubuntu):

 sudo crontab -e

Which now reads:

0 4 * * 1 /usr/bin/letsencrypt renew >> /var/log/le-renew.log

Updated cron to run on Mondays at 4am (using this handy cron calculator).
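For reference, the five schedule fields break down like this:

```
# ┌───────── minute (0)
# │ ┌─────── hour (4, i.e. 4am)
# │ │ ┌───── day of month (* = any)
# │ │ │ ┌─── month (* = any)
# │ │ │ │ ┌─ day of week (1 = Monday)
# │ │ │ │ │
  0 4 * * 1 /usr/bin/letsencrypt renew >> /var/log/le-renew.log
```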

For reference…

sudo /usr/bin/letsencrypt renew

…will manually renew any certificates that are about to expire.


Installing HTTP/2 on Ubuntu

The why?

HTTP/2 is a modern protocol that offers big performance improvements over HTTP/1.1. A binary protocol, it effectively streams the data, circumventing the age-old problem of slow sites due to many small assets. This means we can (in many circumstances) remove the need to concatenate scripts/CSS or fuss with sprite sheets; HTTP/2 provides potentially faster performance by interleaving the files as they are served. Secondly, HTTP/2 allows the server to “push” files to the browser that it thinks the browser will need next. This can work in tandem with the preload hint in your page markup.
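A preload hint is just a line of markup in the page head; a minimal example (path and file name are hypothetical):

```html
<!-- Tells the browser to fetch this stylesheet early.
     (Server push is typically driven by the equivalent
     "Link:" HTTP response header rather than this element.) -->
<link rel="preload" href="/css/main.css" as="style">
```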

Enabling HTTP/2 on Ubuntu

  • First we need to update our Ubuntu install and tell it where to find an HTTP/2-capable Apache build:
sudo add-apt-repository -y ppa:ondrej/apache2
sudo add-apt-repository -y ppa:ondrej/php5
sudo apt-get update && sudo apt-get dist-upgrade
  • Now we can enable the HTTP/2 module:
sudo a2enmod http2

Note that for HTTP/2 to work, we need to have previously set up SSL certificates for any domains we want to serve. I covered that here.

  • Next we’ll need to tell Apache that we want to let sites be served over the new protocol:
sudo nano /etc/apache2/apache2.conf

Add a section:

# HTTP Protocols
Protocols h2 http/1.1
  • Save the file. Now we’ll need to restart Apache for the change to stick:
sudo service apache2 restart

You shouldn’t need to do anything else. Browsers will automatically try the newer protocol and the server should serve up files with no changes to your codebase.

To test whether HTTP/2 is working, check in Chrome’s dev tools under the Network tab. If the Protocol column is missing, right-click the column headers to add it. h2 means HTTP/2.

Screenshot of networking tool in Chrome

Dev Notes

As ever, when playing with stuff like this, take a snapshot of your machine first…

Adding IPv6 to Ubuntu

Finally made some time to investigate and set up IPv6 support on my Ubuntu (Digital Ocean) servers. Turns out it’s not that hard. Here’s the TLDR.

  1. Take a snapshot. Or backup. Because.
  2. Now, get an address… In my case, that was as simple as clicking a button on my Digital Ocean droplet admin. Unfortunately, that meant an unscheduled machine reboot… as it can only be applied while the machine is turned off (or being newly provisioned). Oh well.
  3. SSH to your server.
  4. Then add your address:
  5. sudo ip -6 addr add public_ipv6_address/64 dev eth0
  6. Followed by your gateway:
  7. sudo ip -6 route add default via public_ipv6_gateway dev eth0

    I’ll admit I was confused initially by this step as I was accidentally adding the ipv6 address again. The gateway address is different.

  8. Now make the configuration permanent so it survives reboots: sudo nano /etc/network/interfaces
  9. Add the following. Obviously, use your own v6 address (note: you do not add the /64 we entered when setting the address up initially) plus gateway. The dns addresses here are for Digital Ocean, so if you are using a different host, you’ll need to change them.
    iface eth0 inet6 static
      address primary_ipv6_address
      netmask 64
      gateway ipv6_gateway
      autoconf 0
      dns-nameservers 2001:4860:4860::8844 2001:4860:4860::8888
  10. Save the file. Reboot the server.

More info? Digital Ocean have a detailed article (plus comments).

A quick guide to hosting WordPress sites on Ubuntu

I’ve recently set up a clutch of new WordPress sites on an Ubuntu server. This post pulls together my notes for doing it with minimal fuss. You’ll need:


  • SSH access to the server
  • Admin level user for editing Apache settings
  • A user with permission to create databases (root)
  • A domain name already pointed at the IP address of this server
  • Basic knowledge of Nano (file editing)

Setup the database

We’ll assume we know how to get into our database. We’ll be prompted for a password (in this case for the user root):

mysql -uroot -p
CREATE DATABASE mysite;
CREATE USER 'mysiteuser'@'localhost' IDENTIFIED BY 'passwordhere';
GRANT ALL PRIVILEGES ON mysite.* TO 'mysiteuser'@'localhost';
FLUSH PRIVILEGES;

This creates a database called “mysite”, and a user that has full privileges to just that database.
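WordPress’s web installer will ask for these credentials and write them into wp-config.php; the relevant fragment ends up looking like this (values match the hypothetical database, user and password created above):

```php
<?php
// Database settings in wp-config.php, matching the database,
// user and password created in the MySQL step above.
define( 'DB_NAME', 'mysite' );
define( 'DB_USER', 'mysiteuser' );
define( 'DB_PASSWORD', 'passwordhere' );
define( 'DB_HOST', 'localhost' );
```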

Download WordPress

Now we need to download the latest WordPress release using wget, then unpack it:

wget https://wordpress.org/latest.tar.gz
tar -xzvf latest.tar.gz
This will result in a folder named wordpress/ in our home folder.

On Ubuntu servers, web sites are usually located in /var/www/. So we’ll create a new folder there for our WordPress site, copy the contents of the wordpress/ folder into it, and ensure the web server (Apache) has rights to serve any files.

sudo mkdir /var/www/mysite
sudo cp -r wordpress/. /var/www/mysite
cd /var/www
sudo chown -R www-data:www-data mysite
sudo chmod -R 775 mysite

To check the permissions applied, use ls -ltr.
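You can also check the mode bits directly with GNU stat. A quick sketch against a throwaway directory (on the server you’d point it at /var/www/mysite instead):

```shell
# Recreate the permission pattern on a throwaway directory,
# then confirm the octal mode bits with stat.
mkdir -p /tmp/mysite-demo/wp-content
chmod -R 775 /tmp/mysite-demo
stat -c '%a' /tmp/mysite-demo    # prints 775
```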

Setting up the Web Server

Now we need to tell Apache about the site.
cd /etc/apache2/sites-available/
There’s likely a default configuration file already here, or files for other sites already set up. We’ll take a copy of the default one (on Digital Ocean WordPress droplets this is named 000-default.conf.dpkg-dist) and name it for our site:
sudo cp 000-default.conf.dpkg-dist mysite.conf
We now need to edit this file, making sure any references to our new site and the folder we previously copied it into are correct.
sudo nano mysite.conf
The file should be edited to look similar to this:
<VirtualHost *:80>
 ServerAdmin webmaster@localhost
 DocumentRoot /var/www/mysite

 <Directory /var/www/mysite>
  Options Indexes FollowSymLinks
  AllowOverride All
  Require all granted
 </Directory>

 ErrorLog ${APACHE_LOG_DIR}/error.log
 CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Enabling SSL and starting your site

All modern sites should also use SSL, as it protects our visitors. The default Digital Ocean WordPress droplet comes with this pre-installed, but it’s easy enough to add to Ubuntu yourself. Lets Encrypt* will walk us through the setup (substitute your own domain for the example.com placeholders below).
sudo letsencrypt --apache -d example.com -d www.example.com
sudo a2ensite mysite
sudo service apache2 reload

Note that we add all the aliases for this site where we want SSL to be enabled (which should include all subdomains!).

That’s our site good to go.

* If Lets Encrypt complains that it’s unable to verify the domain it’s likely that the DNS entries for our domain are not yet resolving (can’t find a server). If we’ve only just setup this domain, we may need to wait up to 24 hours.