A screenshot of the WordPress site, showing updates available

wp-upgrade.sh – A simple tool to update and upgrade WordPress components via cron

A few years ago, I hosted my blog on Dreamhost. They’ve customized something inside the blog which means it doesn’t automatically update itself. I’ve long since moved this blog off to my own hosting, but I can’t figure out which thing it was they changed, and I’ve got so much content and stuff in here, I don’t really want to mess with it.

Anyway, I like keeping my systems up to date, and I hate logging into a blog and finding updates are pending, so I wrote this script. It uses wp-cli which I have installed to /usr/local/bin/wp as per the install guide. This is also useful if you’re hosting your site in such a way that you can’t make changes to core or plugins from the web interface.

This script updates:

  1. All core files (the core update-db, core update and language core update lines)
  2. All plugins (the plugin update --all and language plugin update --all lines)
  3. All themes (the theme update --all and language theme update --all lines)

To remove any part of this script, just delete those lines, including the /usr/local/bin/wp and --quiet && \ fragments!
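For reference, here’s a minimal sketch of what such a script might look like, reconstructed from the description above rather than copied verbatim; it assumes wp-cli lives at /usr/local/bin/wp and that the path to the blog is passed as the first argument:

#!/bin/bash
# Minimal sketch: run every wp-cli update against the WordPress install
# whose path is passed as the first argument.
cd "$1" && \
/usr/local/bin/wp core update-db --quiet && \
/usr/local/bin/wp core update --quiet && \
/usr/local/bin/wp language core update --quiet && \
/usr/local/bin/wp plugin update --all --quiet && \
/usr/local/bin/wp language plugin update --all --quiet && \
/usr/local/bin/wp theme update --all --quiet && \
/usr/local/bin/wp language theme update --all --quiet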

I then run sudo -u www-data crontab -e (replacing www-data with the real account name of the user who controls the blog, which you can find by running ls -l /var/www/html/, adjusting the path to wherever your blog is located) and add the bottom line to that crontab file (the rest is just comments to remind you what the fields are!):

#                                         day of month [1-31]
#                                             month [1-12]
#                                                 day of week [1-6 Mon-Sat, 0/7 Sun]
# minute   hour                                         command
1          1,3,5,7,9,11,13,15,17,19,21,23 *   *   *     /usr/local/bin/wp-upgrade.sh /var/www/jon.sprig.gs/blog

This means that every other hour, at 1 minute past the hour, every day, every month, I run the update :)

If you’ve got email set up for this host and user, you’ll get an email whenever it upgrades a component, too.
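If you’d rather those mails went to a particular address instead of the local user, standard cron also honours a MAILTO line at the top of the crontab (hypothetical address, and it assumes the host can actually send mail):

MAILTO=you@example.com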

"Shipping Containers" by "asgw" on Flickr

Creating my first Docker containerized LEMP (Linux, nginx, MariaDB, PHP) application

Want to see what I built without reading the why’s and wherefore’s? The git repository with all the docker-compose goodness is here!

Late edit 2020-01-16: The fantastic Jerry Steel, my co-host on The Admin Admin podcast, looked at what I wrote and made a few suggestions. I’ve updated the code in the git repo, and I’ll try to annotate below when I’ve changed something. If I miss it, it’s right in the Git repo!

One of the challenges I set myself this Christmas was to learn enough about Docker to containerize an arbitrary PHP application, one that I would previously have misused Vagrant to contain.

Just before I started down this rabbit hole, I spoke to my Aunt about some family tree research my father had left behind after he died, and how I wished I could easily share the old tree with her (I organised getting her a Chromebook a couple of years ago, after fighting with doing remote support for years on Linux and Windows laptops). In the end, I found a web application for genealogical research called HuMo-gen, that is a perfect match for both projects I wanted to look at.

HuMo-gen was first created in 1999, with a PHP version being released in 2005. It used MySQL or MariaDB as the Database engine. I was reasonably confident that I could have created a Vagrantfile to deliver this on my home server, but I wanted to try something new. I wanted to use the “standard” building blocks of Docker and Docker-Compose, and some common containers to make my way around learning Docker.

I started by looking for some resources on how to build a Docker container. Much of the guidance I’d found was to use Docker-Compose, as this allows you to stand several components up at the same time!

In contrast to how Vagrant works (which is basically a CLI wrapper around several virtual machine providers), Docker isolates resources for a single process that runs on a machine. Where in Vagrant you might run several processes on one machine (perhaps, in this instance, nginx, PHP-FPM and MariaDB), with Docker you’re encouraged to run each “service” as its own container and link them together with an overlay network. It’s possible to do the same with Vagrant, but you’d end up with an awful lot of VM overhead to separate out each piece.

So, I first needed to select my services. My initial line-up was:

  • MariaDB
  • PHP-FPM
  • Apache’s httpd2 (replaced by nginx)

I was able to find official Docker images for PHP, MariaDB and httpd, but after extensive tweaking, I couldn’t make the httpd image talk the way I wanted it to with the PHP image. Bowing to what now seems to be conventional wisdom, I swapped out the httpd service for nginx.

One of the stumbling blocks for me, particularly early on, was how to build several different Dockerfiles (these are basically the instructions for the container you’re constructing). Here is the basic outline of how to do this:

version: '3'
services:
  yourservice:
    build:
      context: .
      dockerfile: relative/path/to/Dockerfile

In this docker-compose.yml file, I tell it that to create the yourservice service, it needs to build the docker container, using the file in ./relative/path/to/Dockerfile. This file in turn contains an instruction to import an image.
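For instance, that imported-image instruction can be the whole Dockerfile; a single FROM line naming a published image is enough (a generic example, not one of the files from this project):

FROM debian:10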

Each service stacks on top of each other in that docker-compose.yml file, like this:

version: '3'
services:
  service1:
    build:
      context: .
      dockerfile: service1/Dockerfile
    image: localhost:32000/service1
  service2:
    build:
      context: .
      dockerfile: service2/Dockerfile
    image: localhost:32000/service2

Late edit 2020-01-16: This previously listed Dockerfile/service1, however, much of the documentation suggested that Docker gets quite opinionated about the file being called Dockerfile. While docker-compose can work around this, it’s better to stick to tradition :) The docker-compose.yml files below have also been adjusted accordingly. I’ve also added an image: somehost:1234/image_name line to help with tagging the images for later use. It’s not critical to what’s going on here, but I found it useful with some later projects.

To allow containers to see ports between themselves, you add the expose: command in your docker-compose.yml, and to allow that port to be visible from the “outside” (i.e. to the host and upwards), use the ports: command listing the “host” port (the one on the host OS), then a colon and then the “target” port (the one in the container), like these:

version: '3'
services:
  service1:
    build:
      context: .
      dockerfile: service1/Dockerfile
    image: localhost:32000/service1
    expose:
    - 12345
  service2:
    build:
      context: .
      dockerfile: service2/Dockerfile
    image: localhost:32000/service2
    ports:
    - 8000:80

Now, let’s take a quick look into the Dockerfiles. Each “statement” in a Dockerfile adds a new “layer” to the image. For local operations, this probably isn’t a problem, but when you’re storing these images on a hosted provider, you want to keep these images as small as possible.
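As a generic illustration of that (not something from this project), chaining commands into a single RUN statement keeps the layer count, and usually the final image size, down:

# Three statements, three layers:
RUN apt-get update
RUN apt-get install -y some-package
RUN rm -rf /var/lib/apt/lists/*

# One statement, one layer, and the apt cache never gets baked into the image:
RUN apt-get update && \
    apt-get install -y some-package && \
    rm -rf /var/lib/apt/lists/*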

I built a Database Dockerfile, which is about as small as you can make it!

FROM mariadb:10.4.10

Yep, one line. How cool is that? In the docker-compose.yml file, I invoke this, like this:

version: '3'
services:
  db:
    build:
      context: .
      dockerfile: mariadb/Dockerfile
    image: localhost:32000/db
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: a_root_pw
      MYSQL_USER: a_user
      MYSQL_PASSWORD: a_password
      MYSQL_DATABASE: a_db
    expose:
      - 3306

OK, so this one is a bit more complex! I wanted it to build my Dockerfile, which is “mariadb/Dockerfile“. I wanted it to restart the container whenever it failed (which hopefully isn’t that often!), and I wanted to inject some specific environment variables into the container – the root and user passwords, a user account and a database name. Initially I was having some issues where it wasn’t building the database with these credentials, but I think that’s because I wasn’t “building” a new database, I was just reusing one I’d already created. I also expose the MariaDB (MySQL) port, 3306, to the other containers in the docker-compose.yml file.
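On that credentials issue: as far as I can tell, the official MariaDB image only acts on the MYSQL_* environment variables the first time it initialises an empty data directory, so changing them later has no effect. During development you can force that first-run setup to happen again by throwing the old data away and rebuilding, something like:

docker-compose down --volumes
docker-compose up --build -d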

Let’s take a look at the next part! PHP-FPM. Here’s the Dockerfile:

FROM php:7.4-fpm
RUN docker-php-ext-install pdo pdo_mysql
ADD --chown=www-data:www-data public /var/www/html

There’s a bit more to this, but not loads. We build our image from a named version of PHP, and install two extensions to PHP, pdo and pdo_mysql. Lastly, we copy the content of the “public” directory into the /var/www/html path, and make sure it “belongs” to the right user (www-data).

I’d previously tried to do a lot more complicated things with this Dockerfile, but it wasn’t working, so instead I slimmed it right down to just this, and the docker-compose.yml is a lot simpler too.

  phpfpm:
    build:
      context: .
      dockerfile: phpfpm/Dockerfile
    image: localhost:32000/phpfpm

See! Loads simpler! Now we need the complicated bit! :) This is the Dockerfile for nginx.

FROM nginx:1.17.7
COPY nginx/default.conf /etc/nginx/conf.d/default.conf

COPY public /var/www/html

Weirdly, even though I’ve added version numbers for MariaDB and PHP, I’ve not done the same for nginx; perhaps I should! Late edit 2020-01-16: I’ve put a version number on there now; previously, where it now says nginx:1.17.7, it actually said nginx:latest.

Originally, I created the configuration block for nginx in a single “RUN” line. Late edit 2020-01-16: This Dockerfile no longer has a giant echo 'stuff' > file block, following Jerry’s advice, and I’m using COPY instead of ADD on his advice too. I’ll show that config file below. There are a couple of high points for me here!

server {
  index index.php index.html;
  server_name _;
  error_log /proc/self/fd/2;
  access_log /proc/self/fd/1;
  root /var/www/html;
  location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass phpfpm:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
  }
}
  • server_name _; means “use this block for all unnamed requests”.
  • access_log /proc/self/fd/1; and error_log /proc/self/fd/2; These are links to the process’s “stdout” and “stderr” file descriptors, and basically mean that when you do docker-compose logs, you’ll see the HTTP logs for the server! These two paths are guaranteed to be there, while /dev/stderr isn’t!

Because nginx is “just” serving the static web content, and I know the content doesn’t need to be written to by nginx, I knew I didn’t need to do the chown action, like I did with the PHP-FPM block.

Lastly, I need to configure the docker-compose.yml file for nginx:

  nginx:
    build:
      context: .
      dockerfile: nginx/Dockerfile
    image: localhost:32000/nginx
    ports:
      - 127.0.0.1:1980:80

I’ve gone for a slightly unusual ports configuration when I deployed this to my web server… you see, I already have the HTTP port (TCP/80) in use on that server for running the rest of my web services. During development on my home machine, the ports line simply read “1980:80”. On the server, instead, I run this application bound to “localhost” (127.0.0.1) on a different port number (1980, selected because it could, conceivably, be a birthday of someone on this system), and then in my local web server configuration I proxy connections to this service, with HTTPS encryption as well. That’s all outside the scope of this article (as I probably should be using something like Traefik, anyway), but it shows you how you could bind to a separate port too.
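Purely as a sketch of the sort of thing I mean (not the actual config from my server, and with a made-up host name), the front-end web server just needs a block that proxies through to that port, with the HTTPS termination layered on top in the usual way:

server {
  listen 80;
  server_name genealogy.example.com;  # hypothetical name, just for this sketch

  location / {
    proxy_pass http://127.0.0.1:1980;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}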

Anyway, that was my Docker journey over Christmas, and I look forward to using it more, going forward!

Featured image is “Shipping Containers” by “asgw” on Flickr and is released under a CC-BY license.

Creating Self Signed certificates in Ansible

In my day job, I sometimes need to use a self-signed certificate when building a box. As I love using Ansible, I wanted to make the self-signed certificate piece something that was part of my Ansible workflow.

Here follows a bit of basic code that you could use to work through the process of creating a self-signed certificate. I would strongly recommend using something more production-ready (e.g. LetsEncrypt) when you’re looking to move from “development” to “production” :)

---
- hosts: localhost
  vars:
  - dnsname: your.dns.name
  - tmppath: "./tmp/"
  - crtpath: "{{ tmppath }}{{ dnsname }}.crt"
  - pempath: "{{ tmppath }}{{ dnsname }}.pem"
  - csrpath: "{{ tmppath }}{{ dnsname }}.csr"
  - pfxpath: "{{ tmppath }}{{ dnsname }}.pfx"
  - private_key_password: "password"
  tasks:
  - file:
      path: "{{ tmppath }}"
      state: absent
  - file:
      path: "{{ tmppath }}"
      state: directory
  - name: "Generate the private key file to sign the CSR"
    openssl_privatekey:
      path: "{{ pempath }}"
      passphrase: "{{ private_key_password }}"
      cipher: aes256
  - name: "Generate the CSR file signed with the private key"
    openssl_csr:
      path: "{{ csrpath }}"
      privatekey_path: "{{ pempath }}"
      privatekey_passphrase: "{{ private_key_password }}"
      common_name: "{{ dnsname }}"
  - name: "Sign the CSR file as a CA to turn it into a certificate"
    openssl_certificate:
      path: "{{ crtpath }}"
      privatekey_path: "{{ pempath }}"
      privatekey_passphrase: "{{ private_key_password }}"
      csr_path: "{{ csrpath }}"
      provider: selfsigned
  - name: "Convert the signed certificate into a PKCS12 file with the attached private key"
    openssl_pkcs12:
      action: export
      path: "{{ pfxpath }}"
      name: "{{ dnsname }}"
      privatekey_path: "{{ pempath }}"
      privatekey_passphrase: "{{ private_key_password }}"
      passphrase: password
      certificate_path: "{{ crtpath }}"
      state: present
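Assuming you save that as something like selfsigned.yml (a made-up filename), running it and then inspecting the resulting certificate is just:

ansible-playbook selfsigned.yml
openssl x509 -in tmp/your.dns.name.crt -noout -subject -dates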

One to read: A Beginner’s Guide to IPFS

Ever wondered about IPFS (the “InterPlanetary File System”)? It’s a new way to share and store content that doesn’t rely on a central server (e.g. Facebook, Google, Digital Ocean, or your home NAS), but instead uses a BitTorrent-like system combined with published records to keep the content available across the network.

If your host (where the original content is stored) goes down, the content is also cached on other nodes that have visited your site.

These caches are cleared over time, so they’re only suitable for covering short outages, or you can have other nodes “pin” your content (and pinning can be offered as a paid service that helps fund hosts).

IPFS is great at hosting static content, but how do you deal with dynamic content? That’s where PubSub comes into play (which isn’t covered in this article). There’s a database service called Orbit-DB which sits on IPFS and uses PubSub to sync data across the network.

It’s looking interesting, especially in light of the announcement from CloudFlare about the introduction of their IPFS gateway.

It’s looking good for IPFS!

This was automatically posted from my RSS Reader, and may be edited later to add commentary.

One to read or watch: “Programming is Forgetting: Toward a New Hacker Ethic”

Here is a transcript of a talk by Allison Parrish at the Open Hardware Summit in Portland, OR. The talk “Programming is Forgetting: Toward a New Hacker Ethic” is a discussion about the failings of the book “Hackers” by Steven Levy. Essentially, that book proposed (in the 80’s) a set of ethics for Hackers (which is to say, creative programmers or engineers, not malicious operators). Allison suggests that many of the parables in the book do not truly reflect the “Hacker Ethic”, and revises them for today’s world.

Her new questions (not statements) are as follows:

  • Who gets to use what I make? Who am I leaving out? How does what I make facilitate or hinder access?
  • What data am I using? Whose labor produced it and what biases and assumptions are built into it? Why choose this particular phenomenon for digitization or transcription? And what do the data leave out?
  • What systems of authority am I enacting through what I make? What systems of support do I rely on? How does what I make support other people?
  • What kind of community am I assuming? What community do I invite through what I make? How are my own personal values reflected in what I make?

This is a significant re-work of the original “Hacker Ethic“, and you should really either watch or read the talk to see how she got to these from the original, especially as it’s not as punchy as the original.

I’d like to think I was thinking of things like these questions when I wrote CampFireManager and CCHits.

A quick update

So, my last post ended with me sacking it all in. Fortunately for open source, and my projects in general, a few people stepped up and reminded me why I do this stuff. So, CCHits is still going, and I don’t feel as alone any more with it (which is nice :) ), and CFM development is back on the cards. MOTP-AS is still a bit on hiatus until I get my head around an MVC framework (I’m currently using ZF2 in CFM3).

What is really nice about where I am right now is that I’m learning stuff about the tools I’m using, so it’s not all focusing on stuff which isn’t working, and is instead focusing on the shiny :)

I’ll try and get some posts about Vagrant, Puppet, and ZF2 out as I discover non-trivial stuff about them :)

Proxying and using alternate host names with Apache

After spotting this comment on StatusNet about using another port on the same IP address for a web service, I thought I’d jot down what I do instead, to ensure I use the standard HTTP and HTTPS ports for my web applications.

In /etc/apache2/sites-available, I create a file called subdomain.your.host.name

<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    ServerName subdomain.your.host.name

    ErrorLog ${APACHE_LOG_DIR}/subdomain.your.host.name.error.log

    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn

    CustomLog ${APACHE_LOG_DIR}/subdomain.your.host.name.access.log combined

    ProxyPass / http://127.0.0.1:12345/
    ProxyPassReverse / http://127.0.0.1:12345/
</VirtualHost>

Configure your non-Apache app to bind to a port on 127.0.0.1; here I’ve set it to 12345.

This proxies an HTTP only application… but if you want to proxy an HTTPS application, you either need to have a wildcard SSL certificate, use multiple IP addresses, or, as the original post suggested, use an alternate port.

If you’re proxying an application for HTTPS, try this:

<IfModule mod_ssl.c>
<VirtualHost *:443>
    ServerAdmin webmaster@localhost
    ServerName subdomain.your.host.name

    ErrorLog ${APACHE_LOG_DIR}/ssl_subdomain.your.host.name.error.log
    LogLevel warn
    CustomLog ${APACHE_LOG_DIR}/ssl_subdomain.your.host.name.access.log combined

    SSLEngine on
    SSLCertificateChainFile /etc/openssl/root.crt
    SSLCertificateFile /etc/openssl/server.crt
    SSLCertificateKeyFile /etc/openssl/server.key

    BrowserMatch "MSIE [2-6]" \
        nokeepalive ssl-unclean-shutdown \
        downgrade-1.0 force-response-1.0
    # MSIE 7 and newer should be able to use keepalive
    BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown

    ProxyPass / http://127.0.0.1:4670/
    ProxyPassReverse / http://127.0.0.1:4670/
</VirtualHost>
</IfModule>

Of course, if you’re looking to create several virtual hosts for apache, rather than proxy them, you can instead do this:

<VirtualHost *:80>
    ServerName subdomain.your.host.name
    ServerAdmin webmaster@localhost

    DocumentRoot /var/www_subdomain.your.host.name/
    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>

    <Directory /var/www_subdomain.your.host.name/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/subdomain.your.host.name.error.log

    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn

    CustomLog ${APACHE_LOG_DIR}/subdomain.your.host.name.access.log combined

</VirtualHost>

Once you’ve got your config files up, you’ll need to enable them with the following command:

a2ensite subdomain.your.host.name

That assumes you named the file /etc/apache2/sites-available/subdomain.your.host.name

You may also need to enable the proxy modules with the command:

a2enmod proxy proxy_http
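Once the modules and the site are enabled, Apache needs a reload before any of it takes effect; on a Debian/Ubuntu-style layout that’s typically:

service apache2 reload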

A quick note on autoloaders for PHP

Over the past few days, as you may have noticed, I’ve been experimenting with PHPUnit, and writing up notes on what I’ve learned. Here’s a biggie, but it’s such a small actual change, I didn’t want to miss it.

So, when you have your autoloader written, you’ll have a function like this (probably):

<?php
function __autoload($classname)
{
    if (file_exists(dirname(__FILE__) . '/classes/' . $classname . '.php')) {
        require_once dirname(__FILE__) . '/classes/' . $classname . '.php';
    }
}

Load this from your test, or in a bootstrap file (more to come on that particular subject, I think!), like this:

<?php
require_once dirname(__FILE__) . '/../autoloader.php';
class SomeClassTest extends ........

And you’ll probably notice the autoloader doesn’t do anything… but why is this? Because PHPUnit has its own autoloader, and you need to chain your autoloader onto the end. So, in your autoloader file, add this line to the end:

<?php
function __autoload($classname)
{
    if (file_exists(dirname(__FILE__) . '/classes/' . $classname . '.php')) {
        require_once dirname(__FILE__) . '/classes/' . $classname . '.php';
    }
}

spl_autoload_register('__autoload');

And it all should just work, which is nice :)

A quick word on salting your hashes.

If you don’t know what hashing is in relation to coding, the long version is here: Cryptographic Hash Function. The short version is that it applies a mathematical formula to the contents of a file, string or other data, and returns a much shorter value with a slim chance of “collisions”.

I don’t know whether it’s immediately clear to anyone else, but I used to think this was a good idea.

<?php
$password = sha1($_POST['password']);

Then I went to a PHPNW session, and asked someone to take a look at my code, and got a thorough drubbing for not adding a cryptographic salt (wikipedia).

For those who don’t know, a salt is a set of characters you add before or after the password (or both!) so that a simple “rainbow table” attack doesn’t work (essentially a brute-force attack against the authentication data, hashing lots and lots of strings and looking for one whose hash matches the stored hash). To be able to authenticate with that string again in the future, the salted string must be easily repeatable, and one way to do that is to use other data that’s already in the user record.

For example, this is a simple salt:

<?php
$password = sha1('salt' . $_POST['password']);

I read in the April 2012 edition of 2600 magazine something that I should have been doing with my hashes all along. How’s this for more secure code?

<?php
$site_salt = 'pepper';
$SQL = "SELECT intUserID FROM users WHERE strUsername = ?";
$DB = new PDO($dsn);
$query = $DB->prepare($SQL);
$query->execute(array(strtolower($_POST['username'])));
// fetchColumn() returns the single intUserID value, or false if there's no matching user
$userid = $query->fetchColumn();
if ($userid == false) {
  return false;
}
$prefix = '';
$suffix = '';
if ($userid % 2 == 0) {
  $prefix = $site_salt;
} else {
  $suffix = $site_salt;
}
if ($userid % 3 == 0) {
  $prefix .= strtolower($_POST['username']);
} else {
  $suffix .= strtolower($_POST['username']);
}
if ($userid % 4 == 0) {
  $prefix = strrev($prefix);
}
if ($userid % 5 == 0) {
  $suffix = strrev($suffix);
}
$hashedPassword = sha1($prefix . $_POST['password'] . $suffix);

So, this gives you an easily repeatable string that’s relatively hard to calculate without easy access to the source code :)
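To make the “repeatable” part concrete, here’s a purely illustrative sketch (hypothetical function name, not code from any real project) that wraps the same derivation in a function, so registration and login are guaranteed to build the identical string:

<?php
// Hypothetical helper: derive the salted hash for a given user record,
// using exactly the same logic as the block above.
function salted_hash($site_salt, $userid, $username, $password)
{
  $prefix = '';
  $suffix = '';
  if ($userid % 2 == 0) { $prefix = $site_salt; } else { $suffix = $site_salt; }
  if ($userid % 3 == 0) { $prefix .= strtolower($username); } else { $suffix .= strtolower($username); }
  if ($userid % 4 == 0) { $prefix = strrev($prefix); }
  if ($userid % 5 == 0) { $suffix = strrev($suffix); }
  return sha1($prefix . $password . $suffix);
}

// $userid would normally come from the users-table lookup above.
$userid = 42;

// At registration time: store this value in the users table.
$stored = salted_hash('pepper', $userid, 'SomeUser', 'correct horse battery staple');

// At login time: recompute from the submitted credentials and compare with the stored value.
$authenticated = ($stored == salted_hash('pepper', $userid, 'someuser', 'correct horse battery staple'));
// true, because strtolower() makes the username case-insensitive and everything else matches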