Want to see what I built without reading the whys and wherefores? The git repository with all the docker-compose goodness is here!
Late edit 2020-01-16: The fantastic Jerry Steel, my co-host on The Admin Admin podcast looked at what I wrote, and made a few suggestions. I’ve updated the code in the git repo, and I’ll try to annotate below when I’ve changed something. If I miss it, it’s right in the Git repo!
One of the challenges I set myself this Christmas was to learn enough about Docker to containerise an arbitrary PHP application, the sort of thing I would previously have misused Vagrant to contain.
Just before I started down this rabbit hole, I spoke to my Aunt about some family tree research my father had left behind after he died, and how I wished I could easily share the old tree with her (I organised getting her a Chromebook a couple of years ago, after fighting with doing remote support for years on Linux and Windows laptops). In the end, I found a web application for genealogical research called HuMo-gen, that is a perfect match for both projects I wanted to look at.
HuMo-gen was first created in 1999, with a PHP version being released in 2005. It uses MySQL or MariaDB as its database engine. I was reasonably confident that I could have created a Vagrantfile to deliver this on my home server, but I wanted to try something new. I wanted to use the “standard” building blocks of Docker and Docker-Compose, and some common containers, to make my way around learning Docker.
I started by looking for some resources on how to build a Docker container. Much of the guidance I’d found was to use Docker-Compose, as this allows you to stand several components up at the same time!
In contrast to how Vagrant works (which is basically a CLI wrapper to many virtual machine services), Docker isolates resources for a single process that runs on a machine. Where in Vagrant, you might run several processes on one machine (perhaps, in this instance, nginx, PHP-FPM and MariaDB), with Docker, you’re encouraged to run each “service” as their own containers, and link them together with an overlay network. It’s possible to also do the same with Vagrant, but you’ll end up with an awful lot of VM overhead to separate out each piece.
So, I first needed to select my services. My initial line-up was:
- MariaDB
- PHP-FPM
- Apache httpd (later replaced by nginx)
I was able to find official Docker images for PHP, MariaDB and httpd, but after extensive tweaking, I couldn’t make the httpd image talk the way I wanted it to with the PHP image. Bowing to what now seems to be conventional wisdom, I swapped out the httpd service for nginx.
One of the stumbling blocks for me, particularly early on, was how to build several different Dockerfiles (these are basically the instructions for the container you’re constructing). Here is the basic outline of how to do this:
    version: '3'
    services:
      yourservice:
        build:
          context: .
          dockerfile: relative/path/to/Dockerfile
In this `docker-compose.yml` file, I tell it that to create the `yourservice` service, it needs to build the Docker container using the file in `./relative/path/to/Dockerfile`. This file in turn contains an instruction to import an image.
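Before going any further, it’s worth knowing the handful of commands that drive all of this. A minimal sketch (run from the directory containing the `docker-compose.yml`; nothing project-specific assumed):

    # Build (or rebuild) the images defined in docker-compose.yml
    docker-compose build
    # Start everything in the background; add --build to rebuild first
    docker-compose up -d
    # Tear the whole stack down again
    docker-compose down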
Each service stacks on top of the others in that `docker-compose.yml` file, like this:
    version: '3'
    services:
      service1:
        build:
          context: .
          dockerfile: service1/Dockerfile
        image: localhost:32000/service1
      service2:
        build:
          context: .
          dockerfile: service2/Dockerfile
        image: localhost:32000/service2
Late edit 2020-01-16: This previously listed `Dockerfile/service1`; however, much of the documentation suggested that Docker gets quite opinionated about the file being called `Dockerfile`. While docker-compose can work around this, it’s better to stick to tradition :) The `docker-compose.yml` files below have also been adjusted accordingly. I’ve also added an `image: somehost:1234/image_name` line to help with tagging the images for later use. It’s not critical to what’s going on here, but I found it useful with some later projects.
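As an aside, those `image:` lines are what let docker-compose tag the built images and push them somewhere; `localhost:32000` happens to be where a local registry (the MicroK8s registry add-on, for example) would listen. A sketch, assuming such a registry is actually running:

    # Build the images, tagging them with the image: names above
    docker-compose build
    # Push the tagged images to the registry they're named for
    docker-compose push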
To allow containers to see ports between themselves, you add the `expose:` directive in your `docker-compose.yml`, and to allow that port to be visible from the “outside” (i.e. to the host and upwards), you use the `ports:` directive, listing the “host” port (the one on the host OS), then a colon, and then the “target” port (the one in the container), like this:
    version: '3'
    services:
      service1:
        build:
          context: .
          dockerfile: service1/Dockerfile
        image: localhost:32000/service1
        expose:
          - 12345
      service2:
        build:
          context: .
          dockerfile: service2/Dockerfile
        image: localhost:32000/service2
        ports:
          - 8000:80
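The difference is easy to demonstrate once the stack is up. A sketch (this assumes service2 answers HTTP on its port 80, and the `nc` check assumes netcat is present in that image, which it may not be):

    # The published port is reachable from the host:
    curl http://localhost:8000/
    # The exposed port is only reachable from other containers
    # on the same compose network, addressed by service name:
    docker-compose exec service2 nc -z service1 12345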
Now, let’s take a quick look into the Dockerfiles. Each “statement” in a Dockerfile adds a new “layer” to the image. For local operations, this probably isn’t a problem, but when you’re storing these images on a hosted provider, you want to keep these images as small as possible.
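To illustrate, here’s a hypothetical Dockerfile fragment (nothing to do with this project): chaining commands with `&&` keeps related work, and its cleanup, in a single layer, so the deleted files never bloat the final image:

    FROM debian:10
    # One RUN means one layer: install, use and clean up together,
    # so the apt cache doesn't persist into the final image
    RUN apt-get update \
     && apt-get install -y --no-install-recommends curl \
     && rm -rf /var/lib/apt/lists/*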
I built a Database Dockerfile, which is about as small as you can make it!
    FROM mariadb:10.4.10
Yep, one line. How cool is that? In the `docker-compose.yml` file, I invoke this, like this:
    version: '3'
    services:
      db:
        build:
          context: .
          dockerfile: mariadb/Dockerfile
        image: localhost:32000/db
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: a_root_pw
          MYSQL_USER: a_user
          MYSQL_PASSWORD: a_password
          MYSQL_DATABASE: a_db
        expose:
          - 3306
OK, so this one is a bit more complex! I wanted it to build my Dockerfile, which is `mariadb/Dockerfile`. I wanted it to restart the container whenever it failed (which hopefully isn’t that often!), and I wanted to inject some specific environment variables into the container: the root password, a user name and password, and a database name. Initially I was having some issues where it wasn’t building the database with these credentials, but I think that’s because the image only creates the database and users on its very first run, when the data directory is empty; I wasn’t “building” a new database, I was just reusing an existing one. I also expose the MariaDB (MySQL) port, 3306, to the other containers in the `docker-compose.yml` file.
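A quick way to check that those credentials actually took is to start a throwaway client container on the same compose network, pointing the `mysql` client (which the MariaDB image ships with) at the `db` service by name. A sketch, assuming the stack is already up and using the credentials from above:

    # Run a one-off container from the db image, connecting to the
    # running "db" service over the compose network instead of starting mysqld:
    docker-compose run --rm db mysql -h db -u a_user -pa_password a_db -e 'SELECT 1;'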
Let’s take a look at the next part! PHP-FPM. Here’s the Dockerfile:
    FROM php:7.4-fpm
    RUN docker-php-ext-install pdo pdo_mysql
    ADD --chown=www-data:www-data public /var/www/html
There’s a bit more to this, but not loads. We build our image from a named version of PHP, and install two extensions to PHP, `pdo` and `pdo_mysql`. Lastly, we copy the content of the `public` directory into the `/var/www/html` path, and make sure it “belongs” to the right user (`www-data`).
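If you want to confirm that those extensions really made it into the image, the PHP binary can list its loaded modules. A sketch, run once the image is built:

    # List loaded PHP modules from a throwaway phpfpm container;
    # expect to see PDO and pdo_mysql in the output
    docker-compose run --rm phpfpm php -m | grep -i pdo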
I’d previously tried to do a lot more complicated things with this Dockerfile, but it wasn’t working, so instead I slimmed it right down to just this, and the `docker-compose.yml` is a lot simpler too.
    phpfpm:
      build:
        context: .
        dockerfile: phpfpm/Dockerfile
      image: localhost:32000/phpfpm
See! Loads simpler! Now we need the complicated bit! :) This is the Dockerfile for nginx.
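One thing worth noting about that `phpfpm` block: there’s no `expose:` line, yet nginx will shortly be talking to it on port 9000. That works because the official `php:fpm` image already declares port 9000 itself; if you’d rather be explicit about it, you could add the line yourself, as a sketch:

    phpfpm:
      build:
        context: .
        dockerfile: phpfpm/Dockerfile
      image: localhost:32000/phpfpm
      expose:
        - 9000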
    FROM nginx:1.17.7
    COPY nginx/default.conf /etc/nginx/conf.d/default.conf
    COPY public /var/www/html
Weirdly, even though I’d added version numbers for MariaDB and PHP, I hadn’t done the same for nginx; perhaps I should! Late edit 2020-01-16: I’ve put a version number on there now; previously, where it says `nginx:1.17.7`, it actually said `nginx:latest`.
I’d originally created the configuration block for nginx in a single “RUN” line. Late edit 2020-01-16: Following Jerry’s advice, this Dockerfile no longer has a giant `echo 'stuff' > file` block, and I’m using `COPY` instead of `ADD` on his advice too. I’ll show that config file below. There are a couple of high points for me here!
    server {
      index index.php index.html;
      server_name _;
      error_log /proc/self/fd/2;
      access_log /proc/self/fd/1;
      root /var/www/html;
      location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass phpfpm:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
      }
    }
- `server_name _;` means “use this block for all unnamed requests”.
- `access_log /proc/self/fd/1;` and `error_log /proc/self/fd/2;` are links to the “`stdout`” and “`stderr`” file descriptors (or pointers to other parts of the filesystem), and basically mean that when you do `docker-compose logs`, you’ll see the HTTP logs for the server! These two files are guaranteed to be there, while `/dev/stderr` isn’t!
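That logging trick pays off as soon as you want to see what the web server is doing. A sketch:

    # Follow the nginx access and error logs, which land on the
    # container's stdout/stderr thanks to the config above:
    docker-compose logs -f nginx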
Because nginx is “just” serving the web content, and I know the content doesn’t need to be written to from nginx, I knew I didn’t need to do the chown action, like I did with the PHP-FPM block.
Lastly, I need to configure the `docker-compose.yml` file for nginx:
    nginx:
      build:
        context: .
        dockerfile: nginx/Dockerfile
      image: localhost:32000/nginx
      ports:
        - 127.0.0.1:1980:80
I’ve gone for a slightly unusual ports configuration when I deployed this to my web server… you see, I already have the HTTP port (TCP/80) configured for use on my home server, for running the rest of my web services. During development on my home machine, the ports line instead showed “1980:80”, binding to all interfaces. On the server, I’m running this application bound to “localhost” (127.0.0.1) on a different port number (1980 selected because it could, conceivably, be a birthday of someone on this system), and then in my local web server configuration, I’m proxying connections to this service, with HTTPS encryption as well. That’s all outside the scope of this article (as I probably should be using something like Traefik, anyway) but it shows you how you could bind to a separate port too.
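For completeness, here’s roughly what that host-side proxying could look like, as an nginx snippet. This is a sketch only: the server name is hypothetical, and the real setup also involves the HTTPS configuration that’s out of scope here:

    server {
      listen 80;
      # Hypothetical hostname, standing in for my real one
      server_name humogen.example.com;
      location / {
        # Hand requests to the container bound to 127.0.0.1:1980
        proxy_pass http://127.0.0.1:1980;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
    }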
Anyway, that was my Docker journey over Christmas, and I look forward to using it more!
Featured image is “Shipping Containers” by “asgw” on Flickr and is released under a CC-BY license.