"Root" by "llee_wu" on Flickr

A quick note on using Firefox in Windows in a Corporate or Enterprise environment

I’ve been using Firefox as my “browser of choice” for around 15 years. I prefer it for all sorts of reasons, but the main thing I rely on is support for extensions. Not many of them, but… well, there are a few!

There are two stumbling blocks for using Firefox in a corporate or enterprise setting. These are:

  1. NTLM or Kerberos authentication for resources like SharePoint and ADFS.
  2. Enterprise TLS certificates (usually deployed via GPO as part of the domain)

These are both trivially fixed in the about:config screen, but first you need to get past a scary looking warning page!

In the address bar, where it probably currently says jon.sprig.gs, click in there and type about:config.

Getting to about:config

This brings you to a scary page!

Proceed with caution! (of course!!)

Click the “Accept the Risk and Continue” button (note: this is with Firefox 76; the wording in later or earlier versions may differ).

As if it wasn’t obvious enough from the previous screen, this “may impact performance and security”…

And then you get a search box.

In the “Search preference name” type in ntlm and find the line that says network.automatic-ntlm-auth.trusted-uris.

The “NTLM Options page”

Type in there the suffixes of any TRUSTED domains. For example, if your company uses the domain names of bigcompany.com, bigco.local and big.company then you’d type in:

bigcompany.com,bigco.local,big.company

Any pages you browse to which request NTLM authentication will receive a set of NTLM credentials if prompted (the same as IE, Edge, and Chrome already do!). NTLM is a challenge-response protocol that effectively passes your domain credentials to a trusted web site (Kerberos negotiation is handled by the similar network.negotiate-auth.trusted-uris preference).

Next up, let’s get those pesky certificate errors removed!

This assumes that you have a centrally managed TLS Root Certificate, and that the admins in your network haven’t just been dumping self-signed certificates everywhere (nothing gets around that… just sayin’).

Still in about:config, clear the search box and type enterprise, like this!

Enterprise Roots are here!

Find the line security.enterprise_roots.enabled and make sure it says true. If it doesn’t, double-click it so it does.

Now you can close your preferences page, and you should be fine to visit your internal source code repository, timesheeting system or SharePoint site with almost no interruptions!

If you’ve been tasked with turning this stuff on across your estate of managed desktop machines, then you might find this article (on Autoconfiguration of Firefox) of use (but I’ve not tried it!)
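
For what it’s worth, my understanding is that the autoconfig approach boils down to shipping a mozilla.cfg file alongside Firefox that sets the same two preferences we just changed by hand – something like this sketch (untested, as I said):

// mozilla.cfg - note that the first line must be a comment
defaultPref("network.automatic-ntlm-auth.trusted-uris", "bigcompany.com,bigco.local,big.company");
defaultPref("security.enterprise_roots.enabled", true);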

Featured image is “Root” by “llee_wu” on Flickr and is released under a CC-BY-ND license.

"Debian" by "medithIT" on Flickr

One to read: Installing #Debian on #QNAP TS-219P

A couple of years ago, a very lovely co-worker gave me his QNAP TS-219P, which he no longer required. I’ve had it sitting around storing data, but I’ve not been making the most of it since he gave it to me.

After a bit of tinkering in my home network, I decided that I needed a more up-to-date OS on this device, so I found this page that tells you how to install Debian Buster. This will wipe the device, so make sure you’ve got a full backup of your content!

https://www.cyrius.com/debian/kirkwood/qnap/ts-219/install/

Essentially, you backup the existing firmware with these commands:

cat /dev/mtdblock0 > mtd0
cat /dev/mtdblock1 > mtd1
cat /dev/mtdblock2 > mtd2
cat /dev/mtdblock3 > mtd3
cat /dev/mtdblock4 > mtd4
cat /dev/mtdblock5 > mtd5
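
To get those backups somewhere safe, something like this from another machine should do the trick (with 192.0.2.10 standing in for your QNAP’s address, and assuming the stock firmware gives you scp and that you wrote the files into the admin user’s home directory):

mkdir qnap-firmware-backup
scp admin@192.0.2.10:mtd? ./qnap-firmware-backup/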

Once those files are safely off the device (“in case of emergency”), download the installation files:

mkdir /tmp/debian
cd /tmp/debian
busybox wget http://ftp.debian.org/debian/dists/buster/main/installer-armel/current/images/kirkwood/network-console/qnap/ts-21x/{initrd,kernel-6281,kernel-6282,flash-debian,model}
sh flash-debian
reboot

The flash-debian command takes around 3-5 minutes, apparently, although I did start the job and walk away, so it might have taken anywhere up to 30 minutes!

Then, SSH to the IP of the device, using installer as the username and install as the password.
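
That is, something like this, with 192.0.2.10 standing in for whatever address your device picked up:

ssh installer@192.0.2.10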

Complete the installation steps in the SSH session, then let it reboot.

Be aware that the device is likely to have at least one swap volume it will try to load, so it might be worth opening a shell and running the command “swapoff -a” before removing the swap partitions. If you have any problems with the partitioning, it’s also worth removing all the partitions, rebooting, and starting again.

When it comes back, you have a base installation of Debian, which doesn’t have sudo installed, so use su - and enter the root password.
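
If you want sudo back, a quick sketch would be the following (swapping jon for whatever username you created during the install):

su -
apt update && apt install -y sudo
usermod -aG sudo jon
exit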

Good luck!

"the home automation system designed by loren amelang himself" by "Nicolás Boullosa" on Flickr

One to read: Ansible for Networking – Part 3: Cisco IOS

One of our guest hosts, and a stalwart member of the Admin Admin Telegram group, has been documenting how he has built his Ansible networking lab.

Stuart has done three posts so far, but this is the first one actually dealing with the technology. It’s a mammoth read, so I’d recommend doing it on a computer, and not on a tablet or phone!

Posts one and two were about what the series would cover and how the lab has been constructed.

Featured image is “the home automation system designed by loren amelang himself” by “Nicolás Boullosa” on Flickr and is released under a CC-BY license.

Opening to my video: Screencast 002 - A quick walk through Git

Screencast 002: A quick walk through Git (a mentoring style video)

I have done a follow-up mentoring-style video to support my last one. This video shows how to fix some of the Git issues I came across in that last mentoring video!

Screencast 002: A quick walk through Git

I took some advice from a colleague who noticed that I skipped past a couple of issues with my Git setup, so I re-did them :) I hope this makes sense, and at 35 minutes, is a bit more understandable than the last 1h15 video!

Also on LBRY and Archive.org

Opening to my video: Screencast 001 - Ansible and Inspec using Vagrant

Screencast 001: Ansible and Inspec with Vagrant and Git (a mentoring style video)

If you’ve ever wondered how I use Ansible and Inspec, or wondered why some of my Vagrant files look like they do, well, I want to start recording some “mentor” style videos… you know, like when you’re sitting next to someone who’s a mentor to you, and you watch how they build a solution.

The first one was released last night!

I recently saw a video by Chris Hartjes on how he creates his TDD (Test driven development) based PHP projects, and I really wanted to emulate that style, but talking about the things I use.

This was my second attempt at recording a mentoring-style video yesterday; the first was shown to the Admin Admin Podcast listeners group on Telegram, and then sacrificed to the demo gods (there were lots of issues in that first video), never to be seen again.

From a tooling perspective, I’m using a remote virtual machine running Ubuntu MATE 18.04, accessed over RDP (using xrdp on the VM and Remmina locally, to improve performance), with OBS running locally to record the content. I’m using Visual Studio Code, git, Vagrant and VirtualBox, as well as Ansible and Inspec.

Late edit 2020-02-29: Like videos like this, hate YouTube? It’s also on archive.org: https://archive.org/details/JonTheNiceGuyScreencast001

Late edit 2020-03-01: Popey told me about LBRY.tv when I announced this on the Admin Admin Podcast telegram channel, and so I’ve also copied the video to there: https://lbry.tv/@JonTheNiceGuy:b/Screencast001-Ansible-and-Inspec-with-Vagrant:8

"vieux port Marseille" by "Jeanne Menjoulet" on Flickr

Networking tricks with Multipass in Virtualbox on Windows (Bridged interfaces and Port Forwards)

TL;DR? Want to “just” bridge one or more interfaces to a Multipass instance when you’re using Virtualbox? See the Bridging Summary below. Want to do a port forward? See the Port Forward section below. You will need the psexec command and to execute this as an administrator. The use of these two may be considered a security incident on your computing environment, depending on how your security processes and infrastructure are defined and configured.

Ah, Multipass. This is a tool created by Canonical to give you “a mini-cloud on your Mac or Windows workstation” (from their website)…

I’ve often seen this endorsed as the tool of choice by Canonical employees for doing “stuff” like running Kubernetes, developing tools for UBPorts (previously Ubuntu Touch) devices, and so on.

So far, it seems interesting. It’s a little bit like Vagrant with an in-built cloud-init Provisioner, and as I want to test out the cloud-init files I’m creating for AWS and Azure, that’d be so much easier than actually building the AWS or Azure machines, or finding a viable cloud-init plugin for Vagrant to test it out.

BUT… Multipass is really designed for Linux (running LibVirt), OS X (running HyperKit) and Windows (running Hyper-V). Even if I were using Windows 10 Pro on this machine, I use VirtualBox for “things” on my Windows machine, and Hyper-V steals the VT-x bit, which means that VirtualBox can’t run x64 code… Soooo I can’t use the Hyper-V mode.

Now, there is a “fix” for this. You can put Multipass into VirtualBox mode, which lets you run Multipass on Windows or OS X without using their designed-for hypervisor. But this has a downside: VirtualBox doesn’t give Multipass the same interface to route networking connections to the VM, and there’s currently no CLI or GUI option to say “bridge my network” or “forward a port” (in part because it needs to be portable to the native hypervisor options, apparently). So, I needed to fudge some things to get my beloved bridged connections.

I got to the point where I could do this thanks to the responses to a few issues I raised on the Multipass GitHub repository, mostly #1333.

The first thing you need to install in Windows is PsExec, because Multipass runs its virtual machines as the SYSTEM account, and talking to SYSTEM-account processes is nominally hard. Get PsExec from the SysInternals website. Some IT security professionals will note the addition of PsExec as a potential security incident, but then again, they might also see the running of a virtual machine as a security incident too, as these aren’t controlled with a central image. Anyway… Just bear it in mind, and don’t shout at me if you get frogmarched in front of your CISO.

I’m guessing that if you’re here, you’ve already installed Multipass (but if not, and it seems interesting, it’s over at https://multipass.run – get it and install it, then carry on…) and you’ve probably enabled the VirtualBox mode (if not, open a command prompt as administrator, and run “multipass set local.driver=virtualbox“). Now you can start sorting out your bridges.

Sorting out bridges

First things first, you need to launch a virtual machine. I did, and it generated a name for my image.

C:\Users\JON>multipass launch
Launched: witty-kelpie

Fab! We have a running virtual machine, and you should be able to get a shell in there by running multipass shell "witty-kelpie" (the name of the machine it launched before). But, uh-oh. We have the “default” NAT interface of this device mapped, not a bridged interface.

C:\Users\JON>multipass shell "witty-kelpie"
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-76-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Thu Feb  6 10:56:38 GMT 2020

  System load:  0.3               Processes:             82
  Usage of /:   20.9% of 4.67GB   Users logged in:       0
  Memory usage: 11%               IP address for enp0s3: 10.0.2.15
  Swap usage:   0%


0 packages can be updated.
0 updates are security updates.


To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@witty-kelpie:~$

So, exit the machine, and issue a multipass stop "witty-kelpie" command to ask Virtualbox to shut it down.

So, this is where the fun[1] part begins.
[1] The “Fun” part here depends on how you view this specific set of circumstances 😉

We need to get the descriptions of all the interfaces we might want to bridge to this device. I have three interfaces on my machine – a WiFi interface, an Ethernet interface on my laptop, and an Ethernet interface on my USB3 dock. At some point in the past I renamed these interfaces so I’d recognise them in the list of interfaces, so they’re not just called “Connection #1”, “Connection #2” and so on… but you should recognise yours.

To get this list of interfaces, open PowerShell (as a “user”), and run this command:

PS C:\Users\JON> Get-NetAdapter -Physical | format-list -property "Name","DriverDescription"

Name              : On-Board Network Connection
DriverDescription : Intel(R) Ethernet Connection I219-LM

Name              : Wi-Fi
DriverDescription : Intel(R) Dual Band Wireless-AC 8260

Name              : Dock Network Connection
DriverDescription : DisplayLink Network Adapter NCM

For reasons best known to the Oracle team, VirtualBox uses the “Driver Description” to identify the interfaces, not the name assigned to the device by the user. So, before we get started, find your interface, and note down the description for later. If you want to bridge “all” of them, make a note of all the interfaces in question, in the order you want to attach them. Note that VirtualBox doesn’t really like exposing more than 8 NICs without changing the chipset to ICH9 (but really… 9+ NICs? really??), and the first one is already consumed by the NAT interface you’re using to connect to it… so that gives you 7 bridgeable interfaces. Whee!
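
If you genuinely do need more than 8, the chipset change is a single vboxmanage call, run via PsExec as described below (though I’ve not tried this myself):

C:\WINDOWS\system32>C:\Users\JON\Downloads\sysinternals\PsExec64.exe -s "c:\program files\oracle\virtualbox\vboxmanage" modifyvm "witty-kelpie" --chipset ich9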

So, now you know which interfaces you want to bridge; let’s configure the VirtualBox side. Like I said before, you need psexec. I’ve got psexec stored in my Downloads folder. You can only run psexec as administrator, so open up an administrator command prompt or PowerShell session, and run your command from there.

Just for clarity, your commands are likely to have some different paths, so remember that wherever “your” PsExec64.exe command is located, mine is in C:\Users\JON\Downloads\sysinternals\PsExec64.exe, and wherever your vboxmanage.exe is located, mine is in C:\Program Files\Oracle\VirtualBox\vboxmanage.exe.

Here, I’m going to attach my dock port (“DisplayLink Network Adapter NCM”) to the second VirtualBox interface, the Wifi adaptor to the third interface and my locally connected interface to the fourth interface. Your interfaces WILL have different descriptions, and you’re likely not to need quite so many of them!

C:\WINDOWS\system32>C:\Users\JON\Downloads\sysinternals\PsExec64.exe -s "c:\program files\oracle\virtualbox\vboxmanage" modifyvm "witty-kelpie" --nic2 bridged --bridgeadapter2 "DisplayLink Network Adapter NCM" --nic3 bridged --bridgeadapter3 "Intel(R) Dual Band Wireless-AC 8260" --nic4 bridged --bridgeadapter4 "Intel(R) Ethernet Connection I219-LM"

PsExec v2.2 - Execute processes remotely
Copyright (C) 2001-2016 Mark Russinovich
Sysinternals - www.sysinternals.com

c:\program files\oracle\virtualbox\vboxmanage exited on MINILITH with error code 0.

An error code of 0 means that it completed successfully and with no issues.

If you wanted to use a “Host Only” network (if you’re used to using Vagrant, you might know it as “Private” networking), then change the NIC you’re interested in from --nicX bridged --bridgeadapterX "Some Description" to --nicX hostonly --hostonlyadapterX "VirtualBox Host-Only Ethernet Adapter" (where X is replaced with the NIC number you want to swap, ranged between 2 and 8, as 1 is the NAT interface you use to SSH into the virtual machine).
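
For example, to make NIC 5 a “Host Only” interface, using the default adapter name and my paths from earlier, the call would look something like this:

C:\WINDOWS\system32>C:\Users\JON\Downloads\sysinternals\PsExec64.exe -s "c:\program files\oracle\virtualbox\vboxmanage" modifyvm "witty-kelpie" --nic5 hostonly --hostonlyadapter5 "VirtualBox Host-Only Ethernet Adapter"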

Now we need to check that the machine has its requisite number of interfaces. We use the showvminfo flag of the vboxmanage command. It produces a LOT of content, so I’ve manually filtered the lines I want, but you should spot them reasonably quickly.

C:\WINDOWS\system32>C:\Users\JON\Downloads\sysinternals\PsExec64.exe -s "c:\program files\oracle\virtualbox\vboxmanage" showvminfo "witty-kelpie"

PsExec v2.2 - Execute processes remotely
Copyright (C) 2001-2016 Mark Russinovich
Sysinternals - www.sysinternals.com


Name:                        witty-kelpie
Groups:                      /Multipass
Guest OS:                    Ubuntu (64-bit)
<SNIP SOME CONTENT>
NIC 1:                       MAC: 0800273CCED0, Attachment: NAT, Cable connected: on, Trace: off (file: none), Type: 82540EM, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 1 Settings:  MTU: 0, Socket (send: 64, receive: 64), TCP Window (send:64, receive: 64)
NIC 1 Rule(0):   name = ssh, protocol = tcp, host ip = , host port = 53507, guest ip = , guest port = 22
NIC 2:                       MAC: 080027303758, Attachment: Bridged Interface 'DisplayLink Network Adapter NCM', Cable connected: on, Trace: off (file: none), Type: 82540EM, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 3:                       MAC: 0800276EA174, Attachment: Bridged Interface 'Intel(R) Dual Band Wireless-AC 8260', Cable connected: on, Trace: off (file: none), Type: 82540EM, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 4:                       MAC: 080027042135, Attachment: Bridged Interface 'Intel(R) Ethernet Connection I219-LM', Cable connected: on, Trace: off (file: none), Type: 82540EM, Reported speed: 0 Mbps, Boot priority: 0, Promisc Policy: deny, Bandwidth group: none
NIC 5:                       disabled
NIC 6:                       disabled
NIC 7:                       disabled
NIC 8:                       disabled
<SNIP SOME CONTENT>

Configured memory balloon size: 0MB

c:\program files\oracle\virtualbox\vboxmanage exited on MINILITH with error code 0.

Fab! We now have working interfaces… But wait, let’s start that VM back up and see what happens.

C:\Users\JON>multipass shell "witty-kelpie"
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-76-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Thu Feb  6 11:31:08 GMT 2020

  System load:  0.1               Processes:             84
  Usage of /:   21.1% of 4.67GB   Users logged in:       0
  Memory usage: 11%               IP address for enp0s3: 10.0.2.15
  Swap usage:   0%


0 packages can be updated.
0 updates are security updates.


Last login: Thu Feb  6 10:56:45 2020 from 10.0.2.2
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@witty-kelpie:~$

Wait, what….. We’ve still only got the one interface up with an IP address… OK, let’s fix this!

As of Ubuntu 18.04, interfaces are managed using Netplan, and, well, when the VM was built, it didn’t know about any interface past the first one, so we need to get Netplan to get them enabled. Let’s check they’re detected by the VM, and see what they’re all called:

ubuntu@witty-kelpie:~$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 08:00:27:3c:ce:d0 brd ff:ff:ff:ff:ff:ff
3: enp0s8: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 08:00:27:30:37:58 brd ff:ff:ff:ff:ff:ff
4: enp0s9: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 08:00:27:6e:a1:74 brd ff:ff:ff:ff:ff:ff
5: enp0s10: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 08:00:27:04:21:35 brd ff:ff:ff:ff:ff:ff
ubuntu@witty-kelpie:~$ 

If you compare the link/ether lines to the output from the showvminfo command we executed before, you’ll see that the MAC address against enp0s3 matches the NAT interface, while enp0s8 matches the DisplayLink adapter, and so on… So, we basically want to ask Netplan to do a DHCP lookup on all the new interfaces we’ve added. If you’ve got 1 NAT and 7 physical interfaces (why oh why…), then you’d have enp0s8, 9, 10, 16, 17, 18 and 19 (I’ll come back to the random numbering in a tic)… so we now need to ask Netplan to do DHCP on all of those interfaces (assuming we want them all to come up!)

If we want to push that in, then we need to add a new file in /etc/netplan called something like 60-extra-interfaces.yaml, that should contain:

network:
  ethernets:
    enp0s8:
      optional: yes
      dhcp4: yes
      dhcp4-overrides:
        route-metric: 10
    enp0s9:
      optional: yes
      dhcp4: yes
      dhcp4-overrides:
        route-metric: 11
    enp0s10:
      optional: yes
      dhcp4: yes
      dhcp4-overrides:
        route-metric: 12
    enp0s16:
      optional: yes
      dhcp4: yes
      dhcp4-overrides:
        route-metric: 13
    enp0s17:
      optional: yes
      dhcp4: yes
      dhcp4-overrides:
        route-metric: 14
    enp0s18:
      optional: yes
      dhcp4: yes
      dhcp4-overrides:
        route-metric: 15
    enp0s19:
      optional: yes
      dhcp4: yes
      dhcp4-overrides:
        route-metric: 16

Going through this, we basically ask Netplan not to assume the interfaces are attached. This stops the boot process from waiting for a timeout while it configures each of the interfaces, so your boot should be reasonably fast, particularly if you don’t always attach a network cable or join a WiFi network on all your interfaces!

We also say that we want IPv4 DHCP on each of those interfaces. I’ve done IPv4 only, as most people don’t use IPv6 at home, but if you are doing IPv6 as well, then you’d also need the same lines that start dhcp4 duplicated as dhcp6 (like dhcp6: yes and dhcp6-overrides: route-metric: 10).
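
For example, a dual-stack stanza for enp0s8 would look something like this (an untested sketch, following the same pattern as the file above):

network:
  ethernets:
    enp0s8:
      optional: yes
      dhcp4: yes
      dhcp4-overrides:
        route-metric: 10
      dhcp6: yes
      dhcp6-overrides:
        route-metric: 10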

The eagle-eyed among you might notice that the route metric increases for each extra interface. This is because, realistically, if you have two interfaces connected (perhaps you’ve got WiFi enabled and then plug a network cable in), you’re more likely to want to prioritise traffic going over the lower-numbered interfaces than the higher-numbered ones.

Once you’ve created this file, you need to run netplan apply or reboot your machine.

So, yehr, that gets you sorted on the interface front.

Bridging Summary

To review: you launch your machine with multipass launch, and immediately stop it with multipass stop "vm-name"; then, as an admin, run psexec vboxmanage modifyvm "vm-name" --nic2 bridged --bridgeadapter2 "NIC description", and then start the machine with multipass start "vm-name". Lastly, ask the interface to do DHCP by manipulating your Netplan configuration.
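
As a copy-and-paste-able sequence (names and paths as per my machine above – yours will differ, and the PsExec line needs an administrator prompt):

multipass launch
multipass stop "witty-kelpie"
C:\Users\JON\Downloads\sysinternals\PsExec64.exe -s "c:\program files\oracle\virtualbox\vboxmanage" modifyvm "witty-kelpie" --nic2 bridged --bridgeadapter2 "DisplayLink Network Adapter NCM"
multipass start "witty-kelpie"
multipass shell "witty-kelpie"

Then, inside the VM, create /etc/netplan/60-extra-interfaces.yaml as above and run sudo netplan apply.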

Interface Names in VirtualBox

Just a quick note on the fact that the interfaces aren’t called things like eth0 any more. A few years back, Ubuntu (along with pretty much all of the Linux distribution vendors) changed from eth0-style naming to what they call “Predictable Network Interface Names”. This derives the names from things like what the BIOS provides for on-board interfaces, slot index numbers for PCI Express ports, and, in this case, the “geographic location of the connector”. In VirtualBox, these interfaces are all “geographically” attached to “port 0” (so they’re all enp0), but for some reason they broadcast themselves as being attached to port 0 at “slots” 3, 8, 9, 10, 16, 17, 18 and 19… hence enp0s3 and so on. Shrug. It just means that if the interfaces don’t come up where you’re expecting, you need to run ip link to confirm the MAC addresses match.

Port Forwarding

Unlike with the bridging, we don’t need to power down the VM to add port forwards; we just need to use psexec (as an admin again) to execute a vboxmanage command – in this case, it’s:

C:\WINDOWS\system32>C:\Users\JON\Downloads\sysinternals\PsExec64.exe -s "c:\program files\oracle\virtualbox\vboxmanage" controlvm "witty-kelpie" --natpf1 "myport,tcp,,1234,,2345"

OK, that’s a bit more obscure. Basically it says “create a NAT rule on NIC 1 called ‘myport’ to forward TCP connections made to port 1234 on any IP associated with the host OS to port 2345 on the DHCP-supplied IP of the guest OS”.

If we wanted to run a DNS server in our VM, we could run multiple NAT rules in the same command, like this:

C:\WINDOWS\system32>C:\Users\JON\Downloads\sysinternals\PsExec64.exe -s "c:\program files\oracle\virtualbox\vboxmanage" controlvm "witty-kelpie" --natpf1 "TCP DNS,tcp,127.0.0.1,53,,53" --natpf1 "UDP DNS,udp,127.0.0.1,53,,53"

If we then decide we don’t need those NAT rules any more, we just (with psexec and appropriate paths) issue: vboxmanage controlvm "vm-name" --natpf1 delete "TCP DNS"
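
Written out in full, with my paths from earlier, that would be:

C:\WINDOWS\system32>C:\Users\JON\Downloads\sysinternals\PsExec64.exe -s "c:\program files\oracle\virtualbox\vboxmanage" controlvm "witty-kelpie" --natpf1 delete "TCP DNS"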

Using ifupdown instead of netplan

Late Edit 2020-04-01: On GitHub, someone asked me how they could use the same type of config on a 16.04 system. Ubuntu 16.04 doesn’t use Netplan; it uses ifupdown instead. Here’s how to configure the equivalent for ifupdown:

You can either add the following stanzas to /etc/network/interfaces, or create a separate file for each interface in /etc/network/interfaces.d/<number>-<interface>.cfg (e.g. /etc/network/interfaces.d/10-enp0s8.cfg)

allow-hotplug enp0s8
iface enp0s8 inet dhcp
  metric 10

To reiterate: in the Netplan file above, the interfaces we identified were enp0s8, enp0s9, enp0s10, enp0s16, enp0s17, enp0s18 and enp0s19. Each interface was incrementally assigned a route metric, starting at 10 and ending at 16, so enp0s8 has a metric of 10, while enp0s16 has a metric of 13, and so on. To build these files, I’ve created this brief shell script you could use:

#!/bin/bash
# Write an ifupdown config file for each interface, with an incrementing route metric
export metric=10
for int in 8 9 10 16 17 18 19
do
  echo -e "allow-hotplug enp0s${int}\niface enp0s${int} inet dhcp\n  metric $metric" > /etc/network/interfaces.d/enp0s${int}.cfg
  ((metric++))
done

As before, you can reboot to apply the changes to the interfaces. Bear in mind, however, that unlike Netplan, these interfaces will try to DHCP on boot with this configuration, so boot will take longer if any attached interface isn’t connected to a network.

Using NAT Network instead of NAT Interface

Late update 2020-05-26: Ruzsinsky contacted me by email to ask how I’d use a “NAT Network” instead of a “NAT interface”. Essentially, it’s the same as the bridged interface above, with one extra tweak first: we need to create the NAT Network, with this command (as an admin):

C:\WINDOWS\system32>C:\Users\JON\Downloads\sysinternals\PsExec64.exe -s "c:\program files\oracle\virtualbox\vboxmanage" natnetwork add --netname MyNet --network 192.0.2.0/24

Next, stop your multipass virtual machine with multipass stop "witty-kelpie", and configure your second interface, like this:

C:\WINDOWS\system32>C:\Users\JON\Downloads\sysinternals\PsExec64.exe -s "c:\program files\oracle\virtualbox\vboxmanage" modifyvm "witty-kelpie" --nic2 natnetwork --nat-network2 "MyNet"

PsExec v2.2 - Execute processes remotely
Copyright (C) 2001-2016 Mark Russinovich
Sysinternals - www.sysinternals.com

c:\program files\oracle\virtualbox\vboxmanage exited on MINILITH with error code 0.

Start the VM with multipass start "witty-kelpie", open a shell with multipass shell "witty-kelpie", become root with sudo -i, and then configure the interface in /etc/netplan/60-extra-interfaces.yaml like we did before:

network:
  ethernets:
    enp0s8:
      optional: yes
      dhcp4: yes
      dhcp4-overrides:
        route-metric: 10

And then run netplan apply or reboot.

What I would say, however, is that the first interface seems to be expected to be a NAT interface, at which point, having a NAT network as well seems a bit pointless. You might be better off using a “Host Only” (or “Private”) network for any inter-host communications between nodes at a network level… But you know your environments and requirements better than I do :)

Featured image is “vieux port Marseille” by “Jeanne Menjoulet” on Flickr and is released under a CC-BY-ND license.

"Unnatural Love" by "Keith Garner" on Flickr

Configuring a Remote Desktop (Gnome Shell) for Ubuntu

I started thinking a couple of weeks ago, when my coding laptop broke, that it would be really useful to have a development machine somewhere else that I could use.

It wouldn’t need a lot of power (after all, I’m mostly developing web apps and not compiling stuff), but it does need to be a desktop OS, as I rather like being able to open code editors and suchlike, while I’ve got a web browser open.

I have an Android tablet which, while it’s great as a tablet, isn’t much use as a desktop, and… yes, I’ve got a work laptop, but I don’t really want to install software on that (and I don’t think my admin team would be happy if I did).

Also, I quite like Linux.

Some time ago, I spotted that AWS has a “Virtual Desktop” environment, and I think that’s kinda what I’m after. Something I can spin up, run for a bit and then shut it down, so I thought I’d build something like that… but not pesky Windows, after all… who likes Windows, eh? ;)

So, I built a Virtual Desktop Environment (VDE) in AWS, using Terraform and a bit of shell script!

I start from an Ubuntu 18.04 server image, and, after the install is complete, I run this user-data script inside it. Yes, I know I could be doing this with Ansible, but… eh, I wanted it to be a quick deployment ;)

Oh, and there are a couple of Terraform-managed variables in here:

  • ${aws_eip.vde.public_ip} is the AWS public IP address assigned to this host.
  • ${var.firstuser} is the username we want to rename “ubuntu” (the stock server username) to.
  • ${var.firstgecos} is the user’s “real name”, which the machine identifies the user as (like “Log out Jon Spriggs” and so on).
  • ${var.userpw} is either the password you want it to use, OR (by default) pwgen 12, which generates a 12-character-long password.
  • ${var.desktopenv} is the name of the desktop environment I want to install (“ubuntu” by default).
  • ${var.var_start} is a bit of a fudge, because I couldn’t, in a hurry, work out how to tell Terraform not to mangle the bash variable allocation of ${somevar}, which is the format that Terraform also uses. D’oh.

#! /bin/bash
#################
# Set Hostname
#################
hostnamectl set-hostname vde.${aws_eip.vde.public_ip}.nip.io
#################
# Change User
#################
user=${var.firstuser}
if [ ! "$user" == 'ubuntu' ]
then
  until usermod -c "${var.firstgecos}" -l $user ubuntu ; do sleep 5 ; done
  until groupmod -n $user ubuntu ; do sleep 5 ; done
  until usermod  -d /home/$user -m $user ; do sleep 5 ; done
  if [ -f /etc/sudoers.d/90-cloudimg-ubuntu ]; then
    mv /etc/sudoers.d/90-cloudimg-ubuntu /etc/sudoers.d/90-cloud-init-users
  fi
  perl -pi -e "s/ubuntu/$user/g;" /etc/sudoers.d/90-cloud-init-users
fi
# If userpw is left at its default (the literal string "$(pwgen 12)"), install
# pwgen, so the double-quoted expansion below generates a random password
if [ '${var.userpw}' == '$(pwgen 12)' ]
then
  apt update && apt install -y pwgen
fi
newpw="${var.userpw}"
echo "$newpw" > /var/log/userpw
fullpw="$newpw"
fullpw+="\n"
fullpw+="$newpw"
echo -e "$fullpw" | passwd $user
##########################
# Install Desktop and RDP
##########################
apt-get update
export DEBIAN_FRONTEND=noninteractive
apt-get full-upgrade -yq
apt-get autoremove -y
apt-get autoclean -y
apt-get install -y ${var.desktopenv}-desktop xrdp certbot
##########################
# Configure Certbot
##########################
echo "#!/bin/sh" > /etc/letsencrypt/merge_cert.sh
echo 'cat ${var.var_start}{RENEWED_LINEAGE}/privkey.pem ${var.var_start}{RENEWED_LINEAGE}/fullchain.pem > ${var.var_start}{RENEWED_LINEAGE}/merged.pem' >> /etc/letsencrypt/merge_cert.sh
echo 'chmod 640 ${var.var_start}{RENEWED_LINEAGE}/merged.pem' >> /etc/letsencrypt/merge_cert.sh
chmod 750 /etc/letsencrypt/merge_cert.sh
certbot certonly --standalone --deploy-hook /etc/letsencrypt/merge_cert.sh -n -d vde.${aws_eip.vde.public_ip}.nip.io -d ${aws_eip.vde.public_ip}.nip.io --register-unsafely-without-email --agree-tos
# Based on https://www.snel.com/support/xrdp-with-lets-encrypt-on-ubuntu-18-04/
sed -i 's~^certificate=$~certificate=/etc/letsencrypt/live/vde.${aws_eip.vde.public_ip}.nip.io/fullchain.pem~; s~^key_file=$~key_file=/etc/letsencrypt/live/vde.${aws_eip.vde.public_ip}.nip.io/privkey.pem~' /etc/xrdp/xrdp.ini
##############################
# Fix colord remote user issue
##############################
# Derived from http://c-nergy.be/blog/?p=12043
echo "[Allow Colord all Users]
Identity=unix-user:*
Action=org.freedesktop.color-manager.create-device;org.freedesktop.color-manager.create-profile;org.freedesktop.color-manager.delete-device;org.freedesktop.color-manager.delete-profile;org.freedesktop.color-manager.modify-device;org.freedesktop.color-manager.modify-profile
ResultAny=no
ResultInactive=no
ResultActive=yes" > /etc/polkit-1/localauthority/50-local.d/45-allow.colord.pkla
##############################
# Configure Desktop
##############################
if [ '${var.desktopenv}' == 'ubuntu' ]
then 
  echo "#!/bin/bash" > /tmp/desktop_settings
  echo "gsettings set org.gnome.desktop.input-sources sources \"[('xkb', 'gb')]\"" >> /tmp/desktop_settings
  echo "gsettings set org.gnome.desktop.app-folders folder-children \"['Utilities', 'Sundry', 'YaST']\"" >> /tmp/desktop_settings
  echo "gsettings set org.gnome.desktop.privacy report-technical-problems false" >> /tmp/desktop_settings
  echo "gsettings set org.gnome.desktop.screensaver lock-enabled false" >> /tmp/desktop_settings
  echo "gsettings set org.gnome.desktop.session idle-delay 0" >> /tmp/desktop_settings
  echo "echo yes > /home/${var.firstuser}/.config/gnome-initial-setup-done" >> /tmp/desktop_settings
  sudo -H -u ${var.firstuser} dbus-launch --exit-with-session bash /tmp/desktop_settings
  rm -f /tmp/desktop_settings
fi
##########################
# Install VSCode
##########################
wget https://vscode-update.azurewebsites.net/latest/linux-deb-x64/stable -O /tmp/vscode.deb
apt install -y /tmp/vscode.deb
rm -f /var/crash/*
shutdown -r now

Ubuntu 18.04 has a “first login” wizard that lets you pre-set things like which language you will be using. I bypassed this with the gsettings commands towards the end of the script, and by writing the string “yes” to ~/.config/gnome-initial-setup-done.

Also, I wanted to be able to RDP to it. I’m a bit concerned by the use of VNC, especially where RDP is more than capable, and it’s just an apt install away, so… that’s what I do. But, because I’m RDP’ing into this box, I wanted to prevent the RDP session from locking, so I provide two commands to the session: gsettings set org.gnome.desktop.screensaver lock-enabled false, which removes the screensaver’s ability to lock the screen, and gsettings set org.gnome.desktop.session idle-delay 0, which stops the screensaver from even starting in the first place.

Now all I need to do is to figure out where I’m going to store my code between boots ;)

So, in summary, I now have a Virtual Machine, which runs Ubuntu 18.04 Desktop, in AWS, with an RDP connection (powered by xRDP), and a disabled screensaver. Job done, I think!

Oh, and if I’m doing it “wrong”, let me know in the comments? :)

Featured image is “Unnatural Love” by “Keith Garner” on Flickr and is released under a CC-BY-SA license.

"So many coats..." by "Scott Griggs" on Flickr

Migrating from docker-compose to Kubernetes Files

Just so you know, this is a long article explaining my wandering path through understanding Kubernetes (K8S). It’s not an article to explain to you how to use K8S with your project. I hit a lot of blockers, due to the stack I’m using, and I document them all, which means there’s a lot of noise and not a whole lot of signal.

In a previous blog post I created a docker-compose.yml file for a PHP based web application. Now that I have a working Kubernetes environment, I wanted to port the configuration files into Kubernetes.

Initially, I was pointed at Kompose, a tool for converting docker-compose files to Kubernetes-formatted YAML files, and, in fact, this gives me 99% of what I need… except that the current version uses beta API version flags for some of its output files, and these aren’t valid for the version of Kubernetes I’m running. So, let’s wind things back a bit, find out what you need to do to use kompose first, and then tweak the output files.

Note: I’m running all these commands as root. There’s a bit of weirdness going on because I’m using the snap of Docker and I had a few issues with running these commands as a user… While I could have tried to get to the bottom of this with sudo and watching logs, I just wanted to push on… Anyway.

Here’s our “simple” docker-compose file.

version: '3'
services:
  db:
    build:
      context: .
      dockerfile: mariadb/Dockerfile
    image: localhost:32000/db
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: a_root_pw
      MYSQL_USER: a_user
      MYSQL_PASSWORD: a_password
      MYSQL_DATABASE: a_db
    expose:
      - 3306
  nginx:
    build:
      context: .
      dockerfile: nginx/Dockerfile
    image: localhost:32000/nginx
    ports:
      - 1980:80
  phpfpm:
    build:
      context: .
      dockerfile: phpfpm/Dockerfile
    image: localhost:32000/phpfpm

This has three components – the MariaDB database, the nginx web server and the PHP-FPM CGI service that nginx consumes. The database service exposes a port (3306) to other containers, with a set of hard-coded credentials (yep, that’s not great… working on that one!), while the nginx service opens port 1980 to port 80 in the container. So far, so … well, kinda sensible :)

If we run kompose convert against this docker-compose file, we get five files created: db-deployment.yaml, nginx-deployment.yaml, phpfpm-deployment.yaml, db-service.yaml and nginx-service.yaml. If we were to run kompose up on these, we’d get an error message…

Well, actually, first we get a whole load of “INFO” and “WARN” lines while kompose builds and pushes the containers into the MicroK8S local registry (a registry is like a package repository, for containers), which is served by localhost:32000 (hence all the image: localhost:32000/someimage lines in the docker-compose.yml file), but at the end we get (today) this error:

INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.

FATA Error while deploying application: Get http://localhost:8080/api: dial tcp 127.0.0.1:8080: connect: connection refused

Uh oh! Well, this is a known issue at least! Kubernetes used to serve its API, by default, over http on port 8080, but now it uses https on port 6443. Well, that’s what I thought! In this issue on the MicroK8S repo, it says that it uses a different port, and that you should use microk8s.kubectl cluster-info to find it… and yep… Kubernetes master is running at https://127.0.0.1:16443. Bah.

root@microk8s-a:~/glowing-adventure# microk8s.kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:16443
Heapster is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/heapster/proxy
CoreDNS is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Grafana is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
InfluxDB is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/monitoring-influxdb:http/proxy

So, we export the KUBERNETES_MASTER environment variable, which was explained in that known issue I mentioned before.
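
Using the port that cluster-info just reported, that export looks like this:

export KUBERNETES_MASTER=https://127.0.0.1:16443

Run kompose up again, and now we get a credential prompt: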

Please enter Username:

Oh no, again! I don’t have credentials!! Fortunately the MicroK8S issue also tells us how to find those! You run microk8s.config and it tells you the username!

root@microk8s-a:~/glowing-adventure# microk8s.config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64-data>
    server: https://10.0.2.15:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    username: admin
    password: QXdUVmN3c3AvWlJ3bnRmZVJmdFhpNkJ3cDdkR3dGaVdxREhuWWo0MmUvTT0K

So, our username is “admin” and our password is … well, in this case a string starting QX and ending 0K but yours will be different!

We run kompose up again, and put in the credentials… ARGH!

FATA Error while deploying application: Get https://127.0.0.1:16443/api: x509: certificate signed by unknown authority

Well, now, that’s no good! Fortunately, a quick Google later, and up pops this Stack Overflow suggestion (mildly amended for my circumstances):

openssl s_client -showcerts -connect 127.0.0.1:16443 < /dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' | sudo tee /usr/local/share/ca-certificates/k8s.crt
update-ca-certificates
systemctl restart snap.docker.dockerd

Right then. Let’s run that kompose up statement again…

INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.

Please enter Username: 
Please enter Password: 
INFO Deploying application in "default" namespace
INFO Successfully created Service: nginx
FATA Error while deploying application: the server could not find the requested resource

Bah! What resource do I need? Well, actually, there’s a bug in 1.20.0 of Kompose, which should be fixed in 1.21.0. The “resource” it’s talking about is, I think, that one of the APIs refuses to process the converted YAML files; as a result, the “resource” is the service that won’t start. So, instead, let’s convert the file into the output YAML files, and then take a peek at what’s going wrong.

root@microk8s-a:~/glowing-adventure# kompose convert
INFO Kubernetes file "nginx-service.yaml" created
INFO Kubernetes file "db-deployment.yaml" created
INFO Kubernetes file "nginx-deployment.yaml" created
INFO Kubernetes file "phpfpm-deployment.yaml" created

So far, so good! Now let’s run kubectl apply with each of these files.

root@microk8s-a:~/glowing-adventure# kubectl apply -f nginx-service.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
service/nginx configured
root@microk8s-a:~# kubectl apply -f nginx-deployment.yaml
error: unable to recognize "nginx-deployment.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"

Apparently the service files are all OK, the problem is in the deployment files. Hmm OK, let’s have a look at what could be wrong. Here’s the output file:

root@microk8s-a:~/glowing-adventure# cat nginx-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.20.0 (f3d54d784)
  creationTimestamp: null
  labels:
    io.kompose.service: nginx
  name: nginx
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.20.0 (f3d54d784)
      creationTimestamp: null
      labels:
        io.kompose.service: nginx
    spec:
      containers:
      - image: localhost:32000/nginx
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
      restartPolicy: Always
status: {}

Well, the extensions/v1beta1 API version doesn’t seem to support “Deployment” options any more, so let’s edit it to change that to what the official documentation example shows today. We need to switch to using the apiVersion: apps/v1 value. Let’s see what happens when we make that change!

root@microk8s-a:~/glowing-adventure# kubectl apply -f nginx-deployment.yaml
error: error validating "nginx-deployment.yaml": error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec; if you choose to ignore these errors, turn validation off with --validate=false

Hmm, this seems to be a fairly critical issue. A selector basically tells the orchestration engine which pods this deployment should manage. Let’s go back to the official example. We need to add the “selector” value in the “spec” block, at the same level as “template”, and it needs to match the labels we’ve specified. It also looks like we don’t need most of the metadata that kompose has given us. So, let’s adjust the deployment to look a bit more like that example.

root@microk8s-a:~/glowing-adventure# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: localhost:32000/nginx
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
      restartPolicy: Always

Fab. And what happens when we run it?

root@microk8s-a:~/glowing-adventure# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx created

Woohoo! Let’s apply all of these now.

root@microk8s-a:~/glowing-adventure# for i in db-deployment.yaml nginx-deployment.yaml nginx-service.yaml phpfpm-deployment.yaml; do kubectl apply -f $i ; done
deployment.apps/db created
deployment.apps/nginx unchanged
service/nginx unchanged
deployment.apps/phpfpm created

Oh, hang on a second, that service (service/nginx) is unchanged, but we changed the label in the deployment from io.kompose.service: nginx to app: nginx, so we need to fix the service to match. Let’s open it up and edit it!

apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.20.0 (f3d54d784)
  creationTimestamp: null
  labels:
    io.kompose.service: nginx
  name: nginx
spec:
  ports:
  - name: "1980"
    port: 1980
    targetPort: 80
  selector:
    io.kompose.service: nginx
status:
  loadBalancer: {}

Ah, so this has the “annotations” field in the metadata too, and, as suspected, it’s got the io.kompose.service label as the selector. Hmm, OK, let’s fix that.

root@microk8s-a:~/glowing-adventure# cat nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - name: "1980"
    port: 1980
    targetPort: 80
  selector:
    app: nginx
status:
  loadBalancer: {}

Much better. And let’s apply it…

root@microk8s-a:~/glowing-adventure# kubectl apply -f nginx-service.yaml
service/nginx configured

Fab! So, let’s review the state of the deployments, the services, the pods and the replication sets.

root@microk8s-a:~/glowing-adventure# kubectl get deploy
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
db       1/1     1            1           8m54s
nginx    0/1     1            0           8m53s
phpfpm   1/1     1            1           8m48s

Hmm. That doesn’t look right.

root@microk8s-a:~/glowing-adventure# kubectl get pod
NAME                      READY   STATUS             RESTARTS   AGE
db-f78f9f69b-grqfz        1/1     Running            0          9m9s
nginx-7774fcb84c-cxk4v    0/1     CrashLoopBackOff   6          9m8s
phpfpm-66945b7767-vb8km   1/1     Running            0          9m3s
root@microk8s-a:~# kubectl get rs
NAME                DESIRED   CURRENT   READY   AGE
db-f78f9f69b        1         1         1       9m18s
nginx-7774fcb84c    1         1         0       9m17s
phpfpm-66945b7767   1         1         1       9m12s

Yep. What does “CrashLoopBackOff” even mean?! Let’s check the logs. We need to ask the pod itself, not the deployment, so let’s use the kubectl logs command to ask.

root@microk8s-a:~/glowing-adventure# kubectl logs nginx-7774fcb84c-cxk4v
2020/01/17 08:08:50 [emerg] 1#1: host not found in upstream "phpfpm" in /etc/nginx/conf.d/default.conf:10
nginx: [emerg] host not found in upstream "phpfpm" in /etc/nginx/conf.d/default.conf:10

Hmm. That’s not good. We were relying on the fact that Docker named everything for us in the docker-compose file, but now, in Kubernetes, we need to do something different. At this point I ran out of ideas, so I asked on the McrTech slack for advice. I was asked to run this command, and, would you look at that, there’s nothing for nginx to connect to.

root@microk8s-a:~/glowing-adventure# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.152.183.1    <none>        443/TCP    24h
nginx        ClusterIP   10.152.183.62   <none>        1980/TCP   9m1s

It turns out that I need to create a service for each of the deployments. So, now I have a separate service for each one. I copied the nginx-service.yaml file into db-service.yaml and phpfpm-service.yaml, edited the files (see the sketch after the listing below), and now… tada!

root@microk8s-a:~/glowing-adventure# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
db           ClusterIP   10.152.183.61   <none>        3306/TCP   5m37s
kubernetes   ClusterIP   10.152.183.1    <none>        443/TCP    30h
nginx        ClusterIP   10.152.183.62   <none>        1980/TCP   5h54m
phpfpm       ClusterIP   10.152.183.69   <none>        9000/TCP   5m41s
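
For reference, the phpfpm service file would look something like this – a reconstruction based on the nginx service above, with the port changed to PHP-FPM’s 9000 as shown in the listing (the final files are in the git repo linked at the end):

apiVersion: v1
kind: Service
metadata:
  labels:
    app: phpfpm
  name: phpfpm
spec:
  ports:
  - name: phpfpm
    port: 9000
    targetPort: 9000
  selector:
    app: phpfpm
status:
  loadBalancer: {}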

But wait… How do I actually address nginx now? Huh. No external IP (not even “pending”, which is what I ended up with later), and no port I can reach from outside. Uh oh. Now I need to understand how to hook this service up to the public IP of this node. Ahh, see up there where it says “ClusterIP”? That means “this service is only available INSIDE the cluster”. If I change this to “NodePort” or “LoadBalancer”, it’ll attach that port to the external interface.

What’s the difference between “NodePort” and “LoadBalancer”? Well, according to this page, if you are using a managed public cloud service that supports an external load balancer, then setting the type to “LoadBalancer” should attach your “NodePort” to the provider’s load balancer automatically. Otherwise, you need to define the “NodePort” value in your config (which must be between 30000 and 32767, although that range is configurable for the node). Once you’ve done that, you can hook your load balancer up to that port, for example: Client -> Load Balancer IP (TCP/80) -> K8S Cluster IP (e.g. TCP/31234)

So, how does this actually look? I’m going to use the “LoadBalancer” option, because if I ever deploy this to “live”, I want it to integrate with the load balancer, but for now, I can cope with addressing a “high port”. Right, well, let me open back up that nginx-service.yaml, and make the changes.

root@microk8s-a:~/glowing-adventure# cat nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  type: LoadBalancer
  ports:
  - name: nginx
    nodePort: 30000
    port: 1980
    targetPort: 80
  selector:
    app: nginx
status:
  loadBalancer: {}

The key parts here are the lines type: LoadBalancer and nodePort: 30000 under spec: and ports: respectively. Note that I can use, at this point type: LoadBalancer and type: NodePort interchangably, but, as I said, if you were using this in something like AWS or Azure, you might want to do it differently!

So, now I can curl http://192.0.2.100:30000 (where 192.0.2.100 is the address of the “bridged interface” of my K8S environment) and get a response from my PHP application, behind nginx, and I know (from poking at it a bit) that it works with my database.

OK, one last thing. I don’t really want lots of little files with config items in them. I quite liked the docker-compose file as it was, because it had all the services in one block, and I could run “docker-compose up”, but the kompose script split it out into lots of pieces. In Kubernetes, if the YAML file it loads has a divider in it (a line like this: ---), then it stops parsing at that point and reads everything after it as a new document. Like this, I could have the following layout:

apiVersion: apps/v1
kind: Deployment
more: stuff
---
apiVersion: v1
kind: Service
more: stuff
---
apiVersion: apps/v1
kind: Deployment
more: stuff
---
apiVersion: v1
kind: Service
more: stuff

But, thinking about it, I quite like having each piece logically together, so I really want db.yaml, nginx.yaml and phpfpm.yaml, where each of those files contains both the deployment and the service. So, let’s do that. I’ll do one file, so it makes more sense, and then show you the output.

root@microk8s-a:~/glowing-adventure# mkdir -p k8s
root@microk8s-a:~/glowing-adventure# mv db-deployment.yaml k8s/db.yaml
root@microk8s-a:~/glowing-adventure# echo "---" >> k8s/db.yaml
root@microk8s-a:~/glowing-adventure# cat db-service.yaml >> k8s/db.yaml
root@microk8s-a:~/glowing-adventure# rm db-service.yaml
root@microk8s-a:~/glowing-adventure# cat k8s/db.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: db
  name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - env:
        - name: MYSQL_DATABASE
          value: a_db
        - name: MYSQL_PASSWORD
          value: a_password
        - name: MYSQL_ROOT_PASSWORD
          value: a_root_pw
        - name: MYSQL_USER
          value: a_user
        image: localhost:32000/db
        name: db
        resources: {}
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: db
  name: db
spec:
  ports:
  - name: mariadb
    port: 3306
    targetPort: 3306
  selector:
    app: db
status:
  loadBalancer: {}

So, now, if I do kubectl apply -f k8s/db.yaml I’ll get this output:

root@microk8s-a:~/glowing-adventure# kubectl apply -f k8s/db.yaml
deployment.apps/db unchanged
service/db unchanged

You can see the final files in the git repo for this set of tests.

Next episode, I’ll start looking at making my application scale (as that’s the thing that Kubernetes is known for) and having more than one K8S node to house my K8S pods!

Featured image is “So many coats…” by “Scott Griggs” on Flickr and is released under a CC-BY-ND license.

"Shipping Containers" by "asgw" on Flickr

Creating my first Docker containerized LEMP (Linux, nginx, MariaDB, PHP) application

Want to see what I built without reading the why’s and wherefore’s? The git repository with all the docker-compose goodness is here!

Late edit 2020-01-16: The fantastic Jerry Steel, my co-host on The Admin Admin podcast looked at what I wrote, and made a few suggestions. I’ve updated the code in the git repo, and I’ll try to annotate below when I’ve changed something. If I miss it, it’s right in the Git repo!

One of the challenges I set myself this Christmas was to learn enough about Docker to containerise an arbitrary PHP application – one that I would previously have misused Vagrant to contain.

Just before I started down this rabbit hole, I spoke to my Aunt about some family tree research my father had left behind after he died, and how I wished I could easily share the old tree with her (I organised getting her a Chromebook a couple of years ago, after fighting for years with doing remote support on Linux and Windows laptops). In the end, I found a web application for genealogical research called HuMo-gen, which is a perfect match for both projects I wanted to look at.

HuMo-gen was first created in 1999, with a PHP version being released in 2005. It uses MySQL or MariaDB as the database engine. I was reasonably confident that I could have created a Vagrantfile to deliver this on my home server, but I wanted to try something new: using the “standard” building blocks of Docker and docker-compose, and some common containers, to find my way around learning Docker.

I started by looking for some resources on how to build a Docker container. Much of the guidance I’d found was to use Docker-Compose, as this allows you to stand several components up at the same time!

In contrast to how Vagrant works (which is basically a CLI wrapper around many virtual machine services), Docker isolates resources for a single process that runs on a machine. Where in Vagrant you might run several processes on one machine (perhaps, in this instance, nginx, PHP-FPM and MariaDB), with Docker you’re encouraged to run each “service” as its own container, and link them together with an overlay network. It’s possible to do the same with Vagrant, but you’ll end up with an awful lot of VM overhead to separate out each piece.

So, I first needed to select my services. My initial line-up was:

  • MariaDB
  • PHP-FPM
  • Apache’s httpd2 (replaced by nginx)

I was able to find official Docker images for PHP, MariaDB and httpd, but after extensive tweaking, I couldn’t make the httpd image talk the way I wanted it to with the PHP image. Bowing to what now seems to be conventional wisdom, I swapped out the httpd service for nginx.

One of the stumbling blocks for me, particularly early on, was how to build several different Dockerfiles (these are basically the instructions for the container you’re constructing). Here is the basic outline of how to do this:

version: '3'
services:
  yourservice:
    build:
      context: .
      dockerfile: relative/path/to/Dockerfile

In this docker-compose.yml file, I tell it that to create the yourservice service, it needs to build the docker container, using the file in ./relative/path/to/Dockerfile. This file in turn contains an instruction to import an image.
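With that in place, docker-compose can build the image for you – for example (yourservice being the placeholder name from the snippet above):

docker-compose build yourservice
# or, build everything and bring it up in one go:
docker-compose up --build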

The services stack on top of each other in that docker-compose.yml file, like this:

version: '3'
services:
  service1:
    build:
      context: .
      dockerfile: service1/Dockerfile
    image: localhost:32000/service1
  service2:
    build:
      context: .
      dockerfile: service2/Dockerfile
    image: localhost:32000/service2

Late edit 2020-01-16: This previously listed Dockerfile/service1, however, much of the documentation suggested that Docker gets quite opinionated about the file being called Dockerfile. While docker-compose can work around this, it’s better to stick to tradition :) The docker-compose.yml files below have also been adjusted accordingly. I’ve also added an image: somehost:1234/image_name line to help with tagging the images for later use. It’s not critical to what’s going on here, but I found it useful with some later projects.
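As a side note on those image: lines – because each service now has an image name, docker-compose can also push the built images to a registry for you. A minimal sketch, assuming something (like microk8s’s built-in registry) is listening on localhost:32000:

docker-compose build
# pushes localhost:32000/service1 and localhost:32000/service2
docker-compose push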

To allow containers to see ports between themselves, you add the expose: command in your docker-compose.yml. To make a port visible from the “outside” (i.e. to the host and upwards), use the ports: command, listing the “host” port (the one on the host OS), then a colon, and then the “target” port (the one in the container), like these:

version: '3'
services:
  service1:
    build:
      context: .
      dockerfile: service1/Dockerfile
    image: localhost:32000/service1
    expose:
    - 12345
  service2:
    build:
      context: .
      dockerfile: service2/Dockerfile
    image: localhost:32000/service2
    ports:
    - 8000:80
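Once that stack is up, docker-compose ps illustrates the difference between the two – only the ports: entry gets published on the host (my own quick check, not from the original post):

docker-compose up -d
docker-compose ps
# service2's port appears as 0.0.0.0:8000->80/tcp, while
# service1's 12345 is only reachable from the other containers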

Now, let’s take a quick look into the Dockerfiles. Each “statement” in a Dockerfile adds a new “layer” to the image. For local operations, this probably isn’t a problem, but when you’re storing these images on a hosted provider, you want to keep these images as small as possible.

I built a Database Dockerfile, which is about as small as you can make it!

FROM mariadb:10.4.10

Yep, one line. How cool is that? In the docker-compose.yml file, I invoke this, like this:

version: '3'
services:
  db:
    build:
      context: .
      dockerfile: mariadb/Dockerfile
    image: localhost:32000/db
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: a_root_pw
      MYSQL_USER: a_user
      MYSQL_PASSWORD: a_password
      MYSQL_DATABASE: a_db
    expose:
      - 3306

OK, so this one is a bit more complex! I wanted it to build my Dockerfile, which is “mariadb/Dockerfile“. I wanted it to restart the container whenever it failed (which hopefully isn’t that often!), and I wanted to inject some specific environment variables into the container – the root and user passwords, a user account and a database name. Initially I was having some issues where it wasn’t building the database with these credentials, but I think that’s because the MariaDB image only creates the database and users when it initialises an empty data directory – I wasn’t “building” a new database, I was reusing an existing one. I also expose the MariaDB (MySQL) port, 3306, to the other containers in the docker-compose.yml file.
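If you hit the same credentials issue, one workaround (mine, rather than anything official) is to throw away the container and its data volume, so that the image’s first-run setup happens again:

docker-compose down -v   # -v also removes the (anonymous) data volume
docker-compose up --build -d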

Let’s take a look at the next part! PHP-FPM. Here’s the Dockerfile:

FROM php:7.4-fpm
RUN docker-php-ext-install pdo pdo_mysql
ADD --chown=www-data:www-data public /var/www/html

There’s a bit more to this, but not loads. We build our image from a named version of PHP, and install two extensions to PHP, pdo and pdo_mysql. Lastly, we copy the content of the “public” directory into the /var/www/html path, and make sure it “belongs” to the right user (www-data).
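One way to check that those extensions actually made it into the image is to build it and ask PHP to list its modules (the phpfpm-test tag is just my throwaway name):

docker build -f phpfpm/Dockerfile -t phpfpm-test .
# both PDO and pdo_mysql should show up here:
docker run --rm phpfpm-test php -m | grep -i pdo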

I’d previously tried to do a lot more complicated things with this Dockerfile, but it wasn’t working, so instead I slimmed it right down to just this, and the docker-compose.yml is a lot simpler too.

  phpfpm:
    build:
      context: .
      dockerfile: phpfpm/Dockerfile
    image: localhost:32000/phpfpm

See! Loads simpler! Now we need the complicated bit! :) This is the Dockerfile for nginx.

FROM nginx:1.17.7
COPY nginx/default.conf /etc/nginx/conf.d/default.conf

COPY public /var/www/html

Weirdly, even though I’d added version numbers for MariaDB and PHP, I’d not done the same for nginx – perhaps I should! Late edit 2020-01-16: I’ve put a version number on there now; where it now says nginx:1.17.7 it previously said nginx:latest.

I’d originally created the configuration block for nginx in a single “RUN” line. Late edit 2020-01-16: Following Jerry’s advice, this Dockerfile no longer has a giant echo 'stuff' > file block, and I’m using COPY instead of ADD too. I’ll show that config file below. There are a couple of high points for me here!

server {
  index index.php index.html;
  server_name _;
  error_log /proc/self/fd/2;
  access_log /proc/self/fd/1;
  root /var/www/html;
  location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass phpfpm:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
  }
}
  • server_name _; means “use this block for all unnamed requests”.
  • fastcgi_pass phpfpm:9000; hands PHP requests over to the phpfpm service – docker-compose puts all the services on a shared network, where each container is reachable by its service name.
  • access_log /proc/self/fd/1; and error_log /proc/self/fd/2; – these are links to the “stdout” and “stderr” file descriptors (or pointers to other parts of the filesystem), and basically mean that when you do docker-compose logs, you’ll see the HTTP logs for the server! These two files are guaranteed to be there, while /dev/stderr isn’t!
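That logging trick means the web server’s logs are just a docker-compose logs away, for example:

docker-compose logs -f nginx
# -f follows the log, like tail -f; each HTTP request shows up here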

Because nginx is “just” serving the web content, and I know the content doesn’t need to be written to from nginx, I knew I didn’t need to do the chown action, like I did with the PHP-FPM block.

Lastly, I need to configure the docker-compose.yml file for nginx:

  nginx:
    build:
      context: .
      dockerfile: nginx/Dockerfile
    image: localhost:32000/nginx
    ports:
      - 127.0.0.1:1980:80

I’ve gone for a slightly unusual ports configuration when I deployed this to my web server… you see, I already have the HTTP port (TCP/80) in use on my home server, running the rest of my web services. During development, on my home machine, the ports line simply read “1980:80”. On the server, instead, I’m running this application bound to “localhost” (127.0.0.1) on a different port number (1980 selected because it could, conceivably, be the birthday of someone on this system), and then in my local web server configuration, I’m proxying connections to this service, with HTTPS encryption as well. That’s all outside the scope of this article (as I probably should be using something like Traefik, anyway) but it shows you how you could bind to a separate port too.
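If you want to confirm that binding behaves as described, a quick check from the host itself (my own sanity test, not part of the original setup):

curl -I http://127.0.0.1:1980/
# this answers from the host, but not from other machines,
# because the service is bound to 127.0.0.1 only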

Anyway, that was my Docker journey over Christmas, and I look forward to using it more, going forward!

Featured image is “Shipping Containers” by “asgw” on Flickr and is released under a CC-BY license.