"Captain" by "The Laddie" on Flickr

Trying out Kubernetes (K8S) with MicroK8S in Vagrant

I’m going on a bit of a containers kick at the moment, and just recently I wanted to give Kubernetes (sometimes abbreviated to “K8S”) a try.

Kubernetes is an orchestration engine for containers, like Docker. It’s designed to take the images that Docker (and other similar tools) produce, and run them across multiple nodes. You need to have a handle on how Docker works before giving K8S a try, but once you do, it’s well worth the effort.

Unlike Docker, K8S is a bit more in-depth in its requirements, and people are often pointed at Minikube as their introduction to K8S. However, my colleague and friend Nick suggested I might be better off with MicroK8S.

MicroK8S is an application released by Canonical as a Snap. A Snap is a Linux packaging format, similar to Flatpak and AppImage. It’s mostly used on Ubuntu-based operating systems, but can also work on other Linux distributions.

I had an initial, failed, punt with the recommended advice for using MicroK8S on Windows (short story: Hyper-V did not work for me, and the VirtualBox back-end doesn’t expose any network ports – or at least, if it does, I couldn’t see how to make it work), and as I’m reasonably confident making Vagrant work in Windows, I built a Vagrantfile to deliver MicroK8S.

To use this, you need Vagrant and VirtualBox, and then get the Vagrantfile from the repo… then run vagrant up (it will ask you what interface you want to “bridge” to – this will be how you access the Kubernetes pods and Docker containers). Once the machine has finished building, you can run vagrant ssh to connect into it. From here, you can run your kubectl commands, as well as docker commands.
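
If it helps to see it end-to-end, here’s roughly what that first session looks like – a sketch rather than a transcript, and it assumes the kubectl wrapper that the MicroK8S snap exposes as microk8s.kubectl (you can alias that to plain kubectl if you prefer):

# On the host: build the VM (you'll be asked which interface to bridge to), then connect
vagrant up
vagrant ssh

# Inside the VM: check the single-node cluster is happy
microk8s.kubectl get nodes
microk8s.kubectl get pods --all-namespaces

# Docker is there too, assuming the Vagrantfile installs it alongside the snap
docker ps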

If you want to experiment with a multi-node environment, then I also built a Vagrantfile to deliver two virtual machines, both running MicroK8S, and used the shared storage element of Vagrant to transfer the “join” instruction from the first node to the second.
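
I won’t reproduce that Vagrantfile here, but the hand-off is conceptually something like the sketch below, assuming MicroK8S’s built-in clustering commands (microk8s.add-node and microk8s.join) and the /vagrant folder that Vagrant shares with both VMs – the actual provisioning scripts in the repo may differ:

# On the first node's provisioner: generate a join token, and stash the
# "join" command it prints somewhere the second VM can read it
microk8s.add-node > /vagrant/join-instruction.txt

# On the second node's provisioner: run the join command from that file,
# which looks something like:
#   microk8s.join 172.28.128.10:25000/<token>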

Of course, now I just need to work out how the hell I do Kubernetes 🤣

Featured image is “Captain” by “The Laddie” on Flickr and is released under a CC-BY-ND license.

apt update && apt full-upgrade -y && apt autoremove -y && apt autoclean -y

Apt Updates with Ansible

I’ve got a small Ansible script that I bundle up on Ubuntu boxes to do apt updates. This was originally a one-statement job, but I’ve added a few lines to it, so I thought I’d explain what I’m doing (more for myself, for later!)

Initially, I just had a task to do apt: upgrade=full update_cache=yes autoremove=yes autoclean=yes but if you’re running the script over and over again, well, this gets slow… So I added a tweak!

Here it is folks, in all its glory!

- hosts: all
  tasks:
  - name: Get stat of last run apt
    stat:
      path: /var/cache/apt/pkgcache.bin
    register: apt_run

  - name: "Apt update, Full-upgrade, autoremove, autoclean check"
    debug:
      msg: "Skipping apt-update, etc. actions as apt update was run today"
    when: "'%Y-%m-%d' | strftime(apt_run.stat.mtime) in ansible_date_time.date"

  - name: "Apt update, Full-upgrade, autoremove, autoclean"
    apt:
      upgrade: full
      update_cache: yes
      autoremove: yes
      autoclean: yes
    when: "'%Y-%m-%d' | strftime(apt_run.stat.mtime) not in ansible_date_time.date"

What does this do? Well, according to this AskUbuntu post, the best file to check if an update has been performed is /var/cache/apt/pkgcache.bin, so we check the status of that file. Most file systems available to Linux distributions provide the mtime – or “last modified time”. This is returned as the number of seconds since UTC 00:00:00 on the Unix Epoch (1970-01-01), so we need to convert that to a date, which we format as YYYY-MM-DD (e.g. today is 2020-01-06), and then compare that to what the system thinks today is. If the dates don’t match (so one string doesn’t equal the other – in other words, apt update wasn’t run today), it runs the update. If the dates do match up, we get a statement saying that apt update was already run.
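
If you want to sanity-check the same logic by hand on a box, this is roughly the comparison the playbook is making (GNU stat and date syntax):

# When was the apt cache last modified? (mtime in epoch seconds, formatted as a date)
date -d "@$(stat -c %Y /var/cache/apt/pkgcache.bin)" +%Y-%m-%d

# What does the system think today is? If the two match, apt update already ran today.
date +%Y-%m-%d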

Fun times!

"Shipping Containers" by "asgw" on Flickr

Creating my first Docker containerized LEMP (Linux, nginx, MariaDB, PHP) application

Want to see what I built without reading the whys and wherefores? The git repository with all the docker-compose goodness is here!

Late edit 2020-01-16: The fantastic Jerry Steel, my co-host on The Admin Admin podcast looked at what I wrote, and made a few suggestions. I’ve updated the code in the git repo, and I’ll try to annotate below when I’ve changed something. If I miss it, it’s right in the Git repo!

One of the challenges I set myself this Christmas was to learn enough about Docker to containerise an arbitrary PHP application – one that I would previously have misused Vagrant to contain.

Just before I started down this rabbit hole, I spoke to my Aunt about some family tree research my father had left behind after he died, and how I wished I could easily share the old tree with her (I organised getting her a Chromebook a couple of years ago, after fighting with doing remote support for years on Linux and Windows laptops). In the end, I found a web application for genealogical research called HuMo-gen, that is a perfect match for both projects I wanted to look at.

HuMo-gen was first created in 1999, with a PHP version being released in 2005. It used MySQL or MariaDB as the Database engine. I was reasonably confident that I could have created a Vagrantfile to deliver this on my home server, but I wanted to try something new. I wanted to use the “standard” building blocks of Docker and Docker-Compose, and some common containers to make my way around learning Docker.

I started by looking for some resources on how to build a Docker container. Much of the guidance I’d found was to use Docker-Compose, as this allows you to stand several components up at the same time!

In contrast to how Vagrant works (which is basically a CLI wrapper around many virtual machine services), Docker isolates resources for a single process that runs on a machine. Where in Vagrant you might run several processes on one machine (perhaps, in this instance, nginx, PHP-FPM and MariaDB), with Docker you’re encouraged to run each “service” as its own container, and link them together with an overlay network. It’s also possible to do the same with Vagrant, but you’ll end up with an awful lot of VM overhead to separate out each piece.

So, I first needed to select my services. My initial line-up was:

  • MariaDB
  • PHP-FPM
  • Apache’s httpd (replaced by nginx)

I was able to find official Docker images for PHP, MariaDB and httpd, but after extensive tweaking, I couldn’t make the httpd image talk the way I wanted it to with the PHP image. Bowing to what now seems to be conventional wisdom, I swapped out the httpd service for nginx.

One of the stumbling blocks for me, particularly early on, was how to build several different Dockerfiles (these are basically the instructions for the container you’re constructing). Here is the basic outline of how to do this:

version: '3'
services:
  yourservice:
    build:
      context: .
      dockerfile: relative/path/to/Dockerfile

In this docker-compose.yml file, I tell it that to create the yourservice service, it needs to build the docker container, using the file in ./relative/path/to/Dockerfile. This file, in turn, starts with an instruction (a FROM line) naming the image to base the build on.
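
For what it’s worth, once the docker-compose.yml and the Dockerfiles are in place, bringing the whole thing up is just a couple of commands (this is the docker-compose v1 CLI I was using – newer Docker releases spell it docker compose):

docker-compose build     # build every service that has a build: section
docker-compose up -d     # create the network and start the containers in the background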

Each service stacks on top of each other in that docker-compose.yml file, like this:

version: '3'
services:
  service1:
    build:
      context: .
      dockerfile: service1/Dockerfile
    image: localhost:32000/service1
  service2:
    build:
      context: .
      dockerfile: service2/Dockerfile
    image: localhost:32000/service2

Late edit 2020-01-16: This previously listed Dockerfile/service1, however, much of the documentation suggested that Docker gets quite opinionated about the file being called Dockerfile. While docker-compose can work around this, it’s better to stick to tradition :) The docker-compose.yml files below have also been adjusted accordingly. I’ve also added an image: somehost:1234/image_name line to help with tagging the images for later use. It’s not critical to what’s going on here, but I found it useful with some later projects.

To allow containers to see ports between themselves, you add the expose: command in your docker-compose.yml, and to allow that port to be visible from the “outside” (i.e. to the host and upwards), use the ports: command listing the “host” port (the one on the host OS), then a colon and then the “target” port (the one in the container), like these:

version: '3'
services:
  service1:
    build:
      context: .
      dockerfile: service1/Dockerfile
    image: localhost:32000/service1
    expose:
    - 12345
  service2:
    build:
      context: .
      dockerfile: service2/Dockerfile
    image: localhost:32000/service2
    ports:
    - 8000:80

Now, let’s take a quick look into the Dockerfiles. Each “statement” in a Dockerfile adds a new “layer” to the image. For local operations, this probably isn’t a problem, but when you’re storing these images on a hosted provider, you want to keep these images as small as possible.
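
As an aside, the usual trick for keeping the layer count (and the image size) down is to chain related commands into a single RUN statement and clean up in the same layer. This is a hypothetical example, not one from this project:

FROM debian:buster-slim

# One layer instead of three, and the apt cache is removed in the same
# layer, so it never gets baked into the image.
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*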

I built a Database Dockerfile, which is about as small as you can make it!

FROM mariadb:10.4.10

Yep, one line. How cool is that? In the docker-compose.yml file, I invoke this, like this:

version: '3'
services:
  db:
    build:
      context: .
      dockerfile: mariadb/Dockerfile
    image: localhost:32000/db
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: a_root_pw
      MYSQL_USER: a_user
      MYSQL_PASSWORD: a_password
      MYSQL_DATABASE: a_db
    expose:
      - 3306

OK, so this one is a bit more complex! I wanted it to build my Dockerfile, which is “mariadb/Dockerfile“. I wanted it to restart the container whenever it failed (which hopefully isn’t that often!), and I wanted to inject some specific environment variables into the container – the root and user passwords, a user account and a database name. Initially I was having some issues where it wasn’t building the database with these credentials, but I think that’s because I wasn’t “building” a new database, I was just re-using an existing one. I also expose the MariaDB (MySQL) port, 3306, to the other containers in the docker-compose.yml file.
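
If you hit the same thing, the catch (as far as I can tell) is that the official MariaDB image only applies those environment variables when it initialises an empty data directory, so you have to throw the old data away and start again. Something like this does it – note that it destroys whatever was in the database:

docker-compose down --volumes    # stop the stack and delete its volumes (the old database files)
docker-compose up --build -d     # rebuild and start again; MariaDB re-initialises with the new credentials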

Let’s take a look at the next part! PHP-FPM. Here’s the Dockerfile:

FROM php:7.4-fpm
RUN docker-php-ext-install pdo pdo_mysql
ADD --chown=www-data:www-data public /var/www/html

There’s a bit more to this, but not loads. We build our image from a named version of PHP, and install two extensions to PHP, pdo and pdo_mysql. Lastly, we copy the content of the “public” directory into the /var/www/html path, and make sure it “belongs” to the right user (www-data).

I’d previously tried to do a lot more complicated things with this Dockerfile, but it wasn’t working, so instead I slimmed it right down to just this, and the docker-compose.yml is a lot simpler too.

  phpfpm:
    build:
      context: .
      dockerfile: phpfpm/Dockerfile
    image: localhost:32000/phpfpm

See! Loads simpler! Now we need the complicated bit! :) This is the Dockerfile for nginx.

FROM nginx:1.17.7
COPY nginx/default.conf /etc/nginx/conf.d/default.conf

COPY public /var/www/html

Weirdly, even though I’ve added version numbers for MariaDB and PHP, I’ve not done the same for nginx – perhaps I should! Late edit 2020-01-16: I’ve put a version number on there now; previously, where it says nginx:1.17.7, it actually said nginx:latest.

Originally, I created the configuration block for nginx in a single “RUN” line. Late edit 2020-01-16: This Dockerfile now doesn’t have a giant echo 'stuff' > file block either, following Jerry’s advice, and I’m using COPY instead of ADD on his advice too. I’ll show that config file below. There are a couple of high points for me here!

server {
  index index.php index.html;
  server_name _;
  error_log /proc/self/fd/2;
  access_log /proc/self/fd/1;
  root /var/www/html;
  location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass phpfpm:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
  }
}
  • server_name _; means “use this block for all unnamed requests”.
  • access_log /proc/self/fd/1; and error_log /proc/self/fd/2; These are links to the process’s “stdout” and “stderr” file descriptors, and basically mean that when you do docker-compose logs, you’ll see the HTTP logs for the server (see the example below)! These two files are guaranteed to be there, while /dev/stderr isn’t!
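
In practice, that means the web server’s logs come out on the container’s stdout/stderr like everything else, so tailing just the nginx service is as simple as:

docker-compose logs -f nginx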

Because nginx is “just” serving the static web content, and I know the content doesn’t need to be written to from nginx, I knew I didn’t need to do the chown action, like I did with the PHP-FPM block.

Lastly, I need to configure the docker-compose.yml file for nginx:

  nginx:
    build:
      context: .
      dockerfile: nginx/Dockerfile
    image: localhost:32000/nginx
    ports:
      - 127.0.0.1:1980:80

I’ve gone for a slightly unusual ports configuration when I deployed this to my web server… you see, I already have the HTTP port (TCP/80) configured for use on my home server – for running the rest of my web services. During development, on my home machine, the ports line instead just showed “1980:80”. On the server, I’m running this application bound to “localhost” (127.0.0.1) on a different port number (1980 selected because it could, conceivably, be a birthday of someone on this system), and then in my local web server configuration, I’m proxying connections to this service, with HTTPS encryption as well. That’s all outside the scope of this article (as I probably should be using something like Traefik, anyway) but it shows you how you could bind to a separate port too.
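
That front-end proxy isn’t part of this project, but for completeness, the relevant bit of an nginx virtual host doing the job looks roughly like this – a sketch with a placeholder hostname and certificate paths, not my actual config:

server {
  listen 443 ssl;
  server_name humogen.example.com;

  ssl_certificate     /etc/ssl/example/fullchain.pem;
  ssl_certificate_key /etc/ssl/example/privkey.pem;

  location / {
    # Hand everything over to the containerised stack bound to localhost:1980
    proxy_pass http://127.0.0.1:1980;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
  }
}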

Anyway, that was my Docker journey over Christmas, and I look forward to using it more, going forward!

Featured image is “Shipping Containers” by “asgw” on Flickr and is released under a CC-BY license.

"Mesh Facade" by "Pedro Ângelo" on Flickr

Looking at the Nebula Overlay Meshed VPN Network from Slack

Around 2-3 years ago, Slack – the company which produces Slack, the IM client – started working on a meshed overlay network product, called Nebula, to manage their environment. After two years of running their production network on the back of it, they decided to open-source it. I found out about Nebula via a Medium post that was mentioned in the HangOps Slack Group. I got interested in it, asked a few questions about Nebula in the Slack, and then in the Github Issues for it, and then recently raised a Pull Request to add more complete documentation than their single (heavily) commented config file.

So, let’s go into some details on why this is interesting to me.

1. Nebula uses a flat IPv4 network to identify all hosts in the network, no matter where in the network those hosts reside.

This means that I can address any host in my (self-allocated) 198.18.0.0/16 network, and I don’t need to worry about routing tables, production/DR sites, network tromboning and so on… it’s just… Flat.

[Note: Yes, that IP address “looks” like a public subnet – but it’s actually a testing network, allocated by IANA for network testing!]

2. Nebula has host-based firewalling built into the configuration file.

This means that once I know how my network should operate (yes, I know that’s a big ask), I can restrict my servers from being able to reach my laptops, or I can stop my web server from being able to talk to my database server, except for on the database ports. Lateral movement becomes a LOT harder.

This firewalling also looks a lot like (Network) Security Groups (for those familiar with AWS and Azure), so you have a default “Deny” rule, and then layer “Allow” rules on top. You also have Inbound and Outbound rules, so if you want to stop your laptops from talking anything but DNS, SSH, HTTPS and ICMP over Nebula… well, yep, you can do that :)
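
To give you a flavour, here’s a cut-down firewall block in the style of Nebula’s example config.yml – treat it as illustrative rather than authoritative, and check the current example config for the exact syntax:

firewall:
  outbound:
    # let this host talk to anything over the overlay
    - port: any
      proto: any
      host: any

  inbound:
    # answer pings from any Nebula node
    - port: any
      proto: icmp
      host: any

    # only members of the "admin" group may SSH in
    - port: 22
      proto: tcp
      groups:
        - admin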

3. Nebula uses a PKI environment, where you can have multiple Certificate Authorities.

This means that I have a central server running my certificate authority (CA), with a “backup” CA stored offline – in case of dire disaster with my primary CA – and I push both CAs to all my nodes. If I suddenly need to replace all the certificates that my current CA signed, I can do that with minimal interruption to my nodes. Good stuff.
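
For reference, standing the CA up in the first place is a one-liner too (assuming nebula-cert’s flags haven’t changed since I wrote this); it produces a ca.crt that you distribute to every node, and a ca.key that you keep somewhere very safe:

nebula-cert ca -name "Jon's Nebula CA"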

Nebula also created their own PKI attributes to identify the roles of each device in the Nebula environment. Because these are signed as part of each node’s certificate, your CA asserts that the roles that certificate holds are valid for that node in the network.

Creating a node’s certificate is a simple command:

nebula-cert sign -name jon-laptop -ip 198.18.0.1/16 -groups admin,laptop,support

This certificate has the IP address of the node baked in (it’s 198.18.0.1) and the groups it’s part of (admin, laptop and support), as well as the host name (jon-laptop). I can use any of these three items in the firewall rules I mentioned above.

4. It creates a peer-to-peer, meshed VPN.

While it’s possible to create a peer-to-peer meshed VPN with commercial products, I’ve not seen any which are as light-weight to deploy as this. Each node finds all the other nodes in the network by using a collection of “Lighthouses” (similar to Torrent Seeds or Skype Super Nodes) which tell all the connecting nodes where all the other machines in the network are located. These then initiate UDP connections to the other nodes they want to talk to. If they are struggling (because of NAT or Double NAT), then there’s a NAT punching process (called, humorously, “punchy”) which lets you signal via the Lighthouse that you’re trying to reach another node that can’t see your connection, and asks it to also connect out to you over UDP… thereby fixing the connection issue. All good.
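
Configuration-wise, a node that isn’t a Lighthouse only needs to know where at least one Lighthouse lives, plus the punchy option for the NAT punching behaviour described above. This is a sketch loosely based on the example config – the addresses are placeholders, and newer releases restructure punchy into a nested block:

# Map the lighthouse's Nebula IP to its real-world address and port
static_host_map:
  "198.18.0.1": ["lighthouse.example.org:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "198.18.0.1"

# Keep punching to hold NAT mappings open
punchy: true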

5. Nebula has clients for Windows, Mac and Linux. Apparently there are clients for iOS in the works (meh, I’m not on Apple… but I know some are) and I’ve heard nothing about Android as yet, but as it’s on Linux, I’m sure some enterprising soul can take a look at it (the client is written in Go).

If you want to have a look at Nebula for your own testing, I’ve created a Terraform based environment on AWS and Azure to show how you’d manage it all using Ansible Tower, which builds:

  • 2 VPCs (AWS) and 1 VNet (Azure)
  • 6 subnets (3 public, 3 private)
  • 1 public AWX (the upstream project from Ansible Tower) Server
  • 1 private Nebula Certificate Authority
  • 2 public Web Servers (one in AWS, one in Azure)
  • 2 private Database Servers (one in AWS, one in Azure)
  • 2 public Bastion Servers (one in AWS, one in Azure) – these let AWX reach into the private sections of the network, without exposing SSH from all the hosts.

If you don’t want to provision the Azure side, just remove load_web2_module.tf from the Terraform directory in that repo… Job’s a good’n!

I have plans to look at a couple of variations, like Nebula’s closest rival, ZeroTier, and at using SaltStack instead of Ansible, to reduce the need for the extra Bastion servers.

Featured image is “Mesh Facade” by “Pedro Ângelo” on Flickr and is released under a CC-BY-SA license.

"Infinite Mirror" by "Luca Sartoni" on Flickr

My Blogging Workflow

Hello bold traveller! I recently got into a discussion with someone about how I put content into my blog and I realised that people don’t write this stuff down enough!

By the end of this post, I’d expect you to be able to use this information to create your own, similar stack. If you don’t have the technical knowledge to do so, you should at least have the pieces of information to ask someone else to help you build it. While I’d be flattered if you asked me for similar advice, I’m afraid I have limited time for new projects, even consultancy ones, and all I can do is point you back to this blog!

That said, consider this a bit of the “making the sausage” that most people don’t tend to share! 😀

The OggCamp '19 grid on Saturday

#OggCamp ’19 – A review and Talk Summary

Firstly, an apology! It’s more than a week after OggCamp. I’m quite aware that this is very very late for me!

About OggCamp for those who weren’t there!

OggCamp is an annual semi-scheduled Unconference. An Unconference (sometimes known as a “BarCamp”) is one where, when you arrive on the first day, the schedule (also known as the “Grid”) is blank, with a stack of post-it notes next to it. You’re encouraged to put talks on the grid, and keep checking the grid to see what’s up next.

OggCamp is a conference which encourages people to talk about Free Culture (Free and Open Source Software, Open Hardware, Creative Commons Content) and other permissively licensed works. It’s also a “Geeky” conference, so games will often appear, they encourage hardware makers to attend, and this year the event also contained “FlawCon”, a security conference, so the event also had a higher-than-usual proportion of Infosec people there!

OggCamp was started by podcasters in 2009, and so there’s usually at least one or two podcasts being recorded. This year, there was a panel session, Linux Outlaws “rode for one last time”, Hacker Public Radio (HPR) were out and about to talk to people at the event, and the podcast I co-host, The Admin Admin Podcast, found a quiet spot to record a show too. Sadly, with the exception of my own podcast recording, I didn’t make it to any of the other recordings I mentioned, as I was attending talks by other people at those times.

Differences, for me, from previous years

Since OggCamp ’10, I have either not been at the event (in the years each of my children were born), or I have been running the talk scheduling software, CampFireManager, crewing, or organising the event. This was the first year I managed to get to see talks all day since the very first OggCamp, so that was a big change for me.

This year, Lorna organised the grid, from right in front of it. Except for the welcome and closing talks, I don’t think she left the grid for the entire day both days. In previous years, when we weren’t using CampFireManager, the grid was left unattended, with an occasional drive-by crew member transferring the grid to Joind.In. Talking of which, here’s the Joind.In view of Saturday…

Saturday

A screen shot of the grid from Saturday. Talks marked with a * are talks I attended.

I went to the “Opening Talk” first. This is your usual “Here’s how to get on the Wi-fi, here’s how to participate, here’s the sort of things we want from you” talk, and was run by Dan and Lorna.

Next up, I saw Terrence and Elizabeth Eden talking about OpenBenches.org.

OpenBenches is a project that records what is on the plaques on benches that people arrange for their relatives, sometimes when they die. I’ve been aware of this project for some time, but never contributed. Until now I thought you had to manually type in what was on each plaque (and I think, at the beginning you had to), but NO, they’re now doing Optical Character Recognition (OCR) to copy the text out of the photos.

The talk discussed the statistics of the project, the technology stack and why the project was started. It was just lovely and really well delivered.

Next I went to see Jeroen talking about Self Publishing.

Jeroen first attended OggCamp last year, giving a talk about Mainframes. This year he was back, talking about running a project with a very small community. Before he got to that though, he wanted to talk about self publishing. He endorsed Lulu for paper printing, AsciiDoc and AsciiDoctor to produce the content (PanDoc to convert between formats, if you started with something that isn’t AsciiDoc(tor)) and then Inkscape to create the cover. I asked him if he would suggest anything for eBooks, but he doesn’t create eBooks so couldn’t make any suggestions.

We got a demo of publishing a finished book on Lulu, with a running translation from Jeroen’s native language :) It was a great talk, and very well delivered in 25 minutes!

The front cover of the book Analogue Network Security by Winn Schwartau
The book which inspired my first talk

After that, I gave a late-pitched talk on Time Based Security (TBS). I made a few mistakes here – not least of which was failing to charge my laptop having used it while I was travelling in – so my laptop wouldn’t actually boot… I couldn’t even put up a single slide with my details! Trying to explain the maths around TBS without something to show it is hard, and involves walking around and waving your hands about. I had about 20 people in the room and I felt woefully underprepared.

Because I ended up running much shorter than I expected, I also started to bring in other material from the Analogue Network Security book (pictured above, with post-it-note reference markers for my review) that I’m currently writing a review on. This was my next mistake. So, I mentioned about feedback loops (which about 1/3 of the book is about) and that in the later sections of the book it’s mentioned that this can improve workflow where you need sign-off to complete changes. I mixed up a few terms and it sounded like I was endorsing having changes made without approvals. I tried to pull it back, but not having brought the book with me or having enough experience in vocalising the material… yehr, it was never going to go well. Oh well, I’m hoping to get the review nailed down and then start writing proper presentations on the matter, so I can try and deliver it better next year!

Then… Lunch. Phil, my father-in-law, plus Kian and Cat went to a Chinese Bakery for lunch.

Neil’s talk was my next talk to see; an ad-hoc review of web pages about Repair Day

After I gave my talk, I headed to see Neil give an ad-hoc talk about Repair Day. Neil had a collection of pages he wanted to show off. Neil works with The Restart Project to help people fix their own broken things, not just computers (which is Neil’s area of interest) but also white goods, radios, home electronics, clothes and furniture.

In the audience was Stuart Ward (featured later) who also mentioned about running Repair Cafes. After the talk was complete, Stuart posted a collection of links to the Joind.In page for people to find out more for themselves later.

This was my stand-out talk for Day 1. Anna had come to OggCamp last year, and thought there wasn’t sufficient content for people new to Linux, so she proposed, wrote and delivered a blinder!

I went to Anna’s talk next. I went in, amongst other reasons, because I thought I would be going in to support someone “new to Ubuntu”, and came out stunned at how well the talk was delivered!

Someone wise* wrote on twitter a few months ago something like “The point when someone new joins your team is when you get to challenge implied knowledge. If they ask ‘Why’ and you have to say ‘I don’t know’ it means you need to justify why you do something, and perhaps stop doing it.”

* Someone in this case means I can’t find the tweet!

In this case, I wanted to know what being “New” to Ubuntu (my preferred desktop Linux distribution right now) meant to people. Anna’s talk was fantastic, and got right to the heart of what someone new to Linux would feel like. She mentions downloading “things” from the Internet, setting them to be executable by everyone, and then running them. She also mentions running everything under “sudo” or as root, and then went into where she found she should put things. This was sprinkled with a lot of appropriate emojis. It was a really great talk.

As an event organiser, I’m always interested in what other groups are doing!

After Anna’s talk, I went to a round-table session about meetup and event organisers. This was inspired by something new that Lorna had organised this year for the unconference schedule. Next to the board, showing what talks were going to be given, was another board asking for talks to be given. Someone had asked for a talk about organising meet-ups, and so several of the attendees who are organisers of local groups came together to give their views on how to start a group, how to motivate attendees to come to your groups, and how to keep the momentum going.

I’m sorry to say that this was one of the weaker sessions I went to over the weekend. Because no-one had really planned anything in this slot, and none of the people running the session were really comfortable in what they were delivering, it was hard to get any points out of the speakers, and there was very little interaction with the audience. This could have been run as a Q&A session from experienced group organisers, or even a round-table… but never mind!

Towards the end of the session, I stood up and asked about whether any groups like TechNW.UK existed in their regions, and asked people who organised groups like this to put pull requests to get their groups added to that website. I hope to see something come out of that!

After I left this session, I went to look at the exhibition hall and the Kids Track room.

In the exhibition hall were the Merch Stand, the grid, and two stands that were apparently about musical things – one of which basically had a guitar and amp constantly being used by a very good musician. After that were Matrix.org, The FSFE and Hacker Public Radio. Along the other wall were a lock picking stand from FlawCon, Manchester Grey Hats and InfoSec Hoppers, a telepresence bot and more!

In the kids room were computers, micro:bits and willing instructors! It looked like a lot of fun for kids, but there wasn’t much room! I had a bit of a chat with a few friends I met along the way, before I went to see my co-host, Al, talking about Wireguard.

Al hadn’t expected to be giving this talk today!

Al has been talking about Wireguard a few times over the past year-or-so, and wanted to give a talk about it. He’d planned to propose it for Sunday, but was encouraged by Lorna to talk about it on Saturday. As a result, he hadn’t had a chance to run through the demo he’d planned to give, and it tripped him up at the end of his demo, when the notes he was following mixed up private and public keys at each end… Aside from that, it was a great talk, and made me want to look at Wireguard again!

My final talk for the day was one I didn’t expect to be in!

Kian is a friend of mine from days of old, and when he walked into the room I’d just been in for Al’s talk… I decided to sit in whatever he was talking about. Kian spoke to a small audience about hardware builds he’d done over the years, and the mishaps that had occurred on them. A very entertaining talk, albeit one that I couldn’t really empathise with, as I’ve not done any hardware builds since I did my Radio Amateur Exam. Hearing the story of the halloween pumpkin with eyes that were supposed to look at you was very funny though, and the videos really completed the story!

After the talks were done, I went to get dinner with my co-hosts from the Admin Admin podcast, and a few of the other attendees. After we were done, I went back to the venue, but couldn’t settle as I’d had a headache coming on.

While I was gearing up to leave, I ended up having a good chat with Ben Grubert, who changed my view somewhat on how to deliver a talk. He said that people, particularly those who are very process focused, struggle to explain something that links back to the goal, for example, explaining how to win at a board game. It made me completely re-think how my talk I wanted to give on Sunday would go, and I left soon after that conversation so I could re-write my talk. I’ve since gone on to share that advice with several other people!

Sunday

A screen shot of the Sunday Schedule. Again, starred talks are the ones I attended.
My hands-down favourite talk from the entire weekend!

At Barcamp Manchester 9, which I attended a few weeks before OggCamp, I missed a talk by Rachel. I saw a picture of one of her slides, and I think I might even have caught the last slide of it… Either way, I was desperately sad that I’d missed the talk, and so encouraged her to attend OggCamp to deliver it. Once I saw she was on the grid, I knew exactly where I was going!

Rachel’s talk did not fail to deliver. I’ve heard from lots and lots of people that they were moved by this talk. Rachel was talking about her life, mostly undiagnosed with Autism, ADHD and depression. She enriched the talk with fun comments, including asking someone to play the part of Romeo from Romeo and Juliet, and then asking him, without having seen the book, why he didn’t know his lines. It sounds quite brutal, but actually, it sets the scene quite well on her life. There’s a fantastic photo of the spectrum of issues related to autism that just keeps having more and more artefacts being added to it.

I’ve heard that she wants to take this talk to more people, businesses and conferences, so I won’t spoil any more of the surprises, but it’s a really powerful talk and I’d strongly encourage anyone to bring Rachel into their environment to hear her talk.

While sitting in Kian’s talk the day before, I missed a session on Ansible Security. I’d made the point, in the morning, of finding Michael from the Matrix Project who gave the talk, and they said that they’d planned to host a “Birds of A Feather” (BOF) session on the Sunday following the feedback from the talk.

I managed to make it to this session, but unfortunately, I didn’t get any photos.

Having been to the meet-up session the day before, I was partially dreading this session, as Ansible is something I’m still very keen on. I needn’t have worried, as Michael managed to control several very chatty people (myself very much included). He managed to engage people but then stop them from going on too much. I wish there was somewhere for the people who attended this session to catch up and share knowledge afterwards, but… oh well.

Next I went to a talk on the Java Open Street Map editor, JOSM. It was very much a show-and-tell “This is how I use the tool”, but I struggled to follow it, and, sadly left early.

LATE EDIT 2019-11-04: Stuart contacted me on Twitter to apologise for making his talk hard to follow. I wanted to add some extra notes. The problem I had was not with Stuart’s talk per se, but more that I couldn’t focus on the subject, and wasn’t sure if I wasn’t in the right head-space for the talk or perhaps I was just hungry. I wanted to become more involved in Open Street Map, and thought I could get a better idea on how to contribute from this talk, but as I said, I wasn’t tracking the content. I walked out more to clear my head than because I didn’t enjoy the talk.

I realised I was getting hungry, so went to Subway for my lunch, and came back refreshed in time to give my second talk.

A screen shot from the talk “Here’s how you win: Secure Scuttlebutt”

This talk was on Secure Scuttlebutt (SSB), a decentralised social media platform. There were about 20 people in the audience, and I had some very sensible questions about the project. At the end of the talk, I’d encouraged three people to give it a try, two of whom fell at the first hurdle, and the third persisted in the bar at the end of the day, and has since connected with me on there. Woohoo!

The talk was a stark contrast to the talk I felt I’d not done justice to the day before, and I felt like I’d really nailed this talk. I’m still exceptionally grateful to Ben who’d pointed me in the right direction for the talk layout the night before.

At the end of my talk, I wandered around a bit – I wasn’t really sure what I wanted to see next, so instead I caught up with friends who also weren’t in talks. I bumped into Rachel, and recorded a quick promo for her speaking career and then saw some friends start a Dungeons and Dragons (D&D) game up in the exhibition area!

The first talk at OggCamp about a technology I’d not seen the likes of before.

I made my way to Roger’s talk about Stream Sheets, an Internet Of Things (IoT) connected tool like Google Sheets. It can read content from MQTT, REST APIs and other similar data sources, tweak and convert them, and then publish them back again. All very interesting, although I’m unlikely to use it somewhere any time soon! I was glad though to popularise it with colleagues when I got back to work on Monday!

My last talk attended of the day – Jamie Tanner

Jamie had talked at OggCamp ’18, and I was very glad to see him back at OggCamp this year – particularly on the main stage!

His talk was about self hosting and the Indie Web movement. He talked about why he self hosts, and what sort of content he “owns” when he can (spoiler: all of it!) He not only stores bookmarks in a public blog, but his Google Fit step counter results, his RSVPs to events and … yes, even blog posts. He talked about why he felt that you too should be part of the Indie Web.

After Jamie’s talk was the annual rafflecast. A laptop was given away, but not to me (boo!), and then I went to record the Admin Admin Podcast.

From left to right, Jerry, Gary, Al, and then Me (with my red hat from Red Hat). Out of shot is Mr Joe Ressington, who let us use his recording gear. Because he’s lovely.

On the way to Joe’s hotel (where we did this recording), I got us a bit lost, and ended up walking us clear across to the Northern Quarter of Manchester. We then had to walk back to just near Piccadilly station, where his hotel was! Oops. The show has since been released, if you want to hear us talking about OggCamp, and guest host Gary.

We went to the Lass O’Gowry pub for a drink before I had to catch my rail replacement bus home, and catch up on some sleep!

And that was OggCamp ’19. The featured image is of the OggCamp Grid on Saturday.

OggCamp are looking for someone to take over the organising in 2020 (supported by past organisers, like me!) so if you’re interested, please get in touch!

A screen shot of the LHS Podcast Website at 2019-10-16

#Podcast Summary – “LHS Episode 307: #Ansible Deep Dive”

Recently, I was a guest of the Linux In The Ham Shack Podcast (LHS), talking about Ansible.

The episode I recorded was: LHS Episode #307: Ansible Deep Dive

About a year ago, Bill (NE4RD) and Russ (K5TUX) were talking about Ansible, and I spotted a couple of… maybe misunderstandings, maybe mistakes, or perhaps just where they misspoke about it? Whatever it was at the time, I offered to talk about Ansible, and then scheduling became a major issue. We tried to meet up online a few times over the year and this was the first opportunity I had to actually talk to them.

In this conversation, I talk about what Ansible is and how it works. I go into quite a bit of depth on how you would install packages, make file changes, and then explain how to use, obtain and create Ansible Roles. I also go into Handlers, local and remote Inventories, Ansible Tower and AWX.
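
This isn’t from the show notes – it’s just a hypothetical playbook to give a flavour of the sort of thing we walked through: install a package, make a file change, and use a handler to restart the affected service.

- hosts: webservers
  become: true
  tasks:
  - name: Install nginx
    apt:
      name: nginx
      state: present

  - name: Turn off server_tokens
    lineinfile:
      path: /etc/nginx/nginx.conf
      regexp: 'server_tokens'
      line: "        server_tokens off;"
    notify: Restart nginx

  handlers:
  - name: Restart nginx
    service:
      name: nginx
      state: restarted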

When I was talking about Ansible playbooks and tasks, particularly towards the beginning of the podcast, I was looking at the code samples I put together for the Admin Admin Podcast Deep Dive into Ansible. The custom OpenStack modules I referenced in the show were written by my friend and colleague, Nick Cross.

I get a bit of stick for my pronunciation of Inventories, which is … fun :) As usual, I “Um” and “Ah” quite a bit. It becomes exceptionally apparent how much work Dave Lee does for me in my usual Admin Admin Podcast!

nobodys perfect nbc GIF by The Good Place from Giphy

Talk Summary – FDE Conference “Automation in an Infrastructure as Code World”

Format: Theatre Style room. ~70 attendees.

Slides: Available to view (Firefox/Chrome recommended – press “S” to see the required speaker notes). Code referenced in the slides is also available to view.

Video: Not on the day, but I recorded a take of it at home after the event. The delivery on the day was better, but the content is there at least! :)

Slot: Slot 2 Wednesday 14:15-15:00

Notes: FDE is the abbreviation of “Fujitsu Distinguished Engineer”, an internal program at Fujitsu. Each year they hold a conference for all the FDEs to attend. This is my second year as an FDE, and the first where I’m presenting.

This slide deck was massively re-worked, following some excellent feedback at BCMcr9. I then, unusually for me, gave the deck two separate run through sessions with colleagues, and tweaked it following each run.

This deck includes Creative Commons licensed images (which is fairly common for my slide decks), but also, in a new and unusual step for me, includes meme GIFs from Giphy. I’m not really sure whether this is a step forward or back for me, as I do prefer permissive licenses. That said, the memes seem to be more engaging – particularly as they’re animated. I’d never had someone comment on the images in my slide deck until I did the first run-through, with the memes in, with a colleague, and then again when I ran it a second time they particularly brought up the animated images… so the memes are staying for now.

I’m also slightly disappointed with myself that I couldn’t stick to the “One Bold Word” style of presentations (the format preferred by Jono Bacon), and found myself littering more and more content into the screen. I was, however, proud of myself for including the “Tweetable content” slide, as recommended, I think, by Lorna Mitchell (@LornaJane). I also included a “Your next steps” slide, as recommended by Andy Bounds (although I suspect he’d be disappointed with the “Questions?” slide at the end!)

This deck required quite a bit of research on my part. I’d never written CloudFormation (CF) templates before, and I’d only really copied-and-pasted Terraform (I refer to it as TF, which probably isn’t right) before. I wrote a full stack of machines in CF and Azure Resource Manager (ARM) for the native technologies, as well as the same stacks in both TF and Ansible, for both Azure and AWS. I also looked into how to deploy the CF and ARM templates with both Terraform and Ansible, and finally how to use TF from Ansible. I already knew how to run Ansible from within userdata/customdata arguments in AWS and Azure, but I included it and tested it as part of the deck too.

I had some amazing feedback from the audience and some great questions asked of me. I loved the response from the audience to some of my GIFs (although one comment that was made was that I need to stop the animations after the first run!)

Following the session, as I’d hoped, a few of the fellow attendees came forward to ask if we can talk further about the subject, and I would encourage you, if you are someone who uses these tools, to give me a shout – I want to do more, and to find out about your projects, processes and tools!

My intention is to start using this slide deck at meet-ups in the Greater Manchester area, hopefully without having to re-write it that much!

BarCamp Manchester Logo, from barcampmanchester.co.uk

My talk summaries from BarCamp Manchester 9

BarCamp Manchester 9 (#BcMcr9) is a BarCamp-style Unconference. It was held in the offices of Auto Trader in the centre of Manchester. It was a two-day event; however, I was unable to attend the Saturday. Sundays are usually quieter days, and apparently Sunday’s numbers were approximately half of Saturday’s peak.

Lunch was provided by Auto Trader. The day was split into 7 slots, or sessions, running for 25 minutes each, with 5 minutes between slots to change rooms. There were three theatre layout rooms, each with a projector, and one room with soft chairs around the edges.

There and Back Again/How The Internet Works

Format: Presentation with slides. 30ish attendees.

Slot: Slot 1 Sunday 11:00-11:25

Notes: This slide deck was reused from when I delivered it in 2012. Some stuff had changed (the prevalence of WiFi being one, CAT5e being referenced raised some giggles), but most had not.

There were some comments raised during the talk about the slides, but nothing significant (mostly by network engineers, commenting on things like routing a local network. Ugh.)

Following the talk, someone came up to suggest some changes (primarily that the slides need to link back to the graphics created). Someone else noted that there were too many acronyms that should probably have been explained. As such, this deck is likely to change and be published here at some point soon.

I sent a tweet, following this talk:

At #BCMCR9? See my talk on “How The Internet Works” and looking for the slides? See here: https://www.slideshare.net/JonTheNiceGuy/there-and-back-again-15506394 And feel free to message me if you’ve got any questions!

Jon Spriggs (@jontheniceguy) @ Sep 22 12:03pm

Automation in an Infrastructure as Code World

Format: Presentation with slides. 8 attendees, reduced to 4 half way through.

Slot: Slots 5 and 6 Sunday 14:00-14:55

Notes: This was a trial run of my talk for the Fujitsu FDE Conference I’m attending in a couple of weeks. The audience were notified as such. I took two slots on the “grid”, and half way through my session, half the audience walked out.

Following the talk, someone came and suggested some changes, which I’ll be implementing.

The slides for this talk are still being developed and will be shared after the FDE conference on this site.

Decentralised Social Media? – Secure Scuttlebutt

Format: Conversation with a desktop client application (Patchwork) loaded on the projector, and the Google Play entry for Manyverse on a browser tab. 3 attendees.

Slot: Slot 7 (last slot of the day) Sunday 15:00-15:25

Notes: This was an unplanned session, and probably should have been run earlier in the day. The audience members were very interactive, and asked lots of sensible questions.

I sent a tweet, following this talk:

Did you come to my talk at #BcMcr9 about #SecureScuttleButt? If you run a SSB client (patchwork, patchbay, patchfox or manyverse) and want to follow me, I’m @p3gu8eLHxXC0cuvZ0yXSC05ZROB4X7dpxGCEydIHZ0o=.ed25519 and @3SEA7qNZQPiYFCzY6K57f0LTc9l+Bk6cewQc6lbs/Ek=.ed25519

Jon Spriggs (@jontheniceguy) @ Sep 22 4:46pm

And if you want to know more about #SecureScuttlebutt, take a look at http://Scuttlebutt.nz! It’s fun!

Jon Spriggs (@jontheniceguy) @ Sep 22 4:49pm
Fujitsu AWS Game Day Attendees

AWS Game Day

I was invited, through work, to participate in an AWS tradition – the AWS Game Day. This event was organised by my employer for our internal staff to experience a day in the life of a fully deployed AWS environment… and have some fun with it too. The AWS Game Day is a common scenario, and if you’re lucky enough to join one, you’ll probably be doing this one… As such, there will be… #NoSpoilers.

A Game Day (sometimes disambiguated as an “Adversarial Game Day”, because of sporting events) is a day where you either have a dummy environment, or, if you have the scale, a portion of your live network is removed from live service and used as a training ground. In this case, AWS provided a specific dummy environment “Unicorn.Rentals”, and all the attendees are the new recruits to the DevOps Team… Oh, and all the previous DevOps team members had just left the company… all at once.

Attendees were split into teams of four, and each team had members from disparate backgrounds.

We were given access to:

  • Our login panel. This gives us our score, our trending increase or decrease in score over the last “period” (I think it was 5 minutes), our access to the AWS console, and a panel to update the CNAME for the DNS records.
  • AWS Console. This is a mostly unrestricted account in AWS. There are some things we don’t get access to – for example, we didn’t get the CloudFormation Template for setting up the game day, and we couldn’t make changes to the IAM environment at all. Oh, and what was particularly frustrating was not being able to … Oh yes, I forgot, #NoSpoilers ;)
  • A central scoreboard of all the teams
  • A running tally of how we were scored
    • Each web request served under X seconds received one score
    • Each request served between X and Y seconds received another score,
    • Each request served over Y seconds received a third score.
    • Failing to respond to a request received a negative score.
    • Infrastructure costs deducted points from the score (to stop you just putting stuff at ALL THE SERVERS, ALL THE TIME).
  • The outgoing DevOps team’s “runbook”. Not too dissimilar to the sort of documentation you write before you go on leave. “If this thing break, run this or just reboot the box”, “You might see this fail with something like this message if the server can’t keep up with the load”. Enough to give you a pointer on where to look, not quite enough to give you the answer :)

The environment we were working on was, well, relatively simple. An auto-scaling web service, running a simple binary on an EC2 instance behind a load balancer. We extended the reach of services we could use (#NoSpoilers!) to give us greater up-time, improved responsiveness and broader scope of access. We were also able to monitor … um, things :) and change the way we viewed the application.

I don’t want to give too many details, because it will spoil the surprises, but I will say that we learned a lot about the services in AWS we had access to, which wasn’t the full product set (just “basic” AWS IaaS tooling).

When the event finished, everyone I spoke to agreed that having a game day is a really good idea! One person said “You only really learn something when you fix it! This is like being called out, without the actual impact to a customer” and another said “I’ve done more with AWS in this day than I have the past couple of months since I’ve been looking at it.”

And, as you can probably tell, I agree! I’d love to see more games days like this! I can see how running something like this, on technology you use in your customer estate, can be unbelievably powerful – especially if you’ve got a mildly nefarious GM running some background processes to break things (#NoSpoilers). If you can make it time-sensitive too (“you’ve got one day to restore service”, or like in this case, “every minute we’re not selling product, we’re losing points”), then that makes it feel like you’ve been called out, but without the stress of feeling like you’re actually going to lose your job at the end of the day (not that I’ve ever actually felt like that when I’ve been called out!!)

Anyway, massive kudos to our AWS SE team for delivering the training, and a huge cheer of support to Sara for getting the event organised. I look forward to getting invited to a new scenario sometime soon! ;)

Here are some pictures from the event!

The teams get to know each other, and we find out about the day ahead! Picture by @Fujitsu_FDE.
Our team, becoming a team by changing the table layout! It made a difference, we went to the top of the leader board for at least 5 minutes! Picture by @Fujitsu_FDE.
The final scores. Picture by @Fujitsu_FDE
Our lucky attendees got to win some of these items! Picture by @Fujitsu_FDE
“Well Done” (ha, yehr, right!) to the winning team (“FIX!”) “UnicornsRUs”. Picture by @Fujitsu_FDE.

The featured image is “AWS Game Day Attendees” by @Fujitsu_FDE.