"presentation structure" by "Sean MacEntee" on Flickr

One to read: “The Art of Slide Design”

This is a little different from my usual posts, but I heard about this from the User Error podcast this morning. In 2018 Melinda Seckington gave a talk at DevRelCon Tokyo which she then reposted in full detail on her blog. This set of posts is well worth a read, particularly if you’re someone who enjoys writing and delivering presentations, or if it’s part of your job.

While I don’t follow her advice to the letter, I can see a lot of benefit in the way she suggests building your decks.

It’s worth mentioning that if you follow the links between the blog posts, post 4 of 5 links to the wrong page for post 5 of 5 – but there is a “next post” button at the bottom of the page… or just follow the links from this page :)

Featured image is “presentation structure” by “Sean MacEntee” on Flickr and is released under a CC-BY license.

"Captain" by "The Laddie" on Flickr

Trying out Kubernetes (K8S) with MicroK8S in Vagrant

I’m going on a bit of a containers kick at the moment, and just recently I wanted to give Kubernetes (sometimes abbreviated to “K8S”) a try.

Kubernetes is an orchestration engine for containers, such as those Docker produces. It’s designed to take the images that Docker (and other similar tools) produce, and run them across multiple nodes. You need to have a handle on how Docker works before giving K8S a try, but once you do, it’s well worth spending the time to understand K8S.

Unlike Docker, K8S is a bit more demanding in its requirements. People are often pointed at Minikube as their introduction to K8S; however, my colleague and friend Nick suggested I might be better off with MicroK8S.

MicroK8S is an application released by Canonical as a Snap. A Snap is a Linux packaging format, similar to FlatPak and AppImage. It’s mostly used on Ubuntu based operating systems, but can also work on other Linux distributions.

I had an initial, failed, punt with the recommended advice for using MicroK8S on Windows (short story: Hyper-V did not work for me, and the VirtualBox back-end doesn’t expose any network ports, or at least, if it does, I couldn’t see how to make it work), and as I’m reasonably confident using Vagrant on Windows, I built a Vagrantfile to deliver MicroK8S.

To use this, you need Vagrant and VirtualBox, and then get the Vagrantfile from the repo… then run vagrant up (it will ask which interface you want to “bridge” to – this is how you’ll access the Kubernetes pods and Docker containers). Once the machine has finished building, you can run vagrant ssh to connect to it. From there, you can run your kubectl commands, as well as docker commands.
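For reference, a first session looks something like this – note that the exact kubectl invocation depends on the MicroK8S release you end up with (older snaps expose the microk8s.kubectl form, newer ones accept microk8s kubectl):

vagrant up
vagrant ssh
# inside the VM – MicroK8S bundles its own kubectl:
microk8s.kubectl get nodes
microk8s.kubectl get pods --all-namespaces
# and plain Docker commands work too:
docker ps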

If you want to experiment with a multi-node environment, then I also built a Vagrantfile to deliver two virtual machines, both running MicroK8S, and used the shared storage element of Vagrant to transfer the “join” instruction from the first node to the second.
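Under the hood, that “join” dance is just two MicroK8S commands – something like the sketch below. The command names and output format vary between releases (older snaps use the microk8s.add-node form), and the address and token here are made-up placeholders:

# on the first node – prints a join command containing a one-time token
microk8s.add-node

# on the second node – paste the join command the first node printed, e.g.:
microk8s.join 172.16.0.10:25000/<token>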

Of course, now I just need to work out how the hell I do Kubernetes 🤣

Featured image is “Captain” by “The Laddie” on Flickr and is released under a CC-BY-ND license.

apt update && apt full-upgrade -y && apt autoremove -y && apt autoclean -y

Apt Updates with Ansible

I’ve got a small Ansible script that I bundle up on Ubuntu boxes to do apt updates. This was originally a one-statement job, but I’ve added a few lines to it, so I thought I’d explain what I’m doing (more for myself, for later!)

Initially, I just had a task to do apt: upgrade=full update_cache=yes autoremove=yes autoclean=yes but if you’re running the script over and over again, well, this gets slow… So I added a tweak!

Here it is folks, in all its glory!

- hosts: all
  tasks:
  - name: Get stat of last run apt
    stat:
      path: /var/cache/apt/pkgcache.bin
    register: apt_run

  - name: "Apt update, Full-upgrade, autoremove, autoclean check"
    debug:
      msg: "Skipping apt-update, etc. actions as apt update was run today"
    when: "'%Y-%m-%d' | strftime(apt_run.stat.mtime) in ansible_date_time.date"

  - name: "Apt update, Full-upgrade, autoremove, autoclean"
    apt:
      upgrade: full
      update_cache: yes
      autoremove: yes
      autoclean: yes
    when: "'%Y-%m-%d' | strftime(apt_run.stat.mtime) not in ansible_date_time.date"

What does this do? Well, according to this AskUbuntu post, the best file to check whether an update has been performed is /var/cache/apt/pkgcache.bin, so we check the status of that file. Most file systems available to Linux distributions provide the mtime – or “last modified time” – of a file. This is returned as the number of seconds since 00:00:00 UTC on the Unix Epoch (1970-01-01), so we need to convert it to a date, which we format as YYYY-MM-DD (e.g. today is 2020-01-06), and then compare that to what the system thinks today is. If the dates don’t match (in other words, apt update wasn’t run today), it runs the update. If the dates do match, we get a statement saying that apt update was already run.
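If you want to try it yourself, save the playbook as something like apt-update.yml (the filename is up to you) and point -i at your own inventory:

ansible-playbook -i /path/to/inventory apt-update.yml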

Fun times!

"Shipping Containers" by "asgw" on Flickr

Creating my first Docker containerized LEMP (Linux, nginx, MariaDB, PHP) application

Want to see what I built without reading the why’s and wherefore’s? The git repository with all the docker-compose goodness is here!

Late edit 2020-01-16: The fantastic Jerry Steel, my co-host on The Admin Admin podcast looked at what I wrote, and made a few suggestions. I’ve updated the code in the git repo, and I’ll try to annotate below when I’ve changed something. If I miss it, it’s right in the Git repo!

One of the challenges I set myself this Christmas was to learn enough about Docker to containerise an arbitrary PHP application – one that I would previously have misused Vagrant to contain.

Just before I started down this rabbit hole, I spoke to my Aunt about some family tree research my father had left behind after he died, and how I wished I could easily share the old tree with her (I organised getting her a Chromebook a couple of years ago, after fighting with doing remote support for years on Linux and Windows laptops). In the end, I found a web application for genealogical research called HuMo-gen, that is a perfect match for both projects I wanted to look at.

HuMo-gen was first created in 1999, with a PHP version being released in 2005. It used MySQL or MariaDB as the Database engine. I was reasonably confident that I could have created a Vagrantfile to deliver this on my home server, but I wanted to try something new. I wanted to use the “standard” building blocks of Docker and Docker-Compose, and some common containers to make my way around learning Docker.

I started by looking for some resources on how to build a Docker container. Much of the guidance I’d found was to use Docker-Compose, as this allows you to stand several components up at the same time!

In contrast to how Vagrant works (which is basically a CLI wrapper around several virtual machine services), Docker isolates resources for a single process that runs on a machine. Where in Vagrant you might run several processes on one machine (in this instance, perhaps nginx, PHP-FPM and MariaDB), with Docker you’re encouraged to run each “service” as its own container, and link them together with an overlay network. It’s possible to do the same with Vagrant, but you’ll end up with an awful lot of VM overhead to separate out each piece.

So, I first needed to select my services. My initial line-up was:

  • MariaDB
  • PHP-FPM
  • Apache’s httpd2 (replaced by nginx)

I was able to find official Docker images for PHP, MariaDB and httpd, but after extensive tweaking, I couldn’t make the httpd image talk the way I wanted it to with the PHP image. Bowing to what now seems to be conventional wisdom, I swapped out the httpd service for nginx.

One of the stumbling blocks for me, particularly early on, was how to build several different Dockerfiles (these are basically the instructions for the container you’re constructing). Here is the basic outline of how to do this:

version: '3'
services:
  yourservice:
    build:
      context: .
      dockerfile: relative/path/to/Dockerfile

In this docker-compose.yml file, I tell it that to create the yourservice service, it needs to build the docker container, using the file in ./relative/path/to/Dockerfile. This file in turn contains an instruction to import an image.

Each service stacks on top of each other in that docker-compose.yml file, like this:

version: '3'
services:
  service1:
    build:
      context: .
      dockerfile: service1/Dockerfile
    image: localhost:32000/service1
  service2:
    build:
      context: .
      dockerfile: service2/Dockerfile
    image: localhost:32000/service2

Late edit 2020-01-16: This previously listed Dockerfile/service1, however, much of the documentation suggested that Docker gets quite opinionated about the file being called Dockerfile. While docker-compose can work around this, it’s better to stick to tradition :) The docker-compose.yml files below have also been adjusted accordingly. I’ve also added an image: somehost:1234/image_name line to help with tagging the images for later use. It’s not critical to what’s going on here, but I found it useful with some later projects.

To allow containers to see ports between themselves, you add the expose: command to your docker-compose.yml, and to make a port visible from the “outside” (i.e. to the host and upwards), use the ports: command, listing the “host” port (the one on the host OS), then a colon, and then the “target” port (the one in the container), like this:

version: '3'
services:
  service1:
    build:
      context: .
      dockerfile: service1/Dockerfile
    image: localhost:32000/service1
    expose:
    - 12345
  service2:
    build:
      context: .
      dockerfile: service2/Dockerfile
    image: localhost:32000/service2
    ports:
    - 8000:80

Now, let’s take a quick look into the Dockerfiles. Each “statement” in a Dockerfile adds a new “layer” to the image. For local operations, this probably isn’t a problem, but when you’re storing these images on a hosted provider, you want to keep these images as small as possible.
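As an aside, if you do end up with lots of RUN lines, you can chain the commands into one statement to keep the layer count (and usually the size) down. A quick sketch – the base image and package here are just examples:

FROM debian:buster-slim
# one RUN = one layer; chaining the install and the clean-up into the same
# statement stops the apt cache being baked into an earlier layer
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*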

I built a Database Dockerfile, which is about as small as you can make it!

FROM mariadb:10.4.10

Yep, one line. How cool is that? In the docker-compose.yml file, I invoke this, like this:

version: '3'
services:
  db:
    build:
      context: .
      dockerfile: mariadb/Dockerfile
    image: localhost:32000/db
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: a_root_pw
      MYSQL_USER: a_user
      MYSQL_PASSWORD: a_password
      MYSQL_DATABASE: a_db
    expose:
      - 3306

OK, so this one is a bit more complex! I wanted it to build my Dockerfile, which is “mariadb/Dockerfile“. I wanted it to restart the container whenever it failed (which hopefully isn’t that often!), and I wanted to inject some specific environment variables into the container – the root and user passwords, a user account and a database name. Initially I was having some issues where it wasn’t building the database with these credentials, but I think that’s because I wasn’t “building” the new database, I was just using it. I also expose the MariaDB (MySQL) port, 3306, to the other containers in the docker-compose.yml file.
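If you want to check the credentials are doing what you expect, you can bring just this service up and poke at it with the client bundled in the image – the user, password and database names here match the docker-compose.yml above:

docker-compose up -d db
docker-compose exec db mysql -u a_user -pa_password a_db -e 'show tables;'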

Let’s take a look at the next part! PHP-FPM. Here’s the Dockerfile:

FROM php:7.4-fpm
RUN docker-php-ext-install pdo pdo_mysql
ADD --chown=www-data:www-data public /var/www/html

There’s a bit more to this, but not loads. We build our image from a named version of PHP, and install two extensions to PHP, pdo and pdo_mysql. Lastly, we copy the content of the “public” directory into the /var/www/html path, and make sure it “belongs” to the right user (www-data).

I’d previously tried to do a lot more complicated things with this Dockerfile, but it wasn’t working, so instead I slimmed it right down to just this, and the docker-compose.yml is a lot simpler too.

  phpfpm:
    build:
      context: .
      dockerfile: phpfpm/Dockerfile
    image: localhost:32000/phpfpm

See! Loads simpler! Now we need the complicated bit! :) This is the Dockerfile for nginx.

FROM nginx:1.17.7
COPY nginx/default.conf /etc/nginx/conf.d/default.conf

COPY public /var/www/html

Weirdly, even though I’ve added version numbers for MariaDB and PHP, I’d not done the same for nginx – perhaps I should! Late edit 2020-01-16: I’ve put a version number on there now; previously, where it says nginx:1.17.7, it actually said nginx:latest.

I’d originally created the configuration block for nginx in a single “RUN” line. Late edit 2020-01-16: following Jerry’s advice, this Dockerfile no longer has a giant echo 'stuff' > file block, and I’m using COPY instead of ADD too. I’ll show that config file below. There are a couple of high points for me here!

server {
  index index.php index.html;
  server_name _;
  error_log /proc/self/fd/2;
  access_log /proc/self/fd/1;
  root /var/www/html;
  location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass phpfpm:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
  }
}
  • server_name _; means “use this block for all unnamed requests”.
  • access_log /proc/self/fd/1; and error_log /proc/self/fd/2; – these are links to the “stdout” and “stderr” file descriptors (or pointers to other parts of the filesystem), which basically means that when you do docker-compose logs, you’ll see the HTTP logs for the server! These two files are guaranteed to be there, while /dev/stderr isn’t!
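So, while the stack is up, a quick way to watch the web server’s access and error logs (nginx being the service name from the docker-compose.yml below) is just:

docker-compose logs -f nginx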

Because nginx is “just” serving the web content, and I know the content doesn’t need to be written to from nginx, I knew I didn’t need to do the chown action, like I did with the PHP-FPM block.

Lastly, I need to configure the docker-compose.yml file for nginx:

  nginx:
    build:
      context: .
      dockerfile: nginx/Dockerfile
    image: localhost:32000/nginx
    ports:
      - 127.0.0.1:1980:80

I’ve gone for a slightly unusual ports configuration when I deployed this to my web server… you see, I already have the HTTP port (TCP/80) configured for use on my home server – for running the rest of my web services. During development, on my home machine, the ports line instead just showed “1980:80”. On the web server, I’m running this application bound to “localhost” (127.0.0.1) on a different port number (1980 selected because it could, conceivably, be a birthday of someone on this system), and then in my local web server configuration I’m proxying connections to this service, with HTTPS encryption as well. That’s mostly outside the scope of this article (and I probably should be using something like Traefik anyway), but it shows how you could bind to a separate port too.
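For completeness, the proxy stanza on the host looks roughly like this – the server name and certificate paths are just placeholders, and the TLS details will depend on how you issue your certificates:

server {
  listen 443 ssl;
  server_name humogen.example.com;  # placeholder hostname
  ssl_certificate     /etc/ssl/certs/humogen.example.com.pem;   # placeholder paths
  ssl_certificate_key /etc/ssl/private/humogen.example.com.key;
  location / {
    # hand everything to the container bound on localhost:1980
    proxy_pass http://127.0.0.1:1980;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}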

Anyway, that was my Docker journey over Christmas, and I look forward to using it more, going forward!

Featured image is “Shipping Containers” by “asgw” on Flickr and is released under a CC-BY license.

"Mesh Facade" by "Pedro Ângelo" on Flickr

Looking at the Nebula Overlay Meshed VPN Network from Slack

Around 2-3 years ago, Slack – the company that produces Slack, the IM client – started working on a meshed overlay network product, called Nebula, to manage their environment. After two years of running their production network on the back of it, they decided to open source it. I found out about Nebula via a Medium post that was mentioned in the HangOps Slack group. I got interested in it, asked a few questions about Nebula in the Slack, then in the GitHub issues for it, and recently raised a pull request to add more complete documentation than their single (heavily) commented config file.

So, let’s go into some details on why this is interesting to me.

1. Nebula uses a flat IPv4 network to identify all hosts in the network, no matter where in the network those hosts reside.

This means that I can address any host in my (self-allocated) 198.18.0.0/16 network, and I don’t need to worry about routing tables, production/DR sites, network tromboning and so on… it’s just… flat.

[Note: Yes, that IP address “looks” like a public subnet – but it’s actually a testing network, allocated by IANA for network testing!]

2. Nebula has host-based firewalling built into the configuration file.

This means that once I know how my network should operate (yes, I know that’s a big ask), I can restrict my servers from being able to reach my laptops, or I can stop my web server from being able to talk to my database server, except for on the database ports. Lateral movement becomes a LOT harder.

This firewalling also looks a lot like (Network) Security Groups (for the AWS and Azure familiar), so you have a default “Deny” rule, and then layer “Allow” rules on top. You also have Inbound and Outbound rules, so if you want to stop your laptops from talking anything but DNS, SSH, HTTPS and ICMP over Nebula…. well, yep, you can do that :)
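To give a flavour of it, the firewall section of each node’s config file looks something like this – the ports and group names below are just illustrative, and the full set of options is best checked against the commented example config in the Nebula repo:

firewall:
  outbound:
    # let everything out by default
    - port: any
      proto: any
      host: any
  inbound:
    # ICMP from anywhere in the overlay
    - port: any
      proto: icmp
      host: any
    # SSH, but only from nodes whose certificate carries the "admin" group
    - port: 22
      proto: tcp
      groups:
        - admin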

3. Nebula uses a PKI environment, where you can have multiple Certificate Authorities.

This means that I have a central server running my certificate authority (CA), with a “backup” CA stored offline – in case of dire disaster with my primary CA – and I push both CAs to all my nodes. If I suddenly need to replace all the certificates that my current CA signed, I can do that with minimal interruption to my nodes. Good stuff.
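Setting up the CA itself is a single command too – the name is whatever you want to call it:

# writes ca.key (keep this offline!) and ca.crt (distribute to every node)
nebula-cert ca -name "My Home Network CA"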

Nebula also created their own PKI attributes to identify the roles of each device in the Nebula environment. Because these are signed as part of each node’s certificate, your CA asserts that the roles a certificate holds are valid for that node in the network.

Creating a node’s certificate is a simple command:

nebula-cert sign -name jon-laptop -ip 198.18.0.1/16 -groups admin,laptop,support

This certificate has the IP address of the node baked in (it’s 198.18.0.1), as well as the groups it’s part of (admin, laptop and support) and the host name (jon-laptop). I can use any of these three items in the firewall rules I mentioned above.

4. It creates a peer-to-peer, meshed VPN.

While it’s possible to create a peer-to-peer meshed VPN with commercial products, I’ve not seen any which are as light-weight to deploy as this. Each node finds all the other nodes in the network by using a collection of “Lighthouses” (similar to Torrent Seeds or Skype Super Nodes), which tell the connecting nodes where all the other machines in the network are located. These then initiate UDP connections to the other nodes they want to talk to. If they are struggling (because of NAT or double NAT), then there’s a NAT punching process (called, humorously, “punchy”) which lets you signal via the Lighthouse that you’re trying to reach another node that can’t see your connection, and asks it to also connect out to you over UDP… thereby fixing the connection issue. All good.
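The node side of that is only a few lines of config – something like the sketch below, where 198.18.0.10 and lighthouse.example.org are made-up placeholders for my lighthouse’s Nebula IP and its public address. Note that the punchy setting has changed shape between releases, so check the sample config for the version you’re running:

static_host_map:
  # Nebula IP of the lighthouse -> its real-world address(es)
  "198.18.0.10": ["lighthouse.example.org:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "198.18.0.10"

# ask the lighthouse for help punching through NAT
punchy: true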

5. Nebula has clients for Windows, Mac and Linux. Apparently there are clients for iOS in the works (meh, I’m not on Apple… but I know some are) and I’ve heard nothing about Android as yet, but as it’s on Linux, I’m sure some enterprising soul can take a look at it (the client is written in Go).

If you want to have a look at Nebula for your own testing, I’ve created a Terraform based environment on AWS and Azure to show how you’d manage it all using Ansible Tower, which builds:

  • 2 VPCs (AWS) and 1 VNet (Azure)
  • 6 subnets (3 public, 3 private)
  • 1 public AWX (the upstream project of Ansible Tower) server
  • 1 private Nebula Certificate Authority
  • 2 public web servers (one in AWS, one in Azure)
  • 2 private database servers (one in AWS, one in Azure)
  • 2 public bastion servers (one in AWS, one in Azure) – these let AWX reach into the private sections of the network, without exposing SSH from all the hosts.

If you don’t want to provision the Azure side, just remove load_web2_module.tf from the Terraform directory in that repo… Job’s a good’n!

I have plans to look at a couple of variations, like Nebula’s closest rival, ZeroTier, and at using SaltStack instead of Ansible, to reduce the need for the extra bastion servers.

Featured image is “Mesh Facade” by “Pedro Ângelo” on Flickr and is released under a CC-BY-SA license.

"Infinite Mirror" by "Luca Sartoni" on Flickr

My Blogging Workflow

Hello bold traveller! I recently got into a discussion with someone about how I put content into my blog and I realised that people don’t write this stuff down enough!

At the end of this post, I’d expect you to be able to use this information to be able to create your own, similar stack. If you don’t have the technical knowledge to be able to do so, you should at least have the pieces of information to ask someone else to help you build it. While I’d be flattered if you asked me for similar advice, I’m afraid I have limited time for new projects, even consultancy ones, and all I can do is point you back to this blog!

That said, consider this a bit of the “making the sausage” that most people don’t tend to share! 😀

The OggCamp '19 grid on Saturday

#OggCamp ’19 – A review and Talk Summary

Firstly, an apology! It’s more than a week after OggCamp. I’m quite aware that this is very very late for me!

About OggCamp for those who weren’t there!

OggCamp is an annual, semi-scheduled Unconference. An Unconference (sometimes known as a “BarCamp”) is one where, when you arrive on the first day, the schedule (also known as the “Grid”) is blank, with a stack of post-it notes next to it. You’re encouraged to put talks on the grid, and to keep checking it to see what’s up next.

OggCamp is a conference which encourages people to talk about Free Culture (Free and Open Source Software, Open Hardware, Creative Commons content) and other permissively licensed works. It’s also a “Geeky” conference, so games often appear and hardware makers are encouraged to attend, and this year the event also contained “FlawCon”, a security conference, which meant a higher-than-usual proportion of infosec people were there too!

OggCamp was started by podcasters in 2009, and so there’s usually at least one or two podcasts being recorded. This year, there was a panel session, Linux Outlaws “rode for one last time”, Hacker Public Radio (HPR) were out and about to talk to people at the event, and the podcast I co-host, The Admin Admin Podcast, found a quiet spot to record a show too. Sadly, with the exception of my own podcast recording, I didn’t make it to any of the other recordings I mentioned, as I was attending talks by other people at those times.

Differences, for me, from previous years

Since OggCamp ’10, I’ve either not been at the event (in the years each of my children were born), or I’ve been running the talk scheduling software, CampFireManager, crewing, or organising the event. This was the first year since the very first OggCamp that I managed to see talks all day, so that was a big change for me.

This year, Lorna organised the grid, from right in front of it. Except for the welcome and closing talks, I don’t think she left the grid for the entire day both days. In previous years, when we weren’t using CampFireManager, the grid was left unattended, with an occasional drive-by crew member transferring the grid to Joind.In. Talking of which, here’s the Joind.In view of Saturday…

Saturday

A screen shot of the grid from Saturday. Talks marked with a * are talks I attended.

I went to the “Opening Talk” first. This is your usual “Here’s how to get on the Wi-fi, here’s how to participate, here’s the sort of things we want from you” talk, and was run by Dan and Lorna.

Next up, I saw Terrence and Elizabeth Eden talking about OpenBenches.org.

OpenBenches is a project that records what is on the plaques on benches that people arrange for their relatives, sometimes when they die. I’ve been aware of this project for some time, but never contributed. Until now I thought you had to manually type in what was on each plaque (and I think, at the beginning you had to), but NO, they’re now doing Optical Character Recognition (OCR) to copy the text out of the photos.

The talk discussed the statistics of the project, the technology stack and why the project was started. It was just lovely and really well delivered.

Next I went to see Jeroen talking about Self Publishing.

Jeroen first attended OggCamp last year, giving a talk about Mainframes. This year he was back, talking about running a project with a very small community. Before he got to that though, he wanted to talk about self publishing. He endorsed Lulu for paper printing, AsciiDoc and AsciiDoctor to produce the content (PanDoc to convert between formats, if you started with something that isn’t AsciiDoc(tor)) and then Inkscape to create the cover. I asked him if he would suggest anything for eBooks, but he doesn’t create eBooks so couldn’t make any suggestions.

We got a demo of publishing a finished book on Lulu, with a running translation from Jeroen’s native language :) It was a great talk, and very well delivered in 25 minutes!

The front cover of the book Analogue Network Security by Winn Schwartau
The book which inspired my first talk

After that, I gave a late-pitched talk on Time Based Security (TBS). I made a few mistakes here – not least of which was failing to charge my laptop after using it while travelling in – so my laptop wouldn’t actually boot… I couldn’t even put up a single slide with my details! Trying to explain the maths around TBS without something to show it is hard, and involves walking around and waving your hands about. I had about 20 people in the room and I felt woefully underprepared.

Because I ended up running much shorter than I expected, I also started to bring in other material from the Analogue Network Security book (pictured above, with post-it-note reference markers for my review) that I’m currently writing a review on. This was my next mistake. I mentioned feedback loops (which about a third of the book covers), and that the later sections of the book suggest these can improve workflow where you need sign-off to complete changes. I mixed up a few terms and it sounded like I was endorsing having changes made without approvals. I tried to pull it back, but not having brought the book with me, or having enough experience in vocalising the material… yeah, it was never going to go well. Oh well, I’m hoping to get the review nailed down and then start writing proper presentations on the matter, so I can try and deliver it better next year!

Then… Lunch. Phil, my father-in-law, plus Kian and Cat went to a Chinese Bakery for lunch.

Neil’s talk was my next talk to see; an ad-hoc review of web pages about Repair Day

After I gave my talk, I headed to see Neil give an ad-hoc talk about Repair Day. Neil had a collection of pages he wanted to show off. Neil works with The Restart Project to help people fix their own broken things, not just computers (which is Neil’s area of interest) but also white goods, radios, home electronics, clothes and furniture.

In the audience was Stuart Ward (featured later), who also mentioned running Repair Cafes. After the talk was complete, Stuart posted a collection of links to the Joind.In page for people to find out more for themselves later.

This was my stand-out talk for Day 1. Anna had come to OggCamp last year, and thought there wasn’t sufficient content for people new to Linux, so she proposed, wrote and delivered a blinder!

I went to Anna’s talk next. I went in, amongst other reasons, because I thought I would be supporting someone “new to Ubuntu”, and came out stunned at how well the talk was delivered!

Someone wise* wrote on twitter a few months ago something like “The point when someone new joins your team is when you get to challenge implied knowledge. If they ask ‘Why’ and you have to say ‘I don’t know’ it means you need to justify why you do something, and perhaps stop doing it.”

* Someone in this case means I can’t find the tweet!

In this case, I wanted to know what being “New” to Ubuntu (my preferred desktop Linux distribution right now) meant to people. Anna’s talk was fantastic, and got right to the heart of what someone new to Linux would feel like. She mentions downloading “things” from the Internet, setting them to be executable by everyone, and then running them. She also mentions running everything under “sudo” or as root, and then went into where she found she should put things. This was sprinkled with a lot of appropriate emojis. It was a really great talk.

As an event organiser, I’m always interested in what other groups are doing!

After Anna’s talk, I went to a round-table session about meetup and event organisers. This was inspired by something new that Lorna had organised this year for the unconference schedule. Next to the board, showing what talks were going to be given, was another board asking for talks to be given. Someone had asked for a talk about organising meet-ups, and so several of the attendees who are organisers of local groups came together to give their views on how to start a group, how to motivate attendees to come to your groups, and how to keep the momentum going.

I’m sorry to say that this was one of the weaker sessions I went to over the weekend. Because no-one had really planned anything in this slot, and none of the people running the session were really comfortable in what they were delivering, it was hard to get any points out of the speakers, and there was very little interaction with the audience. This could have been run as a Q&A session from experienced group organisers, or even a round-table… but never mind!

Towards the end of the session, I stood up and asked about whether any groups like TechNW.UK existed in their regions, and asked people who organised groups like this to put pull requests to get their groups added to that website. I hope to see something come out of that!

After I left this session, I went to look at the exhibition hall and the Kids Track room.

In the exhibition hall were the merch stand, the grid, and two stands that were apparently about musical things – one of which basically had a guitar and amp constantly being used by a very good musician. After that were Matrix.org, the FSFE and Hacker Public Radio. Along the other wall were a lock-picking stand from FlawCon, Manchester Grey Hats and InfoSec Hoppers, a telepresence bot and more!

In the kids room were computers, micro:bits and willing instructors! It looked like a lot of fun for kids, but there wasn’t much room! I had a bit of a chat with a few friends I met along the way, before I went to see my co-host, Al, talking about Wireguard.

Al hadn’t expected to be giving this talk today!

Al has been talking about Wireguard a few times over the past year or so, and wanted to give a talk about it. He’d planned to propose it for Sunday, but was encouraged by Lorna to talk about it on Saturday. As a result, he hadn’t had a chance to run through the demo he’d planned to give, and it tripped him up at the end, when the notes he was following mixed up the private and public keys at each end… Aside from that, it was a great talk, and made me want to look at Wireguard again!

My final talk for the day was one I didn’t expect to be in!

Kian is a friend of mine from days of old, and when he walked into the room I’d just been in for Al’s talk… I decided to sit in whatever he was talking about. Kian spoke to a small audience about hardware builds he’d done over the years, and the mishaps that had occurred on them. A very entertaining talk, albeit one that I couldn’t really empathise with, as I’ve not done any hardware builds since I did my Radio Amateur Exam. Hearing the story of the Halloween pumpkin with eyes that were supposed to look at you was very funny though, and the videos really completed the story!

After the talks were done, I went to get dinner with my co-hosts from the Admin Admin podcast, and a few of the other attendees. After we were done, I went back to the venue, but couldn’t settle as I’d had a headache coming on.

While I was gearing up to leave, I ended up having a good chat with Ben Grubert, who changed my view somewhat on how to deliver a talk. He said that people, particularly those who are very process focused, struggle to explain something that links back to the goal, for example, explaining how to win at a board game. It made me completely re-think how my talk I wanted to give on Sunday would go, and I left soon after that conversation so I could re-write my talk. I’ve since gone on to share that advice with several other people!

Sunday

A screen shot of the Sunday Schedule. Again, starred talks are the ones I attended.
My hands-down favourite talk from the entire weekend!

At Barcamp Manchester 9, which I attended a few weeks before OggCamp, I missed a talk by Rachel. I saw a picture of one of her slides, and I think I might even have caught the last slide of it… Either way, I was desperately sad that I’d missed the talk, and so encouraged her to attend OggCamp to deliver it. Once I saw she was on the grid, I knew exactly where I was going!

Rachel’s talk did not fail to deliver. I’ve heard from lots and lots of people that they were moved by this talk. Rachel was talking about her life, mostly undiagnosed with Autism, ADHD and depression. She enriched the talk with fun comments, including asking someone to play the part of Romeo from Romeo and Juliet, and then asking him, without having seen the book, why he didn’t know his lines. It sounds quite brutal, but actually, it sets the scene quite well on her life. There’s a fantastic photo of the spectrum of issues related to autism that just keeps having more and more artefacts being added to it.

I’ve heard that she wants to take this talk to more people, businesses and conferences, so I won’t spoil any more of the surprises, but it’s a really powerful talk and I’d strongly encourage anyone to bring Rachel into their environment to hear her talk.

While sitting in Kian’s talk the day before, I missed a session on Ansible Security. I’d made the point, in the morning, of finding Michael from the Matrix Project who gave the talk, and they said that they’d planned to host a “Birds of A Feather” (BOF) session on the Sunday following the feedback from the talk.

I managed to make it to this session, but unfortunately, I didn’t get any photos.

Having been to the meet-up session the day before, I was partially dreading this session, as Ansible is something I’m still very keen on. I needn’t have worried, as Michael managed to control several very chatty people (myself very much included). He managed to engage people but then stop them from going on too much. I wish there were somewhere for the people who attended this talk to catch up and share knowledge, but… oh well.

Next I went to a talk on the Java Open Street Map editor, JOSM. It was very much a show-and-tell “This is how I use the tool”, but I struggled to follow it, and, sadly, left early.

LATE EDIT 2019-11-04: Stuart contacted me on Twitter to apologise for making his talk hard to follow, so I wanted to add some extra notes. The problem I had was not with Stuart’s talk per se, but more that I couldn’t focus on the subject – I wasn’t sure if I wasn’t in the right head-space for the talk, or if I was just hungry. I wanted to become more involved in Open Street Map, and thought I could get a better idea of how to contribute from this talk, but as I said, I wasn’t tracking the content. I walked out more to clear my head than because I didn’t enjoy the talk.

I realised I was getting hungry, so went to Subway for my lunch, and came back refreshed in time to give my second talk.

A screen shot from the talk “Here’s how you win: Secure Scuttlebutt”

This talk was on Secure Scuttlebutt (SSB), a decentralised social media platform. There were about 20 people in the audience, and I had some very sensible questions about the project. At the end of the talk, I’d encouraged three people to give it a try, two of whom fell at the first hurdle, and the third persisted in the bar at the end of the day, and has since connected with me on there. Woohoo!

The talk was a stark contrast to the talk I felt I’d not done justice to the day before, and I felt like I’d really nailed this talk. I’m still exceptionally grateful to Ben who’d pointed me in the right direction for the talk layout the night before.

At the end of my talk, I wandered around a bit – I wasn’t really sure what I wanted to see next, so instead I caught up with friends who also weren’t in talks. I bumped into Rachel, and recorded a quick promo for her speaking career and then saw some friends start a Dungeons and Dragons (D&D) game up in the exhibition area!

The first talk at OggCamp about a technology I’d not seen the likes of before.

I made my way to Roger’s talk about Stream Sheets, an Internet of Things (IoT) connected tool, a bit like Google Sheets. It can read content from MQTT, REST APIs and other similar data sources, tweak and convert it, and then publish it back again. All very interesting, although I’m unlikely to use it anywhere any time soon! I was glad to popularise it with colleagues when I got back to work on Monday, though!

My last talk attended of the day – Jamie Tanner

Jamie had talked at OggCamp ’18, and I was very glad to see him back at OggCamp this year – particularly on the main stage!

His talk was about self hosting and the Indie Web movement. He talked about why he self hosts, and what sort of content he “owns” when he can (spoiler: all of it!) He not only stores bookmarks in a public blog, but his Google Fit step counter results, his RSVPs to events and … yes, even blog posts. He talked about why he felt that you too should be part of the Indie Web.

After Jamie’s talk was the annual rafflecast. A laptop was given away, but not to me (boo!), and then I went to record the Admin Admin Podcast.

From left to right, Jerry, Gary, Al, and then Me (with my red hat from Red Hat). Out of shot is Mr Joe Ressington, who let us use his recording gear. Because he’s lovely.

On the way to Joe’s hotel (where we did this recording), I got us a bit lost, and we ended up walking clear across to the Northern Quarter of Manchester. We then had to walk back to just near Piccadilly station, where his hotel was! Oops. The show has since been released, if you want to hear us (and guest host Gary) talking about OggCamp.

We went to the Lass O’Gowry pub for a drink before I had to catch my rail replacement bus home, and catch up on some sleep!

And that was OggCamp ’19. The featured image is of the OggCamp Grid on Saturday.

OggCamp are looking for someone to take over the organising in 2020 (supported by past organisers, like me!) so if you’re interested, please get in touch!