A few months ago, I was working on a personal project that needed a separate, offline Linux environment. I tried various schemes to run what I was doing within the confines of my laptop, but couldn’t actually achieve my goals. So… I bought a Raspberry Pi Zero W and a “Solderless Zero Dongle“, with the intention of running Docker containers on it… unfortunately, while Docker runs on a Pi Zero, it’s really hard to find base images for the ARMv6/armhf platform that the Pi Zero W uses… so I put it back in the drawer, and left it there.
Roll forward a month or so, and I was doing some experiments with Nebula, with only an old Chromebook to test it on… except I couldn’t install the Nebula client for Linux on there, and the Android client wouldn’t give me some features I wanted… so I broke out that old Pi Zero W again…
Now, while the tests with Nebula I was working towards will be documented later, I found that a lot of the documentation about using a Raspberry Pi Zero as a USB gadget was rough and unexplained. So, this post breaks down much of what I found, what I tried, and what did and didn’t work.
Late Edit 2021-06-04: I spotted some typos around providing specific DHCP options for interfaces, based on work I’m doing elsewhere with this script. I’ve updated these values accordingly. I’ve also created a specific branch for this revision.
Late Edit 2021-06-06: I’ve noticed this document doesn’t cover IPv6 at all right now. I started to perform some tweaks to cover IPv6, but as my ISP has decided not to bother with IPv6, and won’t support Hurricane Electric‘s Tunnelbroker system, I can’t test any of it, without building out an IPv6 test environment… maybe soon, eh?
I have been playing again, recently, with Nebula, an Open Source Peer-to-Peer VPN product which boasts speed, simplicity and in-built firewalling. Although I only have a few nodes to play with (my VPS, my NAS, my home server and my laptop), I still wanted to simplify, for me, the process of onboarding devices. So, naturally, I spent a few evenings writing a bash script that helps me to automate the creation of my Nebula nodes.
Nebula Certificates
Nebula have implemented their own certificate structure. It’s similar to an x509 “TLS Certificate” (like you’d use to access an HTTPS website, or to establish an OpenVPN connection), but has a few custom fields.
In this context, I’ve created a nebula Certificate Authority (CA), using this command:
nebula-cert ca -name nebula.example.org -ips 192.0.2.0/24,198.51.100.0/24,203.0.113.0/24 -groups Mobile,Workstation,Server,Lighthouse,db
So, what does this do?
Well, it creates the certificate and private key files, storing the name for the CA as “nebula.example.org” (there’s a reason for this!) and limiting the subnets and groups (like AWS or Azure Tags) the CA can issue certificates with.
Here, I’ve limited the CA to only issue IP addresses in the RFC5737 “Documentation” ranges, which are 192.0.2.0/24, 198.51.100.0/24 and 203.0.113.0/24, but this can easily be expanded to 10.0.0.0/8 or lots of individual subnets (I tested with 1026 separate subnets, and it worked fine).
Groups, in Nebula parlance, are building blocks of the Security product, and can act like source or destination filters. In this case, I limited the CA to only being allowed to issue certificates with the groups of “Mobile”, “Workstation”, “Server”, “Lighthouse” and “db”.
As this certificate authority requires no internet access, and only enough access to read and write files, I have created my Nebula CA server on a separate Micro SD card to use with a Raspberry Pi device. This is used only to generate a new CA certificate every 6 months (in theory – I’ve not done this part yet!), and to sign keys for all the client devices as they come on board.
I copy the ca.crt file to my target machines, and then move on to creating my client certificates.
Client Certificates
When you generate key material for Public Key Cryptographic activities (like this one), you’re supposed to generate the private key on the source device, and the private key should never leave the device on which it was generated. Nebula allows you to do this, using the nebula-cert command again. That command looks like this:
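Based on the keygen invocation used for the mobile devices later in this post, it’s along these lines (the “host” file names are just examples):

nebula-cert keygen -out-key host.key -out-pub host.pub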
You’ll notice there’s a key difference at this point between Nebula’s key signing routine and an x509 TLS-style certificate: this stage would be called a “Certificate Signing Request” or CSR in TLS parlance, and the CSR usually specifies the record details for the certificate (normally things like “region”, “organisational unit”, “subject name” and so on) before it is sent to the CA for signing (marking it as trusted).
In the Nebula world, you create a key and send the public part of it (in this case, “host.pub”, but it can have any name you like) to the CA, at which point the CA defines what IP addresses it will have, what groups it is in, and so on. So let’s do that.
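That signing command looks something like this (the group, IP and name values here are illustrative, chosen to fit the constraints set on the CA above):

nebula-cert sign -ca-crt ca.crt -ca-key ca.key -in-pub host.pub -out-crt host.crt -groups Server -ip 192.0.2.5/24 -name host.nebula.example.org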
Let’s pick apart these options, shall we? The first four flags (“-ca-crt“, “-ca-key“, “-in-pub” and “-out-crt“) all refer to the CSR process: they read the CA certificate and key, as well as the public part of the keypair created for the process, and define what the output certificate will be called. The next switch, -groups, identifies the tags we’re assigning to this node, then (the mandatory flag) -ip sets the IP address allocated to the node. Note that the certificate is using one of the valid group names, and has been allocated a valid IP address in the ranges defined above. If you provide a value for the certificate which isn’t valid, you’ll get a warning message.
In the above screenshot, I’ve bypassed the key generation and asked for the CA to sign with values which don’t match the constraints.
The last part is the name of the certificate. This is relevant because Nebula has a DNS service which can resolve the Nebula IPs to the hostnames assigned on the Certificates.
Anyway… Now that we know how to generate certificates the “hard” way, let’s make life a bit easier for you. I wrote a little script – Nebula Cert Maker, also known as certmaker.sh.
certmaker.sh
So, what does certmaker.sh do that is special?
1. It auto-assigns an IP address, based on the MD5SUM of the FQDN of the node. It uses (by default) the first CIDR mask (the IP range, written as something like 192.0.2.0/24) specified in the CA certificate. If multiple CIDR masks are specified in the certificate, there’s a flag you can use to select which one to use. You can override this to get a specific increment from the network address. (There’s a rough sketch of this idea just after the footnote below.)
2. It takes the provided name (perhaps webserver) and adds, as a suffix, the name of the CA Certificate (like nebula.example.org) to the short name, to make the FQDN. This means that you don’t need to run a DNS service for support staff to access machines (perhaps you’ll have webserver1.nebula.example.org and webserver2.nebula.example.org as well as database.nebula.example.org).
3. Three “standard” roles have been defined for groups; these are “Server”, “Workstation” and “Lighthouse” [1] (the latter because you can configure Lighthouses to be the DNS servers mentioned in step 2). Additional groups can also be specified on the command line.
[1] A lighthouse, in Nebula terms, is a publicly accessible node, either with a static IP, or a DNS name which resolves to a known host, that can help other nodes find each other. Because all the nodes connect to it (or a couple of “it”s), this is a prime place to run the DNS server, as, well, it knows where all the nodes are!
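To illustrate the first point, here’s a rough, hypothetical sketch of how a name-to-IP mapping like that could work in bash. This is not the actual certmaker.sh code (the file names, modulus and variable names are mine); it’s just a demonstration of the idea:

#!/usr/bin/env bash
# Hypothetical sketch only - not the real certmaker.sh logic.
fqdn="webserver1.nebula.example.org"
cidr="192.0.2.0/24"

# Take the first 8 hex characters of the MD5 sum of the FQDN...
hash="$(echo -n "${fqdn}" | md5sum | cut -c1-8)"

# ...and reduce it to a host offset between 1 and 254 (valid for a /24).
offset=$(( (16#${hash} % 254) + 1 ))

# Crude /24-only maths: drop the last octet and the mask, then re-attach the offset.
network="${cidr%.*}"
echo "${fqdn} => ${network}.${offset}/24"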
So, given these three benefits, let’s see these in a script. This script is (at least currently) at the end of the README file in that repo.
# Create the CA
mkdir -p /tmp/nebula_ca
nebula-cert ca -out-crt /tmp/nebula_ca/ca.crt -out-key /tmp/nebula_ca/ca.key -ips 192.0.2.0/24,198.51.100.0/24 -name nebula.example.org
# First lighthouse, lighthouse1.nebula.example.org - 192.0.2.1, group "Lighthouse"
./certmaker.sh --cert_path /tmp/nebula_ca --name lighthouse1 --ip 1 --lighthouse
# Second lighthouse, lighthouse2.nebula.example.org - 192.0.2.2, group "Lighthouse"
./certmaker.sh -c /tmp/nebula_ca -n lighthouse2 -i 2 -l
# First webserver, webserver1.nebula.example.org - 192.0.2.168, groups "Server" and "web"
./certmaker.sh --cert_path /tmp/nebula_ca --name webserver1 --server --group web
# Second webserver, webserver2.nebula.example.org - 192.0.2.191, groups "Server" and "web"
./certmaker.sh -c /tmp/nebula_ca -n webserver2 -s -g web
# Database Server, db.nebula.example.org - 192.0.2.182, groups "Server" and "db"
./certmaker.sh --cert_path /tmp/nebula_ca --name db --server --group db
# First workstation, admin1.nebula.example.org - 198.51.100.205, group "Workstation"
./certmaker.sh --cert_path /tmp/nebula_ca --index 1 --name admin1 --workstation
# Second workstation, admin2.nebula.example.org - 198.51.100.77, group "Workstation"
./certmaker.sh -c /tmp/nebula_ca -d 1 -n admin2 -w
# First Mobile device - Create the private/public key pairing first
nebula-cert keygen -out-key mobile1.key -out-pub mobile1.pub
# Then sign it, mobile1.nebula.example.org - 198.51.100.217, group "mobile"
./certmaker.sh --cert_path /tmp/nebula_ca --index 1 --name mobile1 --group mobile --public mobile1.pub
# Second Mobile device - Create the private/public key pairing first
nebula-cert keygen -out-key mobile2.key -out-pub mobile2.pub
# Then sign it, mobile2.nebula.example.org - 198.51.100.22, group "mobile"
./certmaker.sh -c /tmp/nebula_ca -d 1 -n mobile2 -g mobile -p mobile2.pub
Technically, the mobile devices are simulating the local creation of the private key, and the sharing of the public part of that key. This also simulates what might happen in a more controlled environment, where not everything is run locally.
So, let’s pick out some spots where this content might be confusing. I’ve run each type of invocation twice: once with the short version of all the flags (e.g. -c instead of --cert_path, -n instead of --name, and so on), and once with the longer versions. Before each ./certmaker.sh command, I’ve added a comment, showing what the hostname would be, the IP address, and the Nebula Groups assigned to that node.
It is also possible to override the FQDN with your own FQDN, but this command option isn’t in here. Also, if the CA doesn’t provide a CIDR mask, one will be selected for you (10.44.88.0/24), or you can provide one with the -b/--subnet flag.
If the CA has multiple names (e.g. nebula.example.org and nebula.example.com), then the name for the host certificates will be host.nebula.example.org and also host.nebula.example.com.
Using Bash
So, if you’ve looked at, well, almost anything on my site, you’ll see that I like to use tools like Ansible and Terraform to deploy things. But for something which is going to be run on this machine, I’d like to keep things as simple as possible… and there’s not much in this script that needed more than what Bash offers us.
For those who don’t know, Bash is the default shell for most modern Linux distributions and Docker containers. It can perform regular expression parsing (checking that strings, or specific collections of characters, appear in a variable), do mathematics, and perform extensive loops and checks on values.
So, take a look at the internals of the script, if you want to know some options on writing bash scripts that manipulate IP addresses and read the output of files!
If you’re looking for some simple tasks to start your portfolio of work, there are some “good first issue” tasks in the “issues” of the repo, and I’d be glad to help you work through them.
Wrap up
I hope you enjoy using this script, and I hope, if you’re planning on writing some bash scripts any time soon, that you take a look over the code and consider using some of the templates I reference.
On Wednesday, 21st April, I saw a link to a blog post in a chat group for the Linux Lads podcast. This blog post included a discount code to make the GitLab Certified Associate course and exam free. I signed up, and then shared the post to colleagues.
Free GitLab certification course and exam – until 30th April 2021.
GitLab has created a “Certified Associate” certification course which normally costs $650, but is available for free until 30th April using the discount code listed on that blog post. Once purchased (or obtained for free), the course is available for one year.
I’ve signed up for the course today, and will be taking the 6 hour course, which covers:
Section 1: Self-Study – Introduction to GitLab
* GitLab Overview
* GitLab Comparison
* GitLab Components and Navigation
* Demos and Hands On Exercises
You don’t need your own GitLab environment – you get one provided to you as part of the course.
Another benefit to this course is that you’ll learn about Git as part of the course, so if you’re looking to do any code development, infrastructure as code, documentation as code, or just learning how to store any content in a version control system – this will teach you how 😀
Good luck to everyone participating in the course!
After sharing this post, the GitLab team amended the post to remove the discount code as they were significantly oversubscribed! I’ve heard rumours that it’s possible to find the code, either on Gitlab’s own source code repository, or perhaps using Archive.org’s wayback machine, but I’ve not tried!
On Friday I started the course and completed it yesterday. The rest of this post will be my thoughts on the course itself, and the exam.
Signing up for the course and getting started
Signing up was pretty straightforward. What wasn’t clear is that you have a year from enrolling in the course until you first open the content, but that once you’ve opened the link to the GitLab demo environment, you have 21 days to use it. You’re encouraged to sign up for the demo environment in the first stage, thereby limiting you to the 21 days from that point. I suspect that if you re-visit that link a second or third time, you’d get fresh credentials, so no real disaster there, but it does make you feel a bit under pressure to use the environment.
First impressions
The training environment is pretty standard, as far as corporate training goes. You have a side-bar showing the modules you need to complete before the end of the course, and as you scroll down through each module, various types of media appear, including YouTube videos, fade-in text, flashcards which require clicking on, and side-scrolling presentation cards. (Honestly, I do wonder whether this is particularly accessible to those with visual or motor impairments… I hope so, but I don’t know how I’d check!)
As you progress through each module, in the sidebar to the left, a circle outline is slowly turned from grey to purple, and when you finish a module the outline is replaced by a filled circle with a white tick in it. At the bottom of each module is a link to the next module.
The content
The course is made up of three sections:
“Introduction to Gitlab” (aka “Corporate Propaganda” 😉), which includes the history of the GitLab project and product, how many contributors it has, what its primary objective is, and so on. There’s even an “Infotainment”, QVC-like advert about how amazing GitLab is in this section, which is quite cute. At the end of this first section, you get a “Hands On” section, where you’re encouraged to use GitLab to create a new Project. I’ll come back to the Hands On sections after this.
“Using Git and Gitlab”, which you’d expect to be more hands-on but is largely more flashcards and presentation cards, each with a hands on section at the end.
“Certification Assessments” has two modules to explain what needs to happen (one before, one after) and then two parts to the “assessment” – a multiple-choice section which has to be answered 100% correctly to proceed, and a “hands on” exam, which is basically a collection of “perform this task” questions, which you are expected to perform in the demo environment.
Hands-on sections focus on a specific task – “create a project”, “commit code”, “create an issue”, “create a merge request” and so-on. There are no tasks which will stretch even the freshest Git user, and seeing the sorts of things that the “Auto DevOps” function can enable might interest someone who wants to use GitLab. I was somewhat disappointed that there was barely any focus on the fact that GitLab can be self-hosted, and what it takes to set something like that up.
We also get to witness the entire power (apparently) of upgrading to the “Premium” and “Ultimate” packages of GitLab’s proprietary add-ons… Epics. I jest of course, I’ve looked and there’s loads more to that upgrade!
The final exams (No Spoilers)
This is in two parts: a multiple-choice selection from a fixed set of 14 questions, with 100% accuracy required to move on to the next stage (it can be retaken indefinitely), and a hands-on set of… from memory… 14-ish tasks which must be completed on a project you create.
The exam generally covers things about GitLab that you’ve seen in the course, but it included two questions about using Git that were not covered in any of the modules. For this reason, I’d suggest that when you get to those questions, you open a Git environment and try each of the commands offered, given the specific scenario.
Once you’ve finished the hands-on section, using the credentials you were given, you’re asked to complete a Google Forms page which includes the URL of the GitLab Project you’ve performed your work in, and the username for your GitLab Demo Environment. You submit this form, and in 7 days (apparently, although, given the take-up of the course, I’m not convinced this is an accurate number) you’ll get your result. If you fail, apparently, you’ll be invited to re-try your hands-on exam again.
At least some of the hands-on tasks are a bit ambiguous, suggesting you should make a change in one question, and then “merge that change into this branch” (again, from memory) in the next task.
My final thoughts
So, was it worth $650 to take this course? No, absolutely not. I realise that people have put time and effort into the content and there will be people within GitLab Inc checking the results at the end… but at most it’s worth maybe $200, and even that is probably a stretch.
If this course was listed at any price (other than free) would I have taken it? …. Probably not. It’s useful to show you can drive a GitLab environment, but if I were going for a job that needed to use Git, I’d probably point them at a project I’ve created on GitHub or GitLab, as the basics of Git are more likely to be what I’d need to show capabilities in.
Does this course teach you anything new about Git or GitLab that just using the products wouldn’t have done? Tentatively, yes. I didn’t know anything about the “Auto DevOps” feature of GitLab, I’d never used the “Quick Actions” in either issues or merge requests, and there were a couple of git command lines that were new to me… but on the whole, the course is about using a web based version control system, which I’ve been doing for >10 years.
Would this course have taught you anything about Git and GitLab if you were new to both? Yes! But I wouldn’t have considered paying $650… or even $65 for this, when YouTube has this sort of content for free!
What changes would you make to this course? For me, I’d probably introduce more content about the CI/CD elements of GitLab, and I might introduce a couple of questions or a module about self-hosting and the differences between the tiers (to explain why it would be worth paying $99/user/month for the additional features in the software). I’d probably also split the course up into several pieces, where each of those pieces goes towards a larger target… so perhaps there might be a “basic user” track, which is just “GitLab Inc history”, “using Git” and “using GitLab for issues and changes”, then an “advanced user” track, covering “GitLab tiers”, “GitLab CI/CD”, “Auto DevOps” and running “GitLab Runners”, and perhaps a “self-hosting” course which adds running the service yourself, integrating GitLab with other services, and so on. You might also (as GitLab are a very open company) have a “marketing GitLab” course (for TAMs, Pre-Sales and Sales) which could also be consumed externally.
If you’re already interested in electronics or computer hardware, you might also be interested in Amateur (or “Ham”) Radio 📻.
Amateur Radio operators can transmit on radio frequencies across short distances (in your local village, town or city), medium distances (from one city to the next), long distances (from one country to another) or even very long distances (to the International Space Station, or bouncing signals off the moon, meteors or asteroids). You can send Morse Code, computer data, “normal” voice signals, or even TV style signals.
There are competitions to see who can make the most contacts on various bands, or in specific operating conditions. You can exchange contact (QSO) cards with operators in other countries, or you may train for, and later support, authorities during emergency situations. There are also radio clubs and rallies, and there are lobby groups working to preserve and extend transmission rights for operators. Oh, and you might even talk to someone in your town when you can’t speak to them face-to-face 😀!
How do you become an Amateur Radio Operator?
To be an Amateur Radio operator, you are issued a license from your local licensing body (OFCOM in the UK, FCC in the US, etc.) which includes an internationally recognised callsign. You don’t have to know Morse Code (although you may wish to – it’s a particularly effective mode for long-distance communications) – you don’t even have to talk (there are computer encoded transmissions), but the range and breadth of options for operation are quite large and well worth a poke at, particularly if you have an outgoing personality, or are just interested in how radios work!
What do you have to do to get issued a license?
To become a licensed Radio Amateur in the UK you must first pass the “Foundation” exam, to prove you have knowledge of basic technologies and the conditions of your license. You can then buy and operate radio equipment. If you want to build your own equipment, you need to at least pass the “Intermediate” exam (which includes a practical assessment, where you must build a circuit, and another paper exam on your knowledge of technologies and conditions). Once you have passed the “Full” exam, you can then operate on a wider range of bands, modes and power levels. You may wish to take several levels of these exams at once – for example, if you’re particularly confident and competent with electronics already, you may sit both the Foundation and Intermediate exams at the same time.
Other countries have similar exam levels and stages, but usually with different titles, rules and restrictions.
Want to know more?
If you’re in the UK, there’s a page maintained by the Radio Society of Great Britain (RSGB) (our representative body) on where to find clubs that can train you, or perform your examination.
In Australia, there’s the Wireless Institute of Australia (WIA) who also represent Amateur Radio Operators. They don’t appear to have a page of classes, however.
For the first time in quite a while, I went out on Saturday night for a walk around my town of Glossop (IO93ak) by myself, with just my little Baofeng UV-5R. I was walking for over an hour, at various elevations, and was calling CQ on 2m and 70cm, as well as trying to open my local repeaters (GB3WP, GB3PZ, GB3MN, GB3MR), with no luck (of course, the fact that GB3WP was offline, and I didn’t know it, didn’t help!)
I’ve resigned myself to the fact that the radio is probably dead (which, at ~8 years old, isn’t going too badly!). I’m considering replacing it with an RD-5R, which supports DMR, although there’s only one DMR repeater near here. I don’t know whether it’s worth getting that, or instead just getting a new UV-5RTP (affiliate)?
Amateur Radio doesn’t have a high Partner Acceptance Factor in my house (it’s just not something my partner can understand my interest in), so I can’t really put up an antenna outdoors, nor can I run a mobile rig on the bench in my shared office… I’m half tempted to ditch actual RF operations, and just try using Echolink (or DroidStar) or similar from my phone when I’m out and about.
In some work environments, you may find that a “Man In The Middle” (also known as MITM) proxy may have been configured to inspect HTTPS traffic. If you work in a predominantly Windows based environment, you may have had some TLS certificates deployed to your computer when you logged in, or by group policy.
I’ve previously mentioned that if you’re using Firefox on work machines where these certificates have been pushed to your machine, you’ll need to enable a configuration flag (“security.enterprise_roots.enabled“) to make them work under Firefox. This post, however, is about Linux (like Ubuntu, Fedora, CentOS, etc.) and Linux-like environments (like WSL and MSYS2).
Late edit 2021-05-06: Following a conversation with SiDoyle, I added some notes at the end of the post about using the System CA path with the Python Requests library. These notes were initially based on a post by Mohclips from several years ago!
Start with Windows
From your web browser of choice, visit any HTTPS web page that you know will be inspected by your proxy.
If you’re using Mozilla Firefox
In Firefox, click on the padlock in the address bar, and then click on the right arrow next to “Connection secure”:
Click on “More Information” to take you to the “Page info” screen
In recent versions of Firefox, clicking on “View Certificate” takes you to a new page which looks like this:
Click on the right-most tab of this screen, and navigate down to where it says “Miscellaneous”. Click on the link to download the “PEM (cert)”.
Save this certificate somewhere sensible, we’ll need it in a bit!
Note that if you’ve got multiple proxies (perhaps for different network paths, or perhaps for a cloud proxy and an on-premises proxy), you might need to force yourself into several situations to get these.
If you’re using Google Chrome / Microsoft Edge
In Chrome or Edge, click on the same area, and select “Certificate”:
This will take you to a screen listing the “Certification Path”. This is the chain of trust from the “Root” certificate for the proxy down to the certificate it issued so I could visit my website:
Click on the topmost line of the list, and then click “View Certificate” to see the root certificate. Click on “Details”:
Click on “Copy to File” to open the “Certificate Export Wizard”:
Once you’ve saved this file, rename it to have the extension .pem. You may need to do this from a command line!
Copy the certificate into the environment and add it to the system keychain
Ubuntu or Debian based systems as an OS, or as a WSL environment
As root, copy the proxy’s root key into /usr/local/share/ca-certificates/<your_proxy_name>.crt (for example, /usr/local/share/ca-certificates/proxy.my.corp.crt) and then run update-ca-certificates to update the system-wide certificate store.
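For example, assuming you saved the proxy’s certificate as proxy.my.corp.crt in your current directory:

sudo cp proxy.my.corp.crt /usr/local/share/ca-certificates/proxy.my.corp.crt
sudo update-ca-certificates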
RHEL/CentOS as an OS, or as a WSL environment
As root, copy the proxy’s root key into /etc/pki/ca-trust/source/anchors/<your_proxy_name>.pem (for example, /etc/pki/ca-trust/source/anchors/proxy.my.corp.pem) and then run update-ca-trust to update the system-wide certificate store.
MSYS2 or the Ruby Installer
Open the path to your MSYS2 environment (e.g. C:\Ruby30-x64\msys64) using your file manager (Explorer) and run msys2.exe. Then paste the proxy’s root key into the etc/pki/ca-trust/source/anchors subdirectory, naming it <your_proxy_name>.pem. In the MSYS2 window, run update-ca-trust to update the environment-wide certificate store.
If you’ve obtained the Ruby Installer from https://rubyinstaller.org/ and installed it from there, assuming you accepted the default path of C:\Ruby<VERSION>-x64 (e.g. C:\Ruby30-x64), you need to perform the above step (running update-ca-trust) and then copy the file from C:\Ruby30-x64\msys64\etc\pki\ca-trust\extracted\pem\tls-ca-bundle.pem to C:\Ruby30-x64\ssl\cert.pem
Using the keychain
Most of your Linux and Linux-Like environments will operate fine with this keychain, but for some reason, Python needs an environment variable to be passed to it for this. As I encounter more environments, I’ll update this post!
The path to the system keychain varies between releases, but under Debian based systems, it is: /etc/ssl/certs/ca-certificates.crt while under RedHat based systems, it is: /etc/pki/tls/certs/ca-bundle.crt.
Python “Requests” library
If you’re getting TLS errors in your Python applications, you need the REQUESTS_CA_BUNDLE environment variable set to the path of the system-wide keychain. You may want to add this line to your /etc/profile so that it is set for every login.
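For example, on a Debian or Ubuntu based system, the line to add would be something like:

export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt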
I want to use Vagrant-Docker to try standing up some environments. There’s no reasonable justification, it’s just a thing I wanted to do. Normally, I’d go into this long and rambling story about why… but on this occasion, the reason was “Because it’s possible”…
On Windows or Mac there are downloads you can get from the Docker Hub. The Windows version requires WSL2. I don’t have a Mac, so I don’t know what the requirements are there! Installing WSL2 involves a whole host of extra steps that I can’t really do justice to here, so see this Microsoft article for details.
Installing Vagrant
On Debian and Ubuntu you can add the HashiCorp Apt Repo and then install Vagrant, using these commands:
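At the time of writing, the commands on HashiCorp’s site are roughly these (check their page for the current key and repository details):

curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install vagrant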
There are similar instructions for RHEL, CentOS and Fedora users there too.
Windows and Mac users will have to get the application from the download page.
Creating your Dockerfile
A Dockerfile is a simple text file in which a series of line prefixes instruct the Docker image processor to add certain instructions to the Docker image. I found two pages which helped me with what to add for this: “Ansible. Docker. Vagrant. Bringing together” and the git repo “AkihiroSuda/containerized-systemd“.
You see, while a Dockerfile is great at starting single binary files or scripts, it’s not very good at running SystemD… and I needed SystemD to be able to run the SSH service that Vagrant requires, and to also run the scripts and commands I needed for the image I wanted to build…
Sooooo…. here’s the Dockerfile I created:
# Based on https://vtorosyan.github.io/ansible-docker-vagrant/
# and https://github.com/AkihiroSuda/containerized-systemd/
FROM debian:buster AS debian_with_systemd
# This stuff enables SystemD on Debian based systems
STOPSIGNAL SIGRTMIN+3
RUN DEBIAN_FRONTEND=noninteractive apt update && DEBIAN_FRONTEND=noninteractive apt install -y --no-install-recommends systemd systemd-sysv dbus dbus-user-session
COPY docker-entrypoint.sh /
RUN chmod 755 /docker-entrypoint.sh
ENTRYPOINT [ "/docker-entrypoint.sh" ]
CMD [ "/bin/bash" ]
# This part installs an SSH Server (required for Vagrant)
RUN DEBIAN_FRONTEND=noninteractive apt install -y sudo openssh-server
RUN mkdir /var/run/sshd
# We enable SSH here, but don't start it with "now" as the build stage doesn't run anything long-lived.
RUN systemctl enable ssh
EXPOSE 22
# This part creates the vagrant user, sets the password to "vagrant", adds the insecure key and sets up password-less sudo.
RUN useradd -G sudo -m -U -s /bin/bash vagrant
# chpasswd takes a colon delimited list of username/password pairs.
RUN echo 'vagrant:vagrant' | chpasswd
RUN mkdir -m 700 /home/vagrant/.ssh
# This key from https://github.com/hashicorp/vagrant/tree/main/keys. It will be replaced on first run.
RUN echo 'ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key' > /home/vagrant/.ssh/authorized_keys
RUN chmod 600 /home/vagrant/.ssh/authorized_keys
RUN chown -R vagrant:vagrant /home/vagrant
RUN echo 'vagrant ALL=(ALL:ALL) NOPASSWD:ALL' >> /etc/sudoers
This Dockerfile calls out to a separate script, called docker-entrypoint.sh, taken verbatim from AkihiroSuda’s repo, so here’s that file:
#!/bin/bash
set -ex
container=docker
export container
if [ $# -eq 0 ]; then
echo >&2 'ERROR: No command specified. You probably want to run `journalctl -f`, or maybe `bash`?'
exit 1
fi
if [ ! -t 0 ]; then
echo >&2 'ERROR: TTY needs to be enabled (`docker run -t ...`).'
exit 1
fi
env >/etc/docker-entrypoint-env
cat >/etc/systemd/system/docker-entrypoint.target <<EOF
[Unit]
Description=the target for docker-entrypoint.service
Requires=docker-entrypoint.service systemd-logind.service systemd-user-sessions.service
EOF
cat /etc/systemd/system/docker-entrypoint.target
quoted_args="$(printf " %q" "${@}")"
echo "${quoted_args}" >/etc/docker-entrypoint-cmd
cat /etc/docker-entrypoint-cmd
cat >/etc/systemd/system/docker-entrypoint.service <<EOF
[Unit]
Description=docker-entrypoint.service
[Service]
ExecStart=/bin/bash -exc "source /etc/docker-entrypoint-cmd"
# EXIT_STATUS is either an exit code integer or a signal name string, see systemd.exec(5)
ExecStopPost=/bin/bash -ec "if echo \${EXIT_STATUS} | grep [A-Z] > /dev/null; then echo >&2 \"got signal \${EXIT_STATUS}\"; systemctl exit \$(( 128 + \$( kill -l \${EXIT_STATUS} ) )); else systemctl exit \${EXIT_STATUS}; fi"
StandardInput=tty-force
StandardOutput=inherit
StandardError=inherit
WorkingDirectory=$(pwd)
EnvironmentFile=/etc/docker-entrypoint-env
[Install]
WantedBy=multi-user.target
EOF
cat /etc/systemd/system/docker-entrypoint.service
systemctl mask systemd-firstboot.service systemd-udevd.service
systemctl unmask systemd-logind
systemctl enable docker-entrypoint.service
systemd=
if [ -x /lib/systemd/systemd ]; then
systemd=/lib/systemd/systemd
elif [ -x /usr/lib/systemd/systemd ]; then
systemd=/usr/lib/systemd/systemd
elif [ -x /sbin/init ]; then
systemd=/sbin/init
else
echo >&2 'ERROR: systemd is not installed'
exit 1
fi
systemd_args="--show-status=false --unit=multi-user.target"
echo "$0: starting $systemd $systemd_args"
exec $systemd $systemd_args
Now, if you were to run this straight in Docker, it will fail, because you must pass certain flags to Docker to get this to run. These flags are:
-t : pass a “TTY” to the shell
--tmpfs /tmp : Create a temporary filesystem in /tmp
--tmpfs /run : Create another temporary filesystem in /run
--tmpfs /run/lock : Apparently having a tmpfs in /run isn’t enough – you ALSO need one in /run/lock
-v /sys/fs/cgroup:/sys/fs/cgroup:ro : Mount the CGroup kernel configuration values into the container
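Put together, the build and run commands look something like this (the image tag is just an example name):

docker build -t debian-buster-systemd .
docker run -t --tmpfs /tmp --tmpfs /run --tmpfs /run/lock -v /sys/fs/cgroup:/sys/fs/cgroup:ro debian-buster-systemd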
Blimey, what a long set of text! Perhaps we could hide that behind something a bit more legible? Enter Vagrant.
Creating your Vagrantfile
Vagrant is an abstraction tool, designed to hide complicated virtualisation scripts behind a simple command. In this case, we’re hiding a containerisation script behind a simple command.
As with the Dockerfile, I made extensive use of the two pages I mentioned before, as well as the two further pages referenced in the comments below, which gave me the flags needed to run this.
# Based on https://vtorosyan.github.io/ansible-docker-vagrant/
# and https://github.com/AkihiroSuda/containerized-systemd/
# and https://developers.redhat.com/blog/2016/09/13/running-systemd-in-a-non-privileged-container/
# with tweaks indicated by https://github.com/containers/podman/issues/3295
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'docker'
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.build_dir = "."
    d.has_ssh = true
    d.remains_running = false
    d.create_args = ['--tmpfs', '/tmp', '--tmpfs', '/run', '--tmpfs', '/run/lock', '-v', '/sys/fs/cgroup:/sys/fs/cgroup:ro', '-t']
  end
end
If you create that file, and run vagrant up, you’ll get a working Vagrant boot… But if you try to execute any shell scripts, they’ll fail to run, as they aren’t passed in with execute permissions… so I want to use Ansible to execute things, as this doesn’t require execute permissions on the /vagrant directory (also, the thing I’m building in there requires Ansible… so it’s helpful either way 😁)
Executing Ansible scripts
Ansible still expects to find Python at /usr/bin/python, but current systems don’t make that a symlink to /usr/bin/python3, as python was typically a symlink to /usr/bin/python2… and I also wanted to add the PPA for Ansible to the sources, which is what the Ansible team recommend in their documentation. I’ve done this as part of the Dockerfile, as, again, I can’t run scripts from Vagrant. So, here’s the addition I made to the Dockerfile.
FROM debian_with_systemd AS debian_with_systemd_and_ansible
RUN apt install -y gnupg2 lsb-release software-properties-common
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 93C4A3FD7BB9C367
RUN add-apt-repository "deb http://ppa.launchpad.net/ansible/ansible/ubuntu trusty main"
RUN apt install -y ansible
# Yes, I know. Trusty? On Debian Buster?? But, that's what the Ansible Docs say!
In the Vagrantfile, I’ve added this block:
config.vm.provision "ansible_local" do |ansible|
  ansible.playbook = "test.yml"
end
And I created a test.yml, which looks like this:
---
- hosts: all
  tasks:
    - debug:
        msg: "Hello from Docker"
Running it
So how does this look on Windows when I run it?
PS C:\Dev\VagrantDockerBuster> vagrant up
==> default: Creating and configuring docker networks...
==> default: Building the container from a Dockerfile...
<SNIP A LOAD OF DOCKER STUFF>
default: #20 DONE 0.1s
default:
default: Image: 190ffdeaeed0b7ed206097e6c1d4b5cc796a428700c9bd3e27eedacce47fb63b
==> default: Creating the container...
default: Name: 2021-02-13DockerBusterWithSSH_default_1613469604
default: Image: 190ffdeaeed0b7ed206097e6c1d4b5cc796a428700c9bd3e27eedacce47fb63b
default: Volume: C:/Users/SPRIGGSJ/OneDrive - FUJITSU/Documents/95 My Projects/2021-02-13 Docker Buster With SSH:/vagrant
default: Port: 127.0.0.1:2222:22
default:
default: Container created: b64ed264d8949b12
==> default: Enabling network interfaces...
==> default: Starting container...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default:
default: Vagrant insecure key detected. Vagrant will automatically replace
default: this with a newly generated keypair for better security.
default:
default: Inserting generated public key within guest...
==> default: Machine booted and ready!
==> default: Running provisioner: ansible_local...
default: Running ansible-playbook...
PLAY [all] *********************************************************************
TASK [Gathering Facts] *********************************************************
[WARNING]: Platform linux on host default is using the discovered Python
interpreter at /usr/bin/python, but future installation of another Python
interpreter could change this. See https://docs.ansible.com/ansible/2.9/referen
ce_appendices/interpreter_discovery.html for more information.
ok: [default]
TASK [debug] *******************************************************************
ok: [default] => {
"msg": "Hello from Docker"
}
PLAY RECAP *********************************************************************
default : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
PS C:\Dev\VagrantDockerBuster>
And on Linux?
Bringing machine 'default' up with 'docker' provider...
==> default: Creating and configuring docker networks...
==> default: Building the container from a Dockerfile...
<SNIP A LOAD OF DOCKER STUFF>
default: Removing intermediate container e56bed4f7be9
default: ---> cef749c205bf
default: Successfully built cef749c205bf
default:
default: Image: cef749c205bf
==> default: Creating the container...
default: Name: 2021-02-13DockerBusterWithSSH_default_1613470091
default: Image: cef749c205bf
default: Volume: /home/spriggsj/Projects/2021-02-13 Docker Buster With SSH:/vagrant
default: Port: 127.0.0.1:2222:22
default:
default: Container created: 3fe46b02d7ad10ab
==> default: Enabling network interfaces...
==> default: Starting container...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
default:
default: Vagrant insecure key detected. Vagrant will automatically replace
default: this with a newly generated keypair for better security.
default:
default: Inserting generated public key within guest...
default: Removing insecure key from the guest if it's present...
default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Running provisioner: ansible_local...
default: Running ansible-playbook...
PLAY [all] *********************************************************************
TASK [Gathering Facts] *********************************************************
[WARNING]: Platform linux on host default is using the discovered Python
interpreter at /usr/bin/python, but future installation of another Python
interpreter could change this. See https://docs.ansible.com/ansible/2.9/referen
ce_appendices/interpreter_discovery.html for more information.
ok: [default]
TASK [debug] *******************************************************************
ok: [default] => {
"msg": "Hello from Docker"
}
PLAY RECAP *********************************************************************
default : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
So, if you’re crazy and want to do Vagrant using Docker with Debian Buster and Ansible, this is how to do it. I don’t know how much I’m likely to be using this in the future, but if you use it, let me know what you’re doing with it! 😀
Featured image is “Family” by “Ivan” on Flickr and is released under a CC-BY license.
Hello, welcome to my personal knowledgebase article 😁
I think you only get this if you have some tool or service which hooks WinSock to perform content inspection, but if you do, you need to tell WinSock to reject attempts to hook WSL2.
According to this post on the GitHub WSL issues list, you need to add a key to your registry, in the path HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WinSock2\Parameters\AppId_Catalog, and they mention that the vendor of “Proxifier” has released a tool which creates this key. The screenshot in the very next post shows this registry key having been created.
I don’t know if the hex ID of the “AppId_Catalog” path created is relevant, but it was what was in the screenshot, so I copied it and created this registry export file. Feel free to create your own version of this file, and run it to fix your own issue.
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WinSock2\Parameters\AppId_Catalog\0408F7A3]
"AppFullPath"="C:\\Windows\\System32\\wsl.exe"
"PermittedLspCategories"=dword:80000000
As soon as I’d included this registry entry, I was able to access WSL OK again.
Over the last week, I discovered a new tool for my arsenal called Architectural Decision Records (ADR). They were first written about in 2011, in a post called “Documenting Architecture Decisions“, where the author, Michael Nygard, advocates for short documents explaining each decision that influences the architecture of an environment.
I found this via a Github repository, created by the team at gov.uk, which includes their ADR library, and references the tool they use to manage these documents – adr-tools.
Late edit 2021-12-14: I released (v0.0.1) my own rust-based application for making Decision Records. Yes, Decision Records – not Architecture Decision Records… because I think you should be able to apply the same logic to all decisions, not just architectural ones.
Installing adr-tools on Linux
Currently adr-tools is easier to install under OSX than under Linux or Windows Subsystem for Linux (WSL) (I’m working on this – bear with me! 😃 ).
The current installation notes suggest that the way to install on Linux (which would also work on WSL) is to download the latest release tar.gz or zip file and unpack it into your path somewhere. This isn’t exactly the best way to deploy anything on Linux, but… I guess it works right now?
For me, I downloaded the file, and unpacked the whole tar.gz file (as root) into /usr/local/bin/, giving me a directory of /usr/local/bin/adr-tools-3.0.0/. There’s a subdirectory in here, called src which contains a large number of files – mostly starting _adr or adr- and two additional files, init.md and template.md.
Rather than putting all of these files into /usr/local/bin directly, instead I leave them in the adr-tools-3.0.0 directory, and create a symbolic link (symlink) to the /usr/local/bin directory with this command:
cd /usr/local/bin
ln -s adr-tools-3.0.0/src/* .
This gives me all those files in one place, so I can refer to them later.
An aside – why link everything in that src directory? (Feel free to skip this block!)
Now, why, you might ask, do all of these unrelated files need to be in the same place? Well… the author of the script has put this at the top of almost all the files:
#!/bin/bash
set -e
eval "$($(dirname $0)/adr-config)"
There are, technically, good reasons for this! This is designed to be run as what, in the Windows world, you might call a “Portable Script”. So, you bung adr-tools into some directory somewhere, and then just call adr somecommand, and it knows that all the files are where they need to be. The (somewhat) downside to this is that if you just want to call adr somecommand, rather than path/to/my/adr somecommand, then all those files need to be there.
I’m currently looking to see if I can improve this somewhat, so that it’s not quite so complex to install, but for now, that’s what you need.
Anyway…
Using adr-tools to document your decisions
I’ll start documenting a fictional hosted web service project, and note down some of the decisions which have been made.
Initializing your ADR directory
Start by running adr init. You may want to specify a directory where you want to put these records, so instead use: adr init path/to/adr, like this:
You’ll notice that when I run this command, it creates a new entry, called 0001-record-architecture-decisions.md. Let’s open this up, and see what’s in here.
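From memory, the generated record looks roughly like this (the exact wording ships with adr-tools, so treat this as an approximation):

# 1. Record architecture decisions

Date: 2021-01-19

## Status

Accepted

## Context

We need to record the architectural decisions made on this project.

## Decision

We will use Architecture Decision Records, as described by Michael Nygard.

## Consequences

See Michael Nygard's article, linked above.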
In here we have the record ID (1.), the title of the record (“Record architecture decisions”), the date the choice was made (“Date: 2021-01-19”), a status of “Accepted”, the context on why we made this choice, the decision, and the consequences of making this decision. Make changes, if needed, and save it. Let’s move on.
Creating our first own record
This all is quite straightforward thus far. Let’s create our next record.
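Looking ahead at the graph output later in this post, record 2 is “Use AWS”, so the command was presumably along the lines of:

adr new Use AWS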
Let’s open up that record.
Like the first record, we have a title, a status, a context, decision and consequences. Let’s define these.
This document shouldn’t be very long! It just describes why a choice was made and what that entails.
Changing decisions – completely replacing (superseding) a decision
Of course, over time, decisions will be replaced due to various decisions elsewhere.
Let’s look at how that works on the second ADR record.
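With adr-tools, you create the superseding record with the -s flag, pointing at the record it replaces; for this fictional project that would be something like:

adr new -s 2 Use Azure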
So, under the “Status”, where it previously said “Accepted”, it now says “Superceded by [3. Use Azure](0003-use-azure.md)“. This is a markdown statement which indicates where the superseding document is located. As I mentioned in the comment below the above image, there is an open Pull Request to fix this typo in adr-tools, so hopefully it won’t last long!
We’ve got our new ADR too – let’s take a look at that one?
Other references
Of course, you don’t always completely overrule a decision. Sometimes your decision is influenced by, or has a dependency on something else, like this one.
The command here is:
adr new -l '3:Dependency:Influences' Use Region UK South and UK West
I’m just going to crop from the “Status” block on both the referenced ADR (3) and the ADR which references it (4):
And of course, you can also use the same switch to mark documents as partially obsoleted, like this:
adr new -l '4:Partially obsoletes:Partially obsoleted by' Use West Europe region instead of UK West region
If you forget to add the referencing in, you can also use the adr link command, like this:
adr link 3 Influences 5 Dependency
To be clear, that command adds a (complete) line to ADR 0003 saying “Influences [5. ADR Title](link)” and a separate (complete) line to ADR 0005 saying “Dependency [3. ADR Title](link)“.
What else can we do?
There are four other “things” that it’s worth doing at this point.
Note that you can change the template per-ADR directory.
Create a directory called “templates” in the ADR directory, and put a file in there called “template.md“. Tweak this as you need. Ensure you have AT LEAST the lines ## Status and # NUMBER. TITLE, as these are required by the script.
As you can see, it’s possible to add all sorts of content in this template as a result. Bear in mind, before your template turns into something like this, that it’s supposed to be a short document explaining why each decision was made, not a funding proposal, or a complex epic of your user stories!
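As a deliberately minimal starting point, a custom template keeping the placeholder tokens might look something like this (the author only calls out ## Status and # NUMBER. TITLE as mandatory; the DATE and STATUS tokens here follow the default template shipped with adr-tools):

# NUMBER. TITLE

Date: DATE

## Status

STATUS

## Context

## Decision

## Consequences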
Note that you can automatically open an editor, by setting the EDITOR (where the process is expected to finish before returning control, like using nano, emacs or vim, for example) or VISUAL (where the process is expected to “fork”, like for example, gedit or vscode) environment variable, and then running adr new A Title, like this:
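For example, to have nano open each new record as it’s created (nano here is just an example of a blocking editor):

EDITOR=nano adr new A Title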
We can create “Table of Contents” files, using the adr generate toc command, like this:
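From memory, the output is a markdown list along these lines (for the records created above; the exact header text comes from adr-tools):

# Architecture Decision Records

* [1. Record architecture decisions](0001-record-architecture-decisions.md)
* [2. Use AWS](0002-use-aws.md)
* [3. Use Azure](0003-use-azure.md)
* [4. Use Region UK South and UK West](0004-use-region-uk-south-and-uk-west.md)
* [5. Use West Europe region instead of UK West region](0005-use-west-europe-region-instead-of-uk-west-region.md)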
This can be included in your various other markdown files. There are switches so you can set the link path, but your best bet is to find those using adr help generate toc.
We can also generate graphviz files of the link maps between elements of the various ADRs, like this: adr generate graph | dot -Tjpg > graph.jpg
If you omit the “| dot -Tjpg > graph.jpg” part, then you’ll see the raw graphviz output, which looks like this (I’ve removed documents 6 and 7):
digraph {
node [shape=plaintext];
subgraph {
_1 [label="1. Record architecture decisions"; URL="0001-record-architecture-decisions.html"];
_2 [label="2. Use AWS"; URL="0002-use-aws.html"];
_1 -> _2 [style="dotted", weight=1];
_3 [label="3. Use Azure"; URL="0003-use-azure.html"];
_2 -> _3 [style="dotted", weight=1];
_4 [label="4. Use Region UK South and UK West"; URL="0004-use-region-uk-south-and-uk-west.html"];
_3 -> _4 [style="dotted", weight=1];
_5 [label="5. Use West Europe region instead of UK West region"; URL="0005-use-west-europe-region-instead-of-uk-west-region.html"];
_4 -> _5 [style="dotted", weight=1];
}
_3 -> _2 [label="Supercedes", weight=0]
_3 -> _5 [label="Influences", weight=0]
_4 -> _3 [label="Dependency", weight=0]
_5 -> _4 [label="Partially obsoletes", weight=0]
_5 -> _3 [label="Dependency", weight=0]
}
To make the graphviz part work, you’ll need to install graphviz, which is just an apt get away.
Any caveats?
adr-tools is not actively maintained. I’ve contacted the author, about seeing if I can help out with the maintenance, but… we’ll see, and given some fairly high profile malware takeovers of projects like this sort of thing on Github, Docker, NPM, and more… I can see why there might be some reluctance to consider it! Also, I’m an unknown entity, I’ve just dropped in on the project and offered to help, with no previous exposure to the lead dev or the project… so, we’ll see. Worst case, I’ll fork it!
Working with this also requires an understanding of markdown files, and why these might be a useful document format for records like this. There was a PR submitted to support multiple file formats (like asciidoc and rst) but these were not approved by the author.
There is no current intention to support languages other than English. The tool is hard-coded to look for strings like “status” and “superceded”, which makes internationalisation hard. Part of the reason I raised the PRs I did was to let me fix some of these sorts of issues. Again, we’ll see what happens.
Lastly, it can be overwhelming to see a lot of documents in one place, particularly if they’re as granular as the documents I produced in this demo. If the project supported categories, or could be broken down into components (like doc/adr/networking and doc/adr/server_builds and doc/adr/applications) then this might help, but it’s not on the roadmap right now!
Late edit 2021-01-25: If you don’t think these templates have enough context or content, there are lots of others listed on Joel Parker Henderson’s repo of examples and templates. If you want a python based viewer of ADR records, take a look at adr-viewer.
For something internal at work, I decided to sketch out how I got to doing the job I do today. And, because there’s nothing hugely secretive in that document (or, at least, nothing you wouldn’t already find out on something like LinkedIn), I figured I’d also put this on my blog… and I think it would be interesting, if you’ve written something similar, for you to share your document too.
I intend to make that a “Living Document” (like I do with my “What am I doing now” and my “What do I use” pages) that I update every time I think about it and decide it needs a tweak. So, as a result, I’ve put it over on my “Career Path” page, which is not a traditional “blog post” and is linked from my sidebar.
Featured image is “map” by “Jason Grote” on Flickr and is released under a CC-BY-SA license.