Setting UK keyboards in Vagrant Ubuntu Machines, using Ansible

Wow, now there’s a specific post title…

I use Ansible… quite a bit :) and one of the things I do with Ansible is to have a standard build desktop that I can create using Vagrant. Recently I upgraded the base to Ubuntu 18.04, and it annoyed me that I still didn’t have a working keyboard configuration, so I kept getting US keyboard layouts. I spent 20 minutes sorting it out, and here’s how to do it.

- name: Set keyboard layout
  debconf:
    name: "keyboard-configuration"
    question: "keyboard-configuration/{{ item.key }}"
    value: "{{ item.value }}"
    vtype: "{{ item.vtype|default('string') }}"
  with_items:
  - { key: "altgr", value: "The default for the keyboard layout", vtype: "select" }
  - { key: "compose", value: "No compose key", vtype: "select" }
  - { key: "ctrl_alt_bksp", value: "false", vtype: "boolean" }
  - { key: "variant", value: "English (UK)", vtype: "select" }
  - { key: "layout", value: "English (UK)", vtype: "select" }
  - { key: "model", value: "Generic 105-key PC (intl.)", vtype: "select" }
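
One thing to bear in mind: the debconf module only records the answers. If keyboard-configuration had already been configured before this task runs, you might need to prod the package into re-reading them. Here's a minimal sketch of a follow-up task that does that (this bit is my assumption – on a fresh Vagrant build the task above should be enough on its own):

- name: Re-apply keyboard configuration
  command: dpkg-reconfigure -f noninteractive keyboard-configuration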

I’ve posted how I got to this point over at the Server Fault answer that got me most of the way there: https://serverfault.com/a/912342/14832

Ansible Behaviour Change

For those of you who are working with #Ansible… Ansible 2.5 is out, and has an unusual documentation change around a key Ansible concept – `with_` loops. Where you previously had:

with_dict: "{{ your_fact }}"
or
with_subelements:
- "{{ your_fact }}"
- some_subkey

This now should be written like this:

loop: "{{ lookup('dict', your_fact) }}"
and
loop: "{{ lookup('subelements', your_fact, 'some_subkey') }}"
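
To make this concrete, here's a minimal before-and-after for a dictionary loop, using the debug module and a hypothetical my_networks dictionary:

# Old style
- name: Show each network
  debug:
    msg: "{{ item.key }} => {{ item.value }}"
  with_dict: "{{ my_networks }}"

# New style (2.5 onwards)
- name: Show each network
  debug:
    msg: "{{ item.key }} => {{ item.value }}"
  loop: "{{ lookup('dict', my_networks) }}"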

Fear not, I hear you say, it’s fine, of course the documentation suggests that this is “how it’s always been”… HA HA HA. Nope. This behaviour is new as of 2.5, and needs Ansible to be updated to the latest version. As far as I can tell, there’s no way to indicate to Ansible “Oh, BTW, this needs to be running on 2.5 or later”… so I wrote a role that does that for you.

ansible-galaxy install JonTheNiceGuy.version-check

You’re welcome :)
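
If you’d rather drop a check straight into a playbook instead of pulling in the role, a minimal sketch using the built-in assert module and the ansible_version fact would be something like this (I’ve used the older version_compare spelling so the check itself still runs on pre-2.5 installs):

- name: Check we're on a new enough Ansible for 'loop'
  assert:
    that:
    - "ansible_version.full | version_compare('2.5.0', '>=')"
    msg: "This playbook needs Ansible 2.5 or later"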

Creating OpenStack “Allowed Address Pairs” for Clusters with Ansible

This post came about after a couple of hours of iterations, so I can’t necessarily quote all the sources I worked from, but I’ll do my best!

In OpenStack (particularly in the “Kilo” release I’m working with), if you have a networking device that will pass traffic on behalf of something else (e.g. a firewall, IDS, router or transparent proxy), you need to tell the virtual NIC that the interface is allowed to pass traffic for other IP addresses, as OpenStack applies a “same origin” anti-spoofing rule to the interface by default. Defining this in OpenStack is more complex than it could be because, for some reason, you can’t define 0.0.0.0/0 as the allowed address pair, so instead you have to define 0.0.0.0/1 and 128.0.0.0/1.

Here’s how you define those allowed address pairs (note, this assumes you’ve got some scaffolding in place to define things like “network_appliance”):

allowed_address_pairs: "{% if (item.0.network_appliance|default('false')|lower() == 'true') or (item.1.network_appliance|default('false')|lower() == 'true') %}[{'ip_address': '0.0.0.0/1'}, {'ip_address': '128.0.0.0/1'}]{% else %}{{ item.0.allowed_address_pairs|default(omit) }}{% endif %}"
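
For context, here’s a rough sketch of the sort of task that parameter sits in – the os_port module, looped with with_subelements over a list of servers and their interfaces (the servers list and the hostname and network fields are assumptions here, not my actual scaffolding):

- name: Create ports
  os_port:
    state: present
    name: "{{ item.0.hostname }}-{{ item.1.interface }}"
    network: "{{ item.1.network }}"
    allowed_address_pairs: "{% if (item.0.network_appliance|default('false')|lower() == 'true') or (item.1.network_appliance|default('false')|lower() == 'true') %}[{'ip_address': '0.0.0.0/1'}, {'ip_address': '128.0.0.0/1'}]{% else %}{{ item.0.allowed_address_pairs|default(omit) }}{% endif %}"
  with_subelements:
  - "{{ servers }}"
  - interfaces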

OK, so we’ve defined the allowed address pairs! We can pass traffic across our firewall. But (and there’s always a but), the product I’m working with at the moment has a floating MAC address when you define an HA pair in a cluster. The vendor has a standard scheme for how each port’s floating MAC is assigned… so here’s what I’ve ended up with (and yes, I know it’s a mess!)

allowed_address_pairs: "{% if (item.0.network_appliance|default('false')|lower() == 'true') or (item.1.network_appliance|default('false')|lower() == 'true') %}[{'ip_address': '0.0.0.0/1'},{'ip_address': '128.0.0.0/1'}{% if item.0.ha is defined and item.0.ha != '' %}{% for vdom in range(0,40, 10) %},{'ip_address': '0.0.0.0/1','mac_address': '{{ item.0.floating_mac_prefix|default(item.0.image.floating_mac_prefix|default(floating_mac_prefix)) }}:{% if item.0.ha.group_id|default(0) < 16 %}0{% endif %}{{ '%0x' | format(item.0.ha.group_id|default(0)|int) }}:{% if vdom+(item.1.interface|default('1')|replace('port', '')|int)-1 < 16 %}0{% endif %}{{ '%0x' | format(vdom+(item.1.interface|default('1')|replace('port', '')|int)-1) }}'}, {'ip_address': '128.0.0.0/1','mac_address': '{{ item.0.floating_mac_prefix|default(item.0.image.floating_mac_prefix|default(floating_mac_prefix)) }}:{% if item.0.ha.group_id|default(0) < 16 %}0{% endif %}{{ '%0x' | format(item.0.ha.group_id|default(0)|int) }}:{% if vdom+(item.1.interface|default('1')|replace('port', '')|int)-1 < 16 %}0{% endif %}{{ '%0x' | format(vdom+(item.1.interface|default('1')|replace('port', '')|int)-1) }}'}{% endfor %}{% endif %}]{% else %}{{ item.0.allowed_address_pairs|default(omit) }}{% endif %}"

Let's break this down a bit. The vendor says that each port gets a standard prefix (e.g. DE:CA:FB:AD), then the penultimate octet is the "Cluster ID" number in hex, and the last octet is the sum of the port number (zero-indexed) and a VDOM number, which increments in tens. We're only allowed to assign 10 "allowed address pairs" to an interface, so I've got the two originals (which are assigned to whatever the defined MAC address of the interface is) and four passes around the loop. Other vendors (e.g. this one) do things differently, so I'll probably need to revisit this once I know how the next one (and the next one... etc.) works!

So, we have here a few parts to make that happen.

The penultimate octet, which is the group ID in hex, needs to be two hex digits long. Without adding more Python modules to our default machines, we can't use a "pad" filter (to add 0s to the beginning of the MAC octets), so we do that by hand:

{% if item.0.ha.group_id|default(0) < 16 %}0{% endif %}

And here's how to convert the group ID into a hex number:

{{ '%0x' | format(item.0.ha.group_id|default(0)|int) }}
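
Putting the padding and the hex conversion together, the whole group ID octet (lifted straight out of the big line above) looks like this:

{% if item.0.ha.group_id|default(0) < 16 %}0{% endif %}{{ '%0x' | format(item.0.ha.group_id|default(0)|int) }}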

Then the next octet is the sum of the VDOM and PortID. First we need to loop around the VDOMs. We don't always know whether we're going to be adding VDOMs until after deployment has started, so here we will assume we've got 3 VDOMs (plus VDOM "0" for management) as it doesn't really matter if we don't end up using them. We create the vdom variable like this:

{% for vdom in range(0, 40, 10) %} STUFF {% endfor %}

We need to put the actual port ID in there too. As we're using a with_subelements loop, we can't create an increment, but what we can do is make sure we record the interface number. This only works here because the vendor uses sequential port names (port1, port2, etc.) – we'll need to experiment further with other vendors! So, here's how we're doing this. We already know how to create a hex number, but we do need to use some other Jinja2 filters here:

{{ '%0x' | format(vdom+(item.1.interface|default('1')|replace('port', '')|int)-1) }}

Let's pull this apart a bit further. item.1.interface is the name of the interface, and if it doesn't exist (using the |default('1') part) we replace it with the string "1". So, let's replace that variable with a "normal" value.

{{ '%0x' | format(vdom+("port1"|replace('port', '')|int)-1) }}

Next, we need to remove the word "port" from the string "port1" to make it just "1", so we use the replace filter to strip part of that value out. Let's do that:

{{ '%0x' | format(vdom+("1"|int)-1) }}

After that, we need to turn the string "1" into the literal number 1:

{{ '%0x' | format(vdom+1-1) }}

We loop through vdom several times, but let's pick one instance of that at random - 30 (the fourth iteration of the vdom for-loop):

{{ '%0x' | format(30+1-1) }}

And then we resolve the maths:

{{ '%0x' | format(30) }}

And then the |format(30) fills in the '%0x' format string, giving us the value "1e" (30 in hexadecimal).

Assuming the vendor prefix is, as I mentioned, 'de:ca:fb:ad' and the cluster ID is 0, this gives us the following resulting allowed address pairs:

[
{"ip_address": "0.0.0.0/1"},
{"ip_address": "128.0.0.0/1"},
{"ip_address": "0.0.0.0/1", "mac_address": "de:ca:fb:ad:00:00"},
{"ip_address": "128.0.0.0/1", "mac_address": "de:ca:fb:ad:00:00"},
{"ip_address": "0.0.0.0/1", "mac_address": "de:ca:fb:ad:00:0a"},
{"ip_address": "128.0.0.0/1", "mac_address": "de:ca:fb:ad:00:0a"},
{"ip_address": "0.0.0.0/1", "mac_address": "de:ca:fb:ad:00:14"},
{"ip_address": "128.0.0.0/1", "mac_address": "de:ca:fb:ad:00:14"},
{"ip_address": "0.0.0.0/1", "mac_address": "de:ca:fb:ad:00:1e"},
{"ip_address": "128.0.0.0/1", "mac_address": "de:ca:fb:ad:00:1e"}
]

I hope this has helped you!

Defining Networks with Ansible

In my day job, I’m using Ansible to provision networks in OpenStack. One of the complaints I’ve had about the way I now define them is that the person implementing the network has to spell out all the network elements – the subnet size, the DHCP pool, the addresses of the firewalls, and the names of those items. This works for a manual implementation process, but it breaks down badly when you try to hand it over to someone else to implement. Most people just want something which says “Here is the network I want to implement – 192.0.2.0/24”… and have the system work the rest out for them.

So, I wrote some code to make that happen. It’s not perfect, and it’s not what’s in production (we have lots more things I need to add for that!) but it should do OK with an IPv4 network.

Hope this makes sense!

---
- hosts: localhost
  vars:
  - networks:
      # Defined as a subnet with specific router and firewall addressing
      external:
        subnet: "192.0.2.0/24"
        firewall: "192.0.2.1"
        router: "192.0.2.254"
      # Defined as an IP address and CIDR prefix, rather than a proper network address and CIDR prefix
      internal_1:
        subnet: "198.51.100.64/24"
      # A valid smaller network and CIDR prefix
      internal_2:
        subnet: "203.0.113.0/27"
      # A tiny CIDR network
      internal_3:
        subnet: "203.0.113.64/30"
      # These two CIDR networks are unusable for this environment
      internal_4:
        subnet: "203.0.113.128/31"
      internal_5:
        subnet: "203.0.113.192/32"
      # A massive CIDR network
      internal_6:
        subnet: "10.0.0.0/8"
  tasks:
  # Based on https://stackoverflow.com/a/47631963/5738 with serious help from mgedmin and apollo13 via #ansible on Freenode
  - name: Add router and firewall addressing for CIDR prefixes < 30
    set_fact:
      networks: >
        {{ networks | default({}) | combine(
          {item.key: {
            'subnet': item.value.subnet | ipv4('network'),
            'router': item.value.router | default((( item.value.subnet | ipv4('network') | ipv4('int') ) + 1) | ipv4),
            'firewall': item.value.firewall | default((( item.value.subnet | ipv4('broadcast') | ipv4('int') ) - 1) | ipv4),
            'dhcp_start': item.value.dhcp_start | default((( item.value.subnet | ipv4('network') | ipv4('int') ) + 2) | ipv4),
            'dhcp_end': item.value.dhcp_end | default((( item.value.subnet | ipv4('broadcast') | ipv4('int') ) - 2) | ipv4)
          }
        }) }}
    with_dict: "{{ networks }}"
    when: item.value.subnet | ipv4('prefix') < 30
  - name: Add router and firewall addressing for CIDR prefixes = 30
    set_fact:
      networks: >
        {{ networks | default({}) | combine(
          {item.key: {
            'subnet': item.value.subnet | ipv4('network'),
            'router': item.value.router | default((( item.value.subnet | ipv4('network') | ipv4('int') ) + 1) | ipv4),
            'firewall': item.value.firewall | default((( item.value.subnet | ipv4('broadcast') | ipv4('int') ) - 1) | ipv4)
          }
        }) }}
    with_dict: "{{ networks }}"
    when: item.value.subnet | ipv4('prefix') == 30
  - debug:
      var: networks

"Copying and Pasting from Stack Overflow" Spoof O'Reilly Book Cover

Just a little reminder (to myself) about changing the path of a git submodule

Sometimes it’s inevitable (maybe? :) ) – you’ll add a git submodule using the wrong URL… I mean, EVERYONE’S done that, right? … right? You lot over there, am I right?… SIGH.

In my case, I’m trying to make sure I always use the https URLs with my GitHub repos, but sometimes I add the git@ URL instead. When you run git remote -v in the submodule’s path, you’ll get something like:

origin git@github.com:your-org/your-repo.git (fetch)

instead of

origin https://github.com/your-org/your-repo (fetch)

which means that when someone tries to clone your repo, they’ll be prompted to authenticate with their SSH keys for all the submodules. Not great.

Anyway, it should be easy enough – git creates a .gitmodules file in the repo root, so you should just be able to edit that file, and replace the git@ with https:// and the com: with com/… but what do you do next?
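
If you’d rather not make that edit by hand, a quick one-liner over .gitmodules does the same job (assuming GitHub-style URLs like the ones above):

sed -i 's|git@github.com:|https://github.com/|g' .gitmodules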

Thanks to this great Stack Overflow answer, I found you can just run these two commands after you’ve made that edit:

git submodule sync ; git submodule update --init --recursive --remote

Isn’t Stack Overflow great?

Using InSpec to test your Ansible

Over the past few days I’ve been binge listening to the Arrested Devops podcast. In one of the recent episodes (“Career Change Into DevOps With Michael Hedgpeth, Annie Hedgpeth, And Megan Bohl (ADO102)“) one of the interviewees mentions that she got started in DevOps by using Inspec.

Essentially, inspec is a way of explaining “this is what my server must look like”, so you can then test these statements against a built machine… effectively letting you unit test your provisioning scripts.

I’ve already built a fair bit of my current personal project using Ansible, so I wasn’t exactly keen to re-write everything from scratch, but it did make me think that maybe I should have a common set of tests to see how close my server was to the hardening “Benchmark” guides from CIS… and that’s pretty easy to script in inspec, particularly as the tests in those documents list the “how to test” and “how to remediate” commands to execute.

These are in the process of being drawn up (so far, all I have is an inspec test saying “confirm you’re running on Ubuntu 16.04″… not very complex!!) but, from the looks of things, the following playbook would work relatively well!
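
The Ubuntu check I mentioned is about as simple as InSpec tests get – as a rough sketch (not my actual test), using InSpec's built-in os resource, it looks something like this:

# Confirm we're running on Ubuntu 16.04
describe os.name do
  it { should eq 'ubuntu' }
end

describe os.release do
  it { should cmp '16.04' }
end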

Experiments with USBIP on Raspberry Pi

At home, I have a server on which I run my VMs and store my content (MP3/OGG/FLAC files I have ripped from my CDs, Photos I’ve taken, etc.) and I want to record material from FreeSat to play back at home, except the server lives in my garage, and the satellite dish feeds into my Living Room. I bought a TeVii S660 USB FreeSat decoder, and tried to figure out what to do with it.

I previously stored the server near where the feed comes in, but the running fan was a bit annoying, so it got moved… but then I started thinking – what if I ran a Raspberry Pi to consume the media there.

I tried running OpenELEC, and then LibreELEC, and while both would see the device, and I could even occasionally get *content* out of it, I couldn’t write quickly enough to the media devices attached to the RPi to actually record what I wanted to get from it. So, I resigned myself to the fact I wouldn’t be recording any of the Christmas films… until I stumbled over usbip.

USBIP is a service which binds USB ports to a TCP port, and then lets you consume that USB port on another machine. I’ll discuss consuming the S660’s streams in another post, but the below DOES work :)

There are some caveats here. Because I’m using a Raspberry Pi, I can’t just bung on any old distribution, so I’m a bit limited here. I prefer Debian-based images, so I’m going to artificially limit myself to these for now, but if I have any significant issues with these images, then I’ll have to bail on Debian-based distributions and use something else.

  1. If I put on stock Raspbian Jessie, I can’t use usbip, because while it ships its own kernel that has the right modules built in (usbip_host, usbip_core, etc.), it doesn’t ship the right userland tools to manipulate them.
  2. If I’m using a Raspberry Pi 3, there’s no supported version of Ubuntu Server which ships for it. I can use a flavour (e.g. Ubuntu Mate), but that uses the Raspbian kernel, which, as I mentioned before, is not shipping the right userland tools.
  3. If I use a Raspberry Pi 2, then I can use Stock Ubuntu, which ships the right tooling. Now all I need to do is find a CAT5 cable, and some way to patch it through to my network…

Getting the Host stood up

I found most of my notes on this via a wiki entry at GitHub, but essentially it boils down to this:

On your host machine, (where the USB port is present), run

sudo apt-get install linux-tools-generic
sudo modprobe usbip_host
sudo usbipd -D

This confirms that your host can present the USB ports over the USBIP interface (there are caveats! I’ll cover them later!!). Late edit: 2020-05-21 I never did write up those caveats, and now, two years later, I don’t recall what they were. Apologies.

You now need to find which ports you want to serve. Run this command to list the ports on your system:

lsusb

You’ll get something like this back:

Bus 001 Device 004: ID 9022:d662 TeVii Technology Ltd.
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. SMC9514 Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

And then you need to find which port the device thinks it’s attached to. Run this to see how usbip sees the world:

usbip list -l

This will return:

- busid 1-1.1 (0424:ec00)
unknown vendor : unknown product (0424:ec00)
- busid 1-1.3 (9022:d662)
unknown vendor : unknown product (9022:d662)

We want to share the TeVii device, which has the ID 9022:d662, and we can see that this is present as busid 1-1.3, so now we need to bind it to the usbip system, with this command:

usbip bind -b 1-1.3

OK, so now we’re presenting this to the system. Perhaps you might want to make it available on a reboot?

echo "usbip_host" >> /etc/modules

I also added @reboot /usr/bin/usbipd -D ; sleep 5 ; /usr/bin/usbip bind -b 1-1.3 to root’s crontab, but it should probably go into a systemd unit.
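
For the record, if you did want the systemd version, a minimal sketch of a unit might look something like this (the paths and bus ID are just my values from above – treat it as a starting point rather than a tested unit):

# /etc/systemd/system/usbip-host.service (hypothetical)
[Unit]
Description=Bind the TeVii USB device to usbipd
After=network.target

[Service]
Type=forking
ExecStart=/usr/bin/usbipd -D
ExecStartPost=/bin/sleep 5
ExecStartPost=/usr/bin/usbip bind -b 1-1.3

[Install]
WantedBy=multi-user.target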

Getting the Guest stood up

All these actions are being performed as root. As before, let’s get the modules loaded in the kernel:

apt-get install linux-tools-generic
modprobe vhci-hcd

Now, we can try to attach the device over the wire. Let’s check what’s offered to us (this code example uses 192.0.2.1, but this would be the static IP of your host):

usbip list -r 192.0.2.1

This hands us back the list of offered devices:

Exportable USB devices
======================
- 192.0.2.1
1-1.3: TeVii Technology Ltd. : unknown product (9022:d662)
: /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.3
: (Defined at Interface level) (00/00/00)
: 0 - Vendor Specific Class / unknown subclass / unknown protocol (ff/01/01)

So, now all we need to do is attach it:

usbip attach -r 192.0.2.1 -b 1-1.3

Now I can consume the service from that device in tvheadend on my server. However, again, I need to make this persistent. So, let’s make sure the module is loaded on boot.

echo 'vhci-hcd' >> /etc/modules

And, finally, we need to attach the port on boot. Again, I’m using crontab, but I should probably wrap this into a systemd service.

@reboot /usr/bin/usbip attach -r 192.0.2.1 -b 1-1.3

And then I had an attached USB device across my network!

Unfortunately, the throughput was a bit too low (due to silly Ethernet-over-power adaptors) to make it work the way I wanted… but theoretically, if I had proper patching done in this house, it’d be perfect! :)

Interestingly, the day I finished this post off (after it’d sat in drafts since December), I spotted that one of the articles in Linux Magazine is “USB over the network with USB/IP”. Just typical! :D

Podcast Summary – The Admin Admin Podcast #61 – Not quite so ephemeral

January was a busy month for me – between work, helping direct developments at CCHits.net, organising OggCamp and starting Sociable Tech (more on these later), I’ve now also become a regular co-host on The Admin Admin podcast – a podcast for people who work in IT.

The show was originally started by Al and Andy; Jerry joined them early in the recording series, and just recently Andy has changed jobs and doesn’t have the time to record at the moment. Ever since Al and Andy approached me at OggCamp ’15, I’ve recorded shows as an occasional guest host and provided feedback on episodes where it was appropriate… so I was happy to receive the message asking if I wanted to become a regular co-host.

As I’ve joined the show, we’ve changed the format slightly, reduced our recording expectations (we’re just aiming for one show a month now) and by our good fortune, my good friend, Dave Lee from The Bugcast, has also been roped in to help with recording and producing… he did a fine job with this show!

So, expect to see more posts from me, talking about the show in here.

In this show (Episode 61) we talk about naming hosts, home labs, routers and firewalls, and Jerry and I waffle on about Vagrant… a LOT :) Oh, and we teach Al what ephemeral means.

Oh, and yes, I explain about my Streisand box in this show and get it *completely and utterly wrong*. It’s rather embarrassing. I’ll explain properly on the next show exactly what happens!!

Getting a PEM file from your OpenSSH Private Key

At work, the process for getting a Windows Administrator password in our OpenStack-based system (K5) relies on the SSH public key recorded in the system.

It’s really easy to use, and can be found here: https://decrypt-win-passwd.uk-1.cf-app.net

There is one downside to this though – the application needs the private key to be supplied to it (it’s OK, you regularly rotate your SSH private keys… right??) in PEM format… Now, if you’re any sort of sensible SSH user, you’ve used either OpenSSH’s ssh-keygen command or PuTTY’s puttygen command… neither of which produces a PEM format key.

So, you need to convert it. After a bit of prodding and poking, I found this command:

openssl rsa -outform PEM -in ~/.ssh/id_rsa -out ~/.ssh/id_rsa.pem
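
If you started out with a PuTTY .ppk key instead, the command-line puttygen (in the putty-tools package on most distributions) can convert it to OpenSSH format first, and then the same openssl step applies – something like this (the filenames are just examples):

puttygen mykey.ppk -O private-openssh -o ~/.ssh/id_rsa
openssl rsa -outform PEM -in ~/.ssh/id_rsa -out ~/.ssh/id_rsa.pem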

Like the last post, this is more for me to find stuff in the future, but… if it helps someone else, so much the better!!