Defining Networks with Ansible

In my day job, I’m using Ansible to provision networks in OpenStack. One of the complaints I’ve had about the way I currently define them is that the person implementing the network has to spell out all the network elements – the subnet size, the DHCP pool, the addresses of the firewalls, and the names of all those items. This works for a manual implementation process, but is seriously broken when you try to hand it over to someone else to implement. Most people just want something which says “Here is the network I want to implement – 192.0.2.0/24”… and let the system work the rest out for them.

So, I wrote some code to make that happen. It’s not perfect, and it’s not what’s in production (we have lots more things I need to add for that!) but it should do OK with an IPv4 network.

Hope this makes sense!

---
- hosts: localhost
  vars:
    networks:
      # Defined as a subnet with specific router and firewall addressing
      external:
        subnet: "192.0.2.0/24"
        firewall: "192.0.2.1"
        router: "192.0.2.254"
      # Defined as an IP address and CIDR prefix, rather than a proper network address and CIDR prefix
      internal_1:
        subnet: "198.51.100.64/24"
      # A valid smaller network and CIDR prefix
      internal_2:
        subnet: "203.0.113.0/27"
      # A tiny CIDR network
      internal_3:
        subnet: "203.0.113.64/30"
      # These two CIDR networks are unusable for this environment
      internal_4:
        subnet: "203.0.113.128/31"
      internal_5:
        subnet: "203.0.113.192/32"
      # A massive CIDR network
      internal_6:
        subnet: "10.0.0.0/8"
  tasks:
  # Based on https://stackoverflow.com/a/47631963/5738 with serious help from mgedmin and apollo13 via #ansible on Freenode
  - name: Add router and firewall addressing for CIDR prefixes < 30
    set_fact:
      networks: >
        {{ networks | default({}) | combine(
          {item.key: {
            'subnet': item.value.subnet | ipv4('network'),
            'router': item.value.router | default((( item.value.subnet | ipv4('network') | ipv4('int') ) + 1) | ipv4),
            'firewall': item.value.firewall | default((( item.value.subnet | ipv4('broadcast') | ipv4('int') ) - 1) | ipv4),
            'dhcp_start': item.value.dhcp_start | default((( item.value.subnet | ipv4('network') | ipv4('int') ) + 2) | ipv4),
            'dhcp_end': item.value.dhcp_end | default((( item.value.subnet | ipv4('broadcast') | ipv4('int') ) - 2) | ipv4)
          }
        }) }}
    with_dict: "{{ networks }}"
    when: item.value.subnet | ipv4('prefix') < 30
  - name: Add router and firewall addressing for CIDR prefixes = 30
    set_fact:
      networks: >
        {{ networks | default({}) | combine(
          {item.key: {
            'subnet': item.value.subnet | ipv4('network'),
            'router': item.value.router | default((( item.value.subnet | ipv4('network') | ipv4('int') ) + 1) | ipv4),
            'firewall': item.value.firewall | default((( item.value.subnet | ipv4('broadcast') | ipv4('int') ) - 1) | ipv4)
          }
        }) }}
    with_dict: "{{ networks }}"
    when: item.value.subnet | ipv4('prefix') == 30
  - debug:
      var: networks
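To give a feel for what this computes: take internal_2 (203.0.113.0/27, so the broadcast address is 203.0.113.31). Assuming the ipaddr-based filters behave as described, the defaulted values should come out roughly like this (note that ipv4('network') returns the bare network address, without the prefix length, so you may want to re-append the prefix if your tooling needs it):

internal_2:
  subnet: 203.0.113.0
  router: 203.0.113.1
  firewall: 203.0.113.30
  dhcp_start: 203.0.113.2
  dhcp_end: 203.0.113.29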

Experiments with USBIP on Raspberry Pi

At home, I have a server on which I run my VMs and store my content (MP3/OGG/FLAC files I have ripped from my CDs, photos I’ve taken, etc.), and I want to record material from FreeSat to play back at home… except the server lives in my garage, and the satellite dish feeds into my living room. I bought a TeVii S660 USB FreeSat decoder, and tried to figure out what to do with it.

I previously stored the server near where the feed comes in, but the running fan was a bit annoying, so it got moved… but then I started thinking – what if I ran a Raspberry Pi to consume the media there?

I tried running OpenElec, and then LibreElec, and while both would see the device, and I could even occasionally get *content* out of it, I couldn’t write quickly enough to the media devices attached to the RPi to actually record what I wanted to get from it. So, I resigned myself to the fact I wouldn’t be recording any of the Christmas films… until I stumbled over usbip.

USBIP is a service which binds USB ports to a TCP port, and then lets you consume that USB port on another machine. I’ll discuss consuming the S660’s streams in another post, but the below DOES work :)

There are some caveats here. Because I’m using a Raspberry Pi, I can’t just bung on any old distribution, so I’m a bit limited here. I prefer Debian based images, so I’m going to artificially limit myself to these for now, but if I have any significant issues with these images, then I’ll have to bail on Debian based, and use something else.

  1. If I put on stock Raspbian Jessie, I can’t use usbip, because while it ships its own kernel with the right modules built in (usbip_host, usbip_core, etc.), it doesn’t ship the right userland tools to manipulate them.
  2. If I’m using a Raspberry Pi 3, there’s no supported version of Ubuntu Server which ships for it. I can use a flavour (e.g. Ubuntu Mate), but that uses the Raspbian kernel, which, as I mentioned before, doesn’t come with the right userland tools.
  3. If I use a Raspberry Pi 2, then I can use Stock Ubuntu, which ships the right tooling. Now all I need to do is find a CAT5 cable, and some way to patch it through to my network…

Getting the Host stood up

I found most of my notes on this via a wiki entry at Github, but essentially it boils down to this:

On your host machine, (where the USB port is present), run

sudo apt-get install linux-tools-generic
sudo modprobe usbip_host
sudo usbipd -D

This confirms that your host can present the USB ports over the USBIP interface (there are caveats! I’ll cover them later!!). Late edit: 2020-05-21 I never did write up those caveats, and now, two years later, I don’t recall what they were. Apologies.

You now need to find which ports you want to serve. Run this command to list the ports on your system:

lsusb

You’ll get something like this back:

Bus 001 Device 004: ID 9022:d662 TeVii Technology Ltd.
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. SMC9514 Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

And then you need to find which port the device thinks it’s attached to. Run this to see how usbip sees the world:

usbip list -l

This will return:

- busid 1-1.1 (0424:ec00)
  unknown vendor : unknown product (0424:ec00)
- busid 1-1.3 (9022:d662)
  unknown vendor : unknown product (9022:d662)

We want to share the TeVii device, which has the ID 9022:d662, and we can see that this is present as busid 1-1.3, so now we need to bind it to the usbip system, with this command:

usbip bind -b 1-1.3

OK, so now we’re presenting this to the system. Perhaps you might want to make it available on a reboot?

echo "usbip_host" >> /etc/modules

I also added @reboot /usr/bin/usbipd -D ; sleep 5 ; /usr/bin/usbip bind -b 1-1.3 to root’s crontab, but it should probably go into a systemd unit.
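For what it’s worth, here’s an untested sketch of what that systemd unit might look like – the unit name is made up, and the busid 1-1.3 is just the one from this example:

[Unit]
Description=USB/IP host daemon and TeVii binding
After=network.target

[Service]
# usbipd -D daemonises itself, hence the forking type
Type=forking
ExecStart=/usr/bin/usbipd -D
# give the daemon a moment to settle, then bind the tuner
ExecStartPost=/bin/sleep 5
ExecStartPost=/usr/bin/usbip bind -b 1-1.3
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

Drop that into /etc/systemd/system/usbipd-bind.service, then run systemctl daemon-reload and systemctl enable usbipd-bind.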

Getting the Guest stood up

All these actions are being performed as root. As before, let’s get the modules loaded in the kernel:

apt-get install linux-tools-generic
modprobe vhci-hcd

Now, we can try to attach the module over the wire. Let’s check what’s offered to us (this code example uses 192.0.2.1 but this would be the static IP of your host):

usbip list -r 192.0.2.1

This hands us back the list of offered devices:

Exportable USB devices
======================
- 192.0.2.1
       1-1.3: TeVii Technology Ltd. : unknown product (9022:d662)
            : /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.3
            : (Defined at Interface level) (00/00/00)
            : 0 - Vendor Specific Class / unknown subclass / unknown protocol (ff/01/01)

So, now all we need to do is attach it:

usbip attach -r 192.0.2.1 -b 1-1.3

Now I can consume the service from that device in tvheadend on my server. However, again, I need to make this persistent. So, let’s make sure the module is loaded on boot.

echo 'vhci-hcd' >> /etc/modules

And, finally, we need to attach the port on boot. Again, I’m using crontab, but should probably wrap this into a systemd service.

@reboot /usr/bin/usbip attach -r 192.0.2.1 -b 1-1.3
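The systemd version of that attach would be something like this (again, an untested sketch – note that it waits for the network to be online, since the attach happens over the wire):

[Unit]
Description=Attach remote USB/IP device from the host
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/usbip attach -r 192.0.2.1 -b 1-1.3
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target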

And then I had an attached USB device across my network!

Unfortunately, the throughput was a bit too low (due to silly ethernet-over-power adaptors) to make it work the way I wanted… but theoretically, if I had proper patching done in this house, it’d be perfect! :)

Interestingly, the day I finished this post off (after it’d sat in drafts since December), I spotted that one of the articles in Linux Magazine is “USB over the network with USB/IP”. Just typical! :D

Today I learned… Cloud-init doesn’t like you repeating the same things

Because of templates I was building in my post “Today I learned… Ansible Include Templates”, I thought you could repeat the same sections over again. Here’s a snippet of something like what I’d built (after combining lots of templates together):

Note this is a non-working code sample!

#cloud-config
packages:
- iperf
- git

write_files:
- content: {% include 'files/public_key.j2' %}
  path: /root/.ssh/authorized_keys
  owner: root:root
  permissions: '0600'
- content: {% include 'files/private_key.j2' %}
  path: /root/.ssh/id_rsa
  owner: root:root
  permissions: '0600'

packages:
- byobu

write_files:
- content: |
    #!/bin/bash
    git clone {{ test_scripts }} /root/iperf_scripts
    bash /root/iperf_scripts/run_test.sh
  path: /root/run_test
  owner: root:root
  permissions: '0700'

runcmd:
- /root/run_test

I’d get *bits* of it to run – basically, the last file, the last package and the last runcmd… but not all of it.

Turns out, cloud-init doesn’t rebuild all the fragments together for you. Instead, you need to put them together yourself, so that all the write_files items and all the packages items live under a single instance of each key.

Which, when you think about what it’s doing, makes sense really: each of those parent lines defines a key called… well, whatever that line is, and if you define the same key again, the parser only keeps the last one.
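So the fixed version of my snippet above merges each duplicated key into a single block, like this:

#cloud-config
packages:
- iperf
- git
- byobu

write_files:
- content: {% include 'files/public_key.j2' %}
  path: /root/.ssh/authorized_keys
  owner: root:root
  permissions: '0600'
- content: {% include 'files/private_key.j2' %}
  path: /root/.ssh/id_rsa
  owner: root:root
  permissions: '0600'
- content: |
    #!/bin/bash
    git clone {{ test_scripts }} /root/iperf_scripts
    bash /root/iperf_scripts/run_test.sh
  path: /root/run_test
  owner: root:root
  permissions: '0700'

runcmd:
- /root/run_test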

One to read: “Test Driven Development (TDD) for networks, using Ansible”

Thanks to my colleague Simon (@sipart on Twitter), I spotted this post (and its companion Github repository) which explains how to do test-driven development in Ansible.

Essentially, you create two roles – one to test (the author referred to it as “validate”), and one to actually do the thing you want (in the author’s case, “add_vlan”).

In the testing role, you’d have the following layout:

/path/to/roles/testing/tasks/main.yml
/path/to/roles/testing/tasks/SOMEFEATUREtest.yml

In the main.yml file, you have a simple stanza:

---
- name: Include all the test files
  include: "{{ outer_item }}"
  with_fileglob: "/path/to/roles/testing/tasks/*test.yml"
  loop_control:
    loop_var: outer_item

I’m sure that “with_fileglob” line could be improved to not actually need a full path… anyway
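Something like this might do it (untested, but role_path is a standard Ansible magic variable pointing at the role’s own directory):

---
- name: Include all the test files
  include: "{{ outer_item }}"
  with_fileglob: "{{ role_path }}/tasks/*test.yml"
  loop_control:
    loop_var: outer_item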

Then in your YourFeature_test.yml file, you do things like this:

---
- name: "Pseudocode in here. Use real modules for your testing!!"
  get_vlan_config: filter_for=needle_vlan
  register: haystack_var

- assert: that="needle_vlan in haystack_var"

When you run the play of the role the first time, the response will be “failed” (because “needle_vlan” doesn’t exist). Next, do the “real” play of the role (in the author’s case, add_vlan), which creates the vlan. Then re-run the test role; the response should now be “ok”.

I’d probably script this so that it goes:

      reset-environment set_testing=true (maybe create a random little network)
      test
      run-action
      test
      reset-environment set_testing=false

The benefit to doing it that way is that you “know” your tests aren’t running if the environment doesn’t have the “set_testing” thing in place, you get to run all your tests in a “clean room”, and then you clear it back down again afterwards, leaving it clear for the next pass of your automated testing suite.
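As a sketch, assuming each of those steps is its own playbook (all the file names here are made up), the wrapper could be as simple as:

#!/bin/bash
set -e
ansible-playbook reset-environment.yml --extra-vars "set_testing=true"
ansible-playbook test.yml
ansible-playbook run-action.yml
ansible-playbook test.yml
ansible-playbook reset-environment.yml --extra-vars "set_testing=false"

Note that with set -e, the final clean-down only runs if every earlier step passed, which may or may not be what you want.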

Fun!

Today I learned… Ansible Include Templates

I am building OpenStack servers with the Ansible os_server module. One of its fields (userdata) will accept a very long string. Typically, I end up with a giant blob of unreadable build script in this field…

Today I learned that I can use this:

---
- name: "Create Server"
  os_server:
    name: "{{ item.value.name }}"
    state: present
    availability_zone: "{{ item.value.az.name }}"
    flavor: "{{ item.value.flavor }}"
    key_name: "{{ item.value.az.keypair }}"
    nics: "[{%- for nw in item.value.ports -%}{'port-name': '{{ ProjectPrefix }}{{ item.value.name }}-Port-{{nw.network.name}}'}{%- if not loop.last -%}, {%- endif -%} {%- endfor -%}]" # Ignore this line - it's complicated for a reason
    boot_volume: "{{ ProjectPrefix }}{{ item.value.name }}-OS-Volume" # Ignore this line also :)
    terminate_volume: yes
    volumes: "{%- if item.value.log_size is defined -%}[{{ ProjectPrefix }}{{ item.value.name }}-Log-Volume]{%- else -%}{{ omit }}{%- endif -%}"
    userdata: "{% include 'templates/userdata.j2' %}"
    auto_ip: no
    timeout: 65535
    cloud: "{{ cloud }}"
  with_dict: "{{ Servers }}"

This file (/path/to/ansible/playbooks/servers.yml) is referenced by my play.yml (/path/to/ansible/play.yml) via an include, so the template reference there is in my templates directory (/path/to/ansible/templates/userdata.j2).

That template can also then reference other template files itself (using {% include 'templates/some_other_file.extension' %}) so you can have nicely complex userdata fields with loads and loads of detail, and not make the actual play complicated (or at least, no more than it already needs to be!)
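As a purely hypothetical illustration, templates/userdata.j2 might look something like this, pulling in a further fragment of its own (watch the indentation on multi-line includes – Jinja won’t re-indent the included content for you):

#cloud-config
write_files:
- content: |
    {% include 'templates/some_other_file.extension' %}
  path: /root/build_script
  owner: root:root
  permissions: '0700'

runcmd:
- /root/build_script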

Using Python-OpenstackClient and Ansible with K5

Recently, I have been using K5, an instance of OpenStack run by Fujitsu (my employer). For some of the automation tasks, I have played with both python-openstackclient and Ansible. This post covers how to get those tools to work with K5.

I have access to a Linux virtual machine (Ubuntu 16.04) and the Windows Subsystem for Linux in Windows 10 to run “Bash on Ubuntu on Windows”, and both accept the same set of commands.

In order to run these commands, you need a couple of dependencies. Your mileage might vary with other Linux distributions, but, for Ubuntu based distributions, run this command:

sudo apt install python-pip build-essential libssl-dev libffi-dev python-dev

Next, use pip to install the python modules you need:

sudo -H pip install shade==1.11.1 ansible cryptography python-openstackclient

If you’re only ever going to be working with a single project, you can define a handful of environment variables prefixed OS_, like this:

export OS_USERNAME=BloggsF
export OS_PASSWORD=MySuperSecretPasswordIsHere
export OS_REGION_NAME=uk-1
export OS_USER_DOMAIN_NAME=YourProjectName
export OS_PROJECT_NAME=YourProjectName-prj
export OS_PROJECT_ID=baddecafbaddecafbaddecafbaddecaf
export OS_AUTH_URL=https://identity.uk-1.cloud.global.fujitsu.com/v3
export OS_VOLUME_API_VERSION=2
export OS_IDENTITY_API_VERSION=3

But, if you’re working with a few projects, it’s probably worth separating these out into clouds.yml files. This would be stored in ~/.config/openstack/clouds.yml with the credentials for the environment you’re using:

---
clouds:
  root:
    identity_api_version: 3
    regions:
    - uk-1
    auth:
      auth_url: https://identity.uk-1.cloud.global.fujitsu.com/v3
      password: MySuperSecretPasswordIsHere
      project_id: baddecafbaddecafbaddecafbaddecaf
      project_name: YourProjectName-prj
      username: BloggsF
      user_domain_name: YourProjectName

Optionally, you can separate out the password, username or any other “sensitive” information into a secure.yml file stored in the same location (removing those lines from the clouds.yml file), like this:

---
clouds:
  root:
    auth:
      password: MySuperSecretPasswordIsHere

Now, you can use the Python based Openstack Client, using this invocation:

openstack --os-cloud root server list

Alternatively, you can use the Ansible OpenStack (and K5) modules, like this:

---
tasks:
- name: "Authenticate to K5"
  k5_auth:
    cloud: root
  register: k5_auth_reg
- name: "Create Network"
  k5_create_network:
    name: "Public"
    availability_zone: "uk-1a"
    state: present
    k5_auth: "{{ k5_auth_reg.k5_auth_facts }}"
- name: "Create Subnet"
  k5_create_subnet:
    name: "Public"
    network_name: "Public"
    cidr: "192.0.2.0/24"
    gateway_ip: "192.0.2.1"
    availability_zone: "uk-1a"
    state: present
    k5_auth: "{{ k5_auth_reg.k5_auth_facts }}"
- name: "Create Router"
  k5_create_router:
    name: "Public"
    availability_zone: "uk-1a"
    state: present
    k5_auth: "{{ k5_auth_reg.k5_auth_facts }}"
- name: "Attach private network to router"
  os_router:
    name: "Public"
    state: present
    network: "inf_az1_ext-net02"
    interfaces: "Public"
    cloud: root
- name: "Create Servers"
  os_server:
    name: "Server"
    availability_zone: "uk-1a"
    flavor: "P-1"
    state: present
    key_name: "MyFirstKey"
    network: "Public-Network"
    image: "Ubuntu Server 14.04 LTS (English) 02"
    boot_from_volume: yes
    terminate_volume: yes
    security_groups: "Default"
    auto_ip: no
    timeout: 7200
    cloud: root

One to read or watch: “Programming is Forgetting: Toward a New Hacker Ethic”

Here is a transcript of a talk by Allison Parrish at the Open Hardware Summit in Portland, OR. The talk, “Programming is Forgetting: Toward a New Hacker Ethic”, is a discussion of the failings of the book “Hackers” by Steven Levy. Essentially, that book proposed (in the ’80s) a set of ethics for hackers (which is to say, creative programmers or engineers, not malicious operators). Allison suggests that many of the parables in the book do not truly reflect the “Hacker Ethic”, and revises them for today’s world.

Her new questions (not statements) are as follows:

  • Who gets to use what I make? Who am I leaving out? How does what I make facilitate or hinder access?
  • What data am I using? Whose labor produced it and what biases and assumptions are built into it? Why choose this particular phenomenon for digitization or transcription? And what do the data leave out?
  • What systems of authority am I enacting through what I make? What systems of support do I rely on? How does what I make support other people?
  • What kind of community am I assuming? What community do I invite through what I make? How are my own personal values reflected in what I make?

This is a significant re-work of the original “Hacker Ethic”, and you should really either watch or read the talk to see how she got to these from the original, especially as it’s not as punchy as the original.

I’d like to think I was thinking of things like these questions when I wrote CampFireManager and CCHits.

Development Environment Replication with Vagrant and Puppet

This week, I was fortunate enough to meet up with the Cheadle Geeks group. I got talking to a couple of people about Vagrant and Puppet, and explaining how it works, and I thought the best thing to do would be to also write that down here, so that I can point anyone who missed any of what I was saying to it.

Essentially, Vagrant is a program which reads a config file defining how to initialize a pre-built virtual machine. It has several virtual machine engines which it can invoke (see [1] for more details on that), but the default engine is VirtualBox.

To actually find a box to load, there’s a big list over at vagrantbox.es which has most standard cloud server images available to you. Personally, I use the Ubuntu Precise 32bit image from VagrantUp.com for my open source projects (which means more developers can get involved). Once you’ve picked an image, use the following command to get it installed on your development machine (you only need to do this step once per box!):

vagrant box add {YourBoxName} {BoxURL}

After you’ve done that, you need to set up the Vagrant configuration file.

cd /path/to/your/dev/environment
mkdir Vagrant
cd Vagrant
vagrant init {YourBoxName}

This will create a file called Vagrantfile in /path/to/your/dev/environment/Vagrant. It looks overwhelming at first, but if you trim out some of the notes (and tweak one or two of the lines), you’ll end up with a file which looks a bit like this:

Vagrant.configure("2") do |config|
  config.vm.box = "{YourBoxName}"
  config.vm.hostname = "{fqdn.of.your.host}"
  config.vm.box_url = "{BoxURL}"
  config.vm.network :forwarded_port, guest: 80, host: 8080
  # config.vm.network :public_network
  config.vm.synced_folder "../web", "/var/www"
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "manifests"
    puppet.manifest_file  = "site.pp"
  end
end

This assumes you’ve replaced anything with {}’s in it with a real value, and that you want to forward TCP/8080 on your machine to TCP/80 on that box (there are other workarounds, using more Vagrant plugins, different network types, or other services such as pagekite, but this will do for now).

Once you’ve got this file, you could start up your machine and get a bare box, but that’s not much use to you, as you’d have to tell people how to configure your development environment every time they started up a new box. Instead, we’ll be using a Provisioning service, and we’re going to use Puppet for that.

Puppet was originally designed as a way of defining configuration across all an estate’s servers, and a lot of tutorials I’ve found online explain how to use it for that, but when we’re setting up Puppet for a development environment, we just need a simple file. This is the site.pp manifest, and in here we define the extra files and packages we need, plus any commands we need to run. So, let’s start with a basic manifest file:

node default {

}

Wow, isn’t that easy? :) We need some more detail than that though. First, let’s make sure the timezone is set. I live in the UK, so my timezone is “Europe/London”. Let’s put that in. We also need to make sure that any commands we run have the right path in them. So here’s our revised, Debian-based manifest file.

node default {
    Exec {
        path => '/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/sbin:/usr/sbin'
    }

    package { "tzdata":
        ensure => "installed"
    }

    file { "/etc/timezone":
        content => "Europe/London\n",
        require => Package["tzdata"]
    }

    exec { "Set Timezone":
        unless => "diff /etc/localtime /usr/share/zoneinfo/`cat /etc/timezone`",
        command => "dpkg-reconfigure -f noninteractive tzdata",
        require => File["/etc/timezone"]
    }
}

OK, so we’ve got some pretty clear examples of code to run here. The capitalised Exec statement at the top must always be in there, otherwise Puppet gets a bit confused. After that, we make sure the tzdata package is installed; once it is, we create or update the /etc/timezone file with the value we want; and then we use the dpkg-reconfigure command to apply the timezone, but only if it isn’t already set to that.

Just to be clear, this file describes what the system should look like at the end of it running, not a step-by-step guide to getting it running, so you might find that some of these packages install out of sequence, or something else might run before or after when you were expecting it to run. As a result, you should make good use of the “require” and “unless” statements if you want a proper sequence of events to occur.

Now, so far, all this does is set the timezone for us, it doesn’t set up anything like Apache or MySQL… perhaps you want to install something like WordPress here? Well, let’s see how we get other packages installed.

In the following lines of code, we’ll assume you’re just adding this text above the last curly bracket (the “}” at the end).

First, we need to ensure our packages are up to date:

exec { "Update packages":
    command => "sudo apt-get update && sudo apt-get dist-upgrade -y",
}

Here’s Apache getting installed:

package { "apache2":
    ensure => "installed",
    require => Exec['Update packages']
}

And, maybe you’ll want to set up something that needs mod_rewrite and a custom site? Add this to your Vagrantfile:

config.vm.synced_folder "../Apache_Site", "/etc/apache2/shared_config"

Create a directory called /path/to/your/dev/environment/Apache_Site which should contain your Apache site configuration file, called “default”. Then add this to your site.pp:

exec { "Enable rewrite":
    command => 'a2enmod rewrite',
    onlyif => 'test ! -e /etc/apache2/mods-enabled/rewrite.load',
    require => Package['apache2']
}

file { "/etc/apache2/sites-enabled/default":
  ensure => link,
  target => "/etc/apache2/shared_config/default",
}

So, at the end of all this, we have the following file structure:

/path/to/your/dev/environment
+ -- /Apache_Site
|    + -- default
+ -- /web
|    + -- index.html
+ -- /Vagrant
     + -- /manifests
     |    + -- site.pp
     + -- Vagrantfile

And now, you can add all of this to your Git repository [2], and off you go! To bring up your Vagrant machine, type (from the Vagrant directory):

vagrant up

And then to connect into it:

vagrant ssh

And finally to halt it:

vagrant halt

Or if you just want to kill it off…

vagrant destroy

If you’re tweaking the provisioning code, you can run this instead of destroying it and bringing it back up again:

vagrant provision

You can do some funky stuff with running several machines, and using the same puppet file for all of those, but frankly, that’s a topic for another day.

[1] Vagrant is extended using plugins. There is a list of plugins on this Github Wiki Page. The plugins here can include additional virtual machine back ends (called Providers in Vagrant terminology), and methods of configuring the OS after bootup (called Provisioners), but also anything around defining where to find resources, to define network addresses, even to handle caches and proxies.

[2] If you’re not using Git, you should be! However, you might want to add some stuff to your .gitignore – in particular, Vagrant adds a directory called /path/to/your/dev/environment/Vagrant/.vagrant where it puts the VMs it creates.
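For example, a minimal .gitignore at /path/to/your/dev/environment might just contain:

Vagrant/.vagrant/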