"www.GetIPv6.info decal" from Phil Wolff on Flickr

Hurricane Electric IPv6 Gateway on Raspbian for Raspberry Pi

Following an off-hand remark from a colleague at work, I decided I wanted to set up a Raspberry Pi as a Hurricane Electric IPv6 6in4 tunnel router. Most of the advice around (in particular, this post about setting up IPv6 on the Raspberry Pi Forums) related to earlier versions of Raspbian, so I thought I’d bring it up-to-date.

I installed the latest available version of Raspbian Stretch Lite (2018-11-13) and transferred it to a MicroSD card. I added the file ssh to the boot volume and unmounted it. I then fitted it into my Raspberry Pi, and booted it. While it was booting, I reserved a static IPv4 address on my router (192.168.1.252) for the Raspberry Pi, so I knew what IP address it would be on my network.

I logged into my Hurricane Electric (HE) account at tunnelbroker.net and created a new tunnel, specifying my public IP address, and selecting my closest HE endpoint. When the new tunnel was created, I went to the “Example Configurations” tab, and selected “Debian/Ubuntu” from the list of available OS options. I copied this configuration into my clipboard.

I SSH’d into the Pi, and gave it a basic config (changed the password, expanded the disk, turned off “predictable network names”, etc) and then rebooted it.

After this was done, I created a file at /etc/network/interfaces.d/he-ipv6 and pasted in the config from the HE website. I had to change the “local” line from the public IP I’d provided HE with, to the real (internal) IP address of this box. Note that any public IPs (that is, not 192.168.x.x addresses) in the config files and settings I’ve noted refer to documentation addressing (TEST-NET-2 and the IPv6 documentation address ranges).

auto he-ipv6
iface he-ipv6 inet6 v4tunnel
        address 2001:db8:123c:abd::2
        netmask 64
        endpoint 198.51.100.100
        local 192.168.1.252
        ttl 255
        gateway 2001:db8:123c:abd::1
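Hurricane Electric numbers the two ends of the tunnel /64 consistently: their side is always ::1 (the gateway) and your side is always ::2 (the address). A quick sanity check of the stanza above, sketched with Python’s stdlib ipaddress module:

```python
import ipaddress

# The tunnel /64 from the HE tunnel details page (documentation prefix here)
tunnel = ipaddress.IPv6Network("2001:db8:123c:abd::/64")

gateway = tunnel[1]   # HE's side of the tunnel -> the "gateway" line
address = tunnel[2]   # our side of the tunnel  -> the "address" line

print(gateway)  # 2001:db8:123c:abd::1
print(address)  # 2001:db8:123c:abd::2
```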

Next, I created a file in /etc/network/interfaces.d/eth0 and put the following configuration in, using the first IPv6 address in the “routed /64” range listed on the HE site:

auto eth0
iface eth0 inet static
    address 192.168.1.252
    gateway 192.168.1.254
    netmask 24
    dns-nameserver 8.8.8.8
    dns-nameserver 8.8.4.4

iface eth0 inet6 static
    address 2001:db8:123d:abc::1
    netmask 64

Next, I stopped the dhcpcd service (the DHCP client daemon) by issuing systemctl stop dhcpcd.service. Late edit (2019-01-22): Note, a colleague mentioned that this should have actually been systemctl stop dhcpcd.service && systemctl disable dhcpcd.service – good spot! Thanks!! This ensures that if, for some crazy reason, the router stops offering the right DHCP address to me, I can still access this box on this IP. Huzzah!

I accessed another host which had IPv6 access, and performed both a ping and an SSH attempt. Both worked. Fab. However, this now needs to be blocked, as we shouldn’t permit anything to be visible downstream from this gateway.

I’m using the Uncomplicated Firewall (ufw) which is a simple wrapper around IPTables. Let’s create our policy.

# First install the software
sudo apt update && sudo apt install ufw -y

# These rules allow tailored inbound access to our managed services
# (DNS and SSH) on this host - which should be internal only
ufw allow in on eth0 app DNS
ufw allow in on eth0 app OpenSSH

# These rules accept all broadcast and multicast traffic
ufw allow in on eth0 to 224.0.0.0/4 # Multicast addresses
ufw allow in on eth0 to 255.255.255.255 # Global broadcast
ufw allow in on eth0 to 192.168.1.255 # Local broadcast

# Alternatively, accept everything coming in on eth0
# If you do this one, you don't need the lines above
ufw allow in on eth0

# Setup the default rules - deny inbound and routed, permit outbound
ufw default deny incoming 
ufw default deny routed
ufw default allow outgoing

# Prevent inbound IPv6 to the network
# Also, log any drops so we can spot them if we have an issue
ufw route deny log from ::/0 to 2001:db8:123d:abc::/64

# Permit outbound IPv6 from the network
ufw route allow from 2001:db8:123d:abc::/64

# Start the firewall!
ufw enable

# Check the policy
ufw status verbose
ufw status numbered
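The two route rules above hinge on whether a packet’s destination falls inside the routed /64; the prefix match the firewall applies can be sketched in Python with the stdlib ipaddress module:

```python
import ipaddress

# The routed /64 from the HE tunnel details page (documentation prefix here)
routed = ipaddress.IPv6Network("2001:db8:123d:abc::/64")

# Inbound to a host on our LAN: matches the "route deny" rule and is dropped
print(ipaddress.ip_address("2001:db8:123d:abc::42") in routed)   # True

# Traffic for some other destination: doesn't match, falls through to defaults
print(ipaddress.ip_address("2001:db8:9999::1") in routed)        # False
```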

Most of the documentation I found suggested running radvd for IPv6 address allocation. radvd just advertises the prefix and leaves each host to autoconfigure itself and, with the privacy extensions that many modern clients enable by default, each host periodically generates itself a new random IPv6 address. To make that work, I performed apt-get update && apt-get install radvd -y and then created this file as /etc/radvd.conf. If all you want is a floating IP address with no static assignment – this will do it…

interface eth0
{
    AdvSendAdvert on;
    MinRtrAdvInterval 3;
    MaxRtrAdvInterval 10;
    prefix 2001:db8:123d:abc::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
   route ::/0 {
   };
};

However, this doesn’t give me the ability to statically assign IPv6 addresses to hosts. I found that a different allocation mode will do stable addressing, based on your MAC address – SLAAC with EUI-64 addresses (note there are some privacy issues with this, but I’m OK with them for now…). In this mode, assuming the prefix as before – 2001:db8:123d:abc:: – and a MAC address of de:ad:be:ef:01:23, your IPv6 address will be something like 2001:db8:123d:abc:dcad:beff:feef:0123 (ff:fe gets inserted into the middle of the MAC, and the universal/local bit of the first octet is flipped – hence de becoming dc), and this will be repeatably so – because you’re unlikely to change your MAC address (hopefully!!).
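The EUI-64 derivation is mechanical enough to sketch in a few lines of Python (a sketch of the rule, not anything dnsmasq actually runs): insert ff:fe into the middle of the MAC, and flip the universal/local bit of the first octet – so a MAC starting de: yields an interface ID starting dcad rather than dead:

```python
import ipaddress

def slaac_address(prefix: str, mac: str) -> str:
    """Derive the EUI-64 SLAAC address a host builds from its MAC address."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                               # flip the universal/local bit
    iid = octets[:3] + [0xFF, 0xFE] + octets[3:]    # insert ff:fe in the middle
    network = ipaddress.IPv6Network(prefix)
    return str(network[int.from_bytes(bytes(iid), "big")])

print(slaac_address("2001:db8:123d:abc::/64", "de:ad:be:ef:01:23"))
# → 2001:db8:123d:abc:dcad:beff:feef:123
```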

This SLAAC allocation mode is available in DNSMasq, which I’ve consumed before (in a Pi-Hole). To use this, I installed DNSMasq with apt-get update && apt-get install dnsmasq -y and then configured it as follows:

interface=eth0
listen-address=127.0.0.1
# DHCPv6 - Hurricane Electric Resolver and Google's
dhcp-option=option6:dns-server,[2001:470:20::2],[2001:4860:4860::8888]
# IPv6 DHCP scope
dhcp-range=2001:db8:123d:abc::, slaac

I decided to move from using my router as a DHCP server to using this same host, so I expanded that config as follows, based on several posts, but mostly centred around the man page (I’m happy to have this DNSMasq config improved if you’ve got any suggestions ;) )

# Stuff for DNS resolution
domain-needed
bogus-priv
no-resolv
filterwin2k
expand-hosts
domain=localnet
local=/localnet/
log-queries

# Global options
interface=eth0
listen-address=127.0.0.1

# Set these hosts as the DNS server for your network
# Hurricane Electric and Google
dhcp-option=option6:dns-server,[2001:470:20::2],[2001:4860:4860::8888]

# My DNS servers are:
server=1.1.1.1                # Cloudflare's DNS server
server=8.8.8.8                # Google's DNS server

# IPv4 DHCP scope
dhcp-range=192.168.1.10,192.168.1.210,12h
# IPv6 DHCP scope
dhcp-range=2001:db8:123d:abc::, slaac

# Record the DHCP leases here
dhcp-leasefile=/run/dnsmasq/dhcp-lease

# DHCPv4 Router
dhcp-option=3,192.168.1.254
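One deliberate detail in that config: the DHCPv4 pool stops at .210, so statically-addressed hosts like this Pi (.252) and the router (.254) can never collide with a lease. A quick check of that invariant in Python:

```python
import ipaddress

# The DHCPv4 pool from the dnsmasq config, and the Pi's own static address
start = ipaddress.ip_address("192.168.1.10")
end = ipaddress.ip_address("192.168.1.210")
static = ipaddress.ip_address("192.168.1.252")

print(start <= static <= end)   # False - the Pi sits safely outside the pool
```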

So, that’s what I’m doing now! Hope it helps you!

Late edit (2019-01-22): In issue 129 of the “Awesome Self Hosted Newsletter“, I found a post called “My New Years Resolution: Learn IPv6“… which uses a pfSense box and a Hurricane Electric tunnel too. Fab!

Header image is “www.GetIPv6.info decal” by “Phil Wolff” on Flickr and is released under a CC-BY-SA license. Used with thanks!

"Zenith Z-19 Terminal" from ajmexico on Flickr

Some things I learned this week while coding extensions to Ansible!

If you follow any of the content I post around the internet, you might have seen me asking questions about trying to get data out of azure_rm_*_facts into something that’s usable. I can’t go into why I needed that data yet (it’s a little project I’m working on), but the upshot is that trying to manipulate data using “set_fact” with jinja is *doable* but *messy*. In the end, I decided to hand it all off to a new ansible module I’m writing. So, here are the things I learned about this.

  1. There’s lots more documentation about writing a module (a plugin that lets you do stuff) than there is about writing filters (things that change text inline) or lookups (things that let you search other data stores). In the end, while I could have spent the time to figure out how better to write a filter or a lookup, it actually makes more sense in my context to hand a module all my data, and say “Parse this” and register the result than it would have done to have the playbook constantly check whether things were in other things. I still might have to do that, but… you know, for now, I’ve got the bits I want! :)
  2. I did start looking at writing a filter, and discovered that the “debugging advice” on the ansible site is all geared up for debugging modules rather than filters… but I did discover that modules execute on their target (e.g. WebHost01) while filters and lookups execute on the local machine. Why does this matter? Well…..
  3. While I was looking for documentation about debugging Ansible code, I stumbled over this page on debugging modules that makes it all look easy. Except, it’s only for debugging *MODULES* (very frustrating). Well, what does it actually mean? The modules get zipped up and sent to the host that will be executing the code, which means that with an extra environment variable set when you run your playbook (ANSIBLE_KEEP_REMOTE_FILES=1 – even if it’s going to be run on “localhost”), you get the combined output of the script placed into a path on your machine, which means you can debug that specific play. That doesn’t work for filters…
  4. SOO, I jumped into #ansible on Freenode and asked for help. They in turn couldn’t help me (it’s more about writing playbooks than writing filters, modules, etc), so they directed me to #ansible-devel, where I was advised to use a python library called “q” (Edit, same day: my friend @mohclips pointed me to this youtube video from 2003 of the guy who wrote q explaining about it. Thanks Nick! I learned something *else* about this library).
  5. Oh man, this is the motherlode. So, q makes life *VERY* easy. Assuming you’ve got some valid module code, all you need to do is add two lines – import q at the top, then wrap q() around anything you want recorded. This then dumps the output from each of the q(something) lines into /tmp/q for you to read at your leisure! (To be fair, I’d probably remove it after you’ve finished, so you don’t fill a disk :) )
  6. And that’s when I discovered that it’s actually easier to use q() for all my python debugging purposes than it is to follow the advice above about debugging modules. Yehr, it’s basically a load of print statements, so you don’t get to see stack traces, or read all the variables, and you don’t get to step through code to see why decisions were taken… but for the rubbish code I produce, it’s easily enough for me!
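q isn’t part of the standard library, but its core trick is small enough to sketch with stdlib only – append a repr of whatever you pass it to /tmp/q and hand the value straight back, so you can wrap it around any expression without changing behaviour. The function name and log path below mirror the real library; everything else is a stand-in of my own:

```python
import time

def q(value, _log="/tmp/q"):
    """Minimal stand-in for the q library: append a timestamped repr of
    the value to a log file, then return the value unchanged."""
    with open(_log, "a") as fh:
        fh.write("%.3f %r\n" % (time.time(), value))
    return value

# Wrap any expression you're suspicious of; the surrounding code is unaffected
facts = q({"vm_size": "Standard_B1s"})   # hypothetical data, just for illustration
```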

Header image is “Zenith Z-19 Terminal” by “ajmexico” on Flickr and is released under a CC-BY license. Used with thanks!

"LEGO Factory Playset" from Brickset on Flickr

Building Azure Environments in Ansible

Recently, I’ve been migrating my POV (proof of value) and POC (proof of concept) environment from K5 to Azure to be able to test vendor products inside Azure. I ran a few tests to build the environment using the native tools (the powershell scripts) and found that the Powershell way of delivering Azure environments seems overly complicated… particularly as I’m comfortable with how Ansible works.

To be fair, I also need to look at Terraform, but that isn’t what I’m looking at today :)

So, let’s start with the scaffolding. Any Ansible Playbook which deals with creating virtual machines needs to have some extra modules installed. Make sure you’ve got ansible 2.7 or later and the python azure library 2.0.0 or later (you can get both with pip for python).

Next, let’s look at the group_vars for this playbook.

This file has several pieces. We define the project settings (anything prefixed project_ is a project setting), including the prefix used for all resources we create (in this case “env01“), and a standard password used for all VMs we create (in this case “My$uper$ecret$Passw0rd“).

Next we define the standard images to load from the Marketplace. You can extend this with other images, these are just the “easiest” ones that I’m most familiar with (your mileage may vary). Next up is the networks to build inside the VNet, and lastly we define the actual machines we want to build. If you’ve got questions about any of the values we define here, just let me know in the comments below :)

Next, we’ll start looking at the playbook (this has been exploded out – the full playbook is also in the gist).

Here we start by pulling in the variables we might want to override, and we do this by reading system environment variables (ANSIBLE_PREFIX and BREAKGLASS) and using them if they’re set. If they’re not, use the project defaults, and if that hasn’t been set, use some pre-defined values… and then tell us what they are when we’re running the tasks (those are the debug: lines).

This block is where we create our “Static Assets” – individual items that we will be consuming later. This shows a clear win here over the Powershell methods endorsed by Microsoft – here you can create a Resource Group (RG) as part of the playbook! We also create a single Storage Account for this RG and a single VNET too.

These creation rules are not suitable for production use, as this defines an “Any-Any” Security group! You should tailor your security groups for your need, not for blanket access in!

This is where things start to get a bit more interesting – We’re using the “async/async_status” pattern here (and the rest of these sections) to start creating the resources in parallel. As far as I can tell, sometimes you’ll get a case where the async doesn’t quite get set up fast enough, then the async_status can’t track the resources properly, but re-running the playbook should be enough to sort that out, without slowing things down too much.

But what are we actually doing with this block of code? A UDR is a “User Defined Route” or routing table for Azure. Effectively, you treat each network interface as being plumbed directly to the router (none of this “same subnet broadcast” stuff works here!) so you can do routing at the router for all the networks.

By default there are some existing network routes (stuff to the internet flows to the internet, RFC1918 addresses are dropped with the exception of any RFC1918 addresses you have covered in your VNETs, and each of your subnets can reach each other “directly”). Adding a UDR overrides this routing table. The UDRs we’re creating here are applied at a subnet level, but currently don’t override any of the existing routes (they’re blank). We’ll start putting routes in after we’ve added the UDRs to the subnets. Talking of which….

Again, this block is not really suitable for production use, and assumes the VNET supernet of /8 will be broken down into several /24’s. In the “real world” you might deliver a handful of /26’s in a /24 VNET… or you might even have lots of disparate /24’s in the VNET which are then allocated exactly as individual /24 subnets… this is not what this model delivers but you might wish to investigate further!

Now that we’ve created our subnets, we can start adding the routing table to the UDR. This is a basic one – add a 0.0.0.0/0 route (internet access) from the “protected” network via the firewall. You can get a lot more specific than this – most people are likely to want to add the VNET range (in this case 10.0.0.0/8) via the firewall as well, except for this subnet (because otherwise, for example, 10.0.0.100 trying to reach 10.0.0.101 will go via the firewall too).

Without going too much into the intricacies of network architecture, if you are routing your traffic between subnets via the firewall, it’s probably better to get an appliance with more interfaces, so you can route traffic across the appliance, rather than hairpinning across a single interface, as this will halve your usable bandwidth (it’s currently capped at 1Gb/s – so 500Mb/s).

Having mentioned “The Internet” – let’s give our firewall a public IP address, and create the rest of the interfaces as well.

This script creates a public IP address by default for each interface unless you explicitly tell it not to (see lines 40, 53 and 62 in the group_vars file I rendered above). You could easily turn this around by changing the lines which contain this:

item.1.public is not defined or (item.1.public is defined and item.1.public == 'true')

into lines which contain this:

item.1.public is defined and item.1.public == 'true'
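The difference between those two Jinja expressions is purely the default you get when public isn’t set at all – the first treats “unset” as public, the second as private. A truth-table sketch in Python (the function names are mine, just for illustration):

```python
def public_by_default(public=None):
    # item.1.public is not defined or (item.1.public is defined and item.1.public == 'true')
    return public is None or public == 'true'

def private_by_default(public=None):
    # item.1.public is defined and item.1.public == 'true'
    return public is not None and public == 'true'

for value in (None, 'true', 'false'):
    print(value, public_by_default(value), private_by_default(value))
# None  -> True  / False   <- the only case where the two expressions differ
# true  -> True  / True
# false -> False / False
```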

OK, having done all that, we’re now ready to build our virtual machines. I’ve introduced a “Priority system” here – VMs with priority 0 go first, then 1, and 2 go last. The code snippet below is just for priority 0, but you can easily see how you’d extrapolate that out (and in fact, the full code sample does just that).

There are a few blocks here to draw attention to :) I’ve re-jigged them a bit here so it’s clearer to understand, but when you see them in the main playbook they’re a bit more compact. Let’s start with looking at the Network Interfaces section!

network_interfaces: |
  [
    {%- for nw in item.value.ports -%}
      '{{ prefix }}{{ item.value.name }}port{{ nw.subnet.name }}'
      {%- if not loop.last -%}, {%- endif -%} 
    {%- endfor -%}
  ]

In this part, we loop over the ports defined for the virtual machine. This is because one device may have 1 interface, or four interfaces. YAML is parsed to make a JSON variable, so here we can create a JSON variable, that when the YAML is parsed it will just drop in. We’ve previously created all the interfaces to have names like this PREFIXhostnamePORTsubnetname (or aFW01portWAN in more conventional terms), so here we construct a JSON array, like this: ['aFW01portWAN'] but that could just as easily have been ['aFW01portWAN', 'aFW01portProtect', 'aFW01portMGMT', 'aFW01portSync']. This will then attach those interfaces to the virtual machine.
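The loop above just builds a list of interface names from the port definitions; the equivalent in plain Python (the data values are lifted from the example names in the text):

```python
# A hypothetical VM definition shaped like the group_vars entries
item = {"value": {"name": "FW01",
                  "ports": [{"subnet": {"name": n}}
                            for n in ("WAN", "Protect", "MGMT", "Sync")]}}
prefix = "a"

# Same construction as the Jinja loop: PREFIX + hostname + "port" + subnet name
network_interfaces = ["%s%sport%s" % (prefix, item["value"]["name"], nw["subnet"]["name"])
                      for nw in item["value"]["ports"]]
print(network_interfaces)
# → ['aFW01portWAN', 'aFW01portProtect', 'aFW01portMGMT', 'aFW01portSync']
```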

Next up, custom_data. This section is sometimes known externally as userdata or config_disk. My code has always referred to it as a “Provision Script” – hence the variable name in the code below!

custom_data: |
  {%- if item.value.provision_script is defined and item.value.provision_script != '' -%}
    {%- include(item.value.provision_script) -%}
  {%- elif item.value.image.provision_script is defined and item.value.image.provision_script != '' -%}
    {%- include(item.value.image.provision_script) -%}
  {%- else -%}
    {{ omit }}
  {%- endif -%}

Let’s pick this one apart too. If we’ve defined a provisioning script file for the VM, include it, if we’ve defined a provisioning script file for the image (or marketplace entry), then include that instead… otherwise, pretend that there’s no “custom_data” field before you submit this to Azure.

One last quirk to Azure, is that some images require a “plan” to go with it, and others don’t.

plan: |
  {%- if item.value.image.plan is not defined -%}{{ omit }}{%- else -%}
    {'name': '{{ item.value.image.sku }}',
     'publisher': '{{ item.value.image.publisher }}',
     'product': '{{ item.value.image.offer }}'
    }
  {%- endif -%}

So, here we say “if we’ve not got a plan, omit the value being passed to Azure, otherwise use these fields we previously specified”. Weird, huh?

The very last thing we do in the script is to re-render the standard password we’ve used for all these builds, so that we can check them out!

Want to review this all in one place?

Here’s the link to the full playbook, as well as the group variables (which should be in ./group_vars/all.yml) and two sample userdata files (which should be in ./userdata) for an Ubuntu machine (using cloud-init) and one for a FortiGate Firewall.

All the other files in that gist (prefixes from 10-16 and 00) are for this blog post only, and aren’t likely to work!

If you do end up using this, please drop me a note below, or star the gist! That’d be awesome!!

Image credit: “Lego Factory Playset” from Flickr by “Brickset” released under a CC-BY license. Used with Thanks!

Defining Networks with Ansible

In my day job, I’m using Ansible to provision networks in OpenStack. One of the complaints I’ve had about the way I now define them is that the person implementing the network has to spell out all the network elements – the subnet size, DHCP pool, the addresses of the firewalls, and the names of those items. This works for a manual implementation process, but is seriously broken when you try to hand that over to someone else to implement. Most people just want something which says “Here is the network I want to implement – 192.0.2.0/24”… and let the system make it for you.

So, I wrote some code to make that happen. It’s not perfect, and it’s not what’s in production (we have lots more things I need to add for that!) but it should do OK with an IPv4 network.

Hope this makes sense!

---
- hosts: localhost
  vars:
  - networks:
      # Defined as a subnet with specific router and firewall addressing
      external:
        subnet: "192.0.2.0/24"
        firewall: "192.0.2.1"
        router: "192.0.2.254"
      # Defined as an IP address and CIDR prefix, rather than a proper network address and CIDR prefix
      internal_1:
        subnet: "198.51.100.64/24"
      # A valid smaller network and CIDR prefix
      internal_2:
        subnet: "203.0.113.0/27"
      # A tiny CIDR network
      internal_3:
        subnet: "203.0.113.64/30"
      # These two CIDR networks are unusable for this environment
      internal_4:
        subnet: "203.0.113.128/31"
      internal_5:
        subnet: "203.0.113.192/32"
      # A massive CIDR network
      internal_6:
        subnet: "10.0.0.0/8"
  tasks:
  # Based on https://stackoverflow.com/a/47631963/5738 with serious help from mgedmin and apollo13 via #ansible on Freenode
  - name: Add router and firewall addressing for CIDR prefixes < 30
    set_fact:
      networks: >
        {{ networks | default({}) | combine(
          {item.key: {
            'subnet': item.value.subnet | ipv4('network'),
            'router': item.value.router | default((( item.value.subnet | ipv4('network') | ipv4('int') ) + 1) | ipv4),
            'firewall': item.value.firewall | default((( item.value.subnet | ipv4('broadcast') | ipv4('int') ) - 1) | ipv4),
            'dhcp_start': item.value.dhcp_start | default((( item.value.subnet | ipv4('network') | ipv4('int') ) + 2) | ipv4),
            'dhcp_end': item.value.dhcp_end | default((( item.value.subnet | ipv4('broadcast') | ipv4('int') ) - 2) | ipv4)
          }
        }) }}
    with_dict: "{{ networks }}"
    when: item.value.subnet | ipv4('prefix') < 30
  - name: Add router and firewall addressing for CIDR prefixes = 30
    set_fact:
      networks: >
        {{ networks | default({}) | combine(
          {item.key: {
            'subnet': item.value.subnet | ipv4('network'),
            'router': item.value.router | default((( item.value.subnet | ipv4('network') | ipv4('int') ) + 1) | ipv4),
            'firewall': item.value.firewall | default((( item.value.subnet | ipv4('broadcast') | ipv4('int') ) - 1) | ipv4)
          }
        }) }}
    with_dict: "{{ networks }}"
    when: item.value.subnet | ipv4('prefix') == 30
  - debug:
      var: networks
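The defaulting the two set_fact tasks do with the ipv4 filter can be sketched in plain Python with the stdlib ipaddress module (a sketch of the logic, not code the playbook uses): router defaults to network+1, firewall to broadcast-1, and a DHCP pool only exists for prefixes shorter than /30:

```python
import ipaddress

def expand(subnet: str) -> dict:
    # strict=False normalises "an IP and prefix" like 198.51.100.64/24 to its network
    net = ipaddress.IPv4Network(subnet, strict=False)
    if net.prefixlen > 30:
        return {"subnet": str(net)}      # /31 and /32: unusable for this environment
    base, bcast = int(net.network_address), int(net.broadcast_address)
    out = {"subnet": str(net),
           "router": str(ipaddress.IPv4Address(base + 1)),
           "firewall": str(ipaddress.IPv4Address(bcast - 1))}
    if net.prefixlen < 30:               # a /30 has no addresses left for a pool
        out["dhcp_start"] = str(ipaddress.IPv4Address(base + 2))
        out["dhcp_end"] = str(ipaddress.IPv4Address(bcast - 2))
    return out

print(expand("192.0.2.0/24"))
# router comes out as 192.0.2.1, firewall as 192.0.2.254
```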

"Copying and Pasting from Stack Overflow" Spoof O'Reilly Book Cover

Just a little reminder (to myself) about changing the path of a git submodule

Sometimes, it’s inevitable (maybe? :) ), you’ll add a git submodule from the wrong URL… I mean, EVERYONE’S done that, right? … right? you lot over there, am I right?… SIGH.

In my case, I’m trying to make sure I always use the https URLs with my github repo, but sometimes I add the git URL instead. When you run git remote -v in the path, you’ll get something like:

origin git@github.com:your-org/your-repo.git (fetch)

instead of

origin https://github.com/your-org/your-repo (fetch)

which means that when someone tries to clone your repo, they’ll be asked to authenticate with their SSH keys for all the submodules. Not great.

Anyway, it should be easy enough – git creates a .gitmodules file in the repo root, so you should just be able to edit that file, and replace the git@ with https:// and the com: with com/… but what do you do next?

Thanks to this great Stack Overflow answer, I found you can just run these two commands after you’ve made that edit:

git submodule sync ; git submodule update --init --recursive --remote
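The URL edit itself is mechanical enough that you could script it rather than doing it by hand – a sketch in Python (nothing the post depends on, just the same git@…: → https://…/ substitution applied to each URL):

```python
import re

def https_url(url: str) -> str:
    """Rewrite an SSH-style git@github.com:org/repo.git URL to its https form."""
    return re.sub(r"^git@github\.com:", "https://github.com/", url)

print(https_url("git@github.com:your-org/your-repo.git"))
# → https://github.com/your-org/your-repo.git
```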

Isn’t Stack Overflow great?

Using inspec to test your ansible

Over the past few days I’ve been binge listening to the Arrested Devops podcast. In one of the recent episodes (“Career Change Into DevOps With Michael Hedgpeth, Annie Hedgpeth, And Megan Bohl (ADO102)“) one of the interviewees mentions that she got started in DevOps by using Inspec.

Essentially, inspec is a way of explaining “this is what my server must look like”, so you can then test these statements against a built machine… effectively letting you unit test your provisioning scripts.

I’ve already built a fair bit of my current personal project using Ansible, so I wasn’t exactly keen to re-write everything from scratch, but it did make me think that maybe I should have a common set of tests to see how close my server was to the hardening “Benchmark” guides from CIS… and that’s pretty easy to script in inspec, particularly as the tests in those documents list the “how to test” and “how to remediate” commands to execute.

These are in the process of being drawn up (so far, all I have is an inspec test saying “confirm you’re running on Ubuntu 16.04”… not very complex!!) but, from the looks of things, the following playbook would work relatively well!

---
- name: Make /testing path
  file:
    state: directory
    path: /testing
    owner: root
    group: root
- name: Copy tests to /testing
  copy:
    src: ../files/
    dest: /testing/
    owner: root
    group: root
- name: Ensure ruby is installed
  apt:
    name: "{{ item }}"
    state: present
  with_items:
  - ruby
  - ruby-dev
  - build-essential
  - libffi-dev
- name: Ensure inspec is installed
  gem:
    name: inspec
    state: present
    user_install: no
- name: Run inspec tests
  command: inspec exec /testing

Experiments with USBIP on Raspberry Pi

At home, I have a server on which I run my VMs and store my content (MP3/OGG/FLAC files I have ripped from my CDs, Photos I’ve taken, etc.) and I want to record material from FreeSat to play back at home, except the server lives in my garage, and the satellite dish feeds into my Living Room. I bought a TeVii S660 USB FreeSat decoder, and tried to figure out what to do with it.

I previously stored the server near where the feed comes in, but the running fan was a bit annoying, so it got moved… but then I started thinking – what if I ran a Raspberry Pi to consume the media there.

I tried running OpenElec, and then LibreElec, and while both would see the device, and I could even occasionally get *content* out of it, I couldn’t write quickly enough to the media devices attached to the RPi to actually record what I wanted to get from it. So, I resigned myself to the fact I wouldn’t be recording any of the Christmas Films… until I stumbled over usbip.

USBIP is a service which binds USB ports to a TCP port, and then lets you consume that USB port on another machine. I’ll discuss consuming the S660’s streams in another post, but the below DOES work :)

There are some caveats here. Because I’m using a Raspberry Pi, I can’t just bung on any old distribution, so I’m a bit limited here. I prefer Debian based images, so I’m going to artificially limit myself to these for now, but if I have any significant issues with these images, then I’ll have to bail on Debian based, and use something else.

  1. If I put on stock Raspbian Jessie, I can’t use usbip, because while Raspbian ships its own kernel with the right modules built in (usbip_host, usbip_core, etc.), it doesn’t ship the right userland tools to manipulate them.
  2. If I’m using a Raspberry Pi 3, there’s no supported version of Ubuntu Server which ships for it. I can use a flavour (e.g. Ubuntu Mate), but that uses the Raspbian kernel, which, as I mentioned before, is not shipping the right userland tools.
  3. If I use a Raspberry Pi 2, then I can use Stock Ubuntu, which ships the right tooling. Now all I need to do is find a CAT5 cable, and some way to patch it through to my network…

Getting the Host stood up

I found most of my notes on this via a wiki entry at Github but essentially, it boils down to this:

On your host machine, (where the USB port is present), run

sudo apt-get install linux-tools-generic
sudo modprobe usbip_host
sudo usbipd -D

This sets your host up to present its USB ports over the USBIP interface (there are caveats! I’ll cover them later!!).

You now need to find which ports you want to serve. Run this command to list the ports on your system:

lsusb

You’ll get something like this back:

Bus 001 Device 004: ID 9022:d662 TeVii Technology Ltd.
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. SMC9514 Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

And then you need to find which port the device thinks it’s attached to. Run this to see how usbip sees the world:

usbip list -l

This will return:

- busid 1-1.1 (0424:ec00)
unknown vendor : unknown product (0424:ec00)
- busid 1-1.3 (9022:d662)
unknown vendor : unknown product (9022:d662)

We want to share the TeVii device, which has the ID 9022:d662, and we can see that this is present as busid 1-1.3, so now we need to bind it to the usbip system, with this command:

usbip bind -b 1-1.3

OK, so now we’re presenting this to the system. Perhaps you might want to make it available on a reboot?

echo "usbip_host" >> /etc/modules

I also added @reboot /usr/bin/usbipd -D ; sleep 5 ; /usr/bin/usbip bind -b 1-1.3 to root’s crontab, but it should probably go into a systemd unit.
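As a sketch of what that systemd unit might look like (the unit name and file path are my own invention, and it’s untested – treat it as a starting point rather than a known-good config), saved as /etc/systemd/system/usbipd-export.service:

```ini
[Unit]
Description=Export USB device 1-1.3 over USB/IP
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStart=/usr/bin/usbipd -D
ExecStartPost=/bin/sleep 5
ExecStartPost=/usr/bin/usbip bind -b 1-1.3

[Install]
WantedBy=multi-user.target
```

You’d then enable it with systemctl enable usbipd-export.service, replacing the crontab entry.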

Getting the Guest stood up

All these actions are being performed as root. As before, let’s get the modules loaded in the kernel:

apt-get install linux-tools-generic
modprobe vhci-hcd

Now, we can try to attach the module over the wire. Let’s check what’s offered to us (this code example uses 192.0.2.1 but this would be the static IP of your host):

usbip list -r 192.0.2.1

This hands us back the list of offered devices:

Exportable USB devices
======================
- 192.0.2.1
1-1.3: TeVii Technology Ltd. : unknown product (9022:d662)
: /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.3
: (Defined at Interface level) (00/00/00)
: 0 - Vendor Specific Class / unknown subclass / unknown protocol (ff/01/01)

So, now all we need to do is attach it:

usbip attach -r 192.0.2.1 -b 1-1.3

Now I can consume the service from that device in tvheadend on my server. However, again, I need to make this persistent. So, let’s make sure the module is loaded on boot.

echo 'vhci-hcd' >> /etc/modules

And, finally, we need to attach the port on boot. Again, I’m using crontab, but should probably wrap this into a systemd service.

@reboot /usr/bin/usbip attach -r 192.0.2.1 -b 1-1.3

And then I had an attached USB device across my network!

Unfortunately, the throughput was a bit too low (due to silly ethernet-over-power adaptors) to make it work the way I wanted… but theoretically, if I had proper patching done in this house, it’d be perfect! :)

Interestingly, the day I finished this post off (after it’d sat in drafts since December), I spotted that one of the articles in Linux Magazine is “USB over the network with USB/IP”. Just typical! :D

What I Did Last Week (ending 2012-01-15)

Monday 9th: I had a frantic morning before work, moving the last few bits onto our bed before the decorations started. When I got to work, it was a quiet work day, which I was glad of after a hectic weekend. It was quiet in more ways than one – my bluetooth headset died, so I ordered a duplicate as a replacement. I nearly bought a Sony Ericsson LiveView at the same time as the price was nearly what I had previously considered paying, until I read the reviews! Had a couple of emails from my dad which were similar to the conversations I’d had with him Friday Night (see last week for details!) Sent a very emotional response that didn’t seem to make a dent. Never mind. Once Daniel had gone to bed, Jules and I curled up in the bombsite of our living room and watched a couple of programs before going to bed. I then finished “Daemon” 5*s, and I bought the sequel “Freedom TM” also by Daniel Suarez. I started reading “Cyberpunk Stories” by William King which I picked up today. While I was reading, Daniel sat up in his sleep and then fell over, Jules and I think he might be a sleep walker, which will be fun – even more cause for fitting stair gates! One final chat with Dave Lee about Powerline adaptors before going to sleep.

Tuesday 10th: I booked Wednesday off work, and plan to make the most of it. Forgot to set my Out Of Office messages before leaving for the day, so needed to dip back into my e-mail once I was home. Jules and I watched TV for a bit before going to bed, and I realised that I'd got about 6 tracks to submit to CCHits (three from a Google Alert I've got set up for CC-licensed music, one from a recommendation on StatusNet, and two that were played on the Crivins podcast); however, I'm staying offline this week in the evenings as much as possible, so they'll have to wait until I lift my self-imposed blockade. Posted the Powerline adaptors to Dave Lee, hope they help him out! Finished "Cyberpunk Stories" 2*s and then read "Freedom TM" 5*s, which is an amazing twist on the "Daemon" book… I think if the author ever collaborates with someone like Cory Doctorow, for the futurisms, or better yet, Charles Stross, for taking a scary concept and making it both funny and deeply understandable… well, let's just say I want to see what else comes from this author. While I was reading, Daniel sleep-sat-up again twice, bumping his head the second time around. Oh well, maybe he's just disturbed by all the decorating stuff.

Wednesday 11th: Day off!! Yey!! Early start, then off to Head-over-heels, a soft play centre near Handforth Dean. Daniel had his morning nap while I was driving us there, so Jules jumped out and did some shopping while I listened to the "How governments have tried to block Tor" video from 28C3, which a colleague had recommended. Ironic that I'd not heard of it, given that I'd done a presentation on Tor at the first OggCamp. Spent 2hrs at Head-over-heels, including getting lunch, then went to the Trafford Centre. Daniel had his afternoon nap en route, so I dropped Jules off at the Trafford Centre and carried on listening to the Tor talk. Very little I'd not heard before, but great to have it in context. Picked up the full Father Ted box set, plus the "X-Men: 1st Class" DVD, after having heard some great reviews. Daniel got a new pram book from Waterstones, which he then spent the whole rest of the time there flicking through. Jules picked up two new Lego board games, "Sunblock" and "Race 3000". I swear we've nearly got all of the Lego games now. Got home to find wet-paint walls in the dining room and wet skirting boards in the lounge, so Daniel (with Jules or me) spent the whole evening, except for dinner, in his room. Once he went to bed, Jules and I played a couple of games each of Sunblock and Race 3000, then Jules went to sleep. I bought three new books for my Kindle: "Beloved Weapon" by Jonathan A. Price, "The Windup Girl" by Paolo Bacigalupi and "Empire State" by Adam Christopher. I started, and am 38% through, "Beloved Weapon", and although there are a fair few gratuitous and graphic sex scenes, it's a pretty good superhero story thus far. While I've been typing this up, Dave Lee's been in touch to say that the Powerline Ethernet adaptors I sent him had arrived, and we did some diagnostics around why the throughput was low. At the same time, Daniel's been stirring a lot again. I've had to help him back down from sitting up twice already tonight, and that's not counting the times he's sorted himself out. Oh well. Today overall has been a good day.

Thursday 12th: Really rubbish night, with Daniel waking after sitting up and falling and banging his head, and then not settling for over an hour. Got into work to be told that while I'd been off, a serious issue had occurred with something I'd implemented (which didn't make sense, as we'd created accurate documentation based on the data I'd entered into the devices in question). Those two items together sent me into a bit of a spin and left me questioning myself for most of the day. Finished more-or-less on time. When I got home, Jules asked me to lower the mattress on the cot, as Daniel is getting proficient at pulling himself up on the side. Doing this meant I also fixed the under-cot drawer, which had been broken within a couple of weeks of us building it (before Daniel was born!). We played with Daniel until his bed time, and then once he was down, we had Chinese take-away and played Upwords until Jules was tired. After Jules went to sleep, I finished "Beloved Weapon". It had, frankly, a rubbish ending, and barely rated the 2*s I gave it. Personally, I think it paid more attention to the sex and physical relationships between the characters than it did to any background, non-physical relationships or plot. I probably wouldn't read anything else by this author. Started to read "Empire State", but only managed a chapter before I got too sleepy.

Friday 13th: Yet another disturbed night. Jules had promised to take care of Daniel all night, but about 2 hours after I'd fallen asleep, he woke up screaming. Jules couldn't calm him down and asked for some help. I went in and finished calming him down to just crying, while Jules went downstairs and got him some Calpol. After he'd taken it, he settled well, and Jules put him back down. He then woke up at about 6 and woke us both up. When I got into work, I was covering a colleague while he was on a customer visit, in addition to my normal accounts. One of my normal accounts scheduled a two-hour conference call starting at 12:00, and then at 16:00 (when I was due to be leaving) I got a call from my colleague's account asking me to implement an urgent change for him. So I ended up leaving 40 minutes late. When I got home, the work downstairs was all finished, so Jules had pushed the sofas out to the edges so Daniel could play, and for the first time in a week, we both sat down with him and played too. When he went to bed, we watched some TV and then I spoke to my brother on the phone for 30 minutes. Jules went to bed at 9:15 and I listened to the live Bugcast show from my phone (I'd not had my laptop out at all this week) until the end of the show, when Dave proposed doing a Google Hangout. Out came the laptop and headphones, and I ended up going to bed at about midnight. No reading tonight, straight to sleep – busy day tomorrow.

Saturday 14th: Up at 6:45, breakfast, and then I went out to get a radiator cover. When I got back, I built it and then loaded up the car for the tip. Jules' Mum and Dad arrived and started emptying our room into the dining room. When Daniel woke up, the work started in earnest… By 1pm, we'd got most of the stuff down we were ready for, so I took Daniel to his swimming lesson, via the tip, with a nap en route to the class. The lesson went well, but I cut my foot during it (banged it against the steps) and didn't notice until I got out and, while I was getting dressed, was bleeding all over the place. I swear, it looked like there had been a massacre in there! Asked the teacher for a plaster, filled out the accident form and went home. Daniel had his dinner, then while I put him to bed, Jules nipped out to pick up dinner from the supermarket. We had pizza and watched "Take Me Out" (a guilty pleasure), then the after-show follow-up on ITV2. It's funny how obvious it is that they must pre-record it weeks if not months in advance: a scandal broke out about the show after last week's episode, that one of the contestants of this game show used to be a prostitute… and she was completely cut out of the show, even though she's there in all the video clips and in the after-show, but they don't talk to her. Sad really. Early night.

Sunday 15th: Not only is my cut foot stinging like crazy, but the ankle on my other foot has gone gouty again. Aargh. Jules let me have a lie-in until 9am, then while I put Daniel down for his nap, Jules went shopping. When she got back, we all went to do the food shopping for the week, something we've not done together for months. Daniel was hungry while I sorted out paying for our purchases, so she fed him in the cafe, then when I got in there I went over all funny. Jules bought me a sandwich, then I went out to the car and felt really sick. Jules dropped me off at home and then drove Daniel around so he could have a sleep without disturbing me. When they got home I was feeling much better, so I put the shopping away while Jules sorted out dinner. After dinner, I bathed Daniel, we played with him before bottle and bed, and then Jules and I snuggled up on the sofa. We watched TV for an hour or so, and then went to bed. I've not read anything tonight, but I did catch up on some social network updates I'd missed. Tired though, so an early night for me!

What I Did Last Week (ending 2012-01-08)

Inspired by Dan Lynch’s “Weekly Rewind” series, I thought I’d try and document some of what’s happened to me over the past week… you never know, I might even be able to keep on doing these! :)

Monday 2nd: Bank Holiday. De-christmasified the house. Earlier than I'd have liked, but it was a compromise, as Jules wanted to strip the house on Boxing Day. Dave Lee from TheBugCast added the first CCHits.net "extra" show to his feed – a review of the tracks played on the site in 2011. It had been played on the live show on 30th Dec.

Tuesday 3rd: Back to work, and catching up with colleagues about what happened over the break. Trying and failing to compile Festival for CCHits. Called out… but not for anything sensible – just a license request for a customer – told them to get back in touch during the day.

Wednesday 4th: Discussed at length the differences between Access and Excel with my brother. Convinced I had the same discussion in 2004. He wants to start with a ToDo list which he can filter to show customers or management. Once he's figured that out, I'm going to talk to him about separating data from presentation in a web app with a database backend. Maybe! Bought and read "Boltman" by Eric Quinn Knowles. It's a little bit like "Kick Ass" vs. Scientology. I rate it 4*s. Started reading "Under the Amoral Bridge" by Gary A. Ballard, which I'd bought in November.

Thursday 5th: Fixed a Morse Code Keyboard for Android for a colleague (there were typos in the code for openbracket, and the equals and hyphen characters were switched) – the developer hasn't fixed issues since March 2010, so I've e-mailed the author to offer my patches, and failing that, I'll consider a fork. Realised I'd fallen very far behind on my Android development suite – aside from anything else, the version of Eclipse I've got installed needed updating! I read to the end of "Under the Amoral Bridge" (4*s) and discovered it's part 1 in a trilogy. Bought the compilation version of the trilogy, so I'm now reading book 2.

Friday 6th: Took down the decorations at work, then gladly handed over "On Call" for another week to a colleague, then discovered they were on leave. Cue frantic phone calls and texts to make sure he knew he was on call. At 3:30 I get asked if I could cover him for the night, as it's his wife's 40th birthday. How can I say no? No calls, fortunately! Listen live to TheBugCast. Dave and Caroline have streaming issues, and I get a call from my dad claiming his computer has been compromised, as he can't log into GMail. Prove there are no issues there, and believe that is the end of it… In the aftershow, I mention the Codecademy site, and then explain some of the concepts of JavaScript to another of the listeners, which is good :) End up going to bed at 2am.

Saturday 7th: Visit from Jules' uncle's brother to discuss getting some decorating done. He can start on Monday. Cue full-scale panic, as he's doing both downstairs rooms! All CDs and DVDs not in storage boxed up, moved upstairs and unpacked… now they need re-organising! All pictures, bottles, games, and Daniel's toys and books now in our bedroom! Argh! Daniel's first swimming lesson with the new teacher (new term, and the franchise we go to is growing – which is good!). The new teacher is nice and the class is still just 3 children and parents. Good stuff. While I'm in the swimming class, I get two emails from my dad. The first is a scattergun one, saying "my email has been compromised, if you get a dodgy mail, let me know", and the second is to check whether he has been compromised. As we're in full-scale panic, I've left that particular issue to my brother to deal with. I suspect drinking-related paranoia is at the core of this. Never mind! We finish the bulk of the moves before Daniel's bed time, and then Jules' Mum and Dad come around to babysit so we can enjoy a meal up the road. The restaurant isn't licensed, so neither of us can drink, and hasn't got card processing facilities yet. I nip out after I've finished to get enough money to pay. We'll definitely be back there! Early bed, but then I read all the rest of book 2 and most of book 3 of "The Bridge Chronicles".

Sunday 8th: Earlyish start. Finish moving the last furniture around for the decorating and start to re-wire the entertainment corner, including unpatching the server where the shows are generated for CCHits.net just before the shows are about to be run (stupid UTC offsets!), resulting in two stinking CRON mails from CCHits complaining about the lack of shows (repatched and run). Run Jules to Halfords to pick up her new bike with child seat. Home, and one last sprint around the house, then lunch, and out again! Jules to the shops and me to get Daniel to sleep before heading to a friend's new house for a tour, games and then dinner. Home at 5, Daniel in bed for 6:45, and the furniture we couldn't shift while he was awake away for 7:30. Books in bed for 8. I finish "…Chronicles" (5*s) and then, at the recommendation of @nybill, start "Daemon" by Daniel Suarez. At 11pm, stop reading (40% through the book) and start writing this review. 11:40, go to sleep!