Experiments with USBIP on Raspberry Pi

At home, I have a server on which I run my VMs and store my content (MP3/OGG/FLAC files I have ripped from my CDs, photos I’ve taken, etc.), and I want to record material from FreeSat to play back at home. The catch is that the server lives in my garage, while the satellite dish feeds into my living room. I bought a TeVii S660 USB FreeSat decoder, and tried to figure out what to do with it.

I previously stored the server near where the feed comes in, but the running fan was a bit annoying, so it got moved… but then I started thinking: what if I ran a Raspberry Pi to consume the media there?

I tried running OpenElec, and then LibreElec, and while both would see the device, and I could even occasionally get *content* out of it, I couldn’t write quickly enough to the media devices attached to the RPi to actually record what I wanted. So, I resigned myself to the fact I wouldn’t be recording any of the Christmas films… until I stumbled over usbip.

USBIP is a service which binds USB ports to a TCP port, and then lets you consume that USB port on another machine. I’ll discuss consuming the S660’s streams in another post, but the below DOES work :)

There are some caveats here. Because I’m using a Raspberry Pi, I can’t just bung on any old distribution, so I’m a bit limited. I prefer Debian-based images, so I’m going to artificially limit myself to those for now; if I hit any significant issues with these images, then I’ll have to bail on Debian-based distributions and use something else.

  1. If I put on stock Raspbian Jessie, I can’t use usbip: while it ships a kernel with the right modules built in (usbip_host, usbip_core, etc.), it doesn’t ship the userland tools to manipulate them.
  2. If I’m using a Raspberry Pi 3, there’s no supported version of Ubuntu Server which ships for it. I can use a flavour (e.g. Ubuntu Mate), but that uses the Raspbian kernel, which, as I mentioned before, is not shipping the right userland tools.
  3. If I use a Raspberry Pi 2, then I can use Stock Ubuntu, which ships the right tooling. Now all I need to do is find a CAT5 cable, and some way to patch it through to my network…

Getting the Host stood up

I found most of my notes on this via a wiki entry at Github but essentially, it boils down to this:

On your host machine (where the USB port is present), run:

sudo apt-get install linux-tools-generic
sudo modprobe usbip_host
sudo usbipd -D

This confirms that your host can present the USB ports over the USBIP interface (there are caveats! I’ll cover them later!!). Late edit: 2020-05-21 I never did write up those caveats, and now, two years later, I don’t recall what they were. Apologies.
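
As a quick sanity check that the daemon is actually up and listening (usbipd uses TCP port 3240 by default), you can run:

sudo ss -tlnp | grep 3240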

You now need to find which ports you want to serve. Run this command to list the ports on your system:

lsusb

You’ll get something like this back:

Bus 001 Device 004: ID 9022:d662 TeVii Technology Ltd.
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. SMC9514 Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

And then you need to find which port the device thinks it’s attached to. Run this to see how usbip sees the world:

sudo usbip list -l

This will return:

 - busid 1-1.1 (0424:ec00)
   unknown vendor : unknown product (0424:ec00)
 - busid 1-1.3 (9022:d662)
   unknown vendor : unknown product (9022:d662)

We want to share the TeVii device, which has the ID 9022:d662, and we can see that this is present as busid 1-1.3, so now we need to bind it to the usbip system with this command:

sudo usbip bind -b 1-1.3

OK, so now we’re presenting this to the system. Perhaps you might want to make it available on a reboot?

echo "usbip_host" >> /etc/modules

I also added @reboot /usr/bin/usbipd -D ; sleep 5 ; /usr/bin/usbip bind -b 1-1.3 to root’s crontab, but it should probably go into a systemd unit.
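
For what it’s worth, a minimal systemd unit might look something like this (a sketch only: the unit name is my own invention, the paths and busid are the ones from above, and it assumes usbip_host is already loaded via /etc/modules, so adjust to suit):

sudo tee /etc/systemd/system/usbipd-bind.service <<'EOF'
[Unit]
Description=USB/IP host daemon, binding busid 1-1.3
After=network.target

[Service]
Type=forking
ExecStart=/usr/bin/usbipd -D
ExecStartPost=/bin/sleep 5
ExecStartPost=/usr/bin/usbip bind -b 1-1.3

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now usbipd-bind.service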

Getting the Guest stood up

All these actions are being performed as root. As before, let’s get the modules loaded in the kernel:

apt-get install linux-tools-generic
modprobe vhci-hcd

Now, we can try to attach the module over the wire. Let’s check what’s offered to us (this code example uses 192.0.2.1 but this would be the static IP of your host):

usbip list -r 192.0.2.1

This hands us back the list of offered devices:

Exportable USB devices
======================
- 192.0.2.1
        1-1.3: TeVii Technology Ltd. : unknown product (9022:d662)
            : /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.3
            : (Defined at Interface level) (00/00/00)
            :  0 - Vendor Specific Class / unknown subclass / unknown protocol (ff/01/01)

So, now all we need to do is attach it:

usbip attach -r 192.0.2.1 -b 1-1.3
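
If you want to check the attach took, usbip can list the devices it has imported, and the remote device should now also appear in lsusb:

usbip port
lsusb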

Now I can consume the service from that device in tvheadend on my server. However, again, I need to make this persistent. So, let’s make sure the module is loaded on boot.

echo 'vhci-hcd' >> /etc/modules

And, finally, we need to attach the port on boot. Again, I’m using crontab, but should probably wrap this into a systemd service.

@reboot /usr/bin/usbip attach -r 192.0.2.1 -b 1-1.3
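
Again, a systemd unit would be tidier than cron; a sketch (the unit name is invented, the IP and busid are the ones from above, and we’re still running as root):

tee /etc/systemd/system/usbip-attach.service <<'EOF'
[Unit]
Description=Attach remote USB/IP device 1-1.3 from 192.0.2.1
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/usbip attach -r 192.0.2.1 -b 1-1.3

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now usbip-attach.service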

And then I had an attached USB device across my network!

Unfortunately, the throughput was a bit too low (due to silly ethernet-over-power adaptors) to make it work the way I wanted… but theoretically, if I had proper patching done in this house, it’d be perfect! :)

Interestingly, the day I finished this post off (after it’d sat in drafts since December), I spotted that one of the articles in Linux Magazine is “USB over the network with USB/IP”. Just typical! :D

A brief guide to using vagrant-aws

CCHits was recently asked to move its media to another host, and while we were doing that we noticed that many of the Monthly shows were broken in one way or another…

Cue a massive rebuild attempt!

We already have a “ShowRunner” script, which we use with a simple Vagrant machine. I knew you can use other hypervisor “providers” with Vagrant, and I used to use AWS to build the shows, so why not wrap the two parts together?

Firstly, I installed the vagrant-aws plugin:

vagrant plugin install vagrant-aws

Next I amended my Vagrantfile with the vagrant-aws values mentioned in the plugin readme:

Vagrant.configure(2) do |config|
  config.vm.provider :aws do |aws, override|
    override.vm.box = "ShowMaker"
    override.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
    aws.tags = { 'Name' => 'ShowMaker' }
    
    # AWS Credentials:
    aws.access_key_id = "DECAFBADDECAFBADDECAF"
    aws.secret_access_key = "DeadBeef1234567890+AbcdeFghijKlmnopqrstu"
    aws.keypair_name = "TheNameOfYourSSHKeyInTheEC2ManagementPortal"
    
    # AWS Location:
    aws.region = "us-east-1"
    aws.region_config "us-east-1", :ami => "ami-c29e1cb8" # If you pick another region, use the relevant AMI for that region
    aws.instance_type = "t2.micro" # Scale accordingly
    aws.security_groups = [ "sg-1234567" ] # Note this *MUST* be an SG ID not the name
    aws.subnet_id = "subnet-decafbad" # Pick one subnet from https://console.aws.amazon.com/vpc/home
    
    # AWS Storage:
    aws.block_device_mapping = [{
      'DeviceName' => "/dev/sda1",
      'Ebs.VolumeSize' => 8, # Size in GB
      'Ebs.DeleteOnTermination' => true,
      'Ebs.VolumeType' => "gp2", # General purpose SSD - you might want something faster
    }]
    
    # SSH:
    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = "/home/youruser/.ssh/id_rsa" # or the SSH key you've generated
    
    # /vagrant directory - thanks to https://github.com/hashicorp/vagrant/issues/5401
    override.nfs.functional = false # It tries to use NFS - use RSYNC instead
  end
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision "shell", path: "./run_setup.sh"
  config.vm.provision "shell", run: "always", path: "./run_showmaker.sh"
end
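
With that in place, you tell Vagrant to use the AWS provider when bringing the machine up:

vagrant up --provider=aws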

Of course, if you try to put this into your Github repo, it’s going to get pillaged and you’ll be spending lots of money on Monero mining very quickly… so instead, I spotted this, which you can do to separate out your credentials:

At the top of the Vagrantfile, add these two lines:

require_relative 'settings_aws.rb'
include SettingsAws

Then, replace the lines where you specify a “secret”, like this:

    aws.access_key_id = AWS_ACCESS_KEY
    aws.secret_access_key = AWS_SECRET_KEY

Lastly, create a file “settings_aws.rb” in the same path as your Vagrantfile, that looks like this:

module SettingsAws
    AWS_ACCESS_KEY = "DECAFBADDECAFBADDECAF"
    AWS_SECRET_KEY = "DeadBeef1234567890+AbcdeFghijKlmnopqrstu"
end

This file then can be omitted from your git repository using a .gitignore file.
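
For example:

echo 'settings_aws.rb' >> .gitignore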

Running Streisand to provide VPN services on my home server

A few months ago I was a guest on The Ubuntu Podcast, where I mentioned that I use Streisand to terminate my VPN connections. I waffled and blathered a bit about how I set it up, but in the end it comes down to this:

  1. Install Virtualbox on my Ubuntu server. Include the “Ext Pack”.
  2. Install Vagrant on my Ubuntu server.
  3. Clone the Streisand Github repository to my Ubuntu server.
  4. Enter that cloned repository, and edit the Vagrantfile as follows:
    1. Add the line “config.vm.boot_timeout = 65535” after the one starting “config.vm.box”.
    2. Change the streisand.vm.hostname line to be an appropriate hostname for my network, and add on the following line (replace “eth0” with the attached interface on your network and “192.0.2.1” with an unallocated static IP address from your network):
      streisand.vm.network "public_network", bridge: "eth0", ip: "192.0.2.1", :use_dhcp_assigned_default_route => false
    3. Add a “routing” line, as follows (replace 192.0.2.254 with your router IP address):
      streisand.vm.provision "shell", run: "always", inline: "ip route add 0.0.0.0/1 via 192.0.2.254 ; ip route add 128.0.0.0/1 via 192.0.2.254"
    4. Comment out the line “streisand_client_test => true”
    5. Amend the line “streisand_ipv4_address” to reflect the IP address you’ve put above in 4.2.
    6. Remove the block starting “config.vm.define streisand-client do |client|”
  5. Run “vagrant up” in that directory to start the virtual machine. Once it’s finished starting, there will be a folder called “Generated Docs” – open the .html file to see what credentials you must use to access the server. Follow its instructions.
  6. Once it’s completed, you should open ports on your router to the IP address you’ve specified. Typically, at least, UDP/500 and UDP/4500 for the IPsec service, UDP/636 for the OpenVPN service and TCP/4443 for the OpenConnect service.

Running Google MusicManager for two profiles

I’ve previously made mention of my addiction to Google Play Music… but I was called out recently, and asked about the script I used at the time. I’m sorry to say that I have had some issues with it, and instead, have resorted to using X forwarding. Here’s how I do it.

I create a user account for that other person (note, GMM will only let you upload to 3 accounts using this method. For any more, you’ll need a virtual machine!).

I then create an SSH public/private key with no passphrase.

ssh-keygen -b 2048 -N "" -C "$(whoami)@localhost" -f ~/.ssh/gmm.id_rsa

I write the public key into that new user’s .ssh/authorized_keys, by running:

ssh-copy-id -i ~/.ssh/gmm.id_rsa bloggsf@localhost

I will be prompted for the password of that account.

Finally, I create a script along the lines of the sketch below (a minimal sketch: the google-musicmanager binary name is an assumption, so check what your package installs):
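
#!/bin/bash
# Sketch: run the second user's Google MusicManager over SSH with X forwarding,
# so its window appears in my session and uploads happen under their account.
# NB: "google-musicmanager" is an assumed binary name - check your installation.
ssh -X -i ~/.ssh/gmm.id_rsa bloggsf@localhost google-musicmanager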

This is then added to the startup tasks of my headless-but-running-a-desktop machine.

One to read: “SKIP grep, use AWK” / “Awk Tutorial, part {1,2,3,4}”

Do you use this pattern in your sh/bash/zsh/etc-sh scripts?

cat somefile | grep 'some string' | awk '{print $2}'

If so, you can replace that as follows:

cat somefile | awk '/some string/ {print $2}'

Or how about this?

grep -v 'something' < somefile | awk '{print $0}'

Try this:

awk '! /something/ {print $0}' < somefile

Ooo OK, how about if you want to get all actions performed by users where the ISO-formatted date (Y-m-d) is the first day of the month – but where you don’t also want to match January (unless you’re talking about the first of January)?

# echo 'BLOGGSF 2001-01-23 SOME_ACTION' | awk '$2 ~ /-01$/ {print $1, $3}'
(no output - the date does not end in -01)
# echo 'BLOGGSF 2002-02-01 SOME_ACTION' | awk '$2 ~ /-01$/ {print $1, $3}'
BLOGGSF SOME_ACTION

This is so cool! Thanks to the tutorials “SKIP grep, use AWK” and the follow-up tutorials starting here…

Today I learned… Cloud-init doesn’t like you repeating the same things

Because of the templates I was building in my post “Today I learned… Ansible Include Templates”, I thought you could repeat the same sections over again. Here’s a snippet of something like what I’d built (after combining lots of templates together):

Note this is a non-working code sample!


#cloud-config
packages:
- iperf
- git

write_files:
- content: {% include 'files/public_key.j2' %}
  path: /root/.ssh/authorized_keys
  owner: root:root
  permissions: '0600'
- content: {% include 'files/private_key.j2' %}
  path: /root/.ssh/id_rsa
  owner: root:root
  permissions: '0600'

packages:
- byobu

write_files:
- content: |
    #!/bin/bash
    git clone {{ test_scripts }} /root/iperf_scripts
    bash /root/iperf_scripts/run_test.sh
  path: /root/run_test
  owner: root:root
  permissions: '0700'

runcmd:
- /root/run_test

I’d get *bits* of it to run – basically, the last write_files block, the last packages list, and the runcmd… but not all of it.

Turns out, cloud-init doesn’t reassemble repeated fragments for you. Instead, you need to merge them yourself, so that all the write_files items live under a single write_files key, and all the packages items under a single packages key.

Which makes sense when you think about what it’s doing: each top-level line defines a key, and if you define the same key again, only the last definition is kept!
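
So the working equivalent of the snippet above merges each section under a single key – something like this (the same content as before, just combined; still a sketch):

#cloud-config
packages:
- iperf
- git
- byobu

write_files:
- content: {% include 'files/public_key.j2' %}
  path: /root/.ssh/authorized_keys
  owner: root:root
  permissions: '0600'
- content: {% include 'files/private_key.j2' %}
  path: /root/.ssh/id_rsa
  owner: root:root
  permissions: '0600'
- content: |
    #!/bin/bash
    git clone {{ test_scripts }} /root/iperf_scripts
    bash /root/iperf_scripts/run_test.sh
  path: /root/run_test
  owner: root:root
  permissions: '0700'

runcmd:
- /root/run_test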

Today I learned… that you can look at the “cloud-init” files on your target server…

Today I have been debugging why my Cloud-init scripts weren’t triggering on my Openstack environment.

I realised that something was wrong when I tried to use the noVNC console[1] with a password I’d set… no luck. So, next I ran a command to review the console logs[2], and saw a message (now, sadly, long gone – so I can’t even include it here!) suggesting there was an issue parsing my YAML file. Uh oh!

I’m using Ansible’s os_server module, and using templates to complete the userdata field, which in turn gets populated as cloud-init scripts… so I had two ways to debug this: prefix my Ansible playbook with a few debug commands (which can get messy), or SSH into the box and look through the logs. I knew I could SSH in, so cloud-init had at least partially fired – it just wasn’t parsing what I’d submitted. I had a quick look around, and found a post which mentioned debugging cloud-init. It mentioned that there’s a path (/var/lib/cloud/instances/$UUID/) you can mess around in, removing some files to “fool” cloud-init into thinking it’s not been run… but, I reasoned, why not just see what’s there?
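
In practice, that’s as simple as this (run on the target server, as root; the $UUID directory name will be your instance’s ID):

ls /var/lib/cloud/instances/
ls /var/lib/cloud/instances/$UUID/
less /var/lib/cloud/instances/$UUID/user-data.txt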

And in there was the motherlode – user-data.txt… bingo.

In the jinja2 template I was using to populate the userdata, I’d referenced another file, again using a template. It looks like an included template needs a newline at the end; otherwise the next line of the parent template runs straight on from it.

Whew!

This does concern me a little, as I had previously been using this stanza to “simply” change the default user password to something a little less complicated:


#cloud-config
ssh_pwauth: True
chpasswd:
  list: |
    ubuntu:{{ default_password }}
  expire: False

But now that I look at the documentation, I realise you can also specify that as a pre-hashed value (in which case, you would suffix the default_password item above with |password_hash('sha512')), which makes it all better again!
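
In other words, something like this (a sketch, reusing the same template variable as above):

#cloud-config
ssh_pwauth: True
chpasswd:
  list: |
    ubuntu:{{ default_password|password_hash('sha512') }}
  expire: False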

[1] Running openstack --os-cloud cloud_a console url show servername gives you a URL to visit that has an HTML5-based VNC-ish client. Note that “cloud_a” and “servername” should be replaced by your clouds.yml reference and the server name or server ID you want to connect to.
[2] Like before, openstack --os-cloud cloud_a console log show servername gives you the output of the boot sequence (e.g. dmesg plus the normal startup commands, and finally, cloud-init). It can be useful. Equally, it’s logs… which means there’s a lot to wade through!

One to read: “Test Driven Development (TDD) for networks, using Ansible”

Thanks to my colleague Simon (@sipart on Twitter), I spotted this post (and its companion Github repository) which explains how to do test-driven development in Ansible.

Essentially, you create two roles – one to test (the author referred to it as “validate”) and one to actually do the thing you want (in the author’s case, “add_vlan”).

In the testing role, you’d have the following layout:

/path/to/roles/testing/tasks/main.yml
/path/to/roles/testing/tasks/SOMEFEATUREtest.yml

In the main.yml file, you have a simple stanza:

---
- name: Include all the test files
  include: "{{ outer_item }}"
  with_fileglob: "/path/to/roles/testing/tasks/*test.yml"
  loop_control:
    loop_var: outer_item

I’m sure that “with_fileglob” line could be improved to not actually need a full path… anyway

Then in your YourFeature_test.yml file, you do things like this:

---
- name: "Pseudocode in here. Use real modules for your testing!!"
  get_vlan_config: filter_for=needle_vlan
  register: haystack_var

- assert: that="'needle_vlan' in haystack_var"

When you run the play of the role the first time, the response will be “failed” (because needle_vlan doesn’t exist yet). Next, do the “real” play of the role (in the author’s case, add_vlan), which creates the VLAN. Then re-run the test role; the response should now be “ok”.

I’d probably script this so that it goes:

      reset-environment set_testing=true (maybe create a random little network)
      test
      run-action
      test
      reset-environment set_testing=false

The benefit to doing it that way is that you “know” your tests aren’t running if the environment doesn’t have the “set_testing” thing in place, you get to run all your tests in a “clean room”, and then you clear it back down again afterwards, leaving it clear for the next pass of your automated testing suite.
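
As a rough shell wrapper (the playbook names here are invented for illustration):

#!/bin/bash
set -e
# Stand up a throwaway test network, show the test failing first,
# make the change, prove the test now passes, then tear it all down.
ansible-playbook reset-environment.yml -e set_testing=true
ansible-playbook test.yml || echo "Failed, as expected - the VLAN isn't there yet"
ansible-playbook add_vlan.yml
ansible-playbook test.yml
ansible-playbook reset-environment.yml -e set_testing=false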

Fun!

Today I learned… Ansible Include Templates

I am building Openstack servers with the Ansible os_server module. One of its fields (userdata) will accept a very long string. Typically, I end up with a giant blob of unreadable build script in this field…

Today I learned that I can use this:

---
- name: "Create Server"
  os_server:
    name: "{{ item.value.name }}"
    state: present
    availability_zone: "{{ item.value.az.name }}"
    flavor: "{{ item.value.flavor }}"
    key_name: "{{ item.value.az.keypair }}"
    nics: "[{%- for nw in item.value.ports -%}{'port-name': '{{ ProjectPrefix }}{{ item.value.name }}-Port-{{nw.network.name}}'}{%- if not loop.last -%}, {%- endif -%} {%- endfor -%}]" # Ignore this line - it's complicated for a reason
    boot_volume: "{{ ProjectPrefix }}{{ item.value.name }}-OS-Volume" # Ignore this line also :)
    terminate_volume: yes
    volumes: "{%- if item.value.log_size is defined -%}[{{ ProjectPrefix }}{{ item.value.name }}-Log-Volume]{%- else -%}{{ omit }}{%- endif -%}"
    userdata: "{% include 'templates/userdata.j2' %}"
    auto_ip: no
    timeout: 65535
    cloud: "{{ cloud }}"
  with_dict: "{{ Servers }}"

This file (/path/to/ansible/playbooks/servers.yml) is referenced by my play.yml (/path/to/ansible/play.yml) via an include, so the template reference there is in my templates directory (/path/to/ansible/templates/userdata.j2).

That template can also then reference other template files itself (using {% include 'templates/some_other_file.extension' %}) so you can have nicely complex userdata fields with loads and loads of detail, and not make the actual play complicated (or at least, no more than it already needs to be!)

Some notes from Ansible mentoring

Last night, I met up with my friend Tim Dobson to talk about Ansible. I’m not an expert, but I’ve done a lot of Ansible recently, and he wanted some pointers.

He already had some general knowledge, but wanted some pointers on “other things you can do with Ansible”, so here are a couple of the things we did.

  • If you want to set certain things to happen as “Production” and other things to happen as “Pre-production” you can either have two playbooks (e.g. pre-prod.yml versus prod.yml) which call certain features… OR use something like this:

    ---
    - hosts: localhost
      tasks:
        - set_fact:
            my_run_state: "{% if lookup('env', 'runstate') == '' %}{{ default_run_state|default('prod') }}{% else %}{{ lookup('env', 'runstate')|lower() }}{% endif %}"
        - debug: msg="Doing prod"
          when: my_run_state == 'prod'
        - debug: msg="Doing something else"
          when: my_run_state != 'prod'

    With this, you can define a default run state (prod), override it with a group or host var (if you have, for example, a staging service or proof of concept space), or use your Environment variables to do things. In the last case, you’d execute this as follows:

    runstate=preprod ansible-playbook site.yml

  • You can tag almost every action in your plays. Here are some (contrived) examples:


    ---
    - name: Get facts from your hosts
      tags: configure
      hosts: all

    - name: Tell me all the variable data you've collected
      tags: dump
      hosts: localhost
      tasks:
        - name: Show data
          tags: show
          debug:
            var: item
          with_items: "{{ hostvars }}"

    When you then run

    ansible-playbook test.yml --list-tags

    You get

    playbook: test.yml

      play #1 (all): Get facts from your hosts      TAGS: [configure]
          TASK TAGS: [configure]

      play #2 (localhost): Tell me all the variable data you've collected   TAGS: [dump]
          TASK TAGS: [dump, show]

    Now you can run ansible-playbook test.yml -t configure or ansible-playbook test.yml --skip-tags configure

    To show how useful this can be, here’s the output from the “--list-tags” I’ve got on a project I’m doing at work:
    playbook: site.yml

      play #1 (localhost): Provision A-Side Infrastructure  TAGS: [Functional_Testing,A_End]
          TASK TAGS: [A_End, EXCLUDE_K5_FirewallManagers, EXCLUDE_K5_Firewalls, EXCLUDE_K5_Networks, EXCLUDE_K5_SecurityGroups, EXCLUDE_K5_Servers, Functional_Testing, K5_Auth, K5_FirewallManagers, K5_Firewalls, K5_InterProjectLinks, K5_Networks, K5_SecurityGroups, K5_Servers]

      play #2 (localhost): Provision B-Side Infrastructure  TAGS: [Functional_Testing,B_End]
          TASK TAGS: [B_End, EXCLUDE_K5_Firewalls, EXCLUDE_K5_Networks, EXCLUDE_K5_SecurityGroups, EXCLUDE_K5_Servers, Functional_Testing, K5_Auth, K5_FirewallManagers, K5_Firewalls, K5_InterProjectLinks, K5_Networks, K5_SecurityGroups, K5_Servers]

      play #3 (localhost): Provision InterProject Links - Part 1    TAGS: [Functional_Testing,InterProjectLink]
          TASK TAGS: [EXCLUDE_K5_InterProjectLinks, Functional_Testing, InterProjectLink, K5_InterProjectLinks]

      play #4 (localhost): Provision InterProject Links - Part 2    TAGS: [Functional_Testing,InterProjectLink]
          TASK TAGS: [EXCLUDE_K5_InterProjectLinks, Functional_Testing, InterProjectLink, K5_InterProjectLinks]

      play #5 (localhost): Provision TPT environment        TAGS: [Performance_Testing]
          TASK TAGS: [EXCLUDE_K5_FirewallManagers, EXCLUDE_K5_Firewalls, EXCLUDE_K5_Networks, EXCLUDE_K5_SecurityGroups, EXCLUDE_K5_Servers, K5_Auth, K5_FirewallManagers, K5_Firewalls, K5_InterProjectLinks, K5_Networks, K5_SecurityGroups, K5_Servers, Performance_Testing, debug]

    This then means that if I get a build fail part-way through, or if I’m debugging just a particular part, I can run this: ansible-playbook site.yml -t Performance_Testing --skip-tags EXCLUDE_K5_Firewalls,EXCLUDE_K5_SecurityGroups,EXCLUDE_K5_Networks