"Shipping Containers" by "asgw" on Flickr

Creating my first Docker containerized LEMP (Linux, nginx, MariaDB, PHP) application

Want to see what I built without reading the whys and wherefores? The git repository with all the docker-compose goodness is here!

Late edit 2020-01-16: The fantastic Jerry Steel, my co-host on The Admin Admin podcast looked at what I wrote, and made a few suggestions. I’ve updated the code in the git repo, and I’ll try to annotate below when I’ve changed something. If I miss it, it’s right in the Git repo!

One of the challenges I set myself this Christmas was to learn enough about Docker to containerise an arbitrary PHP application, the sort of thing that I would previously have misused Vagrant to contain.

Just before I started down this rabbit hole, I spoke to my Aunt about some family tree research my father had left behind after he died, and how I wished I could easily share the old tree with her (I organised getting her a Chromebook a couple of years ago, after fighting with doing remote support for years on Linux and Windows laptops). In the end, I found a web application for genealogical research called HuMo-gen, that is a perfect match for both projects I wanted to look at.

HuMo-gen was first created in 1999, with a PHP version being released in 2005. It uses MySQL or MariaDB as the database engine. I was reasonably confident that I could have created a Vagrantfile to deliver this on my home server, but I wanted to try something new. I wanted to use the “standard” building blocks of Docker and Docker-Compose, and some common containers, to make my way around learning Docker.

I started by looking for some resources on how to build a Docker container. Much of the guidance I’d found was to use Docker-Compose, as this allows you to stand several components up at the same time!

In contrast to how Vagrant works (which is basically a CLI wrapper to many virtual machine services), Docker isolates resources for a single process that runs on a machine. Where in Vagrant you might run several processes on one machine (perhaps, in this instance, nginx, PHP-FPM and MariaDB), with Docker you’re encouraged to run each “service” as its own container, and link them together with an overlay network. It’s possible to do the same with Vagrant too, but you’ll end up with an awful lot of VM overhead to separate out each piece.

So, I first needed to select my services. My initial line-up was:

  • MariaDB
  • PHP-FPM
  • Apache httpd (later replaced by nginx)

I was able to find official Docker images for PHP, MariaDB and httpd, but after extensive tweaking, I couldn’t make the httpd image talk the way I wanted it to with the PHP image. Bowing to what now seems to be conventional wisdom, I swapped out the httpd service for nginx.

One of the stumbling blocks for me, particularly early on, was how to build several different Dockerfiles (these are basically the instructions for the container you’re constructing). Here is the basic outline of how to do this:

version: '3'
services:
  yourservice:
    build:
      context: .
      dockerfile: relative/path/to/Dockerfile

In this docker-compose.yml file, I tell it that to create the yourservice service, it needs to build the docker container, using the file in ./relative/path/to/Dockerfile. This file in turn contains an instruction to import an image.

Each service stacks on top of each other in that docker-compose.yml file, like this:

version: '3'
services:
  service1:
    build:
      context: .
      dockerfile: service1/Dockerfile
    image: localhost:32000/service1
  service2:
    build:
      context: .
      dockerfile: service2/Dockerfile
    image: localhost:32000/service2

Late edit 2020-01-16: This previously listed Dockerfile/service1, however, much of the documentation suggested that Docker gets quite opinionated about the file being called Dockerfile. While docker-compose can work around this, it’s better to stick to tradition :) The docker-compose.yml files below have also been adjusted accordingly. I’ve also added an image: somehost:1234/image_name line to help with tagging the images for later use. It’s not critical to what’s going on here, but I found it useful with some later projects.

To allow containers to see ports between themselves, you add the expose: command in your docker-compose.yml, and to allow that port to be visible from the “outside” (i.e. to the host and upwards), use the ports: command listing the “host” port (the one on the host OS), then a colon and then the “target” port (the one in the container), like these:

version: '3'
services:
  service1:
    build:
      context: .
      dockerfile: service1/Dockerfile
    image: localhost:32000/service1
    expose:
    - 12345
  service2:
    build:
      context: .
      dockerfile: service2/Dockerfile
    image: localhost:32000/service2
    ports:
    - 8000:80

Now, let’s take a quick look into the Dockerfiles. Each “statement” in a Dockerfile adds a new “layer” to the image. For local operations, this probably isn’t a problem, but when you’re storing these images on a hosted provider, you want to keep these images as small as possible.
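
As an aside, once an image is built you can see exactly which layers it’s made up of, and how big each one is. For example, using the tag I give the PHP-FPM image further down (any image name will do):

docker history localhost:32000/phpfpm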

I built a Database Dockerfile, which is about as small as you can make it!

FROM mariadb:10.4.10

Yep, one line. How cool is that? In the docker-compose.yml file, I invoke this, like this:

version: '3'
services:
  db:
    build:
      context: .
      dockerfile: mariadb/Dockerfile
    image: localhost:32000/db
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: a_root_pw
      MYSQL_USER: a_user
      MYSQL_PASSWORD: a_password
      MYSQL_DATABASE: a_db
    expose:
      - 3306

OK, so this one is a bit more complex! I wanted it to build my Dockerfile, which is “mariadb/Dockerfile”. I wanted it to restart the container whenever it failed (which hopefully isn’t that often!), and I wanted to inject some specific environment variables into the container – the root and user passwords, a user account and a database name. Initially I was having some issues where it wasn’t creating the database with these credentials, but I think that’s because those variables are only read when the image initialises a brand new, empty data volume – I wasn’t “building” a new database, I was just re-using an existing one. I also expose the MariaDB (MySQL) port, 3306, to the other containers in the docker-compose.yml file.
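
If you hit the same issue (and you don’t mind losing whatever is in the database), tearing the volumes down and rebuilding forces MariaDB to initialise again with those environment variables. Something like this, run from the directory holding docker-compose.yml, should do it:

docker-compose down --volumes   # stop and remove the containers and their data volumes
docker-compose up --build -d    # rebuild the images and start everything again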

Let’s take a look at the next part! PHP-FPM. Here’s the Dockerfile:

FROM php:7.4-fpm
RUN docker-php-ext-install pdo pdo_mysql
ADD --chown=www-data:www-data public /var/www/html

There’s a bit more to this, but not loads. We build our image from a named version of PHP, and install two extensions to PHP, pdo and pdo_mysql. Lastly, we copy the content of the “public” directory into the /var/www/html path, and make sure it “belongs” to the right user (www-data).
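
If you want to double-check that those extensions really did make it into the image, something like this (using the phpfpm service name from the docker-compose.yml below) should list them:

docker-compose run --rm phpfpm php -m | grep -i pdo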

I’d previously tried to do a lot more complicated things with this Dockerfile, but it wasn’t working, so instead I slimmed it right down to just this, and the docker-compose.yml is a lot simpler too.

  phpfpm:
    build:
      context: .
      dockerfile: phpfpm/Dockerfile
    image: localhost:32000/phpfpm

See! Loads simpler! Now we need the complicated bit! :) This is the Dockerfile for nginx.

FROM nginx:1.17.7
COPY nginx/default.conf /etc/nginx/conf.d/default.conf

COPY public /var/www/html

Weirdly, even though I’ve added version numbers for MariaDB and PHP, I’ve not done the same for nginx, perhaps I should! Late edit 2020-01-16: I’ve put a version number on there now, previously where it said nginx:1.17.7 it actually said nginx:latest.

Originally, I created the configuration block for nginx in a single giant “RUN” line. Late edit 2020-01-16: This Dockerfile now doesn’t have a giant echo 'stuff' > file block either, following Jerry’s advice, and I’m using COPY instead of ADD on his advice too. I’ll show that config file below. There are a couple of high points for me here!

server {
  index index.php index.html;
  server_name _;
  error_log /proc/self/fd/2;
  access_log /proc/self/fd/1;
  root /var/www/html;
  location ~ \.php$ {
    try_files $uri =404;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass phpfpm:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
  }
}
  • server_name _; means “use this block for all unnamed requests”.
  • access_log /proc/self/fd/1; and error_log /proc/self/fd/2; These are links to the “stdout” and “stderr” file descriptors of the nginx process, and basically mean that when you do docker-compose logs, you’ll see the HTTP logs for the server (see the example below)! These two files are guaranteed to be there, while /dev/stderr isn’t!
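
So, with the containers running, tailing the web server’s access and error logs is as simple as:

docker-compose logs -f nginx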

Because nginx is “just” serving the static web content, and I know the content doesn’t need to be written to from nginx, I knew I didn’t need to do the chown action, like I did with the PHP-FPM block.

Lastly, I need to configure the docker-compose.yml file for nginx:

  nginx:
    build:
      context: .
      dockerfile: nginx/Dockerfile
    image: localhost:32000/nginx
    ports:
      - 127.0.0.1:1980:80

I’ve gone for a slightly unusual ports configuration when I deployed this to my web server… you see, I already have the HTTP port (TCP/80) configured for use on my home server – for running the rest of my web services. During development, on my home machine, the ports line instead just read “1980:80”. On the server, I’m running this application bound to “localhost” (127.0.0.1) on a different port number (1980 selected because it could, conceivably, be a birthday of someone on this system), and then in my local web server configuration, I’m proxying connections to this service, with HTTPS encryption as well. That’s all outside the scope of this article (as I probably should be using something like Traefik, anyway) but it shows you how you could bind to a separate port too.
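
As a quick sanity check (run on the host itself, since the container is only bound to loopback), something like this confirms it’s answering on that port:

curl -I http://127.0.0.1:1980/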

Anyway, that was my Docker journey over Christmas, and I look forward to using it more, going forward!

Featured image is “Shipping Containers” by “asgw” on Flickr and is released under a CC-BY license.

"Feb 11" by "Gordon" on Flickr

How to quickly get the next SemVer for your app

SemVer, short for Semantic Versioning, is an easy way of numbering your software versions. They follow the model Major.Minor.Patch, like this: 0.9.1, and SemVer has a very opinionated view on what is considered a Major “version bump” and what isn’t.

Sometimes, when writing a library, it’s easy to forget what version you’re on. Perhaps you have a feature change you’re working on, but also bug fixes to two or three previous versions you need to keep an eye on? How about an easy way of figuring out what that next bump should be?

In a recent conversation on the McrTech slack, Steven [0] mentioned he had a simple bash script for incrementing his SemVer numbers, and posted it over. Naturally, I tweaked it to work more easily for my use cases, so this is *mostly* Steven’s code, but with a bit of a wrapper before and after by me :)

Late Edit: 2022-11-19 ictus4u spotted that I wasn’t handling the reset of PATCH to 0 when MINOR gets a bump. I fixed this in the above gist.

So how do you use this? Dead simple: run nextver in a tree that has an existing SemVer git tag to get the next patch number. If you want to bump it to the next minor or major version, try nextver minor or nextver major. If you don’t have a git tag, and don’t specify a SemVer number, then it’ll just assume you’re starting from fresh, and return 0.0.1 :)
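
Steven’s actual script lives in the gist linked above; purely to illustrate the idea, here’s a rough sketch of how a nextver-style helper hangs together (not the real code, so treat it as a starting point):

#!/usr/bin/env bash
# Sketch of a "nextver"-style helper: not the gist's code, just the same idea.
# Usage: nextver [major|minor|patch|<explicit x.y.z>]  (defaults to patch)

bump="${1:-patch}"

# Most recent SemVer-looking tag (with or without a leading "v"); 0.0.0 if none exist.
current="$(git tag --list '[0-9]*.[0-9]*.[0-9]*' 'v[0-9]*.[0-9]*.[0-9]*' --sort=-v:refname | head -n 1)"
current="${current:-0.0.0}"

IFS=. read -r major minor patch <<< "${current#v}"

case "$bump" in
  major) major=$((major + 1)); minor=0; patch=0 ;;
  minor) minor=$((minor + 1)); patch=0 ;;
  patch) patch=$((patch + 1)) ;;
  *)     echo "$bump"; exit 0 ;;  # anything else is treated as an explicit version
esac

echo "${major}.${minor}.${patch}"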

Want to find more cool stuff from the original author of this work? Here is a video by the author :)

Featured image is “Feb 11” by “Gordon” on Flickr and is released under a CC-BY-SA license.

"www.GetIPv6.info decal" from Phil Wolff on Flickr

Hurricane Electric IPv6 Gateway on Raspbian for Raspberry Pi

NOTE: This article was replaced on 2019-03-12 by a github repository where I now use Vagrant instead of a Raspberry Pi, because I was having some power issues with my Raspberry Pi. Also, using this method means I can easily use an Ansible Playbook. The following config will still work(!) however I prefer this Vagrant/Ansible workflow for this, so won’t update this blog post any further.

Following an off-hand remark from a colleague at work, I decided I wanted to set up a Raspberry Pi as a Hurricane Electric IPv6 6in4 tunnel router. Most of the advice around (in particular, this post about setting up IPv6 on the Raspberry Pi Forums) related to earlier versions of Raspbian, so I thought I’d bring it up-to-date.

I installed the latest available version of Raspbian Stretch Lite (2018-11-13) and transferred it to a MicroSD card. I added the file ssh to the boot volume and unmounted it. I then fitted it into my Raspberry Pi, and booted it. While it was booting, I set a static IPv4 address on my router (192.168.1.252) for the Raspberry Pi, so I knew what IP address it would be on my network.

I logged into my Hurricane Electric (HE) account at tunnelbroker.net and created a new tunnel, specifying my public IP address, and selecting my closest HE endpoint. When the new tunnel was created, I went to the “Example Configurations” tab, and selected “Debian/Ubuntu” from the list of available OS options. I copied this configuration into my clipboard.

I SSH’d into the Pi, and gave it a basic config (changed the password, expanded the disk, turned off “predictable network names”, etc) and then rebooted it.

After this was done, I created a file in /etc/network/interfaces.d/he-ipv6 and pasted in the config from the HE website. I had to change the “local” line from the public IP I’d provided HE with, to the real IP address of this box. Note that any public IPs (that is, non-192.168.x.x addresses) in the config files and settings I’ve noted refer to documentation addressing (TEST-NET-2 and the IPv6 documentation address ranges).

auto he-ipv6
iface he-ipv6 inet6 v4tunnel
        address 2001:db8:123c:abd::2
        netmask 64
        endpoint 198.51.100.100
        local 192.168.1.252
        ttl 255
        gateway 2001:db8:123c:abd::1

Next, I created a file in /etc/network/interfaces.d/eth0 and put the following configuration in, using the first IPv6 address in the “routed /64” range listed on the HE site:

auto eth0
iface eth0 inet static
    address 192.168.1.252
    gateway 192.168.1.254
    netmask 24
    dns-nameserver 8.8.8.8
    dns-nameserver 8.8.4.4

iface eth0 inet6 static
    address 2001:db8:123d:abc::1
    netmask 64

Next, I disabled the DHCP client daemon (dhcpcd) by issuing systemctl stop dhcpcd.service Late edit (2019-01-22): Note, a colleague mentioned that this should have actually been systemctl stop dhcpcd.service && systemctl disable dhcpcd.service – good spot! Thanks!! This ensures that if, for some crazy reason, the router stops offering the right DHCP address to me, I can still access this box on this static IP. Huzzah!
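
At this point, something like the following (run on the Pi) should bring the tunnel up and prove that the Pi itself has working IPv6 – a reboot would do the same job:

sudo ifup he-ipv6
ip -6 addr show dev he-ipv6
ping6 -c 3 2001:4860:4860::8888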

I accessed another host which had IPv6 access, and performed both a ping and an SSH attempt. Both worked. Fab. However, this now needs to be blocked, as we shouldn’t permit anything to be visible downstream from this gateway.

I’m using the Uncomplicated Firewall (ufw) which is a simple wrapper around IPTables. Let’s create our policy.

# First install the software
sudo apt update && sudo apt install ufw -y

# Permits inbound IPv4 SSH to this host - which should be internal only. 
# These rules allow tailored access in to our managed services
ufw allow in on eth0 app DNS
ufw allow in on eth0 app OpenSSH

# These rules accept all broadcast and multicast traffic
ufw allow in on eth0 to 224.0.0.0/4 # Multicast addresses
ufw allow in on eth0 to 255.255.255.255 # Global broadcast
ufw allow in on eth0 to 192.168.1.255 # Local broadcast

# Alternatively, accept everything coming in on eth0
# If you do this one, you don't need the lines above
ufw allow in on eth0

# Setup the default rules - deny inbound and routed, permit outbound
ufw default deny incoming 
ufw default deny routed
ufw default allow outgoing

# Prevent inbound IPv6 to the network
# Also, log any drops so we can spot them if we have an issue
ufw route deny log from ::/0 to 2001:db8:123d:abc::/64

# Permit outbound IPv6 from the network
ufw route allow from 2001:db8:123d:abc::/64

# Start the firewall!
ufw enable

# Check the policy
ufw status verbose
ufw status numbered

Most of the documentation I found suggested running radvd for IPv6 address allocation. This basically just allocates on a random basis, and, as far as I can make out, each renewal gives the host a new IPv6 address. To make that work, I performed apt-get update && apt-get install radvd -y and then created this file as /etc/radvd.conf. If all you want is a floating IP address with no static assignment – this will do it…

interface eth0
{
    AdvSendAdvert on;
    MinRtrAdvInterval 3;
    MaxRtrAdvInterval 10;
    prefix 2001:db8:123d:abc::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
   route ::/0 {
   };
};

However, this doesn’t give me the ability to statically assign IPv6 addresses to hosts. I found that a different IPv6 allocation method, called SLAAC, will do static addressing, based on your MAC address (note there are some privacy issues with this, but I’m OK with them for now…). In this mode, assuming the prefix as before – 2001:db8:123d:abc:: – and a MAC address of de:ad:be:ef:01:23, your IPv6 address will be something like 2001:db8:123d:abc:dcad:beff:feef:0123 (the MAC is split in half, ff:fe is inserted in the middle, and the “universal/local” bit of the first octet is flipped – hence de becoming dc). And this will be repeatably so – because you’re unlikely to change your MAC address (hopefully!!).
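
If you want to predict what address a given machine will end up with, the sum is easy enough to do in a few lines of shell (the prefix and MAC below are the documentation examples from above):

#!/usr/bin/env bash
# Work out the SLAAC (modified EUI-64) address for a host, given its MAC and the prefix.
prefix="2001:db8:123d:abc"
mac="de:ad:be:ef:01:23"

IFS=: read -r o1 o2 o3 o4 o5 o6 <<< "$mac"

# Flip the "universal/local" bit of the first octet, then insert ff:fe in the middle.
o1=$(printf '%02x' $(( 0x$o1 ^ 0x02 )))

echo "${prefix}:${o1}${o2}:${o3}ff:fe${o4}:${o5}${o6}"
# de:ad:be:ef:01:23 -> 2001:db8:123d:abc:dcad:beff:feef:0123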

This SLAAC allocation mode is available in DNSMasq, which I’ve consumed before (in a Pi-Hole). To use this, I installed DNSMasq with apt-get update && apt-get install dnsmasq -y and then configured it as follows:

interface=eth0
listen-address=127.0.0.1
# DHCPv6 - Hurricane Electric Resolver and Google's
dhcp-option=option6:dns-server,[2001:470:20::2],[2001:4860:4860::8888]
# IPv6 DHCP scope
dhcp-range=2001:db8:123d:abc::, slaac

I decided to move from using my router as a DHCP server, to using this same host, so expanded that config as follows, based on several posts, but mostly centred around the MAN page (I’m happy to have this DNSMasq config improved if you’ve got any suggestions ;) )

# Stuff for DNS resolution
domain-needed
bogus-priv
no-resolv
filterwin2k
expand-hosts
domain=localnet
local=/localnet/
log-queries

# Global options
interface=eth0
listen-address=127.0.0.1

# Set these hosts as the DNS server for your network
# Hurricane Electric and Google
dhcp-option=option6:dns-server,[2001:470:20::2],[2001:4860:4860::8888]

# My DNS servers are:
server=1.1.1.1                # Cloudflare's DNS server
server=8.8.8.8                # Google's DNS server

# IPv4 DHCP scope
dhcp-range=192.168.1.10,192.168.1.210,12h
# IPv6 DHCP scope
dhcp-range=2001:db8:123d:abc::, slaac

# Record the DHCP leases here
dhcp-leasefile=/run/dnsmasq/dhcp-lease

# DHCPv4 Router
dhcp-option=3,192.168.1.254
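
With the config written out (to /etc/dnsmasq.conf in my case), a quick syntax check and restart, as root, looks like this:

dnsmasq --test            # checks the config parses cleanly
systemctl restart dnsmasq
systemctl status dnsmasq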

So, that’s what I’m doing now! Hope it helps you!

Late edit (2019-01-22): In issue 129 of the “Awesome Self Hosted Newsletter“, I found a post called “My New Years Resolution: Learn IPv6“… which uses a pfSense box and a Hurricane Electric tunnel too. Fab!

Header image is “www.GetIPv6.info decal” by “Phil Wolff” on Flickr and is released under a CC-BY-SA license. Used with thanks!

“You can’t run multiple commands in sudo” – and how to work around this

At work, we share tips and tricks, and one of my colleagues recently called me out on the following stanza I posted:

I like this [ansible] one for Debian based systems:
  - name: "Apt update, Full-upgrade, autoremove, autoclean"
    become: yes
    apt:
      upgrade: full
      update_cache: yes
      autoremove: yes
      autoclean: yes

And if you’re trying to figure out how to do that in Shell:
apt-get update && apt-get dist-upgrade -y && apt-get autoremove -y && apt-get autoclean -y

His response was “Surely you’re not logging into bash as root”. I said “I normally sudo -i as soon as I’ve logged in. I can’t recall offhand how one does a sudo for a string of command && command statements”

Well, as a result of this, I looked into it. Here’s one comment from the first Stack Overflow page I found:

You can’t run multiple commands from sudo – you always need to trick it into executing a shell which may accept multiple commands to run as parameters

So here are a few options on how to do that:

  1. sudo -s whoami \; whoami (link to answer)
  2. sudo sh -c "whoami ; whoami" (link to answer)
  3. But, my favourite is from this answer:

    An alternative using eval so avoiding use of a subshell: sudo -s eval 'whoami; whoami'

Why do I prefer the last one? Well, I already use eval for other purposes – mostly for starting my ssh-agent over SSH, like this: eval `ssh-agent` ; ssh-add

One to read/watch: IPsec and IKE Tutorial

Ever been told that IPsec is hard? Maybe you’ve seen it yourself? Well, Paul Wouters and Sowmini Varadhan recently co-delivered a talk at the NetDev conference, and it’s really good.

Sowmini’s and Paul’s slides are available here: https://www.files.netdevconf.org/d/a18e61e734714da59571/

A complete recording of the tutorial is here. Sowmini’s part of the tutorial (which starts first in the video) is quite technically complex, looking specifically at the way that Linux handles the packets through the kernel. I’ve focused more on Paul’s part of the tutorial (starting at 26m23s)… but my interest was piqued from 40m40s when he starts to actually show how “easy” configuration is. There are two quick run-throughs of typical host-to-host IPsec and subnet-to-subnet IPsec tunnels.

A key message for me, which previously hadn’t been at all clear in IPsec using {free,libre,open}swan, is that they refer to Left and Right as being one party and the other… but the node itself works out whether it’s “left” or “right”, so the *SAME CONFIG* can be used on both machines. GENIUS.

Also, when you’re looking at the config files, anything prefixed with an @ symbol is something that doesn’t need resolving to something else.

It’s well worth a check-out, and it’s inspired me to take another look at IPsec for my personal VPNs :)

I should note that towards the end, Paul tried to run a selection of demonstrations in Opportunistic Encryption (which basically is a way to enable encryption between two nodes, even if you don’t have a pre-established VPN with them). Because of issues with the conference wifi, plus the fact that what he’s demoing isn’t exactly production-grade yet, it doesn’t really work right, and much of the rest of the video (from around 1h10m) is him trying to show that working while attendees are running through the lab, and having conversations about those labs with the attendees.

TCPDump Made Easier Parody Book Cover, with the subtitle "Who actually understands all those switches?"

One to use: tcpdump101.com

I’m sure that anyone doing operational work has been asked at some point if you can run a “TCPDump” on something, or if you could get a “packet capture” – if you have, this tool (as spotted on the Check Point community sites) might help you!

https://tcpdump101.com

Using simple drop-down fields and prompts for filters and options, this tool tells you how to run each of the packet-capturing commands for common firewall products (FortiGate, ASA, Check Point) and for the more generic tcpdump tool (indicated by a Linux penguin, but it runs on all major desktop and server OSs, as well as rooted Android devices).
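
To give you a flavour, the sort of command it builds for the plain tcpdump case looks something like this (the interface, host and port here are just examples):

tcpdump -nn -i eth0 -w capture.pcap 'host 192.0.2.1 and port 443'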

Well worth a check out!

Experiments with USBIP on Raspberry Pi

At home, I have a server on which I run my VMs and store my content (MP3/OGG/FLAC files I have ripped from my CDs, Photos I’ve taken, etc.) and I want to record material from FreeSat to play back at home, except the server lives in my garage, and the satellite dish feeds into my Living Room. I bought a TeVii S660 USB FreeSat decoder, and tried to figure out what to do with it.

I previously stored the server near where the feed comes in, but the running fan was a bit annoying, so it got moved… but then I started thinking – what if I ran a Raspberry Pi to consume the media there.

I tried running OpenElec, and then LibreElec, and while both would see the device, and I could even occasionally get *content* out of it, I couldn’t write quick enough to the media devices attached to the RPi to actually record what I wanted to get from it. So, I resigned myself to the fact I wouldn’t be recording any of the Christmas Films… until I stumbled over usbip.

USBIP is a service which binds USB ports to a TCP port, and then lets you consume that USB port on another machine. I’ll discuss consuming the S660’s streams in another post, but the below DOES work :)

There are some caveats here. Because I’m using a Raspberry Pi, I can’t just bung on any old distribution, so I’m a bit limited here. I prefer Debian based images, so I’m going to artificially limit myself to these for now, but if I have any significant issues with these images, then I’ll have to bail on Debian based, and use something else.

  1. If I put on stock Raspbian Jessie, I can’t use usbip, because while it ships its own kernel that has the right modules built in (usbip_host, usbip_core, etc.), it doesn’t ship the right userland tools to manipulate it.
  2. If I’m using a Raspberry Pi 3, there’s no supported version of Ubuntu Server which ships for it. I can use a flavour (e.g. Ubuntu Mate), but that uses the Raspbian kernel, which, as I mentioned before, is not shipping the right userland tools.
  3. If I use a Raspberry Pi 2, then I can use Stock Ubuntu, which ships the right tooling. Now all I need to do is find a CAT5 cable, and some way to patch it through to my network…

Getting the Host stood up

I found most of my notes on this via a wiki entry at Github but essentially, it boils down to this:

On your host machine, (where the USB port is present), run

sudo apt-get install linux-tools-generic
sudo modprobe usbip_host
sudo usbipd -D

This confirms that your host can present the USB ports over the USBIP interface (there are caveats! I’ll cover them later!!). Late edit: 2020-05-21 I never did write up those caveats, and now, two years later, I don’t recall what they were. Apologies.

You now need to find which ports you want to serve. Run this command to list the ports on your system:

lsusb

You’ll get something like this back:

Bus 001 Device 004: ID 9022:d662 TeVii Technology Ltd.
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. SMC9514 Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

And then you need to find which port the device thinks it’s attached to. Run this to see how usbip sees the world:

usbip list -l

This will return:

- busid 1-1.1 (0424:ec00)
unknown vendor : unknown product (0424:ec00)
- busid 1-1.3 (9022:d662)
unknown vendor : unknown product (9022:d662)

We want to share the TeVii device, which has the ID 9022:d662, and we can see that this is present as busid 1-1.3, so now we need to bind it to the usbip system, with this command:

usbip bind -b 1-1.3

OK, so now we’re presenting this to the system. Perhaps you might want to make it available on a reboot?

echo "usbip_host" >> /etc/modules

I also added @reboot /usr/bin/usbipd -D ; sleep 5 ; /usr/bin/usbip bind -b 1-1.3 to root’s crontab, but it should probably go into a systemd unit.
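
For what it’s worth, a minimal systemd unit for that might look something like the sketch below (same paths and bus ID as above; I’ve not battle-tested this, so treat it as a starting point):

# Sketch: a systemd unit to start usbipd and bind the device at boot.
sudo tee /etc/systemd/system/usbip-host.service > /dev/null <<'EOF'
[Unit]
Description=Share USB device 1-1.3 over USBIP
After=network-online.target

[Service]
Type=forking
ExecStartPre=/sbin/modprobe usbip_host
ExecStart=/usr/bin/usbipd -D
ExecStartPost=/bin/sleep 5
ExecStartPost=/usr/bin/usbip bind -b 1-1.3
ExecStop=/usr/bin/usbip unbind -b 1-1.3

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now usbip-host.service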

Getting the Guest stood up

All these actions are being performed as root. As before, let’s get the modules loaded in the kernel:

apt-get install linux-tools-generic
modprobe vhci-hcd

Now, we can try to attach the module over the wire. Let’s check what’s offered to us (this code example uses 192.0.2.1 but this would be the static IP of your host):

usbip list -r 192.0.2.1

This hands us back the list of offered devices:

Exportable USB devices
======================
- 192.0.2.1
1-1.3: TeVii Technology Ltd. : unknown product (9022:d662)
: /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.3
: (Defined at Interface level) (00/00/00)
: 0 - Vendor Specific Class / unknown subclass / unknown protocol (ff/01/01)

So, now all we need to do is attach it:

usbip attach -r 192.0.2.1 -b 1-1.3

Now I can consume the service from that device in tvheadend on my server. However, again, I need to make this persistent. So, let’s make sure the module is loaded on boot.

echo 'vhci-hcd' >> /etc/modules

And, finally, we need to attach the port on boot. Again, I’m using crontab, but should probably wrap this into a systemd service.

@reboot /usr/bin/usbip attach -r 192.0.2.1 -b 1-1.3

And then I had an attached USB device across my network!

Unfortunately, the throughput was a bit too low (due to silly ethernet-over-power adaptors) to make it work the way I wanted… but theoretically, if I had proper patching done in this house, it’d be perfect! :)

Interestingly, the day I finished this post off (after it’d sat in drafts since December), I spotted that one of the articles in Linux Magazine is “USB over the network with USB/IP”. Just typical! :D

Running Google MusicManager for two profiles

I’ve previously made mention of my addiction to Google Play Music… but I was called out recently, and asked about the script I used at the time. I’m sorry to say that I have had some issues with it, and instead, have resorted to using X forwarding. Here’s how I do it.

I create a user account for that other person (note, GMM will only let you upload to 3 accounts using this method. For any more, you’ll need a virtual machine!).

I then create an SSH public/private key with no passphrase.

ssh-keygen -b 2048 -N "" -C "$(whoami)@localhost" -f ~/.ssh/gmm.id_rsa

I write the public key into that new user’s .ssh/authorized_keys, by running:

ssh-copy-id -i ~/.ssh/gmm.id_rsa bloggsf@localhost

I will be prompted for the password of that account.

Finally, I create this script:

#!/bin/bash
# Wait until the network is up before launching the (X-forwarded) Music Manager
while ! ping -c 1 8.8.8.8 2>/dev/null >/dev/null ; do
  echo Waiting for network...
  sleep 1
done

ssh -X bloggsf@localhost -i ~/.ssh/gmm.id_rsa /opt/google/musicmanager/google-musicmanager

This is then added to the startup tasks of my headless-but-running-a-desktop machine.