Working with complicated template data UserData in Ansible

My new job means I'm currently building a lot of test boxes with Ansible, particularly OpenStack guests. As a result, I'm trying to script as much as possible, without getting my hands dirty by actually logging into the boxes and running things by hand.

This week, I hit a problem standing up a popular firewall vendor’s machine with Ansible, because I was trying to bypass the first-time-wizard… anyway, it wasn’t working, and I couldn’t figure out why. I talked to my colleague [mohclips] and he eventually told me that I needed to use a template, because what I was trying to do was too complicated.

But, damn him, I knew that wasn’t the answer :)

Anyway, I found this comment on a ticket, which led me to the following… if you're finding that the userdata: variable in Ansible's os_server module isn't working, you might need to wrap it up like this:

userdata: |
  {%- raw -%}#!/bin/bash
  # Kill script if the pipe fails
  set -euf -o pipefail
  # Write everything from this point on to Syslog
  echo " == Set admin credentials == "
  clish -c 'set user admin password-hash {% endraw -%}{{ default_password|password_hash('sha512') }}{%- raw -%}' -s
  {% endraw %}

Note that if you have a space before your variable, use {% endraw -%}, and if you have a space after it, use {%- raw %}: the hyphen tells Jinja2 to strip any whitespace on that side of the tag.

One to read or watch: “Programming is Forgetting: Toward a New Hacker Ethic”

Here is a transcript of a talk by Allison Parrish at the Open Hardware Summit in Portland, OR. The talk, "Programming is Forgetting: Toward a New Hacker Ethic", is a discussion of the failings of the book "Hackers" by Steven Levy. Essentially, that book proposed (in the 1980s) a set of ethics for hackers (which is to say, creative programmers or engineers, not malicious operators). Allison suggests that many of the parables in the book do not truly reflect the "Hacker Ethic", and revises them for today's world.

Her new questions (not statements) are as follows:

  • Who gets to use what I make? Who am I leaving out? How does what I make facilitate or hinder access?
  • What data am I using? Whose labor produced it and what biases and assumptions are built into it? Why choose this particular phenomenon for digitization or transcription? And what do the data leave out?
  • What systems of authority am I enacting through what I make? What systems of support do I rely on? How does what I make support other people?
  • What kind of community am I assuming? What community do I invite through what I make? How are my own personal values reflected in what I make?

This is a significant re-work of the original "Hacker Ethic", and you should really either watch or read the talk to see how she got from the original statements to these questions, especially as her version isn't as punchy as the original.

I’d like to think I was thinking of things like these questions when I wrote CampFireManager and CCHits.

Using Expect to SFTP a file

Because of technical limitations on a pair of platforms I'm using at work, I am unable to set up key-based SFTP or SCP to transfer files between them, so I knocked together this short script in the Tcl-based Expect language.


#!/usr/bin/expect
# Usage: upload.exp <local-file> <user@host> <password>
set arg1 [lindex $argv 0]
set arg2 [lindex $argv 1]
set arg3 [lindex $argv 2]
set timeout 1000

spawn sftp "$arg2"
expect {
    yes {
        # Host key prompt ("Are you sure you want to continue connecting (yes/no)?")
        send "yes\r"
        exp_continue
    }
    ass {
        # Matches "Password:" or "password:" - send the password
        send "$arg3\r"
        exp_continue
    }
    sftp {
        # We're at the "sftp>" prompt - upload the file
        send "put $arg1\r"
        expect {
            100% {
                # Transfer reported 100%, so quit
                send "quit\r"
                exp_continue
            }
        }
    }
}


There’s no error checking here, which isn’t great, but as a quick-and-dirty script to SFTP files to a box which needs the password each run… it works! :)
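For reference, calling the script (saved as upload.exp) is just a matter of passing the local file, the destination and the password as arguments, in that order. Something like this (the filename, host and password here are purely placeholders):

chmod +x upload.exp
./upload.exp report.csv fileuser@sftp.example.com 'S3cretPassw0rd'

Bear in mind the password ends up visible in the process list and your shell history, which is another reason this is strictly quick-and-dirty.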

GPG Encrypting files using a keyserver

Another “at work” post!

I've been generating files which need to be distributed via a file server, but which need to be encrypted using GPG (the open source PGP application) first. Rather than managing keys for a large number of users by hand, I have a text file with the user names in it, and a batch script. Please see the below gist for details :)
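The gist itself isn't reproduced here, so, purely as a sketch of the idea (the keyserver address, the keyids.txt filename, and the assumption that the list holds key IDs rather than free-text user names are all mine), it might look something like this:

#!/bin/bash
# Sketch only (not the original gist): encrypt a file to every user in a
# list, pulling their public keys from a keyserver first.
# Assumes keyids.txt holds one GPG key ID (or full fingerprint) per line.
set -euo pipefail

KEYSERVER="hkps://keyserver.ubuntu.com"
FILE_TO_ENCRYPT="$1"

mapfile -t keyids < keyids.txt
recipients=()
for keyid in "${keyids[@]}"; do
    [ -z "$keyid" ] && continue
    # Fetch the public key so we can encrypt to it
    gpg --keyserver "$KEYSERVER" --recv-keys "$keyid"
    recipients+=( --recipient "$keyid" )
done

# Encrypt once, to every recipient; --trust-model always stops GPG asking
# "do you trust this key?" questions in an unattended run.
gpg --batch --yes --trust-model always "${recipients[@]}" --encrypt "$FILE_TO_ENCRYPT"

The encrypted output lands next to the original as <filename>.gpg, ready to drop onto the file server.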

Installing Symantec Endpoint Protection (SEP) on Ubuntu 14.04

At work we use Symantec Endpoint Protection, and in a lab, I was asked to confirm whether we could install it on our Ubuntu 14.04 servers. This took a few hops to get installed, so I figured I'd publish how I got it working, to save some other poor soul the trouble :)

Firstly, add the webupd8team’s Java PPA and update the repository cache: sudo add-apt-repository ppa:webupd8team/java && sudo apt-get update

This gives you the ability to install the Java 8 installer: sudo apt-get install oracle-java8-installer

This should download the install files, but for some reason, I was struggling to download it (the install script seems to struggle with downloading the actual .tar.gz file from Oracle), so I manually followed the link to http://download.oracle.com/otn-pub/java/jdk/8u77-b03/jdk-8u77-linux-x64.tar.gz, accepted the license, and placed the file in /var/cache/oracle-jdk8-installer/ and then re-ran the above apt-get install line.

— Note: The above issue was because I was running a caching proxy, which somehow doesn't play nicely with this script. Turn off your proxy and it should all be good :)

Next I had to install the Java Cryptography Extension which I got from the Java SE page. I placed this file in /tmp/jce_policy-8.zip (the filename is the one Oracle use) and replaced the files in /usr/lib/jvm/java-8-oracle/jre/lib/security with the ones from the extracted archive with this line: cp -b /tmp/UnlimitedJCEPolicyJDK8/*.jar /usr/lib/jvm/java-8-oracle/jre/lib/security.

The SEP client also has a dependency on the 32-bit version of glibc. I installed this with sudo apt-get install libc6-i386

I was then, finally, able to install the SEP client by unpacking the installer zip file, and running sudo bash install.sh -i from the path I’d unpacked the zip file in.
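For reference, here's the whole sequence squashed together as a rough recap (the JCE zip is the one downloaded to /tmp as above, and the final path is just an example standing in for wherever you unpacked the SEP installer zip):

sudo add-apt-repository ppa:webupd8team/java && sudo apt-get update
sudo apt-get install oracle-java8-installer
# If the Oracle download fails, fetch jdk-8u77-linux-x64.tar.gz manually and
# drop it into /var/cache/oracle-jdk8-installer/ before re-running the line above
cd /tmp && unzip jce_policy-8.zip
sudo cp -b /tmp/UnlimitedJCEPolicyJDK8/*.jar /usr/lib/jvm/java-8-oracle/jre/lib/security
sudo apt-get install libc6-i386
cd /path/to/unpacked/SEP/installer && sudo bash install.sh -i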

Not very complicated, I guess!

— Sources:

  • https://ubuntuincident.wordpress.com/2011/04/14/install-the-java-cryptography-extension-jce/
  • http://www.linuxquestions.org/questions/linux-newbie-8/how-to-install-32-bit-glibc-2-9-or-later-on-64-bit-ubuntu-12-04-a-4175413667/
  • http://www.webupd8.org/2012/09/install-oracle-java-8-in-ubuntu-via-ppa.html

Setting up a Google Play Music uploader for Linux Servers

THIS POST HAS BEEN SUPERSEDED.

I like having an online music server. At home, I use a Logitech Media Server (formerly Squeezebox Server) [1] and run several O2 Jogglers around the house with the "Squeezeplay OS" [2] images to play music from that server. When it comes to an Android tablet (or Android phone), though, there isn't that joined-up thinking from Logitech (although you can just about cobble it together using a few 3rd party apps [3]). What I do like is the Google Play Music service.

Google Play Music [4] was the first product from the Google Play team after they rebranded to “Google Play”, and I pretty quickly got interested in it. I installed the Google Play Uploader [5] on my home server, and uploaded all my music (apparently, I’m up to 12,000 tracks, but I think there are some duplicates there!) but what to do about the rest of the family? Well, until just recently, it didn’t matter. Jules has no interest in playing music on her phone or tablet, and Daniel, well, he’s 3-and-a-bit.

Since pretty early on, he's been all over our tech. Initially he just used whatever apps we had installed on Jules' and my phones, then Jules' tablet (I was, and still am, pretty cautious about him using my tablet, as it's an Asus TF300T [6] which cost quite a lot of money, and I keep toying with the idea of installing some other OS on there, like Firefox OS [7] or Ubuntu Touch [8], which I can't do if it's wrecked). Now he's pretty comfortable browsing YouTube or the Play Store, although he knows not to click on adverts and can't install anything that costs money.

In the last couple of months, since he's been really learning how to spell, he's been asking how to spell the names of his favourite films ("Oliver!", "Frozen" and "The Polar Express" primarily) to get the film clips up in YouTube, or to play snippets from the soundtracks of the films… which got me thinking. I can't really do much about the films (not yet at least!), but perhaps I could set up Google Play Music with the soundtracks he listens to. However, I already have one account synced with Google Play Music from my media server, so how do I get his stuff set up on there?

Essentially, the Google Play Music Uploader is a GUI application that can't be driven from the CLI (which kinda makes sense from a simplicity perspective), and as I'm already running one instance of the application on my media server, I can't start up a second one. So what do I do?

Well, as it turns out, there's actually a Python library for interacting with Google Play Music's upload and download applications called "gmusicapi" [9], and this, coupled with a really nice wrapper, gives me a CLI utility I can run from a cron job on my media server.

The wrapper is called gmusicapi-scripts [10], which contains three utilities: gmdownload.py, gmsync.py and, the key to this, gmupload.py.

You need to install a few libraries. On my Ubuntu 14.04 system, I needed to run:

sudo apt-get install python-pip avconv-tools
sudo pip install gmusicapi
sudo pip install docopt

Once you’ve got this, you can get the tools themselves like this:

git clone https://github.com/thebigmunch/gmusicapi-scripts.git

This will give you a folder called gmusicapi-scripts, in which is gmupload.py. The first time you run it, it'll ask you to visit a web page in order to register the client. Click "OK" to approve the library having access to Google Play Music; it's a pretty spartan page, and ends up with a grey text box containing a string. Copy the contents of that box back into your terminal, and hey presto, you've got it working…

Well, sort of. Because I'd not set Daniel up with Google Play Music yet, I needed to set up his account first. I didn't realise this (I thought just going to the web page the script points you to would get you access, but it doesn't, because they need to vet which country you're accessing from…) and the script didn't tell me that (it just kept saying "Not Subscribed" [11]).

Anyway, once that's done, you run:

~/gmusicapi-scripts/gmupload.py /path/to/file.mp3

( 1/1) Successfully uploaded /path/to/file.mp3

If you’ve got multiple users, you can rename ~/.local/share/gmusicapi/oauth.cred to ~/.local/share/gmusicapi/SomeOtherName.cred and then run

~/gmusicapi-scripts/gmupload.py -c SomeOtherName /path/to/file.mp3

Subsequent passes will prompt you to authenticate the next account as you go, and then you can rename them as appropriate.
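Since the whole point is to run this unattended from the media server, the upload can then go into a cron job. A sketch, with entirely made-up paths, using the renamed SomeOtherName.cred from above:

# m h dom mon dow command
0 3 * * * /home/media/gmusicapi-scripts/gmupload.py -c SomeOtherName /srv/music/incoming/*.mp3 >> /home/media/gmupload.log 2>&1

Cron runs the command through /bin/sh, so the *.mp3 glob gets expanded and anything new in that folder is picked up on the next run.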

[1] Logitech Media Server (formerly Squeezebox Server): http://www.mysqueezebox.com/download
[2] “Squeezeplay OS”: http://birdslikewires.co.uk/articles/squeezeplay-os
[3] a few 3rd party apps: I got it working on my TF300T by combining https://play.google.com/store/apps/details?id=de.bluegaspode.squeezeplayer and https://play.google.com/store/apps/details?id=de.cedata.android.squeezecommander
[4] Google Play Music: https://play.google.com/music
[5] Google Play Uploader: https://play.google.com/music/listen?u=0#/manager
[6] TF300T: http://en.wikipedia.org/wiki/Asus_Transformer_Pad_TF300T
[7] Firefox OS: https://www.mozilla.org/en-US/firefox/os/
[8] Ubuntu Touch: http://www.ubuntu.com/tablet
[9] “gmusicapi”: https://github.com/simon-weber/Unofficial-Google-Music-API
[10] gmusicapi-scripts: https://github.com/thebigmunch/gmusicapi-scripts
[11] “Not Subscribed”: I raised a bug, but the lead developer said it seemed obvious to him that you have to set it up first… not disputing that for the first pass, but a nice message would have been good :) https://github.com/thebigmunch/gmusicapi-scripts/issues/22

Some notes about Ethernet over Power

I messed around a bit with my network tonight in order to set up my Ethernet-over-power (AKA Powerline networking) adapters, and I figured out some things which, while they may not be useful to many of you, will serve as a bit of a prompt for me the next time around.

1) The manager application runs under Windows only (although apparently, there are github repositories where you can get and build a linux application which even lets you set QoS aka Quality Of Service and other such fun things – I’ve not tried them, so I can’t recommend them). If you’ve got more than a matched pair of these, then you’ll need to run the application. I didn’t try running it in a virtual machine – I kept the supplied Windows OS from when I bought this machine specifically for purposes like this.

2) Not all models reset in the same way. If you can’t get them all to reset, connect to them with a CAT5 cable, go to the “Privacy” tab, select “Public network” which will reset it to “PowerLineAV”, and then select “Local computer”. You should then be able to browse across them all.

3) Not all models come with a “password” (sometimes referred to as a DEK). In this case, you also have to plug into these devices to set up their security. If they do have a password, it’ll be entirely in upper case, and even though the application shows numeric characters, in the 4 devices I received, they were all alphabetic-only strings of 16 characters, separated by hyphens.

4) Once you've got them all set to "PowerLineAV" and typed in the passwords for the models which have them, you can set a community-wide network password. This could be used to set up several logical segments, but realistically, it's going to be one flat network :)

I can’t think, offhand, of anything else I need to say right now, but it’s been pretty interesting setting this up, so… hope you enjoyed it!

Starting EC2 instances using PHP

I run a small podcast website called CCHits.net. It runs on Dreamhost because they offer unlimited storage and bandwidth, but while it's a great service for storage, it's not really useful for running a batch process, because long-running processes are killed regularly (in my case, building the CCHits podcasts on a daily basis).

As a result, I built an EC2 instance which I trigger every day using a cronjob. Previously, I used the "AWS CLI tools", but as these use a Java Virtual Machine, it was taking an awful lot of resources just to spin up an instance, and Dreamhost kept killing the task off. So I found the AWS PHP SDK, and coded up this little snippet to spin up the EC2 instance.
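The snippet itself isn't included here, but the Dreamhost side of it is just a normal crontab entry calling that PHP script once a day, something along these lines (the path, filename and time are only placeholders of mine):

0 2 * * * /usr/bin/php /home/myuser/cchits/start-ec2-instance.php >> /home/myuser/logs/start-ec2.log 2>&1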

Development Environment Replication with Vagrant and Puppet

This week, I was fortunate enough to meet up with the Cheadle Geeks group. I got talking to a couple of people about Vagrant and Puppet and explaining how they work, and I thought the best thing to do would be to write that down here too, so that I can point anyone who missed any of what I was saying to it.

Essentially, Vagrant is a program which reads a config file that defines how to initialize a pre-built virtual machine. It has several virtual machine engines which it can invoke (see [1] for more details on that), but the default virtual machine engine is VirtualBox.

To actually find a virtual box to load, there's a big list over at vagrantbox.es which has most standard cloud servers available to you. Personally I use the Ubuntu Precise 32bit image from VagrantUp.com for my open source projects (which means more developers can get involved). Once you've picked an image, use the following command to get it installed on your development machine (you only need to do this step once per box!):

vagrant box add {YourBoxName} {BoxURL}

After you’ve done that, you need to set up the Vagrant configuration file.

cd /path/to/your/dev/environment
mkdir Vagrant
cd Vagrant
vagrant init {YourBoxName}

This will create a file called Vagrantfile in /path/to/your/dev/environment/Vagrant. It looks overwhelming at first, but if you trim out some of the notes (and tweak one or two of the lines), you’ll end up with a file which looks a bit like this:

Vagrant.configure("2") do |config|
  config.vm.box = "{YourBoxName}"
  config.vm.hostname = "{fqdn.of.your.host}"
  config.vm.box_url = "{BoxURL}"
  config.vm.network :forwarded_port, guest: 80, host: 8080
  # config.vm.network :public_network
  config.vm.synced_folder "../web", "/var/www"
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "manifests"
    puppet.manifest_file  = "site.pp"
  end
end

This assumes you've replaced anything with {}'s in it with a real value, and that you want to forward TCP/8080 on your machine to TCP/80 on that box (there are other workarounds, using more Vagrant plugins, different network types, or other services such as PageKite, but this will do for now).
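As a quick sanity check once the box is up (and once something inside it is listening on port 80, which we'll get to below with Apache), you can poke that forwarded port from the host machine:

curl -I http://localhost:8080/

and you should get headers back from whatever the guest's web server is serving.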

Once you’ve got this file, you could start up your machine and get a bare box, but that’s not much use to you, as you’d have to tell people how to configure your development environment every time they started up a new box. Instead, we’ll be using a Provisioning service, and we’re going to use Puppet for that.

Puppet was originally designed as a way of defining configuration across all of an estate's servers, and a lot of the tutorials I've found online explain how to use it for that, but when we're setting up Puppet for a development environment, we just need a simple file. This is the site.pp manifest, and in it we define the extra files and packages we need, plus any commands we need to run. So, let's start with a basic manifest file:

node default {

}

Wow, isn't that easy? :) We need some more detail than that though. First, let's make sure the timezone is set. I live in the UK, so my timezone is "Europe/London". Let's put that in. We also need to make sure that any commands we run have the right path in them. So here's our revised, Debian-based manifest file.

node default {
    Exec {
        path => '/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/sbin:/usr/sbin'
    }

    package { "tzdata":
        ensure => "installed"
    }

    file { "/etc/timezone":
        content => "Europe/London\n",
        require => Package["tzdata"]
    }

    exec { "Set Timezone":
        unless => "diff /etc/localtime /usr/share/zoneinfo/`cat /etc/timezone`",
        command => "dpkg-reconfigure -f noninteractive tzdata",
        require => File["/etc/timezone"]
    }
}

OK, so we've got some pretty clear examples of code to run here. The first Exec statement must always be in there, otherwise it gets a bit confused. After that, we make sure the tzdata package is installed; then, once it is, we create or update the /etc/timezone file with the value we want; and finally we use the dpkg-reconfigure command to set the timezone, but only if the timezone isn't already set to that.

Just to be clear, this file describes what the system should look like at the end of it running, not a step-by-step guide to getting it running, so you might find that some of these packages install out of sequence, or something else might run before or after when you were expecting it to run. As a result, you should make good use of the “require” and “unless” statements if you want a proper sequence of events to occur.

Now, so far, all this does is set the timezone for us, it doesn’t set up anything like Apache or MySQL… perhaps you want to install something like WordPress here? Well, let’s see how we get other packages installed.

In the following lines of code, we'll assume you're just adding this text above the last curly bracket (the "}" at the end).

First, we need to ensure our packages are up to date:

exec { "Update packages":
    command => "sudo apt-get update && sudo apt-get dist-upgrade -y",
}

Here’s Apache getting installed:

package { "apache2":
    ensure => "installed",
    require => Exec['Update packages']
}

And, maybe you’ll want to set up something that needs mod_rewrite and a custom site? Add this to your Vagrantfile

config.vm.synced_folder "../Apache_Site", "/etc/apache2/shared_config"

Create a directory called /path/to/your/dev/environment/Apache_Site which should contain your apache site configuration file called “default”. Then add this to your site.pp

exec { "Enable rewrite":
    command => 'a2enmod rewrite',
    onlyif => 'test ! -e /etc/apache2/mods-enabled/rewrite.load',
    require => Package['apache2']
}

file { "/etc/apache2/sites-enabled/default":
  ensure => link,
  target => "/etc/apache2/shared_config/default",
}

So, at the end of all this, we have the following file structure:

/path/to/your/dev/environment
+ -- /Apache_Site
|    + -- default
+ -- /web
|    + -- index.html
+ -- /Vagrant
     + -- /manifests
     |    + -- site.pp
     + -- Vagrantfile

And now, you can add all of this to your Git repository [2], and off you go! To bring up your Vagrant machine, type (from the Vagrant directory):

vagrant up

And then to connect into it:

vagrant ssh

And finally to halt it:

vagrant halt

Or if you just want to kill it off…

vagrant destroy

If you’re tweaking the provisioning code, you can run this instead of destroying it and bringing it back up again:

vagrant provision

You can do some funky stuff with running several machines, and using the same puppet file for all of those, but frankly, that’s a topic for another day.

[1] Vagrant is extended using plugins. There is a list of plugins on this Github Wiki Page. The plugins here can include additional virtual machine back ends (called Providers in Vagrant terminology), and methods of configuring the OS after bootup (called Provisioners), but also anything around defining where to find resources, to define network addresses, even to handle caches and proxies.
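Installing one of those plugins is a one-liner. As an example (vagrant-cachier, one of the cache-handling plugins mentioned above, which caches downloaded packages between runs):

vagrant plugin install vagrant-cachier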

[2] If you’re not using Git, you should be! However, you might want to add some stuff to your .gitignore – in particular, Vagrant adds a directory called /path/to/your/dev/environment/Vagrant/.vagrant where it puts the VMs it creates.
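A minimal way to do that, assuming the repository root is /path/to/your/dev/environment as in the layout above, is:

cd /path/to/your/dev/environment
# Keep Vagrant's working directory (and the VMs it tracks) out of the repository
echo "Vagrant/.vagrant/" >> .gitignore
git add .gitignore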

Stripping a UK O2 Samsung Galaxy SIII Mini down to the bare essentials

The company I work for has recently issued all on-call engineers in my team a Samsung Galaxy SIII Mini, to give us access to company e-mail and resources out of hours. Rather than shipping a customized image, we received a stock O2-imaged handset, so this is my limited guide to bringing it as close to "stock" Android as I can manage (or want).

Most of what we need is provided to us using a commercial solution called Touchdown, so I won’t be covering that here, as whatever you get shipped to you will not include that. I’ve elected not to use the device for my personal systems, barring my Google calendar, which means I’ll stand a fighting chance of not booking overtime and other work things for personal appointments.

So, on power-on, I completed the post-install steps, including setting up my Google account. I decided not to keep the device in sync with my Google account, as I already have a few other Android devices, and I don’t want to get my work infrastructure mixed up with my home kit.

Next, I went into Settings, and from there into the Google Account. I clicked on my e-mail address and unselected the following options:

  • Sync App Data
  • Sync Contacts
  • Sync Gmail
  • Sync Internet
  • Sync People details

After that, I went into Application Manager (again, in Settings), and swapped to the “All” tab. Firstly, I needed to clear out the downloaded contacts, which I did by selecting Contacts Storage, and then pressing the “Clear Data” button.

Next, I disabled all the applications that I either don’t need, or don’t want on my work phone. I did this by selecting each in turn, and then selecting the option to disable them. Here’s the list:

  • Amazon MP3
  • eBay
  • Flipboard
  • Gallery
  • Game Hub
  • Gmail
  • Google Play Books
  • Google Play Magazines
  • Google Play Music
  • Google+
  • Music
  • O2 Space
  • S Planner
  • S Planner Widget
  • S Suggest
  • S Voice
  • Samsung Account
  • Samsung Apps
  • Samsung Backup Provider
  • Samsung Browser SyncAdapter
  • Samsung Calendar SyncAdapter
  • Samsung Cloud Data Relay
  • Samsung Contact SyncAdapter
  • Samsung Push Service
  • Samsung Syncadapters
  • Tags
  • Talk
  • Talkback
  • Video Hub
  • Yahoo! Finance
  • Yahoo! News
  • YouTube

Wow, isn’t that a list!

My next step was to hide some of the applications I don't need. To do this, I went into the applications page, pressed the menu button, and selected "Hide applications". This puts selection boxes next to all the applications on the page; once you've finished selecting, press "Done" in the top right corner to hide them. Here's my list:

  • Contacts
  • Downloads
  • E-Mail
  • FM Radio
  • Google Settings
  • Help
  • Memo
  • Music Player
  • My Files
  • Video Player
  • Voice Recorder
  • Voice Search

Lastly, I installed a couple of applications from the Play Store: Agenda Widget Plus and Google Keyboard. Once I'd got them configured, I hid those applications in the applications pane too.

After all of that, I set up Touchdown… which you'll need to follow up through your own instructions!

One final thing before I wrap this all up… even though I'm on-call, that doesn't include being engaged via e-mail. As such, my e-mail doesn't need to disturb me, so I've disabled the Touchdown application's notifications for e-mail.

To do this, go into Touchdown and make sure you're at the "main" screen (not the default e-mail screen, but the one which also includes all your tasks and calendar options), then press the menu button, press "Settings", and select the "Advanced" tab. Scroll right to the bottom of the list, and press the "Email Alerts" button. Select "Customize settings" and then pick the appropriate options. If you leave nothing ticked, all you'll get is a flag in the notifications tray showing an e-mail has arrived. Personally, I've turned on "Enable lights" and picked a colour, so I can quickly see whether I've had a mail just by checking the screen.