Creating OpenStack “Allowed Address Pairs” for Clusters with Ansible

This post came about after a couple of hours of iterations, so I can’t necessarily quote all the sources I worked from, but I’ll do my best!

In OpenStack (particularly in the “Kilo” release I’m working with), if you have a networking device that will pass traffic on behalf of something else (e.g. firewall, IDS, router, transparent proxy), you need to tell the virtual NIC that the interface is allowed to pass traffic for other IP addresses, as OpenStack applies a “same origin” anti-spoofing rule to each interface by default. Defining this is more complicated than it needs to be because, for some reason, you can’t use 0.0.0.0/0 as the allowed address pair; instead you have to define the two halves, 0.0.0.0/1 and 128.0.0.0/1.

Here’s how you define those allowed address pairs (note, this assumes you’ve got some scaffolding in place to define things like “network_appliance”):

allowed_address_pairs: "{% if (item.0.network_appliance|default('false')|lower() == 'true') or (item.1.network_appliance|default('false')|lower() == 'true') %}[{'ip_address': '0.0.0.0/1'}, {'ip_address': '128.0.0.0/1'}]{% else %}{{ item.0.allowed_address_pairs|default(omit) }}{% endif %}"

OK, so we’ve defined the allowed address pairs! We can pass traffic across our firewall. But (and there’s always a but), the product I’m working with at the moment uses a floating MAC address on each port when you define an HA pair in a cluster. The vendor has a standard scheme for how each port’s floating MAC is assigned… so here’s what I’ve ended up with (and yes, I know it’s a mess!)

allowed_address_pairs: "{% if (item.0.network_appliance|default('false')|lower() == 'true') or (item.1.network_appliance|default('false')|lower() == 'true') %}[{'ip_address': '0.0.0.0/1'},{'ip_address': '128.0.0.0/1'}{% if item.0.ha is defined and item.0.ha != '' %}{% for vdom in range(0,40, 10) %},{'ip_address': '0.0.0.0/1','mac_address': '{{ item.0.floating_mac_prefix|default(item.0.image.floating_mac_prefix|default(floating_mac_prefix)) }}:{% if item.0.ha.group_id|default(0) < 16 %}0{% endif %}{{ '%0x' | format(item.0.ha.group_id|default(0)|int) }}:{% if vdom+(item.1.interface|default('1')|replace('port', '')|int)-1 < 16 %}0{% endif %}{{ '%0x' | format(vdom+(item.1.interface|default('1')|replace('port', '')|int)-1) }}'}, {'ip_address': '128.0.0.0/1','mac_address': '{{ item.0.floating_mac_prefix|default(item.0.image.floating_mac_prefix|default(floating_mac_prefix)) }}:{% if item.0.ha.group_id|default(0) < 16 %}0{% endif %}{{ '%0x' | format(item.0.ha.group_id|default(0)|int) }}:{% if vdom+(item.1.interface|default('0')|replace('port', '')|int)-1 < 16 %}0{% endif %}{{ '%0x' | format(vdom+(item.1.interface|default('1')|replace('port', '')|int)-1) }}'}{% endfor %}{% endif %}]{% else %}{{ item.0.allowed_address_pairs|default(omit) }}{% endif %}"

Let's break this down a bit. The vendor says that each port gets a standard prefix (e.g. DE:CA:FB:AD), then the penultimate octet is the "Cluster ID" number in hex, and the last octet is the port number (zero-indexed) added to a VDOM number, which increments in tens. We're only allowed to assign 10 "allowed address pairs" to an interface, so I've got the two originals (which are assigned to whatever the defined MAC address of the interface is), and four passes around the VDOM loop. Other vendors (e.g. this one) do things differently, so I'll probably need to revisit this once I know how the next one (and the next one... etc.) works!

So, we have here a few parts to make that happen.

The penultimate octet, which is the group ID in hex, needs to be two hex digits long. Without adding more Python modules to our default machines, we can't use a "pad" filter to add leading zeros to the MAC octets, so we do that by hand:

{% if item.0.ha.group_id|default(0) < 16 %}0{% endif %}

And here's how to convert the group ID into a hex number:

{{ '%0x' | format(item.0.ha.group_id|default(0)|int) }}
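
If you want to check those two pieces together outside the full template, a throwaway debug task does the job (the group_id value here is just an example):

- debug:  # prints "0a" for a cluster/group ID of 10
    msg: "{% if group_id|int < 16 %}0{% endif %}{{ '%0x' | format(group_id|int) }}"
  vars:
    group_id: 10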

Then the next octet is the sum of the VDOM and PortID. First we need to loop around the VDOMs. We don't always know whether we're going to be adding VDOMs until after deployment has started, so here we will assume we've got 3 VDOMs (plus VDOM "0" for management) as it doesn't really matter if we don't end up using them. We create the vdom variable like this:

{% for vdom in range(0, 40, 10) %} STUFF {% endfor %}
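
If you've not used a stepped range() before, a quick debug task shows what that loop yields:

- debug:  # prints "0 10 20 30"
    msg: "{% for vdom in range(0, 40, 10) %}{{ vdom }} {% endfor %}"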

We need to put the actual port ID in there too. As we're using a with_subelements loop we can't create an increment, but what we can do is make sure we're recording the interface number. This only works here because the vendor uses sequential port names (port1, port2, etc.); we'll need to experiment further with other vendors! So, here's how we're doing this. We already know how to create a hex number, but we need a few more Jinja2 filters here:

{{ '%0x' | format(vdom+(item.1.interface|default('1')|replace('port', '')|int)-1) }}

Let's pull this apart a bit further. item.1.interface is the name of the interface, and if it doesn't exist (using the |default('1') part) we replace it with the string "1". So, let's replace that variable with a "normal" value.

{{ '%0x' | format(vdom+("port1"|replace('port', '')|int)-1) }}

Next, we need to remove the word "port" from the string "port1" to make it just "1", so we use the replace filter to strip part of that value out. Let's do that:

{{ '%0x' | format(vdom+("1"|int)-1) }}

After that, we need to turn the string "1" into the literal number 1:

{{ '%0x' | format(vdom+1-1) }}

We loop through vdom several times, but let's pick one instance of that at random - 30 (the fourth iteration of the vdom for-loop):

{{ '%0x' | format(30+1-1) }}

And then we resolve the maths:

{{ '%0x' | format(30) }}

And then |format(30) renders the '%0x' format string as "1e" (30 in hex).
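
To sanity-check the whole last-octet expression without deploying anything, you can drop it into a debug task. Assuming port1 (so the port part resolves to 1), something like this:

- debug:  # prints "00 0a 14 1e" - the four last octets you'll see in the resulting list below
    msg: "{% for vdom in range(0, 40, 10) %}{% if vdom < 16 %}0{% endif %}{{ '%0x' | format(vdom + 1 - 1) }} {% endfor %}"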

Assuming the vendor prefix is, as I mentioned, 'de:ca:fb:ad:' and the cluster ID is 0, this gives us the following resulting allowed address pairs:

[
{"ip_address": "0.0.0.0/1"},
{"ip_address": "128.0.0.0/1"},
{"ip_address": "0.0.0.0/1", "mac_address": "de:ca:fb:ad:00:00"},
{"ip_address": "128.0.0.0/1", "mac_address": "de:ca:fb:ad:00:00"},
{"ip_address": "0.0.0.0/1", "mac_address": "de:ca:fb:ad:00:0a"},
{"ip_address": "128.0.0.0/1", "mac_address": "de:ca:fb:ad:00:0a"},
{"ip_address": "0.0.0.0/1", "mac_address": "de:ca:fb:ad:00:14"},
{"ip_address": "128.0.0.0/1", "mac_address": "de:ca:fb:ad:00:14"},
{"ip_address": "0.0.0.0/1", "mac_address": "de:ca:fb:ad:00:1e"},
{"ip_address": "128.0.0.0/1", "mac_address": "de:ca:fb:ad:00:1e"}
]

I hope this has helped you!

Defining Networks with Ansible

In my day job, I’m using Ansible to provision networks in OpenStack. One of the complaints I’ve had about the way I now define them is that the person implementing the network has to spell out all the network elements – the subnet size, DHCP pool, the addresses of the firewalls and the names of those items. This works for a manual implementation process, but is seriously broken when you try to hand it over to someone else to implement. Most people just want something which says “Here is the network I want to implement – 192.0.2.0/24”… and have the system make it for them.

So, I wrote some code to make that happen. It’s not perfect, and it’s not what’s in production (we have lots more things I need to add for that!) but it should do OK with an IPv4 network.

Hope this makes sense!

---
- hosts: localhost
  vars:
  - networks:
      # Defined as a subnet with specific router and firewall addressing
      external:
        subnet: "192.0.2.0/24"
        firewall: "192.0.2.1"
        router: "192.0.2.254"
      # Defined as an IP address and CIDR prefix, rather than a proper network address and CIDR prefix
      internal_1:
        subnet: "198.51.100.64/24"
      # A valid smaller network and CIDR prefix
      internal_2:
        subnet: "203.0.113.0/27"
      # A tiny CIDR network
      internal_3:
        subnet: "203.0.113.64/30"
      # These two CIDR networks are unusable for this environment
      internal_4:
        subnet: "203.0.113.128/31"
      internal_5:
        subnet: "203.0.113.192/32"
      # A massive CIDR network
      internal_6:
        subnet: "10.0.0.0/8"
  tasks:
  # Based on https://stackoverflow.com/a/47631963/5738 with serious help from mgedmin and apollo13 via #ansible on Freenode
  - name: Add router and firewall addressing for CIDR prefixes < 30
    set_fact:
      networks: >
        {{ networks | default({}) | combine(
          {item.key: {
            'subnet': item.value.subnet | ipv4('network'),
            'router': item.value.router | default((( item.value.subnet | ipv4('network') | ipv4('int') ) + 1) | ipv4),
            'firewall': item.value.firewall | default((( item.value.subnet | ipv4('broadcast') | ipv4('int') ) - 1) | ipv4),
            'dhcp_start': item.value.dhcp_start | default((( item.value.subnet | ipv4('network') | ipv4('int') ) + 2) | ipv4),
            'dhcp_end': item.value.dhcp_end | default((( item.value.subnet | ipv4('broadcast') | ipv4('int') ) - 2) | ipv4)
          }
        }) }}
    with_dict: "{{ networks }}"
    when: item.value.subnet | ipv4('prefix') < 30
  - name: Add router and firewall addressing for CIDR prefixes = 30
    set_fact:
      networks: >
        {{ networks | default({}) | combine(
          {item.key: {
            'subnet': item.value.subnet | ipv4('network'),
            'router': item.value.router | default((( item.value.subnet | ipv4('network') | ipv4('int') ) + 1) | ipv4),
            'firewall': item.value.firewall | default((( item.value.subnet | ipv4('broadcast') | ipv4('int') ) - 1) | ipv4)
          }
        }) }}
    with_dict: "{{ networks }}"
    when: item.value.subnet | ipv4('prefix') == 30
  - debug:
      var: networks

Using inspec to test your ansible

Over the past few days I’ve been binge listening to the Arrested Devops podcast. In one of the recent episodes (“Career Change Into DevOps With Michael Hedgpeth, Annie Hedgpeth, And Megan Bohl (ADO102)“) one of the interviewees mentions that she got started in DevOps by using Inspec.

Essentially, inspec is a way of explaining “this is what my server must look like”, so you can then test these statements against a built machine… effectively letting you unit test your provisioning scripts.

I’ve already built a fair bit of my current personal project using Ansible, so I wasn’t exactly keen to re-write everything from scratch, but it did make me think that maybe I should have a common set of tests to see how close my server was to the hardening “Benchmark” guides from CIS… and that’s pretty easy to script in inspec, particularly as the tests in those documents list the “how to test” and “how to remediate” commands to execute.

These are in the process of being drawn up (so far, all I have is an inspec test saying “confirm you’re running on Ubuntu 16.04”… not very complex!!) but, from the looks of things, the following playbook would work relatively well!

A brief guide to using vagrant-aws

CCHits was recently asked to move its media to another host, and while we were doing that we noticed that many of the Monthly shows were broken in one way or another…

Cue a massive rebuild attempt!

We already have a “ShowRunner” script, which we use with a simple Vagrant machine. I knew Vagrant can use other hypervisor “providers”, and I used to use AWS to build the shows, so why not wrap the two parts together?

Firstly, I installed the vagrant-aws plugin:

vagrant plugin install vagrant-aws

Next I amended my Vagrantfile with the vagrant-aws values mentioned in the plugin readme:

Vagrant.configure(2) do |config|
  config.vm.provider :aws do |aws, override|
    config.vm.box = "ShowMaker"
    aws.tags = { 'Name' => 'ShowMaker' }
    config.vm.box_url = "https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box"
    
    # AWS Credentials:
    aws.access_key_id = "DECAFBADDECAFBADDECAF"
    aws.secret_access_key = "DeadBeef1234567890+AbcdeFghijKlmnopqrstu"
    aws.keypair_name = "TheNameOfYourSSHKeyInTheEC2ManagementPortal"
    
    # AWS Location:
    aws.region = "us-east-1"
    aws.region_config "us-east-1", :ami => "ami-c29e1cb8" # If you pick another region, use the relevant AMI for that region
    aws.instance_type = "t2.micro" # Scale accordingly
    aws.security_groups = [ "sg-1234567" ] # Note this *MUST* be an SG ID not the name
    aws.subnet_id = "subnet-decafbad" # Pick one subnet from https://console.aws.amazon.com/vpc/home
    
    # AWS Storage:
    aws.block_device_mapping = [{
      'DeviceName' => "/dev/sda1",
      'Ebs.VolumeSize' => 8, # Size in GB
      'Ebs.DeleteOnTermination' => true,
      'Ebs.VolumeType' => "GP2", # General performance - you might want something faster
    }]
    
    # SSH:
    override.ssh.username = "ubuntu"
    override.ssh.private_key_path = "/home/youruser/.ssh/id_rsa" # or the SSH key you've generated
    
    # /vagrant directory - thanks to https://github.com/hashicorp/vagrant/issues/5401
    override.nfs.functional = false # It tries to use NFS - use RSYNC instead
  end
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision "shell", path: "./run_setup.sh"
  config.vm.provision "shell", run: "always", path: "./run_showmaker.sh"
end

Of course, if you try to put this into your Github repo, it’s going to get pillaged and you’ll be spending lots of money on monero mining very quickly… so instead, I spotted this which you can do to separate out your credentials:

At the top of the Vagrantfile, add these two lines:

require_relative 'settings_aws.rb'
include SettingsAws

Then, replace the lines where you specify a “secret”, like this:

    aws.access_key_id = AWS_ACCESS_KEY
    aws.secret_access_key = AWS_SECRET_KEY

Lastly, create a file “settings_aws.rb” in the same path as your Vagrantfile, that looks like this:

module SettingsAws
    AWS_ACCESS_KEY = "DECAFBADDECAFBADDECAF"
    AWS_SECRET_KEY = "DeadBeef1234567890+AbcdeFghijKlmnopqrstu"
end

This file then can be omitted from your git repository using a .gitignore file.
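
For completeness, the .gitignore entry for that is just the filename:

settings_aws.rb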

Today I learned… Cloud-init doesn’t like you repeating the same things

Because of templates I was building in my post “Today I learned… Ansible Include Templates”, I thought you could repeat the same sections over again. Here’s a snippet of something like what I’d built (after combining lots of templates together):

Note this is a non-working code sample!


#cloud-config
packages:
- iperf
- git

write_files:
- content: {% include 'files/public_key.j2' %}
  path: /root/.ssh/authorized_keys
  owner: root:root
  permission: '0600'
- content: {% include 'files/private_key.j2' %}
  path: /root/.ssh/id_rsa
  owner: root:root
  permission: '0600'

packages:
- byobu

write_files:
- content: |
    #!/bin/bash
    git clone {{ test_scripts }} /root/iperf_scripts
    bash /root/iperf_scripts/run_test.sh
  path: /root/run_test
  owner: root:root
  permission: '0700'

runcmd:
- /root/run_test

I’d get *bits* of it to run – basically, the last file, the last package and the last runcmd… but not all of it.

Turns out, cloud-init doesn’t merge repeated top-level keys for you. Instead, you need to put everything together yourself, so that all the write_files items and all the packages items live under a single instance of each key.

Which, when you think about what it’s doing, makes sense: each of those parent lines is defining a key called… well, whatever that line is, and if you define it again, only the last definition is kept.
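
For reference, here's roughly what the merged version of the snippet above would look like (still a sketch until the includes resolve; note I've used permissions, which is the key cloud-init's write_files actually expects, rather than permission):

#cloud-config
packages:
- iperf
- git
- byobu

write_files:
- content: {% include 'files/public_key.j2' %}
  path: /root/.ssh/authorized_keys
  owner: root:root
  permissions: '0600'
- content: {% include 'files/private_key.j2' %}
  path: /root/.ssh/id_rsa
  owner: root:root
  permissions: '0600'
- content: |
    #!/bin/bash
    git clone {{ test_scripts }} /root/iperf_scripts
    bash /root/iperf_scripts/run_test.sh
  path: /root/run_test
  owner: root:root
  permissions: '0700'

runcmd:
- /root/run_test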

Today I learned… that you can look at the “cloud-init” files on your target server…

Today I have been debugging why my Cloud-init scripts weren’t triggering on my Openstack environment.

I realised that something was wrong when I tried to use the noVNC console[1] with a password I’d set… no luck. So, next I ran a command to review the console logs[2], and saw a message (now, sadly, long gone – so I can’t even include it here!) suggesting there was an issue parsing my YAML file. Uh oh!

I’m using Ansible’s os_server module, and using templates to complete the userdata field, which in turn gets populated as cloud-init scripts…. and so clearly I had two ways to debug this – prefix my ansible playbook with a few debug commands, but then that can get messy… OR SSH into the box, and look through the logs. I knew I could SSH in, so the cloud-init had partially fired, but it just wasn’t parsing what I’d submitted. I had a quick look around, and found a post which mentioned debugging cloud-init. This mentioned that there’s a path (/var/lib/cloud/instances/$UUID/) you can mess around in, to remove some files to “fool” cloud-init into thinking it’s not been run… but, I reasoned, why not just see what’s there.

And in there was the motherlode – user-data.txt… bingo.

In the jinja2 template I was using to populate the userdata, I’d referenced another file, again using a template. It looks like that template needs an extra line at the end, otherwise, it all runs together.

Whew!

This does concern me a little, as I had previously been using this stanza to “simply” change the default user password to something a little less complicated:


#cloud-config
ssh_pwauth: True
chpasswd:
  list: |
    ubuntu:{{ default_password }}
  expire: False

But now that I look at the documentation, I realise you can also specify that as a pre-hashed value (in which case, you would suffix that default_password item above with |password_hash('sha512')) which makes it all better again!
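
Here's a sketch of what that would look like - the same stanza, but hashing the templated value with Ansible's password_hash filter so the rendered userdata only ever contains the hash:

#cloud-config
ssh_pwauth: True
chpasswd:
  list: |
    ubuntu:{{ default_password | password_hash('sha512') }}
  expire: False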

[1] Running openstack --os-cloud cloud_a console url show servername gives you a URL to visit that has an HTML5-based VNC-ish client. Note the “cloud_a” and “servername” should be replaced by your clouds.yml reference and the server name or server ID you want to connect to.
[2] Like before, openstack --os-cloud cloud_a console log show servername gives you the output of the boot sequence (e.g. dmesg plus the normal startup commands, and finally, cloud-init). It can be useful. Equally, it’s logs… which means there’s a lot to wade through!

One to read: “Test Driven Development (TDD) for networks, using Ansible”

Thanks to my colleague Simon (@sipart on Twitter), I spotted this post (and its companion Github repository) which explains how to do test-driven development in Ansible.

Essentially, you create two roles – test (the author referred to it as “validate”) and one to actually do the thing you want it to do (in the author’s case “add_vlan”).

In the testing role, you’d have the following layout:

/path/to/roles/testing/tasks/main.yml
/path/to/roles/testing/tasks/SOMEFEATUREtest.yml

In the main.yml file, you have a simple stanza:

---
- name: Include all the test files
  include: "{{ outer_item }}"
  with_fileglob:"/path/to/roles/validate/tasks/*test.yml"
  loop_control: loop_var=outer_item

I’m sure that “with_fileglob” line could be improved to not actually need a full path… anyway
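
For example (untested, and assuming the test files live inside the role itself), something like this ought to avoid the hard-coded path, since role_path always points at the running role:

---
- name: Include all the test files
  include: "{{ outer_item }}"
  with_fileglob: "{{ role_path }}/tasks/*test.yml"
  loop_control:
    loop_var: outer_item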

Then in your YourFeature_test.yml file, you do things like this:

---
- name: "Pseudocode in here. Use real modules for your testing!!"
  get_vlan_config: filter_for=needle_vlan
  register: haystack_var

- assert: that="needle_vlan in haystack_var"

When you run the play of the role the first time, the response will be “failed” (because “needle_vlan” doesn’t exist). Next, do the “real” play of the role (in the author’s case, add_vlan), which creates the VLAN. Then re-run the test role, and the response should now be “ok”.

I’d probably script this so that it goes:

      reset-environment set_testing=true (maybe create a random little network)
      test
      run-action
      test
      reset-environment set_testing=false

The benefit to doing it that way is that you “know” your tests aren’t running if the environment doesn’t have the “set_testing” thing in place, you get to run all your tests in a “clean room”, and then you clear it back down again afterwards, leaving it clear for the next pass of your automated testing suite.
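
In ansible-playbook terms, that sequence might look something like this (the playbook names here are made up):

ansible-playbook reset_environment.yml -e set_testing=true
ansible-playbook test.yml
ansible-playbook add_vlan.yml
ansible-playbook test.yml
ansible-playbook reset_environment.yml -e set_testing=false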

Fun!

Today I learned… Ansible Include Templates

I am building OpenStack servers with the Ansible os_server module. One of that module's fields, userdata, accepts a very long string. Typically, I end up with a giant blob of unreadable build script in this field…

Today I learned that I can use this:

---
- name: "Create Server"
  os_server:
    name: "{{ item.value.name }}"
    state: present
    availability_zone: "{{ item.value.az.name }}"
    flavor: "{{ item.value.flavor }}"
    key_name: "{{ item.value.az.keypair }}"
    nics: "[{%- for nw in item.value.ports -%}{'port-name': '{{ ProjectPrefix }}{{ item.value.name }}-Port-{{nw.network.name}}'}{%- if not loop.last -%}, {%- endif -%} {%- endfor -%}]" # Ignore this line - it's complicated for a reason
    boot_volume: "{{ ProjectPrefix }}{{ item.value.name }}-OS-Volume" # Ignore this line also :)
    terminate_volume: yes
    volumes: "{%- if item.value.log_size is defined -%}[{{ ProjectPrefix }}{{ item.value.name }}-Log-Volume]{%- else -%}{{ omit }}{%- endif -%}"
    userdata: "{% include 'templates/userdata.j2' %}"
    auto_ip: no
    timeout: 65535
    cloud: "{{ cloud }}"
  with_dict: "{{ Servers }}"

This file (/path/to/ansible/playbooks/servers.yml) is referenced by my play.yml (/path/to/ansible/play.yml) via an include, so the template reference there is in my templates directory (/path/to/ansible/templates/userdata.j2).

That template can also then reference other template files itself (using {% include 'templates/some_other_file.extension' %}) so you can have nicely complex userdata fields with loads and loads of detail, and not make the actual play complicated (or at least, no more than it already needs to be!)
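
As a (made-up) illustration, the top-level userdata.j2 can then be little more than a set of includes:

#cloud-config
{% include 'templates/userdata_packages.j2' %}
{% include 'templates/userdata_write_files.j2' %}
{% include 'templates/userdata_runcmd.j2' %}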

Some notes from Ansible mentoring

Last night, I met up with my friend Tim Dobson to talk about Ansible. I’m not an expert, but I’ve done a lot of Ansible recently, and he wanted some pointers.

He already had some general knowledge, but wanted some pointers on “other things you can do with Ansible”, so here are a couple of the things we did.

  • If you want to set certain things to happen as “Production” and other things to happen as “Pre-production” you can either have two playbooks (e.g. pre-prod.yml versus prod.yml) which call certain features… OR use something like this:

    ---
    - hosts: localhost
      tasks:
      - set_fact:
          my_run_state: "{% if lookup('env', 'runstate') == '' %}{{ default_run_state|default('prod') }}{% else %}{{ lookup('env', 'runstate')|lower() }}{% endif %}"
      - debug: msg="Doing prod"
        when: my_run_state == 'prod'
      - debug: msg="Doing something else"
        when: my_run_state != 'prod'

    With this, you can define a default run state (prod), override it with a group or host var (if you have, for example, a staging service or proof of concept space), or use your Environment variables to do things. In the last case, you’d execute this as follows:

    runstate=preprod ansible-playbook site.yml

  • You can tag almost every action in your plays. Here are some (contrived) examples:


    ---
    - name: Get facts from your hosts
      tags: configure
      hosts: all
    - name: Tell me all the variable data you've collected
      tags: dump
      hosts: localhost
      tasks:
        - name: Show data
          tags: show
          debug:
            var: item
          with_items: hostvars

    When you then run

    ansible-playbook test.yml --list-tags

    You get

    playbook: test.yml

      play #1 (all): Get facts from your hosts      TAGS: [configure]
          TASK TAGS: [configure]

      play #2 (localhost): Tell me all the variable data you've collected   TAGS: [dump]
          TASK TAGS: [dump, show]

    Now you can run ansible-playbook test.yml -t configure or ansible-playbook test.yml --skip-tags configure

    To show how useful this can be, here’s the output from the “–list-tags” I’ve got on a project I’m doing at work:
    playbook: site.yml

      play #1 (localhost): Provision A-Side Infrastructure  TAGS: [Functional_Testing,A_End]
          TASK TAGS: [A_End, EXCLUDE_K5_FirewallManagers, EXCLUDE_K5_Firewalls, EXCLUDE_K5_Networks, EXCLUDE_K5_SecurityGroups, EXCLUDE_K5_Servers, Functional_Testing, K5_Auth, K5_FirewallManagers, K5_Firewalls, K5_InterProjectLinks, K5_Networks, K5_SecurityGroups, K5_Servers]

      play #2 (localhost): Provision B-Side Infrastructure  TAGS: [Functional_Testing,B_End]
          TASK TAGS: [B_End, EXCLUDE_K5_Firewalls, EXCLUDE_K5_Networks, EXCLUDE_K5_SecurityGroups, EXCLUDE_K5_Servers, Functional_Testing, K5_Auth, K5_FirewallManagers, K5_Firewalls, K5_InterProjectLinks, K5_Networks, K5_SecurityGroups, K5_Servers]

      play #3 (localhost): Provision InterProject Links - Part 1    TAGS: [Functional_Testing,InterProjectLink]
          TASK TAGS: [EXCLUDE_K5_InterProjectLinks, Functional_Testing, InterProjectLink, K5_InterProjectLinks]

      play #4 (localhost): Provision InterProject Links - Part 2    TAGS: [Functional_Testing,InterProjectLink]
          TASK TAGS: [EXCLUDE_K5_InterProjectLinks, Functional_Testing, InterProjectLink, K5_InterProjectLinks]

      play #5 (localhost): Provision TPT environment        TAGS: [Performance_Testing]
          TASK TAGS: [EXCLUDE_K5_FirewallManagers, EXCLUDE_K5_Firewalls, EXCLUDE_K5_Networks, EXCLUDE_K5_SecurityGroups, EXCLUDE_K5_Servers, K5_Auth, K5_FirewallManagers, K5_Firewalls, K5_InterProjectLinks, K5_Networks, K5_SecurityGroups, K5_Servers, Performance_Testing, debug]

    This then means that if I get a build fail part-way through, or if I’m debugging just a particular part, I can run this: ansible-playbook site.yml -t Performance_Testing --skip-tags EXCLUDE_K5_Firewalls,EXCLUDE_K5_SecurityGroups,EXCLUDE_K5_Networks

Using Python-OpenstackClient and Ansible with K5

Recently, I have used K5, which is an instance of OpenStack, run by Fujitsu (my employer). To do some of the automation tasks I have played with both python-openstackclient and Ansible. This post is going to cover how to get those tools to work with K5.

I have access to a Linux virtual machine (Ubuntu 16.04) and the Windows Subsystem for Linux in Windows 10 to run “Bash on Ubuntu on Windows”, and both accept the same set of commands.

In order to run these commands, you need a couple of dependencies. Your mileage might vary with other Linux distributions, but, for Ubuntu based distributions, run this command:

sudo apt install python-pip build-essential libssl-dev libffi-dev python-dev

Next, use pip to install the python modules you need:

sudo -H pip install shade==1.11.1 ansible cryptography python-openstackclient

If you’re only ever going to be working with a single project, you can define a handful of environment variables prefixed OS_, like this:

export OS_USERNAME=BloggsF
export OS_PASSWORD=MySuperSecretPasswordIsHere
export OS_REGION_NAME=uk-1
export OS_USER_DOMAIN_NAME=YourProjectName
export OS_PROJECT_NAME=YourProjectName-prj
export OS_PROJECT_ID=baddecafbaddecafbaddecafbaddecaf
export OS_AUTH_URL=https://identity.uk-1.cloud.global.fujitsu.com/v3
export OS_VOLUME_API_VERSION=2
export OS_IDENTITY_API_VERSION=3

But, if you’re working with a few projects, it’s probably worth separating these out into clouds.yml files. This would be stored in ~/.config/openstack/clouds.yml with the credentials for the environment you’re using:

---
clouds:
  root:
    identity_api_version: 3
    regions:
    - uk-1
    auth:
      auth_url: https://identity.uk-1.cloud.global.fujitsu.com/v3
      password: MySuperSecretPasswordIsHere
      project_id: baddecafbaddecafbaddecafbaddecaf
      project_name: YourProjectName-prj
      username: BloggsF
      user_domain_name: YourProjectName

Optionally, you can separate out the password, username or any other “sensitive” information into a secure.yml file stored in the same location (removing those lines from the clouds.yml file), like this:

---
clouds:
  root:
    auth:
      password: MySuperSecretPasswordIsHere

Now, you can use the Python based Openstack Client, using this invocation:

openstack --os-cloud root server list

Alternatively you can use the Ansible Openstack (and K5) modules, like this:

---
tasks:
- name: "Authenticate to K5"
  k5_auth:
    cloud: root
  register: k5_auth_reg
- name: "Create Network"
  k5_create_network:
    name: "Public"
    availability_zone: "uk-1a"
    state: present
    k5_auth: "{{ k5_auth_reg.k5_auth_facts }}"
- name: "Create Subnet"
  k5_create_subnet:
    name: "Public"
    network_name: "Public"
    cidr: "192.0.2.0/24"
    gateway_ip: "192.0.2.1"
    availability_zone: "uk-1a"
    state: present
    k5_auth: "{{ k5_auth_reg.k5_auth_facts }}"
- name: "Create Router"
  k5_create_router:
    name: "Public"
    availability_zone: "uk-1a"
    state: present
    k5_auth: "{{ k5_auth_reg.k5_auth_facts }}"
- name: "Attach private network to router"
  os_router:
    name: "Public"
    state: present
    network: "inf_az1_ext-net02"
    interfaces: "Public"
    cloud: root
- name: "Create Servers"
  os_server:
    name: "Server"
    availability_zone: "uk-1a"
    flavor: "P-1"
    state: present
    key_name: "MyFirstKey"
    network: "Public-Network"
    image: "Ubuntu Server 14.04 LTS (English) 02"
    boot_from_volume: yes
    terminate_volume: yes
    security_groups: "Default"
    auto_ip: no
    timeout: 7200
    cloud: root