Podcast Summary – Admin Admin Podcast #58

Less than two weeks after my last “Live” show with the podcast, I’m once again contributing to the show. This time I’m covering for Jerry as a guest presenter on The Admin Admin Podcast – #58 The Correct Answer to the Microsoft Question? – being a bit controversial, for anyone who knows me… by defending Microsoft. I also mention using Ansible to automate server software deployments, and try to work out how small IT firms decide when to take out support contracts. I briefly mention Kanban as a task-tracking methodology, and a specific implementation of it for MS Outlook.

The Admin Admin podcast is a really nice, broad-ranging podcast by three guys who work in IT support at different levels of the support chain. If you work in IT, I think it’s probably worth having a listen.

Outlook based “Kanban”

Do you use Outlook for your email? Do you sometimes wish you could use a Kanban board with Outlook? Well, look no further!!

Thanks to an internal post about improving workflows, someone mentioned a git repo called “Outlook-Taskboard”, which gives you the ability to create and manipulate your Outlook tasks in a Kanban fashion.

Because it’s “just” native Outlook tasks, you can still manage them using the sidebar or the mobile apps, but when you get back to Outlook, you get to see their status and manage your tasks appropriately.

Podcast Summary – Admin Admin Podcast #57

I’m back again! I’m standing in (again) for Andy as a guest presenter on The Admin Admin Podcast episode #57 – Live at OggCamp, talking about getting Open Source products under support in a proprietary company. The “famous” Martin Wimpress stands in for Jerry.

As I said last time I was on, the guys who host the Admin Admin podcast are really nice, and cover a really great range of subjects about working as a server or network administrator. They have a chat room on Telegram, so if you’re interested in being an admin, it’s worth having a listen, and then maybe joining the chat room!

Check Point Management API tips

I was very fortunate yesterday to spend some time with two Check Point engineering staff. Check Point make the high-end firewall products that I’m using at work. During the conversation, I mentioned two issues I’ve had during automated builds of Check Point appliances…

  1. During the build process, I want to add lots of devices. In my build, however, I need to log in to the management API, and therefore hand the credentials for the user account into the clear-text userdata field – NOT GOOD! What I was told was that, actually, you don’t need to operate like that! If you’re running commands on your manager, you can instead run the command in “root” mode, which bypasses any requests for authentication and, as an added “win”, publishes every change you make on exit too! Here’s how:

    mgmt_cli -r true add host name "New Host 1" ip-address "192.0.2.1"
  2. My other option was to finish our Ansible deployment of the OpenStack server and then, once it was up and accessible… call out against the API. But how do you enable that during the build? Well, you can run four commands against the server to allow remote access to the API, after which you should have access from all the same places your GUI client can access it from! Here’s how:

    mgmt_cli -r true login domain "System Data" > id.txt
    mgmt_cli -s id.txt set api-settings accepted-api-calls-from "all ip addresses that can be used for gui clients" automatic-start true
    mgmt_cli -s id.txt publish
    api restart

My sincere thanks to Javier and Uri for their guidance. For those wondering about those API calls – see these links: Using the -r flag and configuring the API for remote access.
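For those who would rather drive this over HTTPS once remote access is enabled: the same operations are exposed as a JSON web API under /web_api on the manager. Here’s a minimal sketch with curl (the hostname, user and password are placeholders, not from my build):

# Log in; the JSON response contains a session token ("sid")
curl -k https://mgmt.example.com/web_api/login \
  -H "Content-Type: application/json" \
  -d '{"user": "apiuser", "password": "CHANGEME"}'
# Pass the sid on later calls, then publish the changes
curl -k https://mgmt.example.com/web_api/add-host \
  -H "Content-Type: application/json" -H "X-chkp-sid: $SID" \
  -d '{"name": "New Host 1", "ip-address": "192.0.2.1"}'
curl -k https://mgmt.example.com/web_api/publish \
  -H "Content-Type: application/json" -H "X-chkp-sid: $SID" \
  -d '{}'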

Podcast Summary – Admin Admin Podcast #55

This week sees the publication of The Admin Admin Podcast episode #55, in which I guest present (and guest introduce!) about network infrastructure. I also answer some questions about using certbot (the client for the free Let’s Encrypt TLS certificate service), about where to put script files on Linux, and a bit about MTU (Maximum Transmission Unit) – although that’s a bit outside my area of expertise, so if I got it wrong, let them know!

The guys who host the Admin Admin podcast are really nice, and cover a really great range of subjects about working as a server or network administrator. They have a chat room on Telegram, so if you’re interested in being an admin, it’s worth having a listen, and then maybe joining the chat room!

Today I learned… Cloud-init doesn’t like you repeating the same things

Because of the templates I was building in my post “Today I learned… Ansible Include Templates”, I thought you could repeat the same sections over and over again. Here’s a snippet of something like what I’d built (after combining lots of templates together):

Note this is a non-working code sample!


#cloud-config
packages:
- iperf
- git

write_files:
- content: {% include 'files/public_key.j2' %}
  path: /root/.ssh/authorized_keys
  owner: root:root
  permissions: '0600'
- content: {% include 'files/private_key.j2' %}
  path: /root/.ssh/id_rsa
  owner: root:root
  permissions: '0600'

packages:
- byobu

write_files:
- content: |
    #!/bin/bash
    git clone {{ test_scripts }} /root/iperf_scripts
    bash /root/iperf_scripts/run_test.sh
  path: /root/run_test
  owner: root:root
  permissions: '0700'

runcmd:
- /root/run_test

I’d get *bits* of it to run – basically, the last file, the last package and the last runcmd… but not all of it.

Turns out, cloud-init doesn’t stitch repeated sections back together. In YAML, a repeated top-level key simply replaces the earlier one, so you need to put everything together: all the write_files items and all the packages items have to live under a single instance of each key.

Which, when you think about what it’s doing, makes sense: each of those parent lines is defining a variable named after that key, and if you define the same variable again, only the last definition is kept!
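For reference, here’s roughly what the working, combined version looks like – the same content as above, merged under a single instance of each key:

#cloud-config
packages:
- iperf
- git
- byobu

write_files:
- content: {% include 'files/public_key.j2' %}
  path: /root/.ssh/authorized_keys
  owner: root:root
  permissions: '0600'
- content: {% include 'files/private_key.j2' %}
  path: /root/.ssh/id_rsa
  owner: root:root
  permissions: '0600'
- content: |
    #!/bin/bash
    git clone {{ test_scripts }} /root/iperf_scripts
    bash /root/iperf_scripts/run_test.sh
  path: /root/run_test
  owner: root:root
  permissions: '0700'

runcmd:
- /root/run_test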

Today I learned… that you can look at the “cloud-init” files on your target server…

Today I have been debugging why my cloud-init scripts weren’t triggering on my OpenStack environment.

I realised that something was wrong when I tried to use the noVNC console[1] with a password I’d set… no luck. So, next I ran a command to review the console logs[2], and saw a message (now, sadly, long gone – so I can’t even include it here!) suggesting there was an issue parsing my YAML file. Uh oh!

I’m using Ansible’s os_server module, and using templates to complete the userdata field, which in turn gets populated as cloud-init scripts… so I clearly had two ways to debug this: prefix my Ansible playbook with a few debug tasks (which can get messy), OR SSH into the box and look through the logs. I knew I could SSH in, so cloud-init had partially fired; it just wasn’t parsing what I’d submitted. I had a quick look around, and found a post which mentioned debugging cloud-init. This mentioned that there’s a path (/var/lib/cloud/instances/$UUID/) you can mess around in, removing some files to “fool” cloud-init into thinking it’s not been run… but, I reasoned, why not just see what’s there?

And in there was the motherlode – user-data.txt… bingo.
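If you want to do the same, it’s just a case of SSHing in and reading the files in that directory (the instance UUID in the path will vary, so look at what’s actually there):

sudo ls /var/lib/cloud/instances/
# replace <instance-uuid> with the directory name you find above
sudo less /var/lib/cloud/instances/<instance-uuid>/user-data.txt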

In the jinja2 template I was using to populate the userdata, I’d referenced another file, again using a template. It turns out that the included template needs a trailing newline at the end; otherwise its content and the next line of the parent template all run together.

Whew!

This does concern me a little, as I had previously been using this stanza to “simply” change the default user password to something a little less complicated:


#cloud-config
ssh_pwauth: True
chpasswd:
  list: |
    ubuntu:{{ default_password }}
  expire: False

But now that I look at the documentation, I realise you can also specify that as a pre-hashed value (in which case, you would suffix the default_password item above with |password_hash('sha512')), which makes it all better again!
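For clarity, that would make the stanza look something like this (a sketch based on the documentation – password_hash is an Ansible filter, applied when the template is rendered, so only the hash ends up in the userdata):

#cloud-config
ssh_pwauth: True
chpasswd:
  list: |
    ubuntu:{{ default_password|password_hash('sha512') }}
  expire: False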

[1] Running openstack --os-cloud cloud_a console url show servername gives you a URL to visit, which hosts an HTML5-based VNC-ish client. Note that “cloud_a” and “servername” should be replaced by your clouds.yml reference and the server name or server ID you want to connect to.
[2] Likewise, openstack --os-cloud cloud_a console log show servername gives you the output of the boot sequence (e.g. dmesg, plus the normal startup commands and, finally, cloud-init). It can be useful. Equally, it’s logs… which means there’s a lot to wade through!

Today I learned… Ansible Include Templates

I am building OpenStack servers with the Ansible os_server module. One of the module’s fields (userdata) will accept a very long string. Typically, I end up with a giant blob of unreadable build script in this field…

Today I learned that I can use this:

---
- name: "Create Server"
  os_server:
    name: "{{ item.value.name }}"
    state: present
    availability_zone: "{{ item.value.az.name }}"
    flavor: "{{ item.value.flavor }}"
    key_name: "{{ item.value.az.keypair }}"
    nics: "[{%- for nw in item.value.ports -%}{'port-name': '{{ ProjectPrefix }}{{ item.value.name }}-Port-{{nw.network.name}}'}{%- if not loop.last -%}, {%- endif -%} {%- endfor -%}]" # Ignore this line - it's complicated for a reason
    boot_volume: "{{ ProjectPrefix }}{{ item.value.name }}-OS-Volume" # Ignore this line also :)
    terminate_volume: yes
    volumes: "{%- if item.value.log_size is defined -%}[{{ ProjectPrefix }}{{ item.value.name }}-Log-Volume]{%- else -%}{{ omit }}{%- endif -%}"
    userdata: "{% include 'templates/userdata.j2' %}"
    auto_ip: no
    timeout: 65535
    cloud: "{{ cloud }}"
  with_dict: "{{ Servers }}"

This file (/path/to/ansible/playbooks/servers.yml) is referenced by my play.yml (/path/to/ansible/play.yml) via an include, so the template reference there is in my templates directory (/path/to/ansible/templates/userdata.j2).

That template can also then reference other template files itself (using {% include 'templates/some_other_file.extension' %}) so you can have nicely complex userdata fields with loads and loads of detail, and not make the actual play complicated (or at least, no more than it already needs to be!)
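For example, a hypothetical templates/userdata.j2 might look like this (the file names here are made up for illustration – and remember, from the cloud-init posts above, that each included file needs to end with a newline):

#cloud-config
packages:
- git

write_files:
- content: {% include 'templates/files/run_script.j2' %}
  path: /root/run_script
  owner: root:root
  permissions: '0700'

runcmd:
- /root/run_script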

Some notes from Ansible mentoring

Last night, I met up with my friend Tim Dobson to talk about Ansible. I’m not an expert, but I’ve done a lot of Ansible recently, and he wanted some pointers.

He already had some general knowledge, but wanted some pointers on “other things you can do with Ansible”, so here are a couple of the things we did.

  • If you want to set certain things to happen as “Production” and other things to happen as “Pre-production” you can either have two playbooks (e.g. pre-prod.yml versus prod.yml) which call certain features… OR use something like this:

    ---
    - hosts: localhost
      tasks:
        - set_fact:
            my_run_state: "{% if lookup('env', 'runstate') == '' %}{{ default_run_state|default('prod') }}{% else %}{{ lookup('env', 'runstate')|lower() }}{% endif %}"
        - debug: msg="Doing prod"
          when: my_run_state == 'prod'
        - debug: msg="Doing something else"
          when: my_run_state != 'prod'

    With this, you can define a default run state (prod), override it with a group or host var (if you have, for example, a staging service or proof-of-concept space), or use an environment variable to control it. In the last case, you’d execute it as follows:

    runstate=preprod ansible-playbook site.yml

  • You can tag almost every action in your plays. Here are some (contrived) examples:


    ---
    - name: Get facts from your hosts
      tags: configure
      hosts: all
    - name: Tell me all the variable data you've collected
      tags: dump
      hosts: localhost
      tasks:
        - name: Show data
          tags: show
          debug:
            var: item
          with_items: "{{ hostvars }}"

    When you then run

    ansible-playbook test.yml --list-tags

    You get

    playbook: test.yml

      play #1 (all): Get facts from your hosts      TAGS: [configure]
          TASK TAGS: [configure]

      play #2 (localhost): Tell me all the variable data you've collected   TAGS: [dump]
          TASK TAGS: [dump, show]

    Now you can run ansible-playbook test.yml -t configure or ansible-playbook test.yml --skip-tags configure

    To show how useful this can be, here’s the output from the “--list-tags” run on a project I’m doing at work:
    playbook: site.yml

      play #1 (localhost): Provision A-Side Infrastructure  TAGS: [Functional_Testing,A_End]
          TASK TAGS: [A_End, EXCLUDE_K5_FirewallManagers, EXCLUDE_K5_Firewalls, EXCLUDE_K5_Networks, EXCLUDE_K5_SecurityGroups, EXCLUDE_K5_Servers, Functional_Testing, K5_Auth, K5_FirewallManagers, K5_Firewalls, K5_InterProjectLinks, K5_Networks, K5_SecurityGroups, K5_Servers]

      play #2 (localhost): Provision B-Side Infrastructure  TAGS: [Functional_Testing,B_End]
          TASK TAGS: [B_End, EXCLUDE_K5_Firewalls, EXCLUDE_K5_Networks, EXCLUDE_K5_SecurityGroups, EXCLUDE_K5_Servers, Functional_Testing, K5_Auth, K5_FirewallManagers, K5_Firewalls, K5_InterProjectLinks, K5_Networks, K5_SecurityGroups, K5_Servers]

      play #3 (localhost): Provision InterProject Links - Part 1    TAGS: [Functional_Testing,InterProjectLink]
          TASK TAGS: [EXCLUDE_K5_InterProjectLinks, Functional_Testing, InterProjectLink, K5_InterProjectLinks]

      play #4 (localhost): Provision InterProject Links - Part 2    TAGS: [Functional_Testing,InterProjectLink]
          TASK TAGS: [EXCLUDE_K5_InterProjectLinks, Functional_Testing, InterProjectLink, K5_InterProjectLinks]

      play #5 (localhost): Provision TPT environment        TAGS: [Performance_Testing]
          TASK TAGS: [EXCLUDE_K5_FirewallManagers, EXCLUDE_K5_Firewalls, EXCLUDE_K5_Networks, EXCLUDE_K5_SecurityGroups, EXCLUDE_K5_Servers, K5_Auth, K5_FirewallManagers, K5_Firewalls, K5_InterProjectLinks, K5_Networks, K5_SecurityGroups, K5_Servers, Performance_Testing, debug]

    This then means that if I get a build fail part-way through, or if I’m debugging just a particular part, I can run this: ansible-playbook site.yml -t Performance_Testing --skip-tags EXCLUDE_K5_Firewalls,EXCLUDE_K5_SecurityGroups,EXCLUDE_K5_Networks

Using Python-OpenstackClient and Ansible with K5

Recently, I have been using K5, which is an instance of OpenStack run by Fujitsu (my employer). To do some of the automation tasks, I have played with both python-openstackclient and Ansible. This post covers how to get those tools to work with K5.

I have access to a Linux virtual machine (Ubuntu 16.04), and to the Windows Subsystem for Linux in Windows 10 running “Bash on Ubuntu on Windows”; both accept the same set of commands.

In order to run these commands, you need a couple of dependencies. Your mileage might vary with other Linux distributions but, for Ubuntu-based distributions, run this command:

sudo apt install python-pip build-essential libssl-dev libffi-dev python-dev

Next, use pip to install the python modules you need:

sudo -H pip install shade==1.11.1 ansible cryptography python-openstackclient

If you’re only ever going to be working with a single project, you can define a handful of environment variables prefixed OS_, like this:

export OS_USERNAME=BloggsF
export OS_PASSWORD=MySuperSecretPasswordIsHere
export OS_REGION_NAME=uk-1
export OS_USER_DOMAIN_NAME=YourProjectName
export OS_PROJECT_NAME=YourProjectName-prj
export OS_PROJECT_ID=baddecafbaddecafbaddecafbaddecaf
export OS_AUTH_URL=https://identity.uk-1.cloud.global.fujitsu.com/v3
export OS_VOLUME_API_VERSION=2
export OS_IDENTITY_API_VERSION=3
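With those exported, both python-openstackclient and the Ansible modules will pick the credentials up from your environment, so a plain invocation works:

openstack server list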

But, if you’re working with a few projects, it’s probably worth separating these out into a clouds.yml file. This would be stored in ~/.config/openstack/clouds.yml, with the credentials for the environment you’re using:

---
clouds:
  root:
    identity_api_version: 3
    regions:
    - uk-1
    auth:
      auth_url: https://identity.uk-1.cloud.global.fujitsu.com/v3
      password: MySuperSecretPasswordIsHere
      project_id: baddecafbaddecafbaddecafbaddecaf
      project_name: YourProjectName-prj
      username: BloggsF
      user_domain_name: YourProjectName

Optionally, you can separate out the password, username or any other “sensitive” information into a secure.yml file stored in the same location (removing those lines from the clouds.yml file), like this:

---
clouds:
  root:
    auth:
      password: MySuperSecretPasswordIsHere

Now you can use the Python-based OpenStack client, using this invocation:

openstack --os-cloud root server list

Alternatively, you can use the Ansible OpenStack (and K5) modules, in a play like this:

---
- hosts: localhost
  tasks:
  - name: "Authenticate to K5"
    k5_auth:
      cloud: root
    register: k5_auth_reg
  - name: "Create Network"
    k5_create_network:
      name: "Public"
      availability_zone: "uk-1a"
      state: present
      k5_auth: "{{ k5_auth_reg.k5_auth_facts }}"
  - name: "Create Subnet"
    k5_create_subnet:
      name: "Public"
      network_name: "Public"
      cidr: "192.0.2.0/24"
      gateway_ip: "192.0.2.1"
      availability_zone: "uk-1a"
      state: present
      k5_auth: "{{ k5_auth_reg.k5_auth_facts }}"
  - name: "Create Router"
    k5_create_router:
      name: "Public"
      availability_zone: "uk-1a"
      state: present
      k5_auth: "{{ k5_auth_reg.k5_auth_facts }}"
  - name: "Attach private network to router"
    os_router:
      name: "Public"
      state: present
      network: "inf_az1_ext-net02"
      interfaces: "Public"
      cloud: root
  - name: "Create Servers"
    os_server:
      name: "Server"
      availability_zone: "uk-1a"
      flavor: "P-1"
      state: present
      key_name: "MyFirstKey"
      network: "Public-Network"
      image: "Ubuntu Server 14.04 LTS (English) 02"
      boot_from_volume: yes
      terminate_volume: yes
      security_groups: "Default"
      auto_ip: no
      timeout: 7200
      cloud: root
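One caveat: the k5_* modules above aren’t part of stock Ansible – they come from a separate K5 module library, which needs to be on your Ansible module path. With that in place, you’d run the play in the usual way (site.yml is just a placeholder name here):

ansible-playbook site.yml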