Outlook-based “Kanban”

Do you use Outlook for your email? Do you sometimes wish you could use a Kanban board with Outlook? Well, look no further!!

Thanks to an internal post about improving workflows, someone mentioned a Git repo called “Outlook-Taskboard”, which gives you the ability to create and manipulate your Outlook tasks in a Kanban fashion.

Because it’s “just” native Outlook tasks, you can still manage them using the sidebar or the mobile apps, but when you get back to Outlook, you get to see their status and manage your tasks appropriately.

Podcast Summary – Admin Admin Podcast #57

I’m back again! I’m standing in (again) for Andy as a guest presenter on The Admin Admin Podcast episode #57 – Live at OggCamp – talking about getting Open Source products under support in a proprietary company. The “famous” Martin Wimpress stands in for Jerry.

As I said last time I was on there, the guys who host the Admin Admin Podcast are really nice, and they cover a great range of subjects about working as a server or network administrator. They have a chat room on Telegram, so if you’re interested in being an admin, it’s worth having a listen, and then maybe joining the chat room!

Check Point Management API tips

I was very fortunate yesterday to spend some time with two Check Point engineering staff. Check Point make the high-end firewall products that I’m using at work. During the conversation, I mentioned two issues I’ve had during automated builds of Check Point appliances…

  1. During the build process, I want to add lots of devices. In my build, however, I need to log in to the management API, which means handing the user account’s credentials in via the clear-text userdata field – NOT GOOD! What I was told was that, actually, you don’t need to operate like that! If you’re running commands on the manager itself, you can run them in “root” mode, which bypasses any request for authentication – and as an added “win”, it publishes every change you make on exit too! Here’s how:

    mgmt_cli -r true add host name "New Host 1" ip-address "192.0.2.1"

  2. My other option was to finish our Ansible deployment of the OpenStack server and then, once it was up and accessible, call out against the API. But how do you set that up during the build? Well, you can run four commands against the server to allow remote access to the API, and then you should have access from all the same places your GUI client can reach it from! Here’s how:

    mgmt_cli -r true login domain "System Data" > id.txt
    mgmt_cli -s id.txt set api-settings accepted-api-calls-from "all ip addresses that can be used for gui clients" automatic-start true
    mgmt_cli -s id.txt publish
    api restart
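
Once that remote access is in place, you can drive the same management API over HTTPS from wherever your build runs. Here’s a rough sketch using curl (the hostname and credentials are placeholders, and -k skips certificate verification, so treat this as lab-only):

curl -k -X POST https://mgmt.example.com/web_api/login \
  -H 'Content-Type: application/json' \
  -d '{"user": "admin", "password": "CHANGE_ME"}'
# Take the "sid" value from the JSON response, then pass it on each call:
curl -k -X POST https://mgmt.example.com/web_api/show-hosts \
  -H 'Content-Type: application/json' \
  -H 'X-chkp-sid: PASTE_SID_HERE' \
  -d '{}'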

My sincere thanks to Javier and Uri for their guidance. For those wondering about those API calls – see these links: Using the -r flag and configuring the API for remote access.

Podcast Summary – Admin Admin Podcast #55

This week sees the publication of The Admin Admin Podcast episode #55, in which I guest present (and guest introduce!) a discussion about network infrastructure. I also answer some questions about using certbot (the client for Let’s Encrypt, the free TLS certificate provider), about where to put script files on Linux, and a bit about MTU (Maximum Transmission Unit) – although that’s a bit outside my area of expertise, so if I got it wrong, let them know!

The guys who host the Admin Admin Podcast are really nice, and they cover a great range of subjects about working as a server or network administrator. They have a chat room on Telegram, so if you’re interested in being an admin, it’s worth having a listen, and then maybe joining the chat room!

One to read: “SKIP grep, use AWK” / “Awk Tutorial, part {1,2,3,4}”

Do you use this pattern in your sh/bash/zsh/etc-sh scripts?

cat somefile | grep 'some string' | awk '{print $2}'

If so, you can replace that as follows:

cat somefile | awk '/some string/ {print $2}'

Or how about this?

grep -v 'something' < somefile | awk '{print $0}'

Try this:

awk '! /something/ {print $0}' < somefile

Ooo, OK, how about if you want to get all actions performed by users where the ISO-formatted date (Y-m-d) falls on the first day of the month – but where you don’t want to also match everything in January (unless you’re talking about the 1st of January)?

# echo 'BLOGGSF 2001-01-23 SOME_ACTION' | awk '$2 ~ /-01$/ {print $1, $3}'
(no output)
# echo 'BLOGGSF 2002-02-01 SOME_ACTION' | awk '$2 ~ /-01$/ {print $1, $3}'
BLOGGSF SOME_ACTION
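
The trick is the anchor: $2 ~ /-01$/ only matches “-01” at the end of the date field (the day), not the “-01-” in the middle of it (the month). As a bonus in the same spirit (my own example, not from the tutorials), awk can stand in for grep -c as well:

awk '/some string/ {n++} END {print n+0}' somefile

This prints the number of matching lines, just like grep -c 'some string' somefile would (the n+0 forces the output to be a numeric 0 when nothing matches).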

This is so cool! Thanks to the tutorials “SKIP grep, use AWK” and the follow-up tutorials starting here…

Today I learned… Cloud-init doesn’t like you repeating the same things

Because of the templates I was building in my post “Today I learned… Ansible Include Templates”, I thought you could repeat the same top-level sections over again. Here’s a snippet of something like what I’d built (after combining lots of templates together):

Note this is a non-working code sample!

#cloud-config
packages:
- iperf
- git

write_files:
- content: {% include 'files/public_key.j2' %}
  path: /root/.ssh/authorized_keys
  owner: root:root
  permissions: '0600'
- content: {% include 'files/private_key.j2' %}
  path: /root/.ssh/id_rsa
  owner: root:root
  permissions: '0600'

packages:
- byobu

write_files:
- content: |
    #!/bin/bash
    git clone {{ test_scripts }} /root/iperf_scripts
    bash /root/iperf_scripts/run_test.sh
  path: /root/run_test
  owner: root:root
  permissions: '0700'

runcmd:
- /root/run_test

I’d get *bits* of it to run – basically, the last write_files entry, the last packages list, and the runcmd… but not all of it.

Turns out, cloud-init won’t merge repeated top-level keys for you. Instead, you need to combine them yourself, so that all the write_files items live under a single write_files key, and all the packages under a single packages key.

Which, when you think about what it’s doing, makes sense: each top-level line defines a key in the YAML document, and if you define the same key twice, only the last definition is kept.
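
For reference, here’s the same (still sketchy) example reassembled so that each top-level key appears exactly once:

#cloud-config
packages:
- iperf
- git
- byobu

write_files:
- content: {% include 'files/public_key.j2' %}
  path: /root/.ssh/authorized_keys
  owner: root:root
  permissions: '0600'
- content: {% include 'files/private_key.j2' %}
  path: /root/.ssh/id_rsa
  owner: root:root
  permissions: '0600'
- content: |
    #!/bin/bash
    git clone {{ test_scripts }} /root/iperf_scripts
    bash /root/iperf_scripts/run_test.sh
  path: /root/run_test
  owner: root:root
  permissions: '0700'

runcmd:
- /root/run_test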

Today I learned… that you can look at the “cloud-init” files on your target server…

Today I have been debugging why my cloud-init scripts weren’t triggering in my OpenStack environment.

I realised that something was wrong when I tried to use the noVNC console[1] with a password I’d set… no luck. So, next I ran a command to review the console logs[2], and saw a message (now, sadly, long gone – so I can’t even include it here!) suggesting there was an issue parsing my YAML file. Uh oh!

I’m using Ansible’s os_server module, with templates completing the userdata field, which in turn gets rendered as cloud-init scripts… so clearly I had two ways to debug this: prefix my Ansible playbook with a few debug tasks (but that can get messy), OR SSH into the box and look through the logs. I knew I could SSH in, so cloud-init had partially fired – it just wasn’t parsing what I’d submitted. I had a quick look around and found a post which mentioned debugging cloud-init. This mentioned that there’s a path (/var/lib/cloud/instances/$UUID/) you can mess around in, removing some files to “fool” cloud-init into thinking it hasn’t been run… but, I reasoned, why not just see what’s there?

And in there was the motherlode – user-data.txt… bingo.
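
If you want to do the same digging, this is roughly what I mean (the instance UUID directory varies per server, and the log paths are cloud-init’s usual defaults, so they may differ on your distro):

# On the target server, as root:
ls /var/lib/cloud/instances/
cat /var/lib/cloud/instances/$UUID/user-data.txt
# cloud-init's own logs are also worth a read:
less /var/log/cloud-init.log
less /var/log/cloud-init-output.log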

In the Jinja2 template I was using to populate the userdata, I’d referenced another file, again using a template. It turns out that the included template needs a trailing newline at the end; otherwise it all runs together.

Whew!

This does concern me a little, as I had previously been using this stanza to “simply” change the default user password to something a little less complicated:

#cloud-config
ssh_pwauth: True
chpasswd:
  list: |
    ubuntu:{{ default_password }}
  expire: False

But now that I look at the documentation, I realise you can also specify that as a pre-hashed value (in which case, you would suffix the default_password item above with | password_hash('sha512')), which makes it all better again!
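
In other words, the same stanza with Ansible’s password_hash filter applied would look something like this (a sketch – I haven’t re-run this version):

#cloud-config
ssh_pwauth: True
chpasswd:
  list: |
    ubuntu:{{ default_password | password_hash('sha512') }}
  expire: False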

[1] Running openstack --os-cloud cloud_a console url show servername gives you a URL to visit that has an HTML5-based, VNC-ish client. Note that “cloud_a” and “servername” should be replaced by your clouds.yml reference and the name or ID of the server you want to connect to.
[2] Likewise, openstack --os-cloud cloud_a console log show servername gives you the output of the boot sequence (e.g. dmesg, plus the normal startup commands and, finally, cloud-init). It can be useful. Equally, it’s logs… which means there’s a lot to wade through!

One to read: “Test Driven Development (TDD) for networks, using Ansible”

Thanks to my colleague Simon (@sipart on Twitter), I spotted this post (and its companion GitHub repository), which explains how to do test-driven development in Ansible.

Essentially, you create two roles – one to test (the author called it “validate”) and one to actually do the thing you want (in the author’s case, “add_vlan”).

In the testing role, you’d have the following layout:

/path/to/roles/testing/tasks/main.yml
/path/to/roles/testing/tasks/SOMEFEATUREtest.yml

In the main.yml file, you have a simple stanza:

---
- name: Include all the test files
  include: "{{ outer_item }}"
  with_fileglob:
    - "/path/to/roles/testing/tasks/*test.yml"
  loop_control:
    loop_var: outer_item

I’m sure that “with_fileglob” line could be improved to not actually need a full path… anyway
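
(For example – untested, but Ansible’s role_path magic variable should let you avoid hard-coding it:)

---
- name: Include all the test files
  include: "{{ outer_item }}"
  with_fileglob:
    - "{{ role_path }}/tasks/*test.yml"
  loop_control:
    loop_var: outer_item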

Then, in each of your SOMEFEATUREtest.yml files, you do things like this:

---
- name: "Pseudocode in here. Use real modules for your testing!!"
  get_vlan_config:
    filter_for: needle_vlan
  register: haystack_var

- assert:
    that: "'needle_vlan' in haystack_var"

When you run the test role the first time, the response will be “failed” (because needle_vlan doesn’t exist yet). Next, run the “real” role (in the author’s case, add_vlan), which creates the VLAN. Then re-run the test role, and the response should now be “ok”.

I’d probably script this (see the sketch below) so that it goes:

      reset-environment set_testing=true (maybe create a random little network)
      test
      run-action
      test
      reset-environment set_testing=false

The benefit of doing it that way is that you “know” your tests aren’t running if the environment doesn’t have the set_testing flag in place, you get to run all your tests in a “clean room”, and then you clear it back down again afterwards, leaving it clear for the next pass of your automated testing suite.
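
That sketch, as a wrapper script, might look something like this (the playbook names are invented for illustration):

#!/bin/bash
set -e
ansible-playbook reset-environment.yml -e set_testing=true
# The first test run MUST fail - the thing we're about to build shouldn't exist yet.
if ansible-playbook test.yml; then
  echo "Tests passed before the change was applied - check your environment!" >&2
  exit 1
fi
ansible-playbook run-action.yml
ansible-playbook test.yml   # should now pass
ansible-playbook reset-environment.yml -e set_testing=false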

Fun!

Today I learned… Ansible Include Templates

I am building OpenStack servers with Ansible’s os_server module. One of its fields (userdata) will accept a very long string. Typically, I end up with a giant blob of unreadable build script in this field…

Today I learned that I can use this:

---
- name: "Create Server"
  os_server:
    name: "{{ item.value.name }}"
    state: present
    availability_zone: "{{ item.value.az.name }}"
    flavor: "{{ item.value.flavor }}"
    key_name: "{{ item.value.az.keypair }}"
    nics: "[{%- for nw in item.value.ports -%}{'port-name': '{{ ProjectPrefix }}{{ item.value.name }}-Port-{{nw.network.name}}'}{%- if not loop.last -%}, {%- endif -%} {%- endfor -%}]" # Ignore this line - it's complicated for a reason
    boot_volume: "{{ ProjectPrefix }}{{ item.value.name }}-OS-Volume" # Ignore this line also :)
    terminate_volume: yes
    volumes: "{%- if item.value.log_size is defined -%}[{{ ProjectPrefix }}{{ item.value.name }}-Log-Volume]{%- else -%}{{ omit }}{%- endif -%}"
    userdata: "{% include 'templates/userdata.j2' %}"
    auto_ip: no
    timeout: 65535
    cloud: "{{ cloud }}"
  with_dict: "{{ Servers }}"

This file (/path/to/ansible/playbooks/servers.yml) is referenced by my play.yml (/path/to/ansible/play.yml) via an include, so the template referenced there lives in my templates directory (/path/to/ansible/templates/userdata.j2).

That template can then also reference other template files itself (using {% include 'templates/some_other_file.extension' %}), so you can have nicely complex userdata fields with loads and loads of detail, without making the actual play complicated (or at least, no more than it already needs to be!).
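
For illustration, templates/userdata.j2 might look something like this (the included file name is made up – the point is the nested {% include %} call):

#cloud-config
hostname: {{ item.value.name }}

write_files:
- content: {% include 'templates/files/some_config.j2' %}
  path: /etc/some_config.conf
  owner: root:root
  permissions: '0644'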