"Zenith Z-19 Terminal" from ajmexico on Flickr

Some things I learned this week while coding extensions to Ansible!

If you follow any of the content I post around the internet, you might have seen me asking questions about trying to get data out of the azure_rm_*_facts modules into something that’s usable. I can’t go into why I needed that data yet (it’s a little project I’m working on), but the upshot is that trying to manipulate data using “set_fact” with Jinja is *doable* but *messy*. In the end, I decided to hand it all off to a new Ansible module I’m writing. So, here are the things I learned about this.

  1. There’s lots more documentation about writing a module (a plugin that lets you do stuff) than there is about writing filters (things that transform data inline) or lookups (things that let you search other data stores). In the end, while I could have spent the time to figure out how to write a filter or a lookup properly, it made more sense in my context to hand a module all my data, say “Parse this”, and register the result, rather than have the playbook constantly check whether things were in other things. I still might have to do that, but… you know, for now, I’ve got the bits I want! :)
  2. I did start looking at writing a filter, and discovered that the “debugging advice” on the Ansible site is geared much more towards debugging modules than debugging filters… but I did discover that modules execute on their target (e.g. WebHost01) while filters and lookups execute on the local machine. Why does this matter? Well…..
  3. While I was looking for documentation about debugging Ansible code, I stumbled over this page on debugging modules that makes it all look easy. Except, it’s only for debugging *MODULES* (very frustrating). Well, what does it actually mean? The modules get zipped up and sent to the host that will be executing the code, which means that by setting an extra environment variable when you run your playbook (ANSIBLE_KEEP_REMOTE_FILES=1 – even if the play is going to run on “localhost”), the combined module script is kept in a path on the executing machine, which means you can debug that specific play. That doesn’t work for filters…
  4. SOO, I jumped into #ansible on Freenode and asked for help. They in turn couldn’t help me (that channel is more about writing playbooks than writing filters, modules, etc), so they directed me to #ansible-devel, where I was advised to use a python library called “q” (Edit, same day: my friend @mohclips pointed me to this youtube video from 2013 by the guy who wrote q, explaining it. Thanks Nick! I learned something *else* about this library).
  5. Oh man, this is the motherlode. So, q makes life *VERY* easy. Assuming you’ve got some valid plugin code, all you’d need to do would be to add two lines – an import and a q() call – as you’ll see in the sketch just after this list. This then dumps the output from each of the q(something) lines into /tmp/q for you to read at your leisure! (To be fair, I’d probably remove it after you’ve finished, so you don’t fill a disk :) )
  6. And that’s when I discovered that it’s actually easier to use q() for all my python debugging purposes than it is to follow the advice above about debugging modules. Yeah, it’s basically a load of print statements, so you don’t get to see stack traces, or read all the variables, and you don’t get to step through code to see why decisions were taken… but for the rubbish code I produce, it’s easily enough for me!
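Here’s that sketch. It’s not the code from the original gist – just a made-up filter plugin to show where q slots in; the import and the q() calls are the only debugging additions you’d make to your own code:

# filter_plugins/my_filter.py – an illustrative filter, not the real one
import q  # pip install q


def summarise(data):
    q(data)      # dumps the incoming data structure to /tmp/q
    result = [item for item in data if item.get('enabled')]
    q(result)    # ...and the filtered result too
    return result


class FilterModule(object):
    def filters(self):
        return {'summarise': summarise}

Run the playbook as normal, then read /tmp/q to see exactly what your filter was handed and what it gave back.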

Header image is “Zenith Z-19 Terminal” by “ajmexico” on Flickr and is released under a CC-BY license. Used with thanks!

"LEGO Factory Playset" from Brickset on Flickr

Building Azure Environments in Ansible

Recently, I’ve been migrating my POV (proof of value) and POC (proof of concept) environment from K5 to Azure to be able to test vendor products inside Azure. I ran a few tests to build the environment using the native tools (the PowerShell scripts) and found that the PowerShell way of delivering Azure environments seems overly complicated… particularly as I’m comfortable with how Ansible works.

To be fair, I also need to look at Terraform, but that isn’t what I’m looking at today :)

So, let’s start with the scaffolding. Any Ansible playbook which deals with creating Azure virtual machines needs a few extra pieces installed. Make sure you’ve got Ansible 2.7 or later and the Python “azure” library 2.0.0 or later (you can get both with pip).

Next, let’s look at the group_vars for this playbook.

This file has several pieces. We define the project settings (anything prefixed project_ is a project setting), including the prefix used for all resources we create (in this case “env01”), and a standard password used for all VMs we create (in this case “My$uper$ecret$Passw0rd”).

Next we define the standard images to load from the Marketplace. You can extend this with other images; these are just the “easiest” ones that I’m most familiar with (your mileage may vary). Next up are the networks to build inside the VNet, and lastly we define the actual machines we want to build. If you’ve got questions about any of the values we define here, just let me know in the comments below :)
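The real group_vars file lives in the gist, but to give you a feel for its shape, here’s a heavily trimmed sketch – the key names and values below are illustrative, so treat the gist as the authoritative version:

project_prefix: "env01"
project_password: "My$uper$ecret$Passw0rd"

project_images:
  ubuntu:
    publisher: "Canonical"
    offer: "UbuntuServer"
    sku: "18.04-LTS"

project_networks:
  - name: "WAN"
    cidr: "10.0.0.0/24"
  - name: "Protect"
    cidr: "10.0.1.0/24"

project_machines:
  FW01:
    name: "FW01"
    priority: 0
    image: "{{ project_images.ubuntu }}"
    ports:
      - subnet:
          name: "WAN"
        public: 'true'
      - subnet:
          name: "Protect"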

Next, we’ll start looking at the playbook (this has been exploded out – the full playbook is also in the gist).

Here we start by pulling in the variables we might want to override. We do this by reading system environment variables (ANSIBLE_PREFIX and BREAKGLASS) and using them if they’re set. If they’re not, we use the project defaults, and if those haven’t been set either, we fall back to some pre-defined values… and then we tell you what they are when the tasks run (those are the debug: lines).
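As a rough sketch of that pattern (the task and variable names here are mine for illustration – the real tasks are in the playbook):

- name: "Work out which prefix to use"
  set_fact:
    prefix: "{{ lookup('env', 'ANSIBLE_PREFIX') | default(project_prefix, true) | default('env01', true) }}"

- name: "Work out which breakglass password to use"
  set_fact:
    breakglass: "{{ lookup('env', 'BREAKGLASS') | default(project_password, true) | default('My$uper$ecret$Passw0rd', true) }}"

- name: "Tell us what we ended up with"
  debug:
    msg: "Building resources with the prefix '{{ prefix }}'"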

This block is where we create our “Static Assets” – individual items that we will be consuming later. This shows a clear win here over the Powershell methods endorsed by Microsoft – here you can create a Resource Group (RG) as part of the playbook! We also create a single Storage Account for this RG and a single VNET too.
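If you want a feel for that block, here’s a trimmed-down sketch – the module parameters are from memory of the 2.7-era azure_rm_* modules, and the names and location are illustrative, so check them against the module docs and the full playbook:

- name: "Create the Resource Group"
  azure_rm_resourcegroup:
    name: "{{ prefix }}rg"
    location: "westeurope"

- name: "Create a single Storage Account for the RG"
  azure_rm_storageaccount:
    resource_group: "{{ prefix }}rg"
    name: "{{ prefix | lower }}storage"
    account_type: Standard_LRS

- name: "Create a single VNET"
  azure_rm_virtualnetwork:
    resource_group: "{{ prefix }}rg"
    name: "{{ prefix }}vnet"
    address_prefixes:
      - "10.0.0.0/8"

- name: "Create the (Any-Any!) security group"
  azure_rm_securitygroup:
    resource_group: "{{ prefix }}rg"
    name: "{{ prefix }}nsg"
    rules:
      - name: "AllowAll"
        protocol: "*"
        source_address_prefix: "*"
        destination_port_range: "*"
        access: Allow
        priority: 100
        direction: Inbound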

These creation rules are not suitable for production use, as this defines an “Any-Any” security group! You should tailor your security groups to your needs, not for blanket inbound access!

This is where things start to get a bit more interesting – we’re using the “async/async_status” pattern here (and in the rest of these sections) to start creating the resources in parallel. As far as I can tell, you’ll sometimes hit a case where the async task doesn’t quite get set up fast enough and the async_status task can’t track the resource properly, but re-running the playbook should be enough to sort that out, without slowing things down too much.
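The pattern itself looks something like this (a sketch with illustrative names – the resources being created here are the UDRs described in the next paragraph):

- name: "Create the UDRs (route tables) in parallel"
  azure_rm_routetable:
    resource_group: "{{ prefix }}rg"
    name: "{{ prefix }}udr{{ item.name }}"
  loop: "{{ project_networks }}"
  async: 1000
  poll: 0
  register: udr_jobs

- name: "Wait for the UDRs to finish building"
  async_status:
    jid: "{{ item.ansible_job_id }}"
  loop: "{{ udr_jobs.results }}"
  register: udr_result
  until: udr_result.finished
  retries: 30
  delay: 10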

But what are we actually doing with this block of code? A UDR is a “User Defined Route”, or routing table, for Azure. Effectively, you treat each network interface as being plumbed directly into the router (none of this “same subnet broadcast” stuff works here!), so you do the routing for all the networks at the router.

By default there are some existing network routes (traffic to the internet flows to the internet, RFC1918 addresses are dropped except for any RFC1918 ranges you have covered in your VNETs, and each of your subnets can reach the others “directly”). Adding a UDR overrides this routing table. The UDRs we’re creating here are applied at a subnet level, but at this point they don’t override any of the existing routes (they’re blank). We’ll start putting routes in after we’ve added the UDRs to the subnets. Talking of which….

Again, this block is not really suitable for production use, and assumes the VNET supernet of /8 will be broken down into several /24s. In the “real world” you might deliver a handful of /26s in a /24 VNET… or you might even have lots of disparate /24s in the VNET which are then allocated exactly as individual /24 subnets… that’s not what this model delivers, but you might wish to investigate further!
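A sketch of the subnet creation, assuming your version of the azure_rm_subnet module supports attaching the route_table directly (parameter names are from memory – check the docs and the gist):

- name: "Create the subnets and attach their UDRs"
  azure_rm_subnet:
    resource_group: "{{ prefix }}rg"
    virtual_network_name: "{{ prefix }}vnet"
    name: "{{ prefix }}subnet{{ item.name }}"
    address_prefix_cidr: "{{ item.cidr }}"
    route_table: "{{ prefix }}udr{{ item.name }}"
  loop: "{{ project_networks }}"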

Now that we’ve created our subnets, we can start adding the routing table to the UDR. This is a basic one – add a 0.0.0.0/0 route (internet access) from the “protected” network via the firewall. You can get a lot more specific than this – most people are likely to want to add the VNET range (in this case 10.0.0.0/8) via the firewall as well, except for this subnet (because otherwise, for example, 10.0.0.100 trying to reach 10.0.0.101 will go via the firewall too).
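That route looks something like this (a sketch – the route table name and the firewall IP variable are illustrative):

- name: "Send the protected network's internet traffic via the firewall"
  azure_rm_route:
    resource_group: "{{ prefix }}rg"
    route_table_name: "{{ prefix }}udrProtect"
    name: "Default"
    address_prefix: "0.0.0.0/0"
    next_hop_type: "virtual_appliance"
    next_hop_ip_address: "{{ firewall_protect_ip }}"  # illustrative variable – the firewall's IP in the protected subnet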

Without going too much into the intricacies of network architecture, if you are routing your traffic between subnets via the firewall, it’s probably better to get an appliance with more interfaces, so you can route traffic across the appliance rather than hairpinning it through a single interface, as that will halve your usable bandwidth (it’s currently capped at 1Gb/s – so 500Mb/s).

Having mentioned “The Internet” – let’s give our firewall a public IP address, and create the rest of the interfaces as well.
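A sketch of that (again, parameter names are from memory of the 2.7-era modules, and the resource names are illustrative):

- name: "Give the firewall's WAN interface a public IP address"
  azure_rm_publicipaddress:
    resource_group: "{{ prefix }}rg"
    name: "{{ prefix }}FW01pipWAN"
    allocation_method: static

- name: "Create the firewall's WAN interface and attach the public IP"
  azure_rm_networkinterface:
    resource_group: "{{ prefix }}rg"
    name: "{{ prefix }}FW01portWAN"
    virtual_network: "{{ prefix }}vnet"
    subnet_name: "{{ prefix }}subnetWAN"
    security_group: "{{ prefix }}nsg"
    ip_configurations:
      - name: "ipconfig1"
        public_ip_address_name: "{{ prefix }}FW01pipWAN"
        primary: true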

This script creates a public IP address by default for each interface unless you explicitly tell it not to (see lines 40, 53 and 62 in the group_vars file I rendered above). You could easily turn this around by changing the lines which contain this:

item.1.public is not defined or (item.1.public is defined and item.1.public == 'true')

into lines which contain this:

item.1.public is defined and item.1.public == 'true'

OK, having done all that, we’re now ready to build our virtual machines. I’ve introduced a “Priority system” here – VMs with priority 0 go first, then 1, and 2 go last. The code snippet below is just for priority 0, but you can easily see how you’d extrapolate that out (and in fact, the full code sample does just that).
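A heavily trimmed sketch of the shape of that priority-0 block (the VM size, username and register names here are illustrative – the gist has the real, more compact version):

- name: "Create the priority 0 virtual machines"
  azure_rm_virtualmachine:
    resource_group: "{{ prefix }}rg"
    name: "{{ prefix }}{{ item.value.name }}"
    vm_size: "Standard_B2s"
    admin_username: "azureadmin"
    admin_password: "{{ breakglass }}"
    image:
      publisher: "{{ item.value.image.publisher }}"
      offer: "{{ item.value.image.offer }}"
      sku: "{{ item.value.image.sku }}"
      version: "latest"
    # the real task also builds network_interfaces, custom_data and plan
    # using the Jinja2 blocks discussed below
  loop: "{{ project_machines | dict2items }}"
  when: (item.value.priority | default(2)) == 0
  async: 3600
  poll: 0
  register: vm_priority_0_jobs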

There are a few blocks here to draw attention to :) I’ve re-jigged them a bit here so it’s clearer to understand, but when you see them in the main playbook they’re a bit more compact. Let’s start with looking at the Network Interfaces section!

network_interfaces: |
  [
    {%- for nw in item.value.ports -%}
      '{{ prefix }}{{ item.value.name }}port{{ nw.subnet.name }}'
      {%- if not loop.last -%}, {%- endif -%} 
    {%- endfor -%}
  ]

In this part, we loop over the ports defined for the virtual machine. This is because one device may have one interface, or four interfaces. Ansible parses the YAML into JSON-style data, so here we can build a JSON array as a string which just drops in when the YAML is parsed. We’ve previously created all the interfaces with names like PREFIXhostnamePORTsubnetname (or aFW01portWAN in more conventional terms), so here we construct a JSON array like this: ['aFW01portWAN'], but that could just as easily have been ['aFW01portWAN', 'aFW01portProtect', 'aFW01portMGMT', 'aFW01portSync']. This will then attach those interfaces to the virtual machine.

Next up, custom_data. This section is sometimes known externally as userdata or config_disk. My code has always referred to it as a “Provision Script” – hence the variable name in the code below!

custom_data: |
  {%- if item.value.provision_script is defined and item.value.provision_script != '' -%}
    {%- include(item.value.provision_script) -%}
  {%- elif item.value.image.provision_script is defined and item.value.image.provision_script != '' -%}
    {%- include(item.value.image.provision_script) -%}
  {%- else -%}
    {{ omit }}
  {%- endif -%}

Let’s pick this one apart too. If we’ve defined a provisioning script file for the VM, include it; if we’ve defined a provisioning script file for the image (or marketplace entry), then include that instead… otherwise, pretend that there’s no “custom_data” field before you submit this to Azure.

One last quirk to Azure, is that some images require a “plan” to go with it, and others don’t.

plan: |
  {%- if item.value.image.plan is not defined -%}{{ omit }}{%- else -%}
    {'name': '{{ item.value.image.sku }}',
     'publisher': '{{ item.value.image.publisher }}',
     'product': '{{ item.value.image.offer }}'
    }
  {%- endif -%}

So, here we say “if we’ve not got a plan, omit the value being passed to Azure; otherwise use these fields we previously specified”. Weird, huh?

The very last thing we do in the script is to re-render the standard password we’ve used for all these builds, so that we can check them out!
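That’s little more than a debug task along these lines (a sketch – the username and variable name are mine):

- name: "Remind us how to log into the new VMs"
  debug:
    msg: "Log in with the username 'azureadmin' and the password '{{ breakglass }}'"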

Want to review this all in one place?

Here’s the link to the full playbook, as well as the group variables (which should be in ./group_vars/all.yml) and two sample userdata files (which should be in ./userdata) for an Ubuntu machine (using cloud-init) and one for a FortiGate Firewall.

All the other files in that gist (prefixes from 10-16 and 00) are for this blog post only, and aren’t likely to work!

If you do end up using this, please drop me a note below, or star the gist! That’d be awesome!!

Image credit: “Lego Factory Playset” from Flickr by “Brickset” released under a CC-BY license. Used with Thanks!

Picture of a bull from an 1882 reference guide, as found on Flickr

Abattoir Architecture for Evergreen – using the Pets versus Cattle parable

Work very generously sent me on a training course today about a cloud based technology we’re considering deploying.

During the course, the organiser threw a question to the audience about “who can explain what a container does?” and a small number of us ended up talking about Docker (primarily for Linux) and CGroups, and this then turned into a conversation about the exceedingly high rate of changes deployed by Amazon, Etsy and others who have completely embraced microservices and efficient CI/CD pipelines… and then I mentioned the parable of Pets versus Cattle.

The link above points to where the story comes from, but the short version is…

When you get a pet, it comes as something like a puppy or kitten; you name it, you nurture it, bring it up to live in your household, and when it gets sick, because you’ve made it part of your family, you bring in a vet who nurses it back to health.

When you have cattle, it doesn’t have a name, it has a number, and if it gets sick, you isolate it from the herd and if it doesn’t get better, you take it out back and shoot it, and get a new one.

A large number of the audience either hadn’t heard of the parable, or if they had, hadn’t heard it delivered like this.

We later went on to discuss how this applies in a practical sense, not just in docker or kubernetes containers, but how it could be applied to Infrastructure as a Service (IaaS), even down to things like vendor supplied virtual firewalls where you have Infrastructure as Code (IaC).

If, in your environment, you have some service you treat like cattle – perhaps a cluster of firewalls behind a load balancer or a floating IP address – and you need to upgrade it (because it isn’t well, or it hasn’t had the latest set of policy deployed to it), you don’t amend the policy on the boxes in question… No! You stand up a new service using your IaC with the latest policy deployed upon it, then you test it (to make sure it’s stood up right), and once you’re happy it’s ready, you transition your service to the new nodes. Once the connections have drained from your old nodes, you take them out and shoot them.

Or, if you want this in pictures…

Stage 1 – Before the new service deployment
Stage 2 – The new service deployment is built and tested
Stage 3 – Service transitions to the new service deployment
Stage 4 – The old service is demised, and any resources (including licenses) return to the pool

I was advised (by a very enthusiastic Mike until he realised that I intended to follow through with it) that the name for this should be as per the title. So, the next time someone asks me to explain how they could deploy, I’ll suggest they look for the Abattoir in my blog, because, you know, that’s normal, right? :)

Image credit: Image from page 255 of “Breeder and sportsman” (1882) via Internet Archive Book Image on Flickr

Troubleshooting FortiGate API issues with the CLI?

One of my colleagues has asked me for some help with an Ansible script he’s writing to push some policy to a cloud hosted FortiGate appliance. Unfortunately, he kept getting some very weird error messages, like this one:

fatal: [localhost]: FAILED! => {"changed": false, "meta": {"build": 200, "error": -651, "http_method": "PUT", "http_status": 500, "mkey": "vip8080", "name": "vip", "path": "firewall", "revision": "36.0.0.10745196634707694665.1544442857", "serial": "CENSORED", "status": "error", "vdom": "root", "version": "v6.0.3"}, "msg": "Error in repo"}

This is using Fortinet’s own Ansible modules, which, in turn, use the fortiosapi Python module.

This same colleague came across a post on the Fortinet Developer Network site (access to the site requires vendor approval), which said “this might be an internal bug, but to debug it, use the following”:

fgt # diagnose debug enable

fgt # diagnose debug cli 8
Debug messages will be on for 30 minutes.

And then run your API commands. Your error message will be surfaced there… so here’s mine! (Mapped port doesn’t match extport in a vip).

0: config firewall vip
0: edit "vip8080"
0: unset src-filter
0: unset service
0: set extintf "port1"
0: set portforward enable
0: unset srcintf-filter
0: set mappedip "192.0.2.1-192.0.2.1"
0: unset extport
0: set extport 8080-8081
0: unset mappedport
0: set mappedport 8080
-651: end

Late edit 2020-03-27: I spotted a bug in the Ansible issues tracker today, and I added a note to the end of that bug mentioning that as well as diagnose debug cli 8, if that doesn’t give you enough logs to figure out what’s up, you can also try diagnose debug application httpsd -1 but this enables LOTS AND LOTS of logs, so really think twice before turning that one on!

Oh, and if 30 minutes isn’t enough, try diagnose debug duration 480 or however many minutes you think you need. Beware that it will write event logs out to the serial console even when you’ve logged out.

Creating Self Signed certificates in Ansible

In my day job, I sometimes need to use a self-signed certificate when building a box. As I love using Ansible, I wanted to make the self-signed certificate piece something that was part of my Ansible workflow.

Here follows a bit of basic code that you could use to work through how the process of creating a self-signed certificate would work. I would strongly recommend using something more production-ready (e.g. LetsEncrypt) when you’re looking to move from “development” to “production” :)

---
- hosts: localhost
  vars:
  - dnsname: your.dns.name
  - tmppath: "./tmp/"
  - crtpath: "{{ tmppath }}{{ dnsname }}.crt"
  - pempath: "{{ tmppath }}{{ dnsname }}.pem"
  - csrpath: "{{ tmppath }}{{ dnsname }}.csr"
  - pfxpath: "{{ tmppath }}{{ dnsname }}.pfx"
  - private_key_password: "password"
  tasks:
  - name: "Remove any previous temporary files"
    file:
      path: "{{ tmppath }}"
      state: absent
  - name: "Create the temporary directory"
    file:
      path: "{{ tmppath }}"
      state: directory
  - name: "Generate the private key file to sign the CSR"
    openssl_privatekey:
      path: "{{ pempath }}"
      passphrase: "{{ private_key_password }}"
      cipher: aes256
  - name: "Generate the CSR file signed with the private key"
    openssl_csr:
      path: "{{ csrpath }}"
      privatekey_path: "{{ pempath }}"
      privatekey_passphrase: "{{ private_key_password }}"
      common_name: "{{ dnsname }}"
  - name: "Sign the CSR file as a CA to turn it into a certificate"
    openssl_certificate:
      path: "{{ crtpath }}"
      privatekey_path: "{{ pempath }}"
      privatekey_passphrase: "{{ private_key_password }}"
      csr_path: "{{ csrpath }}"
      provider: selfsigned
  - name: "Convert the signed certificate into a PKCS12 file with the attached private key"
    openssl_pkcs12:
      action: export
      path: "{{ pfxpath }}"
      name: "{{ dnsname }}"
      privatekey_path: "{{ pempath }}"
      privatekey_passphrase: "{{ private_key_password }}"
      passphrase: password
      certificate_path: "{{ crtpath }}"
      state: present

Hacker at Keyboard

What happens on an IT Penetration Test

I recently was asked to describe what happens in a penetration test (pentest), how it’s organised and what happens after the test is completed.

Some caveats first:

  • While I’ve been involved in escorting penetration testers in controlled areas, and helping to provide environments for the tests to occur in, I’ve not been heavily involved in the set-up of one, so some of the details in that area are likely to be a bit fuzzy.
  • I’m not involved in procurement in any way, so I can’t endorse or discredit any particular testing organisations.
  • This is a personal viewpoint and doesn’t represent a professional recommendation of how a penetration test should or typically does occur.

So, what actually happens?…

Before the pentest begins, a testing firm would be sourced and the “Terms of Engagement” (TOE), or perhaps a list of requirements would be defined. This might result in a list of tasks that are expected to be performed, and some idea on resources required. It would also be the point where the initiator (the organisation getting the test performed) defines what is “In Scope” (is available to be tested) and what is “Out Of Scope” (and must not be tested).

Some of the usual tasks are:

  • Internet Scan (the testers will have a testing appliance that will scan all IPs provided to them, and all open ports on those IPs, looking for vulnerable services, servers and responding applications).
  • Black Box [See note below] Red Team test (the testers are probing the network as though they were malicious outsiders with no knowledge of the system, similar to the sorts of things you might see in hacker movies – go through discovered documentation, socially engineer access to environments, run testing applications like NMAP and Metasploit, and generally see what stuff appears to be publicly accessible, and from there see how the environment appears once you have a foothold).
  • White Box test (the testers have access to internal documentation and/or source code about the environment they’re testing, and thus can run customised and specific tests against the target environment).
  • Configuration Analysis (access to servers, source code, firewall policies or network topology documentation intending to check where flaws may have been introduced).
  • Social Engineering test (see how amenable staff, customers and suppliers are to providing access to the target environment for your testing team).
  • Physical access test (prove whether the testing team can gain physical access to elements of your target, e.g. servers, documentation, management stations, signing keys, etc).
  • Some testing firms will also stress test any Denial Of Service Mitigations you may have in-place, but these must be carefully negotiated first with your bandwidth providers, their peering firms and so on, as they will more-than-likely disrupt more than just your services! DO NOT ENGAGE A DOS TEST LIGHTLY!

Once the Terms have been agreed and the duration of these tests has been ironed out (some tests could go on indefinitely, but you wouldn’t *really* want to pay the bills for an indefinite Black Box test, for example!), a project plan is usually defined showing these stages. Depending on the complexity of your environment, a reasonable duration for a small estate might be approximately a day or two for each test. In a larger estate, particularly where little-to-no automation has been implemented, you may find (for example) a thorough Configuration Analysis of your server configurations taking weeks or even months.

Depending on how true-to-life the test “should” be, you may have the Physical Security assessment and Social Engineering tests be considered part of the Black Box test, or perhaps you may purposefully provide some entry point for the testing team to reduce the entry time. Most of the Black Box tests I’ve been involved in supporting have started from giving the testers access to a point inside your trusted network (e.g. a server which has been built for the purpose of giving access to testers or a VPN entry point with a “lax” firewall policy). Others will provide a “standard” asset (e.g. laptop) and user credential to the testing firm. Finally, some environments will put the testing firm through “recruitment” and put them in-situ in the firm for a week or two to bed them in before the testing starts… this is pretty extreme however!!

The Black Box test will typically be run before any others (except perhaps the Social Engineering and Physical Access tests) and without the knowledge of the “normal” administrators. This will also test the responses of the “Blue Team” (the system administrators and any security operations centre teams) to see whether they would notice an in-progress, working attack by a “Red Team” (the attackers).

After the Black Box test is completed, the “Blue Team” may be notified that there was a pentest, and then (providing it is being run) the testing organisation will start a White Box test and will be given open access to the tested environment.

The Configuration Check will normally start with hands-on time with members of the “Blue Team” to see configuration and settings, and will compare these settings against known best practices. If there is an application being tested where source code is available to the testers, then they may check the source code against programming bad practices.

Once these tests are performed, the testing organisation will write a report documenting the state of the environment and rate criticality of the flaws against current recommendations.

The report would be submitted to the organisation who requested the test, and then the *real* fun begins – either fixing the flaws, or finger pointing at who let the flaws occur… Oh, and then scheduling the next pentest :)

I hope this has helped people who may be wondering what happens during a pentest!

Just to note: if you want to know more about pentests, and how they work in the real world, check out the podcast “Darknet Diaries“, and in particular episode 6 – “The Beirut Bank Job”. To get an idea of what a pentest is supposed to simulate, (although it’s a fictional series) “Mr Robot” (<- Amazon affiliate link) is very close to how I would imagine a sequence of real-world “Red Team” attacks might look, and experts seem to agree!

Image Credit: “hacker” by Dani Latorre licensed under a Creative Commons, By-Attribution, Share-Alike license.


Additional note; 2018-12-12: Following me posting this to the #security channel on the McrTech Slack group, one of the other members of that group (Jay Harris from Digital Interruption) mentioned that I’d conflated a black box test and a red team test. A black box test is like a white box test, but with no documentation or access to the implementing team. It’s much slower than a white box test, because you don’t have access to the people to ask “why did you do this” or “what threat are you trying to mitigate here”. He’s right. This post was based on a previous experience I had with a red team test, but that they’d referred to it as a black box test, because that’s what the engagement started out as. Well spotted Jay!

He also suggested reading his post in a similar vein.

One to read: A Beginner’s Guide to IPFS

One to read: “A Beginner’s Guide to IPFS”

Ever wondered about IPFS (the “InterPlanetary File System”)? It’s a new way to share and store content which doesn’t rely on a central server (e.g. Facebook, Google, Digital Ocean, or your home NAS) but instead uses a system like BitTorrent combined with published records to keep the content in the system.

If your host (where the original content is stored) goes down, the content is also cached on other nodes that have visited your site.

These caches are cleared over time, so they’re suitable for covering short outages, or you can have other nodes “pin” your content (and this can be seen as a paid solution that can fund hosts).

IPFS is great at hosting static content, but how do you deal with dynamic content? That’s where PubSub comes into play (which isn’t covered in this article). There’s a database service called Orbit-DB which sits on top of IPFS and uses PubSub to sync data across the network.

It’s looking interesting, especially in light of the announcement from CloudFlare about their introduction of an available IPFS gateway.

It’s looking good for IPFS!

This was automatically posted from my RSS Reader, and may be edited later to add commentary.

“You can’t run multiple commands in sudo” – and how to work around this

At work, we share tips and tricks, and one of my colleagues recently called me out on the following stanza I posted:

I like this [ansible] one for Debian based systems:
  - name: "Apt update, Full-upgrade, autoremove, autoclean"
    become: yes
    apt:
      upgrade: full
      update_cache: yes
      autoremove: yes
      autoclean: yes

And if you’re trying to figure out how to do that in Shell:
apt-get update && apt-get dist-upgrade -y && apt-get autoremove -y && apt-get autoclean -y

His response was “Surely you’re not logging into bash as root”. I said “I normally sudo -i as soon as I’ve logged in. I can’t recall offhand how one does a sudo for a string of command && command statements”

Well, as a result of this, I looked into it. Here’s one comment from the first Stack Overflow page I found:

You can’t run multiple commands from sudo – you always need to trick it into executing a shell which may accept multiple commands to run as parameters

So here are a few options on how to do that:

  1. sudo -s whoami \; whoami (link to answer)
  2. sudo sh -c "whoami ; whoami" (link to answer)
  3. But, my favourite is from this answer:

    An alternative using eval so avoiding use of a subshell: sudo -s eval 'whoami; whoami'

Why do I prefer the last one? Well, I already use eval for other purposes – mostly for starting my ssh-agent over SSH, like this: eval `ssh-agent` ; ssh-add