"Seca" by "Olearys" on Flickr

Getting Started with Terraform on Azure

I’m strongly in the “Ansible is my tool, what needs fixing?” camp when it comes to Infrastructure as Code (IaC), but I know there are other tools out there which are equally good. I’ve been strongly advised to take a look at Terraform from HashiCorp. I’m most familiar with Azure at the moment, so this is going to be based around resources available on Azure.


Late edit: I want to credit my colleague, Pete, for his help getting started with this. While many of the code samples have been changed from what he provided me with, if it hadn’t been for these code samples in the first place, I’d never have got started!

Late edit 2: This post was initially based on Terraform 0.11, and I was prompted by another colleague, Jon, that the available documentation still follows the 0.11 layout. 0.12 was released in May, and changed how variables are referenced in the code. This post now *should* follow the 0.12 conventions, but if you spot somewhere it doesn’t, check out this post from the Terraform team.


As with most things, there’s a learning curve, and I struggled to find a “simple” getting started guide for Terraform. I’m sure this is a failing on my part, but I thought it wouldn’t hurt to put something out there for others to pick up and see if it helps someone else (and, if that “someone else” is you, please let me know in the comments!)

Pre-requisites

You need an Azure account for this. This part is very far outside my spectrum of influence, but I’m assuming you’ve got one. If not, look at something like Digital Ocean, AWS or VMware :) For my “controller”, I’m using Windows Subsystem for Linux (WSL), and I wrote up the following notes about getting my pre-requisites in place.

Building the file structure

One quirk with Terraform, versus other tools like Ansible, is that when you run one of the terraform commands (like terraform init, terraform plan or terraform apply), it reads the entire content of any file suffixed “.tf” in that directory. So, if you don’t want a file to be loaded, you need to either move it out of the directory, comment it out, or rename it so it doesn’t end in .tf. By convention, you normally have three “standard” files in a Terraform directory – main.tf, variables.tf and output.tf – but logically speaking, you could have everything in a single file, or each instruction in its own file. Because this is a relatively simple script, I’ll use the standard layout.

The actions I’ll be performing are the “standard” steps you’d perform in Azure to build a single Infrastructure as a Service (IaaS) server:

  • Create your Resource Group (RG)
  • Create a Virtual Network (VNET)
  • Create a Subnet
  • Create a Security Group (SG) and rules
  • Create a Public IP address (PubIP) with a DNS name associated to that IP.
  • Create a Network Interface (NIC)
  • Create a Virtual Machine (VM), supplying a username and password, the size of disks and VM instance, and any post-provisioning instructions (yep, I’m using Ansible for that :) ).

I’m using Visual Studio Code, but almost any IDE will have integrations for Terraform. The main things I’m using it for are auto-completion of resource, data and output types, plus the fact that Ctrl+clicking a resource type opens your browser at the documentation page on terraform.io.

So, creating my main.tf, I start by telling it that I’m working with the Terraform AzureRM Provider (the bit of code that can talk to the Azure API).
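If you’re following along, a minimal provider block looks something like this – treat it as a sketch rather than the exact code I used, since your provider version may want extra settings:

# Minimal sketch – just tells Terraform to use the AzureRM provider.
# No credentials go in here; authentication comes from "az login" below.
provider "azurerm" {
}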

This simple statement is enough to get Terraform to load the AzureRM provider, but it still doesn’t tell Terraform how to get access to the Azure account. Use az login from a WSL shell session to authenticate.

Next, we create our basic resource group, VNet and subnet resources.
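Something like the following is the shape of it – the resource names and address ranges here are illustrative placeholders, so adjust them to suit your environment:

# Sketch only – names and address ranges are placeholder values.
resource "azurerm_resource_group" "rg" {
  name     = var.resource_group_name
  location = var.location
}

resource "azurerm_virtual_network" "vnet" {
  name                = var.vnet_name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "subnet" {
  name                 = var.subnet_name
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefix       = "10.0.1.0/24"
}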

But wait, I hear you cry, what are those var.something bits in there? I mentioned before that in the “standard” set of files is a “variables.tf” file. In here, you specify values for later consumption. I have recorded variables for the resource group name and location, as well as the VNet name and subnet name. Let’s add those into variables.tf.
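As a sketch, that looks something like this – the variable names and default values are just examples, not gospel:

# Illustrative defaults – change these to suit your own environment.
variable "resource_group_name" {
  default = "demo-rg"
}

variable "location" {
  default = "UK South"
}

variable "vnet_name" {
  default = "demo-vnet"
}

variable "subnet_name" {
  default = "demo-subnet"
}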

When you’ve specified a resource, you can capture any of the results from that resource to use later – either in main.tf or in output.tf. By creating the resource group (called “rg” here, but you can call it anything from “demo” to “myfirstresourcegroup”), we can consume the name or location with azurerm_resource_group.rg.name and azurerm_resource_group.rg.location, and so on. In the code above, we use the VNet name in the subnet resource in just the same way.
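For example, output.tf can expose values once the run has finished. This is just a sketch of the idea, using the FQDN of the public IP we’ll define shortly:

# Illustrative output – prints the DNS name of the public IP after "terraform apply".
output "public_ip_fqdn" {
  value = azurerm_public_ip.iaaspubip.fqdn
}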

After the subnet is created, we can start adding the VM specific parts – a security group (with rules), a public IP (with DNS name) and a network interface. I’ll create the VM itself later. So, let’s do this.
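Here’s roughly what that looks like – treat the rule priorities, port and DNS label as illustrative placeholders rather than gospel:

# Sketch only – rule names, priorities, ports and the DNS label are placeholder values.
resource "azurerm_network_security_group" "iaassg" {
  name                = "iaas-sg"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  security_rule {
    name                       = "AllowSSHInbound"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "${trimspace(data.http.icanhazip.body)}/32"
    destination_address_prefix = "*"
  }
}

resource "azurerm_public_ip" "iaaspubip" {
  name                = "iaas-pubip"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  allocation_method   = "Dynamic"
  domain_name_label   = "my-iaas-demo"
}

resource "azurerm_network_interface" "iaasnic" {
  name                      = "iaas-nic"
  location                  = azurerm_resource_group.rg.location
  resource_group_name       = azurerm_resource_group.rg.name
  network_security_group_id = azurerm_network_security_group.iaassg.id

  ip_configuration {
    name                          = "iaas-ipconfig"
    subnet_id                     = azurerm_subnet.subnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.iaaspubip.id
  }
}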

BUT WAIT, what’s that ${trimspace(data.http.icanhazip.body)}/32 bit there?? Any resource whose values we want to use in our Terraform code, but that we’ve not directly defined ourselves, needs to come from somewhere. These items are classed as “data” – that is, we want to know what their values are, but we aren’t *changing* the service to get them. You can also use this to import other resource items – perhaps a virtual network that is created by another team, or a resource group your account doesn’t have the rights to create. I’ll include a commented-out data block in the overall main.tf file for review that specifies a VNet – something like the one below – if you want to see how that works.
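Something along these lines is what I mean – the names here are invented purely for illustration:

# Hypothetical example – reads an existing VNet managed elsewhere instead of creating one.
# data "azurerm_virtual_network" "existing" {
#   name                = "shared-vnet"
#   resource_group_name = "shared-network-rg"
# }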

In this case, I want to put the public IP address I’m coming from into the NSG rule, so I can get access to the VM without opening it up to *everyone*. I can’t be sure that my IP address won’t change between one run and the next, so I’m using the icanhazip.com service to determine it. But I’ve not defined how to get that data yet. Let’s add it to the main.tf for now.
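The data source for that is tiny. The exact URL is my assumption here, but the shape is this:

# Fetches my current public IP address from icanhazip.com at plan/apply time.
data "http" "icanhazip" {
  url = "http://ipv4.icanhazip.com"
}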

So, we’re now ready to create our virtual machine. It’s quite a long block, but I’ll pull certain elements apart once I’ve pasted this block in.

So, this is broken into four main pieces.

  • Virtual Machine details. This part is relatively sensible: name, RG, location, NIC, size, and what happens to the disks when the machine is terminated. OK.
name                             = "iaas-vm"
location                         = azurerm_resource_group.rg.location
resource_group_name              = azurerm_resource_group.rg.name
network_interface_ids            = [azurerm_network_interface.iaasnic.id]
vm_size                          = "Standard_DS1_v2"
delete_os_disk_on_termination    = true
delete_data_disks_on_termination = true
  • Disk details.
storage_image_reference {
  publisher = "Canonical"
  offer     = "UbuntuServer"
  sku       = "18.04-LTS"
  version   = "latest"
}
storage_os_disk {
  name              = "iaas-os-disk"
  caching           = "ReadWrite"
  create_option     = "FromImage"
  managed_disk_type = "Standard_LRS"
}
  • OS basics: VM hostname, the username of the first user, and its password. Note that if you want to use an SSH key, it must be stored for Terraform to use without a passphrase. If you mention an SSH key here as well as a password, this can cause all sorts of connection issues, so pick one or the other.
os_profile {
  computer_name  = "iaas"
  admin_username = var.ssh_user
  admin_password = var.ssh_password
}
os_profile_linux_config {
  disable_password_authentication = false
}
  • And lastly, provisioning. I want to use Ansible for my provisioning. In this example, I have a basic playbook stored locally on my Terraform host, which I transfer to the VM, install Ansible via pip, and then execute ansible-playbook against the file I uploaded. This could just as easily be a git repo to clone or a shell script to copy in, but this is a “simple” example.
provisioner "remote-exec" {
  inline = ["mkdir /tmp/ansible"]

  connection {
    type     = "ssh"
    host     = azurerm_public_ip.iaaspubip.fqdn
    user     = var.ssh_user
    password = var.ssh_password
  }
}

provisioner "file" {
  source = "ansible/"
  destination = "/tmp/ansible"

  connection {
    type     = "ssh"
    host     = azurerm_public_ip.iaaspubip.fqdn
    user     = var.ssh_user
    password = var.ssh_password
  }
}

provisioner "remote-exec" {
  inline = [
    "sudo apt update > /tmp/apt_update || cat /tmp/apt_update",
    "sudo apt install -y python3-pip > /tmp/apt_install_python3_pip || cat /tmp/apt_install_python3_pip",
    "sudo -H pip3 install ansible > /tmp/pip_install_ansible || cat /tmp/pip_install_ansible",
    "ansible-playbook /tmp/ansible/main.yml"
  ]

  connection {
    type     = "ssh"
    host     = azurerm_public_ip.iaaspubip.fqdn
    user     = var.ssh_user
    password = var.ssh_password
  }
}

This part of the code is done in three parts – create the upload path, copy the files in, and then execute the playbook. If you don’t create the upload path first, the file provisioner will upload just the first file it comes to into the path specified.

Each remote-exec and file provisioner statement must include the hostname, username and either the password, or SSH private key. In this example, I provide just the password.

So, having created all this lot, you need to execute the Terraform workflow. Initially you do terraform init. This downloads the providers your code needs and puts them into the same tree as these .tf files are stored in. It also initialises the state store that records what Terraform has discovered or created.

Next, you do terraform plan -out tfout. Technically, the tfout part can be any filename, but having something like tfout marks it as clearly part of Terraform. This creates the tfout file containing the planned changes – whatever needs to change in the Terraform state on its next run. Typically, if you don’t use a tfout file within about 20 minutes, it’s probably worth removing it.

Finally, once you’ve run your plan stage, you need to apply it. In this case you execute terraform apply tfout, where tfout is the same filename you specified in terraform plan. If you don’t include -out tfout on your plan (or don’t run a plan at all!) and leave tfout off your apply, then you can skip the terraform plan stage entirely – terraform apply will work out the changes itself.

When I ran this, with a handful of changes to the variable files, I got this result:

Once you’re done with your environment, use terraform destroy to shut it all down… and enjoy :)

The full source is available in the associated Gist. Pull requests and constructive criticism are very welcome!

Featured image is “Seca” by “Olearys” on Flickr and is released under a CC-BY license.

"Tower" by " Yijun Chen" on Flickr

Building a Gitlab and Ansible Tower (AWX) Demo in Vagrant with Ansible

TL;DR – I created a repository on GitHub containing a Vagrantfile and an Ansible Playbook to build a VM running Docker. That VM hosts AWX (Ansible Tower’s upstream open-source project) and Gitlab.

A couple of years ago, a colleague created (and I enhanced) a Vagrant and Ansible playbook called “Project X”, which would run an AWX instance in a Virtual Machine. It was a bit heavy, and did a lot of things to do with persistence that I really didn’t need, so I parked my changes and kept an eye on his playbook…

Fast-forward to a week-or-so ago. I needed to explain what a Git/Ansible Workflow would look like, and so I went back to look at ProjectX. Oh my, it looks very complex and consumed a lot of roles that, historically, I’ve not been that impressed with… I just needed the basics to run AWX. Oh, and I also needed a Gitlab environment.

I knew that Gitlab had a docker-based install, and so does AWX, so I trundled off to find some install guides. These are listed in the playbook I eventually created (hence not listing them here). Not all the choices I made were inspired by those guides – I wanted to make quite a bit of this stuff “build itself”… this meant I wanted users, groups and projects to be created in Gitlab, and users, projects, organisations, inventories and credentials to be created in AWX.

I knew that you can create Docker Containers in Ansible, so after I’d got my pre-requisites built (full upgrade, docker installed, pip libraries installed), I added the gitlab-ce:latest docker image and exposed some ports. Even now, I’m not getting the SSH port mapped that I was expecting, but… it’s no disaster.

I did notice that the Gitlab service takes ages to start once the container is marked as running, so I did some more digging, and found that the uri module can be used to poll a URL. It wasn’t well documented how to make it keep polling until you get the response you want, so… I added a PR on the Ansible project’s github repo for that one (and I also wrote a blog post about that earlier too).

Once I had a working Gitlab service, I needed to customise it. There are a bunch of Gitlab modules in Ansible, but as of a few Gitlab releases back they don’t work any more, so I had to find a different way. That different way was to run an internal command called “gitlab-rails”. It’s not perfect (for instance, it doesn’t create repos in your projects) but it’s pretty good at giving you just enough to build your demo environment. So that’s getting Gitlab up…

Now I needed to build AWX. There are lots of build guides for this, but actually I had most luck using the README in their repository (I know, who’d have thought it!??!). There are some “Secrets” that should be changed in production, which I do change in my script, but on the whole it’s pretty much a vanilla install.

Unlike the Gitlab modules, the Ansible Tower modules all work, so I use these to create the users, credentials and so-on. Like the gitlab-rails commands, however, the documentation for using the tower modules is pretty ropey, and I still don’t have things like “getting your users to have access to your organisation” working from the get-go, but for the bulk of the administration, it does “just work”.

Like all my playbooks, I use group_vars to define the stuff I don’t want to keep repeating. In this demo, I’ve set all the passwords to “Passw0rd”, and I’ve created 3 users in both AWX and Gitlab – csa, ops and release – indicative of the sorts of people the demo was aimed at: Architects, Operations and Release Managers.

Maybe, one day, I’ll even be able to release the presentation that went with the demo ;)

On a more productive note, if you’re doing things with the tower_ modules and want to tell me what I need to fix up, or if you’re doing awesome things with the gitlab-rails tool, please visit the repo with this automation code in, and take a look at some of my “todo” items! Thanks!!

Featured image is “Tower” by “Yijun Chen” on Flickr and is released under a CC-BY-SA license.

"funfair action" by "Jon Bunting" on Flickr

Improving the speed of Azure deployments in Ansible with Async

Recently I was building a few environments in Azure using Ansible, and found this stanza which helped me to speed things up.

  - name: "Schedule UDR Creation"
    azure_rm_routetable:
      resource_group: "{{ resource_group }}"
      name: "{{ item.key }}_udr"
    loop: "{{ routetables | dict2items }}"
    loop_control:
      label: "{{ item.key }}_udr"
    async: 1000
    poll: 0
    changed_when: False
    register: sleeper

  - name: "Check UDRs Created"
    async_status:
      jid: "{{ item.ansible_job_id }}"
    register: sleeper_status
    until: sleeper_status.finished
    retries: 500
    delay: 4
    loop: "{{ sleeper.results|flatten(levels=1) }}"
    when: item.ansible_job_id is defined
    loop_control:
      label: "{{ item._ansible_item_label }}"

What we do here is start the action with an “async” value (the maximum time, in seconds, the job is allowed to keep running in the background) and a “poll” time of 0 (so Ansible doesn’t wait for the job to finish before moving on). We then tell it that it’s “never changed” (changed_when: False), because otherwise it always shows as changed, and we register the scheduled item itself as “sleeper”.

After all the async jobs get queued, we then check the status of all the scheduled items with the async_status module, passing it the registered job ID. This lets me spin up a lot more items in parallel, and then “just” confirm afterwards that they’ve been run properly.

It’s not perfect, and it can make for rather messy code. But, it does work, and it’s well worth giving it the once over, particularly if you’ve got some slow-to-run tasks in your playbook!

Featured image is “funfair action” by “Jon Bunting” on Flickr and is released under a CC-BY license.

A web browser with the example.com web page loaded

Working around the fact that Ansible’s URI module doesn’t honour the no_proxy variable…

An Ansible project I’ve been working on has tripped me up this week. I’m working with some HTTP APIs and I need to check early whether I can reach the host. To do this, I used a simple Ansible Core Module which lets you call an HTTP URI.

- uri:
    follow_redirects: none
    validate_certs: False
    timeout: 5
    url: "http{% if ansible_https | default(True) %}s{% endif %}://{{ ansible_host }}/login"
  register: uri_data
  failed_when: False
  changed_when: False

This all seems pretty simple. One of the environments I’m working in uses the following values in their environment:

http_proxy="http://192.0.2.1:8080"
https_proxy="http://192.0.2.1:8080"
no_proxy="10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24"

And this breaks the uri module, because it tries to punt everything through the proxy if the “no_proxy” contains CIDR values (like 192.0.2.0/24) (there’s a bug raised for this)… So here’s my fix!

- set_fact:
    no_proxy_match: |
      {
        {% for no_proxy in (lookup('env', 'no_proxy') | replace(',', '') ).split() %}
          {% if no_proxy| ipaddr | type_debug != 'NoneType' %}
            {% if ansible_host | ipaddr(no_proxy) | type_debug != 'NoneType' %}
              "match": "True"
            {% endif %}
          {% endif %}
        {% endfor %}
      }

- uri:
    follow_redirects: none
    validate_certs: False
    timeout: 5
    url: "http{% if ansible_https | default(True) %}s{% endif %}://{{ ansible_host }}/login"
  register: uri_data
  failed_when: False
  changed_when: False
  environment: "{ {% if no_proxy_match.match | default(False) %}'no_proxy': '{{ ansible_host }}'{% endif %} }"

So, let’s break this down.

The key part to this script is that we need to override the no_proxy environment variable with the IP address that we’re trying to address (so that we’re not putting 16M addresses for 10.0.0.0/8 into no_proxy, for example). To do that, we use the exact same URI block, except for the environment line at the end.

In turn, the set_fact block steps through the no_proxy values, looking for IP Addresses to check ({% if no_proxy | ipaddr ... %}‌ says “if the no_proxy value is an IP Address, return it, but if it isn’t, return a ‘None’ value”) and if it’s an IP address or subnet mask, it checks to see whether the IP address of the host you’re trying to reach falls inside that IP Address or Subnet Mask ({% if ansible_host | ipaddr(no_proxy) ... %} says “if the ansible_host address falls inside the no_proxy range, then return it, otherwise return a ‘None’ value”). Both of these checks say “If this previous check returns anything other than a ‘None’ value, do the next thing”, and on the last check, the “next” thing is to set the flag ‘match’ to ‘true’. When we get to the environment variable, we say “if match is not true, it’s false, so don’t put a value in there”.

So that’s that! Yes, I could merge the set_fact block into the environment variable, but I do end up using that a fair amount. And really, if it was merged, that would be even MORE complicated to pick through.

I have raised a pull request on the Ansible project to update the documentation, so we’ll see whether we end up with people over here looking for ways around this issue. If so, let me know in the comments below! Thanks!!

"Wifi Here on a Blackboard" by "Jem Stone" on Flickr

Free Wi-Fi does not need to be password-less!

Recently a friend of mine forwarded an email to me about a Wi-fi service he wanted to use from a firm, but he raised some technical questions with them which they seemed to completely misunderstand!

So, let’s talk about the misconceptions of Wi-fi passwords.

Many people assume that when you log into a system, it means that system is secure. For example, logging into a website makes sure that your data is secure and protected, right? Not necessarily – the password you entered could be on a web page that is not secured by TLS, or perhaps the web server doesn’t properly transfer its contents to a database. Maybe the website was badly written, meaning it’s vulnerable to one of a handful of common attacks (with fun names like “Cross Site Scripting” or “SQL Injection Attacks”)…

People also assume the same thing about Wi-fi. You reached a log in page, so it must be secure, right? It depends. If you didn’t put in a password to access the Wi-fi in the first place (like in the image of the Windows 10 screen, or on my KDE Desktop) then you’re probably using Unsecured Wi-fi.

An example of a secured Wi-fi sign-in box on Windows 10
The same Wi-fi sign in box on KDE Neon

People like to compare network traffic to “sending things through the post”, notably comparing E-Mail to “sending a postcard”, versus PGP-encrypted E-Mail being compared to “sending a sealed letter”. Unencrypted Wi-fi is like using CB radio. Anyone who can hear your signal can understand what you are saying… but if you visit a website which uses HTTPS, then it’s like listening to someone saying random numbers over the radio.

And, if you’re using unencrypted Wi-fi, it’s also possible for an attacker to see what website you visited, because the request for the address to reach on the Internet (e.g. “Google.com” = 172.217.23.14) is sent in the clear. Also, the way that DNS works (that name-to-address matching thing) means that if someone knows you’re visiting a “site of interest” (like, perhaps, a bank website), they can reply *before* the real DNS server, and tell you that the server on their machine is actually your bank’s website.

So, many of these things can be protected against by using a simple method, that many people who provide Wi-fi don’t do.

Turn on WPA2 (the authentication bit). Even if *everyone* uses the same password (which they’d have to for WPA2), the fact you’re logging into the Access Point means it creates a unique shared secret for your session.

“But hang on”, I hear the guy at the back cry, “you used the same password – how does that work?”

OK, so this is where the fun stuff starts. The password is just part of how you negotiate to get on to the network. There’s a complex beast of a method that describes how to get a shared unique secret when you’re passing stuff around “in the clear”, so when you first connect to that Wi-fi access point and hand over your password, it “Authorises” you on to the network, but then hands you over to the encryption part, where you generate a key and then use that to talk to each other. The encryption is the bit like “HTTPS”, where you make it so that people can’t see what you’re looking at.

“I got told that if everyone used the same password” said a hipster in the front row, “I wouldn’t be able to tell them apart.” Aha, not true. You can have a separate passphrase to access the Wi-fi from the Login page, after all, you’ve got to make sure that people aren’t breaking the rules (which they *TOTALLY* read, before clicking “I agree, just get me on the damn Wi-fi already”) by using your network.

“OK”, says the lady over on the right, “but when I connected to the Wi-fi, they asked me to log in using Facebook – that’s secure, right?”

Um, no. Well, maybe. See, if they gave you a WPA2 password to log into the Wi-fi, and then the first thing you got to was that login screen, then yep, it’s all good! {*} You can browse with (relative) impunity. But if they didn’t… well, not only are they asking you to shout your secrets on the radio, but if you’re really unlucky, the page asking you to log into Facebook might *also* not actually be Facebook, but another website that just looks like Facebook… after all, I’m sure that page you went to complained that it wasn’t Google or Facebook when you tried to open it…

{*} Except for the fact they’re asking you to tell them not only who you are, but who you’re also friends with, where you went to school, what your hobbies are, what groups you’re in, your date of birth and so on.

But anyway. I understand why those login screens are there. They’re asserting that not only do you understand that you mustn’t use their network for bad things, but that if the police come and ask them who used their network to do something naughty, they can say “He said his name was ‘Bob Smith’ and his email address was ‘bob@example.com’, Officer”…

It also means that the “free” service they provide to you, usually at some great expense (*eye roll*) can get them some return on investment (like, they just got your totally-real-and-not-at-all-made-up-email-address… honest, and they also know what websites you visited while you were there, which they can sell on).

So… What to do the next time you “need” Wi-fi, and there’s a free service there? Always use a VPN when you’re not using a network you trust. If the Wi-fi isn’t using WPA2 encryption (even something as simple as “Buy a drink first” is a great passphrase to use!) point them to this page, and tell them it’s virtually pain free (as long as the passphrase is easy to remember, easy to type and doesn’t have too many weird symbols in) and makes their service more safe and secure for their customers…

Featured image is “Wifi Here on a Blackboard” by “Jem Stone” on Flickr and is released under a CC-BY license.

"www.GetIPv6.info decal" from Phil Wolff on Flickr

Hurricane Electric IPv6 Gateway on Raspbian for Raspberry Pi

NOTE: This article was replaced on 2019-03-12 by a github repository where I now use Vagrant instead of a Raspberry Pi, because I was having some power issues with my Raspberry Pi. Also, using this method means I can easily use an Ansible Playbook. The following config will still work(!) however I prefer this Vagrant/Ansible workflow for this, so won’t update this blog post any further.

Following an off-hand remark from a colleague at work, I decided I wanted to set up a Raspberry Pi as a Hurricane Electric IPv6 6in4 tunnel router. Most of the advice around (in particular, this post about setting up IPv6 on the Raspberry Pi Forums) related to earlier versions of Raspbian, so I thought I’d bring it up-to-date.

I installed the latest available version of Raspbian Stretch Lite (2018-11-13) and transferred it to a MicroSD card. I added the file ssh to the boot volume and unmounted it. I then fitted it into my Raspberry Pi, and booted it. While it was booting, I set a static IPv4 address on my router (192.168.1.252) for the Raspberry Pi, so I knew what IP address it would be on my network.

I logged into my Hurricane Electric (HE) account at tunnelbroker.net and created a new tunnel, specifying my public IP address, and selecting my closest HE endpoint. When the new tunnel was created, I went to the “Example Configurations” tab, and selected “Debian/Ubuntu” from the list of available OS options. I copied this configuration into my clipboard.

I SSH’d into the Pi, and gave it a basic config (changed the password, expanded the disk, turned off “predictable network names”, etc) and then rebooted it.

After this was done, I created a file in /etc/network/interfaces.d/he-ipv6 and pasted in the config from the HE website. I had to change the “local” line from the public IP I’d provided HE with to the real IP address of this box. Note that any public IPs (that is, not 192.168.x.x addresses) in the config files and settings I’ve noted refer to documentation addressing (TEST-NET-2 and the IPv6 documentation address ranges).

auto he-ipv6
iface he-ipv6 inet6 v4tunnel
        address 2001:db8:123c:abd::2
        netmask 64
        endpoint 198.51.100.100
        local 192.168.1.252
        ttl 255
        gateway 2001:db8:123c:abd::1

Next, I created a file in /etc/network/interfaces.d/eth0 and put the following configuration in, using the first IPv6 address in the “routed /64” range listed on the HE site:

auto eth0
iface eth0 inet static
    address 192.168.1.252
    gateway 192.168.1.254
    netmask 24
    dns-nameserver 8.8.8.8
    dns-nameserver 8.8.4.4

iface eth0 inet6 static
    address 2001:db8:123d:abc::1
    netmask 64

Next, I disabled the dhcpcd service by issuing systemctl stop dhcpcd.service. Late edit (2019-01-22): a colleague mentioned that this should actually have been systemctl stop dhcpcd.service && systemctl disable dhcpcd.service – good spot! Thanks!! This ensures that if, for some crazy reason, the router stops offering the right DHCP address to me, I can still access this box on this IP. Huzzah!

I accessed another host which had IPv6 access, and performed both a ping and an SSH attempt. Both worked. Fab. However, this now needs to be blocked, as we shouldn’t permit anything to be visible downstream from this gateway.

I’m using the Uncomplicated Firewall (ufw) which is a simple wrapper around IPTables. Let’s create our policy.

# First install the software
sudo apt update && sudo apt install ufw -y

# Permits inbound IPv4 SSH to this host - which should be internal only. 
# These rules allow tailored access in to our managed services
ufw allow in on eth0 to any app DNS
ufw allow in on eth0 to any app OpenSSH

# These rules accept all broadcast and multicast traffic
ufw allow in on eth0 to 224.0.0.0/4 # Multicast addresses
ufw allow in on eth0 to 255.255.255.255 # Global broadcast
ufw allow in on eth0 to 192.168.1.255 # Local broadcast

# Alternatively, accept everything coming in on eth0
# If you do this one, you don't need the lines above
ufw allow in on eth0

# Setup the default rules - deny inbound and routed, permit outbound
ufw default deny incoming 
ufw default deny routed
ufw default allow outgoing

# Prevent inbound IPv6 to the network
# Also, log any drops so we can spot them if we have an issue
ufw route deny log from ::/0 to 2001:db8:123d:abc::/64

# Permit outbound IPv6 from the network
ufw route allow from 2001:db8:123d:abc::/64

# Start the firewall!
ufw enable

# Check the policy
ufw status verbose
ufw status numbered

Most of the documentation I found suggested running radvd for IPv6 address allocation. This basically just allocates on a random basis, and, as far as I can make out, each renewal gives the host a new IPv6 address. To make that work, I performed apt-get update && apt-get install radvd -y and then created this file as /etc/radvd.conf. If all you want is a floating IP address with no static assignment – this will do it…

interface eth0
{
    AdvSendAdvert on;
    MinRtrAdvInterval 3;
    MaxRtrAdvInterval 10;
    prefix 2001:db8:123d:abc::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
   route ::/0 {
   };
};

However, this doesn’t give me the ability to statically assign IPv6 addresses to hosts. I found that a different IPv6 allocation method, called SLAAC, will do static addressing based on your MAC address (note there are some privacy issues with this, but I’m OK with them for now…). In this mode, assuming the prefix as before – 2001:db8:123d:abc:: – and a MAC address of de:ad:be:ef:01:23, your IPv6 address will be something like 2001:db8:123d:abc:dead:beff:feef:0123, and this will be repeatably so – because you’re unlikely to change your MAC address (hopefully!!).

This SLAAC allocation mode is available in DNSMasq, which I’ve consumed before (in a Pi-Hole). To use this, I installed DNSMasq with apt-get update && apt-get install dnsmasq -y and then configured it as follows:

interface=eth0
listen-address=127.0.0.1
# DHCPv6 - Hurricane Electric Resolver and Google's
dhcp-option=option6:dns-server,[2001:470:20::2],[2001:4860:4860::8888]
# IPv6 DHCP scope
dhcp-range=2001:db8:123d:abc::, slaac

I decided to move from using my router as a DHCP server to using this same host, so I expanded that config as follows, based on several posts but mostly centred around the man page (I’m happy to have this DNSMasq config improved if you’ve got any suggestions ;) )

# Stuff for DNS resolution
domain-needed
bogus-priv
no-resolv
filterwin2k
expand-hosts
domain=localnet
local=/localnet/
log-queries

# Global options
interface=eth0
listen-address=127.0.0.1

# Set these hosts as the DNS server for your network
# Hurricane Electric and Google
dhcp-option=option6:dns-server,[2001:470:20::2],[2001:4860:4860::8888]

# My DNS servers are:
server=1.1.1.1                # Cloudflare's DNS server
server=8.8.8.8                # Google's DNS server

# IPv4 DHCP scope
dhcp-range=192.168.1.10,192.168.1.210,12h
# IPv6 DHCP scope
dhcp-range=2001:db8:123d:abc::, slaac

# Record the DHCP leases here
dhcp-leasefile=/run/dnsmasq/dhcp-lease

# DHCPv4 Router
dhcp-option=3,192.168.1.254

So, that’s what I’m doing now! Hope it helps you!

Late edit (2019-01-22): In issue 129 of the “Awesome Self Hosted Newsletter“, I found a post called “My New Years Resolution: Learn IPv6“… which uses a pfSense box and a Hurricane Electric tunnel too. Fab!

Header image is “www.GetIPv6.info decal” by “Phil Wolff” on Flickr and is released under a CC-BY-SA license. Used with thanks!

Creating Self Signed certificates in Ansible

In my day job, I sometimes need to use a self-signed certificate when building a box. As I love using Ansible, I wanted to make the self-signed certificate piece something that was part of my Ansible workflow.

Here follows a bit of basic code that you could use to work through how the process of creating a self-signed certificate would work. I would strongly recommend using something more production-ready (e.g. LetsEncrypt) when you’re looking to move from “development” to “production” :)

---
- hosts: localhost
  vars:
  - dnsname: your.dns.name
  - tmppath: "./tmp/"
  - crtpath: "{{ tmppath }}{{ dnsname }}.crt"
  - pempath: "{{ tmppath }}{{ dnsname }}.pem"
  - csrpath: "{{ tmppath }}{{ dnsname }}.csr"
  - pfxpath: "{{ tmppath }}{{ dnsname }}.pfx"
  - private_key_password: "password"
  tasks:
  - file:
      path: "{{ tmppath }}"
      state: absent
  - file:
      path: "{{ tmppath }}"
      state: directory
  - name: "Generate the private key file to sign the CSR"
    openssl_privatekey:
      path: "{{ pempath }}"
      passphrase: "{{ private_key_password }}"
      cipher: aes256
  - name: "Generate the CSR file signed with the private key"
    openssl_csr:
      path: "{{ csrpath }}"
      privatekey_path: "{{ pempath }}"
      privatekey_passphrase: "{{ private_key_password }}"
      common_name: "{{ dnsname }}"
  - name: "Sign the CSR file as a CA to turn it into a certificate"
    openssl_certificate:
      path: "{{ crtpath }}"
      privatekey_path: "{{ pempath }}"
      privatekey_passphrase: "{{ private_key_password }}"
      csr_path: "{{ csrpath }}"
      provider: selfsigned
  - name: "Convert the signed certificate into a PKCS12 file with the attached private key"
    openssl_pkcs12:
      action: export
      path: "{{ pfxpath }}"
      name: "{{ dnsname }}"
      privatekey_path: "{{ pempath }}"
      privatekey_passphrase: "{{ private_key_password }}"
      passphrase: password
      certificate_path: "{{ crtpath }}"
      state: present

Hacker at Keyboard

What happens on an IT Penetration Test

I recently was asked to describe what happens in a penetration test (pentest), how it’s organised and what happens after the test is completed.

Some caveats first:

  • While I’ve been involved in escorting penetration testers in controlled areas, and helping to provide environments for the tests to occur in, I’ve not been heavily involved in the set-up of one, so some of the details in that area are likely to be a bit fuzzy.
  • I’m not involved in procurement in any way, so I can’t endorse or discredit any particular testing organisations.
  • This is a personal viewpoint and doesn’t represent a professional recommendation of how a penetration test should or typically does occur.

So, what actually happens?…

Before the pentest begins, a testing firm would be sourced and the “Terms of Engagement” (TOE), or perhaps a list of requirements would be defined. This might result in a list of tasks that are expected to be performed, and some idea on resources required. It would also be the point where the initiator (the organisation getting the test performed) defines what is “In Scope” (is available to be tested) and what is “Out Of Scope” (and must not be tested).

Some of the usual tasks are:

  • Internet Scan (the testers will have a testing appliance that will scan all IPs provided to them, and all open ports on those IPs, looking for vulnerable services, servers and responding applications).
  • Black Box [See note below] Red Team test (the testers are probing the network as though they were malicious outsiders with no knowledge of the system, similar to the sorts of things you might see in hacker movies – go through discovered documentation, socially engineer access to environments, run testing applications like NMAP and Metasploit, and generally see what stuff appears to be publicly accessible, and from there see how the environment appears once you have a foothold).
  • White Box test (the testers have access to internal documentation and/or source code about the environment they’re testing, and thus can run customised and specific tests against the target environment).
  • Configuration Analysis (access to servers, source code, firewall policies or network topology documentation intending to check where flaws may have been introduced).
  • Social Engineering test (see how amenable staff, customers and suppliers are to providing access to the target environment for your testing team).
  • Physical access test (prove whether the testing team can gain physical access to elements of your target, e.g. servers, documentation, management stations, signing keys, etc).
  • Some testing firms will also stress test any Denial Of Service Mitigations you may have in-place, but these must be carefully negotiated first with your bandwidth providers, their peering firms and so on, as they will more-than-likely disrupt more than just your services! DO NOT ENGAGE A DOS TEST LIGHTLY!

Once the Terms have been agreed and the duration of these tests has been ironed out (some tests could go on indefinitely, but you wouldn’t *really* want to pay the bills for an indefinite Black Box test, for example!), a project plan is usually defined showing these stages. Depending on the complexity of your environment, for a small estate I might expect a reasonable duration to be approximately a day or two for each test. In a larger estate, particularly where little-to-no automation has been implemented, you may find (for example) a thorough Configuration Analysis of your server configurations taking weeks or even months.

Depending on how true-to-life the test “should” be, you may have the Physical Security assessment and Social Engineering tests be considered part of the Black Box test, or perhaps you may purposefully provide some entry point for the testing team to reduce the entry time. Most of the Black Box tests I’ve been involved in supporting have started from giving the testers access to a point inside your trusted network (e.g. a server which has been built for the purpose of giving access to testers or a VPN entry point with a “lax” firewall policy). Others will provide a “standard” asset (e.g. laptop) and user credential to the testing firm. Finally, some environments will put the testing firm through “recruitment” and put them in-situ in the firm for a week or two to bed them in before the testing starts… this is pretty extreme however!!

The Black Box test will typically be run before any others (except perhaps the Social Engineering and Physical Access tests) and without the knowledge of the “normal” administrators. This will also test the responses of the “Blue Team” (the system administrators and any security operations centre teams) to see whether they would notice an in-progress, working attack by a “Red Team” (the attackers).

After the Black Box test is completed, the “Blue Team” may be notified that there was a pentest, and then (providing it is being run) the testing organisation will start a White Box test, for which they will be given open access to the tested environment.

The Configuration Check will normally start with hands-on time with members of the “Blue Team” to see configuration and settings, and will compare these settings against known best practices. If there is an application being tested where source code is available to the testers, then they may check the source code against programming bad practices.

Once these tests are performed, the testing organisation will write a report documenting the state of the environment and rating the criticality of the flaws against current recommendations.

The report would be submitted to the organisation who requested the test, and then the *real* fun begins – either fixing the flaws, or finger pointing at who let the flaws occur… Oh, and then scheduling the next pentest :)

I hope this has helped people who may be wondering what happens during a pentest!

Just to note: if you want to know more about pentests, and how they work in the real world, check out the podcast “Darknet Diaries“, and in particular episode 6 – “The Beirut Bank Job”. To get an idea of what the pentest is supposed to simulate (although it’s a fictional series), “Mr Robot” (<- Amazon affiliate link) is very close to how I would imagine a sequence of real-world “Red Team” attacks might look, and experts seem to agree!

Image Credit: “hacker” by Dani Latorre licensed under a Creative Commons, By-Attribution, Share-Alike license.


Additional note; 2018-12-12: Following me posting this to the #security channel on the McrTech Slack group, one of the other members of that group (Jay Harris from Digital Interruption) mentioned that I’d conflated a black box test and a red team test. A black box test is like a white box test, but with no documentation or access to the implementing team. It’s much slower than a white box test, because you don’t have access to the people to ask “why did you do this” or “what threat are you trying to mitigate here”. He’s right. This post was based on a previous experience I had with a red team test, but that they’d referred to it as a black box test, because that’s what the engagement started out as. Well spotted Jay!

He also suggested reading his post in a similar vein.

One to read/watch: IPsec and IKE Tutorial

Ever been told that IPsec is hard? Maybe you’ve seen it yourself? Well, Paul Wouters and Sowmini Varadhan recently co-delivered a talk at the NetDev conference, and it’s really good.

Sowmini’s and Paul’s slides are available here: https://www.files.netdevconf.org/d/a18e61e734714da59571/

A complete recording of the tutorial is here. Sowmini’s part of the tutorial (which starts first in the video) is quite technically complex, looking specifically at the way that Linux handles the packets through the kernel. I’ve focused more on Paul’s part of the tutorial (starting at 26m23s)… but my interest was piqued from 40m40s, when he starts to actually show how “easy” configuration is. There are two quick run-throughs of typical host-to-host IPsec and subnet-to-subnet IPsec tunnels.

A key message for me, which previously hadn’t been at all clear when using IPsec with {free,libre,open}swan, is that they refer to “Left” and “Right” as being one party and the other… but the node itself works out whether it’s “left” or “right”, so the *SAME CONFIG* can be used on both machines. GENIUS.

Also, when you’re looking at the config files, anything prefixed with an @ symbol is something that doesn’t need resolving to something else.

It’s well worth a check-out, and it’s inspired me to take another look at IPsec for my personal VPNs :)

I should note that towards the end, Paul tried to run a selection of demonstrations in Opportunistic Encryption (which basically is a way to enable encryption between two nodes, even if you don’t have a pre-established VPN with them). Because of issues with the conference wifi, plus the fact that what he’s demoing isn’t exactly production-grade yet, it doesn’t really work right, and much of the rest of the video (from around 1h10m) is him trying to show that working while attendees are running through the lab, and having conversations about those labs with the attendees.