“Swatch Water Store, Grand Central Station, NYC, 9/2016, pics by Mike Mozart of TheToyChannel and JeepersMedia on YouTube #Swatch #Watch” by “Mike Mozart” on Flickr

Time Based Security

I came across the concept of “Time Based Security” (TBS) in the Sysadministrivia podcast, S4E13.

I’m still digging into the details of it, but in essence, the “Armadillo” protection model (crunchy on the outside, soft on the inside; sometimes known as the “Fortress Model”) is broken. You assume that your impenetrable network boundary will prevent attackers from getting to your sensitive data. While this may stop them for a while, that boundary is actually just one part of a complex protection system, and many organisations miss the fact that it is only one part.

The examples used in the only online content I’ve found about this refer to a burglary.

In this context, your “Protection” (P) is measured in time. Perhaps you have hardened glass that takes 20 seconds to break.

Next, we evaluate “Detection” (D), which is also, surprisingly enough, measured in time. As the glass is hit, it triggers an alarm at a security facility. That facility takes 20 seconds to respond and pass the alert to a dispatch centre, and it takes another 20 seconds for that to be answered and a police officer dispatched.

The police officer being dispatched is the “Response” (R). The police take (optimistically) 2 minutes to arrive (TBS was written in the ’90s, so police forces weren’t decimated then).

So, in the TBS system, we say that Detection (D) of 40 seconds plus Response (R) of 120 seconds = 160 seconds. This is greater than the Protection (P) of 20 seconds, so we have an Exposure (E) time of 140 seconds: E = (D + R) – P. The question that is posed is: how much damage can be done in E?

So, compare this to your average pre-automation SOC. Your firewall, SIEM (Security Information and Event Management system), IDS (Intrusion Detection System) or WAF (Web Application Firewall) triggers an alarm. Someone is trying to do something (e.g. a Denial of Service attack, password spraying or port scanning for vulnerable services) to a system you’re responsible for. While D might be a tiny fraction of a minute (let’s say 1 minute, for maths’ sake), R is likely to be minutes or even hours, depending on the refresh rate of the ticket management system or alarm system (again, for maths’ sake, let’s say 60 minutes). So, D + R is now 61 minutes. How long is P really going to hold? Could it be less than 30 minutes against a determined attacker? (Let’s assume P is 30 minutes, for maths’ sake.)

Let’s do the calculation for a pre-automation SOC (Security Operations Centre): (D + R) – P = E. E here is 31 minutes. How much damage can an attacker do in 31 minutes? Could they put a backdoor into your system? Could they download sensitive data to a remote system? Could they pivot to your monitoring system, and remove the logs that said they were in there?

If you consider how much smaller the D and R numbers become with an event-driven SOAR (Security Orchestration and Automation Response) system – does that improve your E number? Consider that if you can get E to 0 (or below), this could be considered “A Secure Environment”.
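To put illustrative numbers on that (mine, not from the original TBS material): if automation brought D down to 1 minute and R down to 2 minutes, then D + R = 3 minutes against the same P of 30 minutes, and E = (D + R) – P = –27 minutes. A negative E means the intruder is detected and dealt with long before the protection gives way – which is exactly that “E of 0” target.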

Also, consider that many of the tools we implement for security reduce D and R, but if you’re not monitoring the outputs of the Detection components, then your response time grows significantly. If your Detection component is misconfigured such that it produces too many False Positives (think “The Boy Who Cried Wolf”), you won’t see the real incident, and your Response might only come when a security service notifies you that your data, your service or your money has been exposed and lost. And that wouldn’t be good now… Time to look into automation 😁

Featured image is “Swatch Water Store, Grand Central Station, NYC, 9/2016, pics by Mike Mozart of TheToyChannel and JeepersMedia on YouTube #Swatch #Watch” by “Mike Mozart” on Flickr and is released under a CC-BY license.

"Confused" by "CollegeDegrees360" on Flickr

Using AWSCLI on Ubuntu/Ubuntu-in-WSL

This is a brief note to myself (but might be useful to you)!

awscli (similar to the Azure az command) is packaged for Ubuntu, but the version which is in the Ubuntu 18.04 repositories is “out of date” and won’t work with AWS. You *actually* need to run the following:

sudo apt update && sudo apt install python3-pip -y && sudo -H pip3 install --upgrade awscli

If you’ve unfortunately already installed awscli from apt, do the following:

sudo apt remove awscli -y

Then log out (for some reason, binary path caching is a thing now?) and log back in, and then run the pip install line above.
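As an aside, if logging out feels like overkill, clearing bash’s command path cache should achieve the same thing (assuming your shell is bash):

hash -r

bash remembers where it last found each binary, so until that cache is cleared it keeps pointing at the now-removed /usr/bin/aws.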

Solution found via this logged issue on the awscli git repo.

Featured image is “Confused” by “CollegeDegrees360” on Flickr and is released under a CC-BY-SA license.

"Hacker" by "The Preiser Project" on Flickr

What to do when a website you’ve trusted gets hacked?

Hello! Maybe you just got a sneaking suspicion that a website you trusted isn’t behaving right, perhaps someone told you that “unusual content” is being posted in your name somewhere, or, if you’re really lucky, you might have just had an email from a website like “HaveIBeenPwned.com” or “Firefox Monitor”. It might look something like this:

An example of an email from the service “Have I Been Pwned”

Of course, it doesn’t feel like you’re lucky! It’s OK. These things happen quite a lot, and you’re not the only one in this boat!

How bad is it, Doc?

First of all, don’t panic! Get some idea of the scale of problem this is by looking at a few key things.

  1. How recent was the breach? Give this a score between 1 (right now) and 10 (more than 1 month ago).
  2. How many websites and services do you use this account on? Give this a score between 1 (just this one site) and 10 (OMG, this is *my* password, and I use it everywhere).
  3. How many other services would use this account to authenticate to, or get a password reset from? Give this a score between 1 (nope, it’s just this website. We’re good) and 10 (It’s my email account, and everything I’ve ever signed up to uses this account as the login address… or it’s Facebook/Google and I use their authentication to login to everything else).
  4. How much does your reputation hang on this website or any other websites that someone reusing the credentials of this account would get access to? Give this a score between 1 (meh, I post cat pictures from an anonymous username) and 10 (I’m an INFLUENCER HERE dagnamit! I get money because I said stuff here and/or my job is on that website, or I am publicly connected to my employer by virtue of that profile).
  5. (Optional) If this is from a breach notification, does it say that it’s just email addresses (score 1), or that it includes passwords (score 5), unencrypted or plaintext passwords (score 8) or full credit card details (score 10)?

Once you’ve got an idea of scale (4 to 40 or 5 to 50, depending on whether you used that last question), you’ve got an idea of how potentially bad it is.

Take action!

Make a list of the websites you think that you need to change this password on.

  1. Start with email accounts (GMail, Hotmail, Outlook, Yahoo, AOL and so on) – each email account that uses the same password needs to be changed, and this is because almost every website uses your email address to handle a password reset (e.g. “Forgot your password? Just type in your email address here, and we’ll send you a reset link”).
  2. Prominent social media profiles (e.g. Facebook, Twitter, Instagram) come next, even if they’re not linked to your persona. This is where your potential reputation damage comes from!
  3. Next up is *this* website, the one you got the breach notification for. After all, you know this password is “wild” now!

Change some passwords

This is a bit of a bind, but I’d REALLY recommend making a fresh password for each of those sites. There are several options for doing this, but my preferred option is to use a password manager. If you’re not very tech savvy, consider using the service LastPass. If you’re tech savvy, and understand how to keep files in sync across multiple devices, you might be interested in using KeePassXC (my personal preference) or BitWarden instead.

No really. A fresh password. Per site. Honest. And not just “MyComplexPassw0rd-Hotmail”, because there are ways of spotting you’ve done something like that, and when they come to your Facebook account, they’ll try “MyComplexPassw0rd-Facebook” just to see if it gets them in.

ℹ️ Using a password manager gives you a unique, per-account password. I just generated a fresh one (for a dummy website), and it was 2-K$F+j7#Zz8b$A^]qj. But, fortunately, I don’t have to remember it. I can just copy and paste it into the form when I need to change it, or, if I have a browser add-on, that’ll fill it in for me.

Making a list, and checking it twice!

Fab, so you’ve now got a lovely list of unique passwords. A bit like Santa, it’s time to check your list again. Assume that every site you just changed a password on was compromised, because someone knew that password… I know, it’s a scary thought! So, have a look at all those websites and figure out what they have links to; you’ll probably find your list of things to change gets a bit bigger.

Not sure what they have links to?

Well, perhaps you’re looking at an email account… have a look through the emails you’ve received in the last month, three months or a year, and see how many of those come from “something” unique. Perhaps you signed up to a shopping site with that email address? It’s probably worth getting a password reset for that site done.

Perhaps you’re looking at a social media site that lets you login to other services? Check through those other services and make sure that “someone” hasn’t allowed access to a website they control. After all, you did lose access to that website, and so you don’t know what it’s connected to.

Also, check all of these sites, and make sure there aren’t any unexpected “active sessions” (where someone else is logged into your account still). If you have got any, kick them out :)

OK, so the horse bolted, now close the gate!

Once you’ve sorted out all of these passwords, it’s probably worth looking at improving your security in general. To do this, we need to think about how people get access to your account. As I wrote in my “What to do when your Facebook account gets hacked?” post:

What if you accidentally gave your password to someone? Or if you went to a website that wasn’t actually the right page and put your password in there by mistake? Falling prey to this when it’s done on purpose is known as social engineering or phishing, and means that someone else has your password to get into your account.

A quote from the “Second Factor” part of “What to do when your Facebook account gets hacked?”

The easiest way of locking this down is to use a “Second Factor” (sometimes abbreviated to 2FA). You need to give your password (“something you know”) to log into the website. Now you also need something separate, which isn’t stored in the same place. If this were a physical token (like a SoloKey, Yubikey, or an RSA SecurID token), it’d be “something you have” (after all, you need to carry that “token” around with you), but normally these days it’s something on your phone.

Some places will send you a text message, others will pop up an “approve login” screen (and, I should note, if you get one and YOU AREN’T LOGGING IN, don’t press “approve”!), or you might have a separate app (perhaps called “Google Authenticator”, “Authy” or something like “Duo Security”) that has a number that keeps changing.

You then finish your login with a code from that app, SMS or token, by reacting to that screen, or perhaps even by pressing a button on a thing you plug into your computer. If you want to know how to set this up, take a look at “TwoFactorAuth.org“, a website providing access to documentation on setting up 2FA on many of the websites you currently use… but especially do this with your email accounts.

Featured image is “Hacker” by “The Preiser Project” on Flickr and is released under a CC-BY license.

JonTheNiceGuy and "The Chief" Peter Bleksley at BSides Liverpool 2019

Review of BSIDES Liverpool 2019

I had the privilege today to attend BSIDES Liverpool 2019. BSIDES is an infosec community conference. The majority of the talks were recorded, and I can strongly recommend making your way through the content when it becomes available.

Full disclosure: While my employer is a sponsor, I was not there to represent the company – I was just enjoying the show. A former colleague (a good friend and, while he was still employed by Fujitsu, an FDE – so I think he still is one) is one of the organising team.

The first talk I saw (aside from the welcome speech) was the keynote by Omri Segev Moyal (@gelossnake) about how to use serverless technologies (like AWS Lambda) to build a malware research platform. The key takeaway I have from that talk was how easy it is to build a simple Python Lambda script using Chalice. That was fantastic, and I’m looking forward to trying some things with that service!

For various reasons (mostly because I got talking to people), I missed the rest of the morning tracks except for the last talk before lunch. I heard great things about the Career Advice talk by Martin King, and the Social Engineering talk by Tom H, but will need to catch up on those when the videos are released.

Just before lunch we received a talk from “The Chief” (from the Channel 4 TV Series “Hunted”), Peter Bleksley, about an investigation he’s currently involved in. This was quite an intense session, and his history (the first 1/4 of his talk) was very interesting. Just before he went in for his talk, I got a selfie with him (which is the “Featured Image” for this post :) )

After lunch, I sat on the Rookies Track, and saw three fantastic talks, from Chrissi Robertson (@frootware) on Imposter Syndrome, Matt (@reversetor) on “Privacy in the age of Convenience” (reminding me of one of my very early talks at OggCamp/BarCamp Manchester) and Jan (@janfajfer) about detecting data leaks on mobile devices with EVPN. All three speakers were fab and nailed their content.

Next up was an unrecorded talk by Jamie (@2sec4u) about WannaCry, as he was part of the company which discovered the “kill-switch” domain. He gave a very detailed overview of the WannaCry timeline, the current situation with the kill-switch, and a view of some of the data from infected-but-dormant machines which are still trying to reach the kill-switch. A very scary but well-explained talk. Also, memes and rude words, but it’s clearly a subject that needed some levity, being part of a frankly rubbish set of circumstances.

After that was a talk from (two-out-of-six of) The Beer Farmers. This was a talk (mostly) about privacy and the lack of it from the social media systems of Facebook, Twitter and Google. As I listen to The Many Hats Club podcast, on which the Beer Farmers occasionally appear, it was a great experience matching faces to voices.

We finished the day on a talk by Finux (@f1nux) about Machiavelli as his writings (in the form of “The Prince”) would apply to Infosec. I was tempted to take a whole slew of photos of the slide deck, but figured I’d just wait for the video to be released, as it would, I’m sure, make more sense in context.

There was a closing talk, and then everyone retired to the bar. All in all, a great day, and I’m really glad I got the opportunity to go (thanks for your ticket Paul (@s7v7ns) – you missed out mate!)

"Key mess" by "Alper Çuğun" on Flickr

Making Ansible Keys more useful in complex and nested data structures

One of the things I miss about Jekyll when I’m working with Ansible is the ability to fragment my data across multiple files, but still have it as a structured *whole* at the end.

For example, given the following directory structure in Jekyll:

+ _data
|
+---+ members
|   +--- member1.yml
|   +--- member2.yml
|
+---+ groups
    +--- group1.yml
    +--- group2.yml

The content of member1.yml and member2.yml will be rendered into site.data.members.member1 and site.data.members.member2 and likewise, group1 and group2 are loaded into their respective variables.

This kind of structure isn’t possible in Ansible, because all the data files are merged into one flat vars namespace. To work around this on a few different projects I’ve worked on, I’ve ended up doing the following:

- set_fact:
    my_members: |-
      {
        {%- for var in vars | dict2items -%}
          {%- if var.key | regex_search(my_regex) is not none -%}
            {#- Strip the prefix off the key, then emit "key": value -#}
            "{{ var.key | regex_replace(my_regex, '') }}":
              {#- Only plain strings need wrapping in quotes -#}
              {%- if var.value is string %}"{% endif -%}
              {{ var.value }}
              {%- if var.value is string %}"{% endif %},
          {%- endif -%}
        {%- endfor -%}
      }
  vars:
    my_regex: '^member_'

So, what this does is step over all the variables defined (for example, in host_vars/*, group_vars/*, from the gathered facts and from the role you’re in – following Ansible’s variable precedence), and check whether the key of each variable (e.g. “member_i_am_a_member” or “member_1”) matches the regular expression. If it does, the key (minus the matching piece, stripped using regex_replace) is added to a dictionary, and the value attached. If the value is actually a string, then it wraps it in quotes.
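As a quick illustration (variable names invented for this example), given vars like these:

member_alice:
  role: "admin"
member_bob: "just a string"

the set_fact above should render my_members roughly as:

my_members:
  alice:
    role: "admin"
  bob: "just a string"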

So, while this doesn’t give me the expressive data structure that Jekyll does (no site.data.members.member1.somevalue for me), I do at least get to have my_members.member1.somevalue if I use the right variable prefixes! :)

I’ll leave extending this model for doing other sorts of building variables out (for example, something like if var.value['variable_place'] | default('') == 'my_members.member' + current_position) to the reader to work out how they could use something like this in their workflows!

Featured image is “Key mess” by “Alper Çuğun” on Flickr and is released under a CC-BY license.

"Untitled" by "Ryan Dickey" on Flickr

Run an Ansible Playbook against a Check Point Gaia node running R80+

In Check Point Gaia R77, if you wanted to run Ansible against this node, you were completely out of luck. The version of Python on the host was broken, modules were missing and … well, it just wouldn’t work.

Today, I’m looking at running some Ansible playbooks against Check Point R80 nodes. Here are the steps you need to get through to make it work.

  1. Make sure the user that Ansible is going to be using has the shell /bin/bash. If you don’t have this set up, the command is: set user ansible shell /bin/bash.
  2. If you want a separate user account to do ansible actions, run these commands:
    add user ansible uid 9999 homedir /home/ansible
    set user ansible password-hash $1$D3caF9$B4db4Ddecafbadnogoood (note this hash is not valid!)
    add rba user ansible roles adminRole
    set user ansible shell /bin/bash
  3. Make sure your inventory specifies the right path for your Python binary. In the next code block you’ll see my inventory for three separate Check Point R80+ nodes. Note that I’ll only be targeting the “checkpoint” group, but that I’m using the r80_10, r80_20 and r80_30 groups to load the variables in there. I could, alternatively, add these as values in group_vars/r80_10.yml and so on, but I find keeping everything to do with my connection in one place much cleaner. The Python interpreter is in a different path for each version, and if you don’t specify ansible_ssh_transfer_method=piped you’ll get a message like this: [WARNING]: sftp transfer mechanism failed on [cpr80-30]. Use ANSIBLE_DEBUG=1 to see detailed information (fix from “Add pipeline-ish method using dd for file transfer over SSH (#18642)” on the Ansible git repo)
[checkpoint]
cpr80-10        ansible_user=admin      ansible_password=Sup3rS3cr3t-
cpr80-20        ansible_user=admin      ansible_password=Sup3rS3cr3t-
cpr80-30        ansible_user=admin      ansible_password=Sup3rS3cr3t-

[r80_10]
cpr80-10

[r80_20]
cpr80-20

[r80_30]
cpr80-30

[r80_10:vars]
ansible_ssh_transfer_method=piped
ansible_python_interpreter=/opt/CPsuite-R80/fw1/Python/bin/python

[r80_20:vars]
ansible_ssh_transfer_method=piped
ansible_python_interpreter=/opt/CPsuite-R80.20/fw1/Python/bin/python

[r80_30:vars]
ansible_ssh_transfer_method=piped
ansible_python_interpreter=/opt/CPsuite-R80.30/fw1/Python/bin/python

And there you have it, one quick “ping” check later…

$ ansible -m 'ping' -i hosts checkpoint
cpr80-10 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
cpr80-30 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
cpr80-20 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

One quick word of warning though: don’t use gather_facts: true or the setup: module. Both of these still rely on libraries which are missing from the Check Point nodes, and won’t work… But then again, you can get whatever you need from shell commands… right? ;)
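So, as a rough sketch of that approach (the clish command here is my guess at a useful one – swap in whatever output you actually need):

- hosts: checkpoint
  gather_facts: false
  tasks:
    - name: Fetch version details via clish, since setup/facts won't work
      shell: clish -c "show version all"
      register: cp_version
      changed_when: false

    - name: Show what came back
      debug:
        var: cp_version.stdout_lines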

Featured image is “Untitled” by “Ryan Dickey” on Flickr and is released under a CC-BY license.

"Tower" by " Yijun Chen" on Flickr

Building a Gitlab and Ansible Tower (AWX) Demo in Vagrant with Ansible

TL;DR – I created a repository on GitHub containing a Vagrantfile and an Ansible Playbook to build a VM running Docker. That VM hosts AWX (Ansible Tower’s upstream open-source project) and Gitlab.

A couple of years ago, a colleague created (and I enhanced) a Vagrant and Ansible playbook called “Project X”, which would run an AWX instance in a Virtual Machine. It was a bit heavy, and did a lot of things to do with persistence that I really didn’t need, so I parked my changes and kept an eye on his playbook…

Fast-forward to a week-or-so ago. I needed to explain what a Git/Ansible Workflow would look like, and so I went back to look at ProjectX. Oh my, it looks very complex and consumed a lot of roles that, historically, I’ve not been that impressed with… I just needed the basics to run AWX. Oh, and I also needed a Gitlab environment.

I knew that Gitlab had a docker-based install, and so does AWX, so I trundled off to find some install guides. These are listed in the playbook I eventually created (hence not listing them here). Not all the choices I made were inspired by those guides – I wanted to make quite a bit of this stuff “build itself”… this meant I wanted users, groups and projects to be created in Gitlab, and users, projects, organisations, inventories and credentials to be created in AWX.

I knew that you can create Docker containers with Ansible, so after I’d got my prerequisites built (full upgrade, Docker installed, pip libraries installed), I added the gitlab-ce:latest Docker image, and exposed some ports. Even now, I’m not getting the SSH port mapped that I was expecting, but… it’s no disaster.
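That task looks something like this sketch (the port mappings are my assumption, not necessarily what the playbook uses):

- name: Run the Gitlab CE container
  docker_container:
    name: gitlab
    image: gitlab/gitlab-ce:latest
    restart_policy: always
    published_ports:
      - "80:80"
      - "443:443"
      - "2222:22"    # host port 2222 to the container's sshd, avoiding the VM's own SSH port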

I did notice that the Gitlab service takes ages to start once the container is marked as running, so I did some more digging, and found that the uri module can be used to poll a URL. It wasn’t documented well how you can make it keep polling until you get the response you want, so … I added a PR on the Ansible project’s github repo for that one (and I also wrote a blog post about that earlier too).
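The polling pattern ends up looking something like this (URL and timings are illustrative):

- name: Wait for Gitlab to answer on its sign-in page
  uri:
    url: "http://localhost/users/sign_in"
    status_code: 200
  register: gitlab_check
  until: gitlab_check.status == 200
  retries: 60   # keep trying for up to 10 minutes...
  delay: 10     # ...with 10 seconds between attempts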

Once I had a working Gitlab service, I needed to customise it. There are a bunch of Gitlab modules in Ansible, but as of a few Gitlab releases back, these don’t work any more, so I had to find a different way. That different way was to run an internal command called “gitlab-rails”. It’s not perfect (it doesn’t create repos in your projects, for example) but it’s pretty good at giving you just enough to build your demo environment. So that’s Gitlab up…

Now I needed to build AWX. There are lots of build guides for this, but actually I had most luck using the README in their repository (I know, who’d have thought it!??!). There are some “Secrets” that should be changed in production, which I’m changing in my script, but on the whole it’s pretty much a vanilla install.

Unlike the Gitlab modules, the Ansible Tower modules all work, so I use these to create the users, credentials and so on. Like the gitlab-rails commands, however, the documentation for the tower modules is pretty ropey, and I still don’t have things like “getting your users to have access to your organisation” working from the get-go, but for the bulk of the administration, it does “just work”.

Like all my playbooks, I use group_vars to define the stuff I don’t want to keep repeating. In this demo, I’ve set all the passwords to “Passw0rd”, and I’ve created 3 users in both AWX and Gitlab – csa, ops and release – indicative of the sorts of people this demo was aimed at: Architects, Operations and Release Managers.

Maybe, one day, I’ll even be able to release the presentation that went with the demo ;)

On a more productive note, if you’re doing things with the tower_ modules and want to tell me what I need to fix up, or if you’re doing awesome things with the gitlab-rails tool, please visit the repo with this automation code in, and take a look at some of my “todo” items! Thanks!!

Featured image is “Tower” by “Yijun Chen” on Flickr and is released under a CC-BY-SA license.

"funfair action" by "Jon Bunting" on Flickr

Improving the speed of Azure deployments in Ansible with Async

Recently I was building a few environments in Azure using Ansible, and found this stanza which helped me to speed things up.

  - name: "Schedule UDR Creation"
    azure_rm_routetable:
      resource_group: "{{ resource_group }}"
      name: "{{ item.key }}_udr"
    loop: "{{ routetables | dict2items }}"
    loop_control:
      label: "{{ item.key }}_udr"
    async: 1000
    poll: 0
    changed_when: False
    register: sleeper

  - name: "Check UDRs Created"
    async_status:
      jid: "{{ item.ansible_job_id }}"
    register: sleeper_status
    until: sleeper_status.finished
    retries: 500
    delay: 4
    loop: "{{ sleeper.results|flatten(levels=1) }}"
    when: item.ansible_job_id is defined
    loop_control:
      label: "{{ item._ansible_item_label }}"

What we do here is to start an action with an “async” value of 1000 (the maximum number of seconds the task may keep running in the background) and a “poll” time of 0 (so Ansible kicks the task off and moves straight on, rather than waiting for it to finish). We then tell it that it “never changed” (changed_when: False), because otherwise it always shows as changed, and register the scheduled item itself as a “sleeper”.

After all the async jobs get queued, we then check the status of all the scheduled items with the async_status module, passing it the registered job ID. This lets me spin up a lot more items in parallel, and then “just” confirm afterwards that they’ve been run properly.

It’s not perfect, and it can make for rather messy code. But, it does work, and it’s well worth giving it the once over, particularly if you’ve got some slow-to-run tasks in your playbook!

Featured image is “funfair action” by “Jon Bunting” on Flickr and is released under a CC-BY license.

A web browser with the example.com web page loaded

Working around the fact that Ansible’s URI module doesn’t honour the no_proxy variable…

An Ansible project I’ve been working on has tripped me up this week. I’m working with some HTTP APIs and I need to check early whether I can reach the host. To do this, I used a simple Ansible Core Module which lets you call an HTTP URI.

- uri:
    follow_redirects: none
    validate_certs: False
    timeout: 5
    url: "http{% if ansible_https | default(True) %}s{% endif %}://{{ ansible_host }}/login"
  register: uri_data
  failed_when: False
  changed_when: False

This all seems pretty simple. One of the environments I’m working in uses the following values in their environment:

http_proxy="http://192.0.2.1:8080"
https_proxy="http://192.0.2.1:8080"
no_proxy="10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24"

And this breaks the uri module, because it tries to punt everything through the proxy if “no_proxy” contains CIDR values (like 192.0.2.0/24 – there’s a bug raised for this)… So here’s my fix!

- set_fact:
    no_proxy_match: |
      {
        {% for no_proxy in (lookup('env', 'no_proxy') | replace(',', '') ).split() %}
          {% if no_proxy | ipaddr | type_debug != 'NoneType' %}
            {% if ansible_host | ipaddr(no_proxy) | type_debug != 'NoneType' %}
              "match": "True"
            {% endif %}
          {% endif %}
        {% endfor %}
      }

- uri:
    follow_redirects: none
    validate_certs: False
    timeout: 5
    url: "http{% if ansible_https | default(True) %}s{% endif %}://{{ ansible_host }}/login"
  register: uri_data
  failed_when: False
  changed_when: False
  environment: "{ {% if no_proxy_match.match | default(False) %}'no_proxy': '{{ ansible_host }}'{% endif %} }"

So, let’s break this down.

The key part to this script is that we need to override the no_proxy environment variable with the IP address that we’re trying to address (so that we’re not putting 16M addresses for 10.0.0.0/8 into no_proxy, for example). To do that, we use the exact same URI block, except for the environment line at the end.

In turn, the set_fact block steps through the no_proxy values, looking for IP addresses to check: {% if no_proxy | ipaddr ... %}‌ says “if the no_proxy value is an IP address, return it, but if it isn’t, return a ‘None’ value”. If it is an IP address or subnet, the next check ({% if ansible_host | ipaddr(no_proxy) ... %}) asks whether the IP address of the host you’re trying to reach falls inside that address or subnet – “if the ansible_host address falls inside the no_proxy range, return it, otherwise return a ‘None’ value”. Both of these checks say “if this returned anything other than a ‘None’ value, do the next thing”, and on the last check, the “next thing” is to set the flag ‘match’ to ‘true’. When we get to the environment variable, we say “if match is not true, it’s false, so don’t put a value in there”.
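To make that concrete (host addresses picked for illustration): with the environment values shown earlier, if ansible_host were 192.0.2.10, it falls inside the 192.0.2.0/24 entry of no_proxy, so no_proxy_match renders as { "match": "True" } and the uri task runs with no_proxy overridden to just 192.0.2.10. If ansible_host were 8.8.8.8 instead, nothing matches, no_proxy_match stays empty, and the request goes via the proxy as normal.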

So that’s that! Yes, I could merge the set_fact block into the environment variable, but I do end up using that a fair amount. And really, if it was merged, that would be even MORE complicated to pick through.

I have raised a pull request on the Ansible project to update the documentation, so we’ll see whether we end up with people over here looking for ways around this issue. If so, let me know in the comments below! Thanks!!

"Matrix" by "Paul Downey" on Flickr

Idempotent Dynamic Content in Ansible

One of my colleagues recently sent out a link to a post about generating idempotent random numbers for Ansible. As I was reading it, I realised that there are other ways of doing the same thing (but not quite as pretty).

See, one of the things I (mis-)use Ansible for is to build Azure, AWS and OpenStack environments (instead of, perhaps, using Terraform, Cloud Formations or Heat Stacks). As a result, I frequently want to set complex passwords that are unique to *that environment* but that aren’t new for each build. My way of doing this is to run a delegated task to generate files in host_vars. Here’s a version of the playbook I use for that!
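A minimal sketch of that pattern (names, paths and password settings are illustrative, not the original gist):

- name: Generate a stable password per host
  hosts: all
  gather_facts: false
  tasks:
    - name: Write a host_vars file, but only if one doesn't exist yet
      copy:
        dest: "{{ playbook_dir }}/host_vars/{{ inventory_hostname }}.yml"
        content: |
          generated_password: "{{ lookup('password', '/dev/null length=20 chars=ascii_letters,digits') }}"
        force: false    # never overwrite, so re-runs keep the first password
      delegate_to: localhost

Because force: false leaves an existing file alone, the password is only generated once; every later run picks the same value back up from host_vars, which is what makes it idempotent.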

In the same gist that block is sourced from, I have some example output from “20 hosts” – one of which has a pre-defined password in the inventory, while the rest are generated by the script.

I hope this is useful to someone!

Late Edit – 2019-05-19: Encrypting the values you generate

Following this post, a friend of mine – Jeremy – mentioned on LinkedIn that I should have a look at Ansible Vault. Well, *ideally*, yes; however, when I looked at this code, I couldn’t work out a way of forcing the session to run Vault against a value I’ve just created, short of running something like a raw or shell module calling “ansible-vault encrypt {{ file_containing_password }}“. Realistically, if you’re doing a lot with these passwords, you should probably use an external password vault, such as HashiCorp’s Vault or PasswordStore.org’s Pass. Neither of which I tend to use, because it’s just not part of my life yet – but I’ve heard good things about both!

Featured image is “Matrix” by “Paul Downey” on Flickr and is released under a CC-BY license.