
Abattoir Architecture for Evergreen – using the Pets versus Cattle parable

Work very generously sent me on a training course today about a cloud-based technology we’re considering deploying.

During the course, the organiser threw a question to the audience: “Who can explain what a container does?” A small number of us ended up talking about Docker (primarily for Linux) and cgroups. This turned into a conversation about the exceedingly high rate of change deployed by Amazon, Etsy and others who have completely embraced microservices and efficient CI/CD pipelines… and then I mentioned the parable of Pets versus Cattle.

The link above points to where the story comes from, but the short version is…

When you get a pet, it comes as something like a puppy or kitten. You name it, nurture it and bring it up to live in your household, and when it gets sick – because you’ve made it part of your family – you bring in a vet who nurses it back to health.

When you have cattle, it doesn’t have a name; it has a number. If it gets sick, you isolate it from the herd, and if it doesn’t get better, you take it out back, shoot it, and get a new one.

A large number of the audience either hadn’t heard of the parable, or if they had, hadn’t heard it delivered like this.

We later went on to discuss how this applies in a practical sense, not just in docker or kubernetes containers, but how it could be applied to Infrastructure as a Service (IaaS), even down to things like vendor supplied virtual firewalls where you have Infrastructure as Code (IaC).

If, in your environment, you have a service you treat like cattle – perhaps a cluster of firewalls behind a load balancer or a floating IP address – and you need to upgrade it (because it isn’t well, or it hasn’t had the latest policy deployed to it), you don’t amend the policy on the boxes in question… No! You stand up a new service using your IaC with the latest policy deployed upon it, test it (to make sure it has stood up correctly), and once you’re happy it’s ready, you transition your service to the new nodes. Once the connections have drained from the old nodes, you take them out back and shoot them.

Or, if you want this in pictures…

Stage 1 – Before the new service deployment
Stage 2 – The new service deployment is built and tested
Stage 3 – Service transitions to the new service deployment
Stage 4 – The old service is demised, and any resources (including licenses) return to the pool
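In (purely illustrative) shell terms, the four stages might be strung together like this. Every function below is a hypothetical placeholder – swap in whatever your IaC tooling (Terraform, CloudFormation, etc.) actually does:

```shell
#!/bin/sh
# Sketch of the cattle-style replacement flow. Each function is a placeholder
# standing in for real IaC commands; set -e aborts the flow if any stage fails.
set -e
build_new_service()  { echo "stage 2: stand up replacement stack from IaC"; }
test_new_service()   { echo "stage 2: smoke-test the replacement stack"; }
transition_service() { echo "stage 3: repoint floating IP / load balancer"; }
demise_old_service() { echo "stage 4: drain, destroy, return licenses"; }

build_new_service
test_new_service
transition_service
demise_old_service
```

The ordering matters: the old nodes are only shot once traffic has transitioned and drained, so a failed build or test leaves the running service untouched.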

I was advised (by a very enthusiastic Mike until he realised that I intended to follow through with it) that the name for this should be as per the title. So, the next time someone asks me to explain how they could deploy, I’ll suggest they look for the Abattoir in my blog, because, you know, that’s normal, right? :)

Image credit: Image from page 255 of “Breeder and sportsman” (1882) via Internet Archive Book Image on Flickr

Troubleshooting FortiGate API issues with the CLI?

One of my colleagues has asked me for some help with an Ansible script he’s writing to push some policy to a cloud hosted FortiGate appliance. Unfortunately, he kept getting some very weird error messages, like this one:

fatal: [localhost]: FAILED! => {"changed": false, "meta": {"build": 200, "error": -651, "http_method": "PUT", "http_status": 500, "mkey": "vip8080", "name": "vip", "path": "firewall", "revision": "36.0.0.10745196634707694665.1544442857", "serial": "CENSORED", "status": "error", "vdom": "root", "version": "v6.0.3"}, "msg": "Error in repo"}

This is using Fortinet’s own Ansible modules, which, in turn, use the fortiosapi Python module.

This same colleague came across a post on the Fortinet Developer Network site (access to the site requires vendor approval), which said “this might be an internal bug, but to debug it, use the following”

fgt # diagnose debug enable

fgt # diagnose debug cli 8
Debug messages will be on for 30 minutes.

And then run your API commands. Your error message will be surfaced there… so here’s mine! (The mapped port range doesn’t match the extport range in a VIP.)

0: config firewall vip
0: edit "vip8080"
0: unset src-filter
0: unset service
0: set extintf "port1"
0: set portforward enable
0: unset srcintf-filter
0: set mappedip "192.0.2.1-192.0.2.1"
0: unset extport
0: set extport 8080-8081
0: unset mappedport
0: set mappedport 8080
-651: end
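The complaint here is that a two-port extport range (8080-8081) is being mapped to a single port. A corrected stanza would look something like this (values illustrative – the point is simply that the two ranges must be the same size):

```
config firewall vip
    edit "vip8080"
        set extintf "port1"
        set portforward enable
        set mappedip "192.0.2.1-192.0.2.1"
        set extport 8080-8081
        set mappedport 8080-8081
    next
end
```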

Creating Self Signed certificates in Ansible

In my day job, I sometimes need to use a self-signed certificate when building a box. As I love using Ansible, I wanted to make the self-signed certificate piece something that was part of my Ansible workflow.

Here follows some basic code you could use to work through the process of creating a self-signed certificate. I would strongly recommend using something more production-ready (e.g. LetsEncrypt) when you’re looking to move from “development” to “production” :)

---
- hosts: localhost
  vars:
  - dnsname: your.dns.name
  - tmppath: "./tmp/"
  - crtpath: "{{ tmppath }}{{ dnsname }}.crt"
  - pempath: "{{ tmppath }}{{ dnsname }}.pem"
  - csrpath: "{{ tmppath }}{{ dnsname }}.csr"
  - pfxpath: "{{ tmppath }}{{ dnsname }}.pfx"
  tasks:
  - file:
      path: "{{ tmppath }}"
      state: absent
  - file:
      path: "{{ tmppath }}"
      state: directory
  - openssl_privatekey:
      path: "{{ pempath }}"
      passphrase: password
      cipher: aes256
  - openssl_csr:
      path: "{{ csrpath }}"
      privatekey_path: "{{ pempath }}"
      privatekey_passphrase: password
      common_name: "{{ dnsname }}"
  - openssl_certificate:
      path: "{{ crtpath }}"
      privatekey_path: "{{ pempath }}"
      privatekey_passphrase: password
      csr_path: "{{ csrpath }}"
      provider: selfsigned
  - openssl_pkcs12:
      action: export
      path: "{{ pfxpath }}"
      name: "{{ dnsname }}"
      privatekey_path: "{{ pempath }}"
      privatekey_passphrase: password
      passphrase: password
      certificate_path: "{{ crtpath }}"
      state: present
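For reference, here is roughly the same flow as plain openssl commands, using the same file-naming scheme as the playbook’s vars (the passphrase and 365-day lifetime are assumptions for illustration):

```shell
#!/bin/sh
# Plain-openssl equivalent of the playbook tasks above.
set -e
DNSNAME="your.dns.name"
TMP="./tmp"
mkdir -p "$TMP"
# Encrypted private key (the openssl_privatekey task)
openssl genrsa -aes256 -passout pass:password -out "$TMP/$DNSNAME.pem" 2048
# CSR with the common name set (the openssl_csr task)
openssl req -new -key "$TMP/$DNSNAME.pem" -passin pass:password \
  -subj "/CN=$DNSNAME" -out "$TMP/$DNSNAME.csr"
# Self-signed certificate (the openssl_certificate task, provider: selfsigned)
openssl x509 -req -in "$TMP/$DNSNAME.csr" -signkey "$TMP/$DNSNAME.pem" \
  -passin pass:password -days 365 -out "$TMP/$DNSNAME.crt"
# PKCS#12 bundle (the openssl_pkcs12 task)
openssl pkcs12 -export -inkey "$TMP/$DNSNAME.pem" -passin pass:password \
  -in "$TMP/$DNSNAME.crt" -name "$DNSNAME" -passout pass:password \
  -out "$TMP/$DNSNAME.pfx"
```

Handy if you want to sanity-check what the Ansible modules are doing on your behalf.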


What happens on an IT Penetration Test

I was recently asked to describe what happens in a penetration test (pentest), how it’s organised, and what happens after the test is completed.

Some caveats first:

  • While I’ve been involved in escorting penetration testers in controlled areas, and helping to provide environments for the tests to occur in, I’ve not been heavily involved in the set-up of one, so some of the details in that area are likely to be a bit fuzzy.
  • I’m not involved in procurement in any way, so I can’t endorse or discredit any particular testing organisations.
  • This is a personal viewpoint and doesn’t represent a professional recommendation of how a penetration test should or typically does occur.

So, what actually happens?…

Before the pentest begins, a testing firm would be sourced and the “Terms of Engagement” (TOE), or perhaps a list of requirements would be defined. This might result in a list of tasks that are expected to be performed, and some idea on resources required. It would also be the point where the initiator (the organisation getting the test performed) defines what is “In Scope” (is available to be tested) and what is “Out Of Scope” (and must not be tested).

Some of the usual tasks are:

  • Internet Scan (the testers will have a testing appliance that will scan all IPs provided to them, and all open ports on those IPs, looking for vulnerable services, servers and responding applications).
  • Black Box [See note below] Red Team test (the testers are probing the network as though they were malicious outsiders with no knowledge of the system, similar to the sorts of things you might see in hacker movies – go through discovered documentation, socially engineer access to environments, run testing applications like NMAP and Metasploit, and generally see what stuff appears to be publicly accessible, and from there see how the environment appears once you have a foothold).
  • White Box test (the testers have access to internal documentation and/or source code about the environment they’re testing, and thus can run customised and specific tests against the target environment).
  • Configuration Analysis (access to servers, source code, firewall policies or network topology documentation intending to check where flaws may have been introduced).
  • Social Engineering test (see how amenable staff, customers and suppliers are to providing access to the target environment for your testing team).
  • Physical access test (prove whether the testing team can gain physical access to elements of your target, e.g. servers, documentation, management stations, signing keys, etc).
  • Some testing firms will also stress-test any Denial of Service mitigations you may have in place, but these tests must be carefully negotiated first with your bandwidth providers, their peering firms and so on, as they will more than likely disrupt more than just your services! DO NOT ENGAGE A DOS TEST LIGHTLY!

Once the Terms have been agreed and the duration of these tests has been ironed out (some tests could go on indefinitely, but you wouldn’t *really* want to pay the bills for an indefinite Black Box test, for example!), a project plan is usually defined showing these stages. Depending on the complexity of your environment, a reasonable duration for a small estate might be approximately a day or two for each test. In a larger estate, particularly where little-to-no automation has been implemented, you may find (for example) a thorough Configuration Analysis of your server configurations taking weeks or even months.

Depending on how true-to-life the test “should” be, you may have the Physical Security assessment and Social Engineering tests be considered part of the Black Box test, or perhaps you may purposefully provide some entry point for the testing team to reduce the entry time. Most of the Black Box tests I’ve been involved in supporting have started from giving the testers access to a point inside your trusted network (e.g. a server which has been built for the purpose of giving access to testers or a VPN entry point with a “lax” firewall policy). Others will provide a “standard” asset (e.g. laptop) and user credential to the testing firm. Finally, some environments will put the testing firm through “recruitment” and put them in-situ in the firm for a week or two to bed them in before the testing starts… this is pretty extreme however!!

The Black Box test will typically be run before any others (except perhaps the Social Engineering and Physical Access tests) and without the knowledge of the “normal” administrators. This will also test the responses of the “Blue Team” (the system administrators and any security operations centre teams) to see whether they would notice an in-progress, working attack by a “Red Team” (the attackers).

After the Black Box test is completed, the “Blue Team” may be notified that there was a pentest, and then (providing it is being run) the testing organisation will start a White Box test, for which they will be given open access to the tested environment.

The Configuration Analysis will normally start with hands-on time with members of the “Blue Team” to review configuration and settings, which will be compared against known best practices. If an application being tested has source code available to the testers, they may also check that code for bad programming practices.

Once these tests are performed, the testing organisation will write a report documenting the state of the environment, rating the criticality of the flaws found against current recommendations.

The report would be submitted to the organisation who requested the test, and then the *real* fun begins – either fixing the flaws, or finger pointing at who let the flaws occur… Oh, and then scheduling the next pentest :)

I hope this has helped people who may be wondering what happens during a pentest!

Just to note: if you want to know more about pentests, and how they work in the real world, check out the podcast “Darknet Diaries“, and in particular episode 6 – “The Beirut Bank Job”. To get an idea of what a pentest is supposed to simulate, “Mr Robot” (<- Amazon affiliate link), although it’s a fictional series, is very close to how I would imagine a sequence of real-world “Red Team” attacks might look, and experts seem to agree!

Image Credit: “hacker” by Dani Latorre licensed under a Creative Commons, By-Attribution, Share-Alike license.


Additional note, 2018-12-12: following me posting this to the #security channel on the McrTech Slack group, one of the other members of that group (Jay Harris from Digital Interruption) mentioned that I’d conflated a black box test and a red team test. A black box test is like a white box test, but with no documentation or access to the implementing team; it’s much slower than a white box test, because you don’t have access to the people to ask “why did you do this?” or “what threat are you trying to mitigate here?”. He’s right. This post was based on a previous experience I had with a red team test, which they’d referred to as a black box test, because that’s what the engagement started out as. Well spotted Jay!

He also suggested reading his post in a similar vein.

One to read: A Beginner’s Guide to IPFS


Ever wondered about IPFS (the “Inter Planetary File System”)? It’s a new way to share and store content which doesn’t rely on a central server (e.g. Facebook, Google, Digital Ocean, or your home NAS), but instead uses a system like BitTorrent combined with published records to keep the content in the system.

If the host where the original content is stored goes down, the content remains cached on other nodes whose owners have visited your site.

These caches are cleared over time, so they suit short outages; alternatively, you can have other nodes “pin” your content (which could be offered as a paid service to fund hosts).

IPFS is great at hosting static content, but how do you deal with dynamic content? That’s where PubSub comes into play (which isn’t covered in this article). There’s a database service called Orbit-DB which sits on IPFS and uses PubSub to sync data across the network.

It’s looking interesting, especially in light of the announcement from CloudFlare about the introduction of their IPFS gateway.

It’s looking good for IPFS!

This was automatically posted from my RSS Reader, and may be edited later to add commentary.

“You can’t run multiple commands in sudo” – and how to work around this

At work, we share tips and tricks, and one of my colleagues recently called me out on the following stanza I posted:

I like this [ansible] one for Debian based systems:
  - name: "Apt update, Full-upgrade, autoremove, autoclean"
    become: yes
    apt:
      upgrade: full
      update_cache: yes
      autoremove: yes
      autoclean: yes

And if you’re trying to figure out how to do that in Shell:
apt-get update && apt-get full-upgrade -y && apt-get autoremove -y && apt-get autoclean -y

His response was “Surely you’re not logging into bash as root”. I said “I normally sudo -i as soon as I’ve logged in. I can’t recall offhand how one does a sudo for a string of command && command statements”

Well, as a result of this, I looked into it. Here’s one comment from the first Stack Overflow page I found:

You can’t run multiple commands from sudo – you always need to trick it into executing a shell which may accept multiple commands to run as parameters

So here are a few options on how to do that:

  1. sudo -s whoami \; whoami (link to answer)
  2. sudo sh -c "whoami ; whoami" (link to answer)
  3. But, my favourite is from this answer:

    An alternative using eval so avoiding use of a subshell: sudo -s eval 'whoami; whoami'

Why do I prefer the last one? Well, I already use eval for other purposes – mostly for starting my ssh-agent over SSH, like this: eval `ssh-agent` ; ssh-add
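To see what the last two options actually execute under the hood, here they are shown without the sudo prefix (so the pair of commands can be tried as a regular user; add sudo back to run them as root):

```shell
#!/bin/sh
# The mechanics of options 2 and 3, minus sudo. Each line runs two commands.
sh -c "whoami ; whoami"   # option 2: an explicit subshell runs both commands
eval 'whoami; whoami'     # option 3: eval expands and runs both, no subshell
```

Run as yourself, each line prints your own username twice; run under sudo, each prints “root” twice, which is an easy way to convince yourself both commands really ran with elevated privileges.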

One to listen to: “Jason Scott is Breaking Out in Archives”

http://sysadministrivia.com/episodes/S3E13

This is a fascinating episode. Jason Scott works for the Internet Archive, personally hosts a large archive of historic BBS text files, and is an engaging interviewee (plus, he talks a lot :) )

If you’re interested in how archive.org works (including how they choose disks for their storage, how content is processed and managed, whether they keep spam, how many hours of “sermons” have been stored, etc), or what happens when you get sued for USD2,000,000,000, it’s well worth a listen.