Creating Self-Signed Certificates in Ansible

In my day job, I sometimes need to use a self-signed certificate when building a box. As I love using Ansible, I wanted to make the self-signed certificate piece something that was part of my Ansible workflow.

Here follows a bit of basic code that you could use to work through the process of creating a self-signed certificate. I would strongly recommend using something more production-ready (e.g. Let’s Encrypt) when you’re looking to move from “development” to “production” :)
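
Something like the following should do the trick – a minimal sketch, assuming the community.crypto collection is installed (on older Ansible releases these were the built-in openssl_* modules), and with placeholder paths and common_name:

  - name: Create a self-signed certificate
    hosts: all
    become: yes
    tasks:
      - name: Generate a private key
        community.crypto.openssl_privatekey:
          path: /etc/ssl/private/example.key
          size: 4096

      - name: Generate a certificate signing request (CSR) from that key
        community.crypto.openssl_csr:
          path: /etc/ssl/private/example.csr
          privatekey_path: /etc/ssl/private/example.key
          common_name: example.com

      - name: Self-sign a certificate from the CSR
        community.crypto.x509_certificate:
          path: /etc/ssl/certs/example.crt
          csr_path: /etc/ssl/private/example.csr
          privatekey_path: /etc/ssl/private/example.key
          provider: selfsigned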

Hacker at Keyboard

What happens on an IT Penetration Test

I was recently asked to describe what happens in a penetration test (pentest), how it’s organised, and what happens after the test is completed.

Some caveats first:

  • While I’ve been involved in escorting penetration testers in controlled areas, and helping to provide environments for the tests to occur in, I’ve not been heavily involved in the set-up of one, so some of the details in that area are likely to be a bit fuzzy.
  • I’m not involved in procurement in any way, so I can’t endorse or discredit any particular testing organisations.
  • This is a personal viewpoint and doesn’t represent a professional recommendation of how a penetration test should or typically does occur.

So, what actually happens?…

Before the pentest begins, a testing firm would be sourced and the “Terms of Engagement” (TOE), or perhaps a list of requirements would be defined. This might result in a list of tasks that are expected to be performed, and some idea on resources required. It would also be the point where the initiator (the organisation getting the test performed) defines what is “In Scope” (is available to be tested) and what is “Out Of Scope” (and must not be tested).

Some of the usual tasks are:

  • Internet Scan (the testers will have a testing appliance that will scan all IPs provided to them, and all open ports on those IPs, looking for vulnerable services, servers and responding applications).
  • Black Box [See note below] Red Team test (the testers are probing the network as though they were malicious outsiders with no knowledge of the system, similar to the sorts of things you might see in hacker movies – go through discovered documentation, socially engineer access to environments, run testing applications like NMAP and Metasploit, and generally see what stuff appears to be publicly accessible, and from there see how the environment appears once you have a foothold).
  • White Box test (the testers have access to internal documentation and/or source code about the environment they’re testing, and thus can run customised and specific tests against the target environment).
  • Configuration Analysis (access to servers, source code, firewall policies or network topology documentation intending to check where flaws may have been introduced).
  • Social Engineering test (see how amenable staff, customers and suppliers are to providing access to the target environment for your testing team).
  • Physical access test (prove whether the testing team can gain physical access to elements of your target, e.g. servers, documentation, management stations, signing keys, etc).
  • Some testing firms will also stress test any Denial Of Service Mitigations you may have in-place, but these must be carefully negotiated first with your bandwidth providers, their peering firms and so on, as they will more-than-likely disrupt more than just your services! DO NOT ENGAGE A DOS TEST LIGHTLY!

Once the Terms have been agreed and the duration of these tests has been ironed out (some tests could go on indefinitely, but you wouldn’t *really* want to pay the bills for an indefinite Black Box test, for example!), a project plan is usually defined showing these stages. Depending on the complexity of your environment, a reasonable duration for a small estate might be approximately a day or two for each test. In a larger estate, particularly where little-to-no automation has been implemented, you may find (for example) a thorough Configuration Analysis of your server configurations taking weeks or even months.

Depending on how true-to-life the test “should” be, you may have the Physical Security assessment and Social Engineering tests be considered part of the Black Box test, or perhaps you may purposefully provide some entry point for the testing team to reduce the entry time. Most of the Black Box tests I’ve been involved in supporting have started from giving the testers access to a point inside your trusted network (e.g. a server which has been built for the purpose of giving access to testers or a VPN entry point with a “lax” firewall policy). Others will provide a “standard” asset (e.g. laptop) and user credential to the testing firm. Finally, some environments will put the testing firm through “recruitment” and put them in-situ in the firm for a week or two to bed them in before the testing starts… this is pretty extreme however!!

The Black Box test will typically be run before any others (except perhaps the Social Engineering and Physical Access tests) and without the knowledge of the “normal” administrators. This will also test the responses of the “Blue Team” (the system administrators and any security operations centre teams) to see whether they would notice an in-progress, working attack by a “Red Team” (the attackers).

After the Black Box test is completed, the “Blue Team” may be notified that there was a pentest, and then (providing it is being run) the testing organisation will start a White Box test, for which they will be given open access to the tested environment.

The Configuration Check will normally start with hands-on time with members of the “Blue Team” to see configuration and settings, and will compare these settings against known best practices. If there is an application being tested where source code is available to the testers, then they may check the source code against programming bad practices.

Once these tests are performed, the testing organisation will write a report documenting the state of the environment, rating the criticality of the flaws found against current recommendations.

The report would be submitted to the organisation who requested the test, and then the *real* fun begins – either fixing the flaws, or finger pointing at who let the flaws occur… Oh, and then scheduling the next pentest :)

I hope this has helped people who may be wondering what happens during a pentest!

Just to note: if you want to know more about pentests, and how they work in the real world, check out the podcast “Darknet Diaries”, and in particular episode 6 – “The Beirut Bank Job”. To get an idea of what a pentest is supposed to simulate, (although it’s a fictional series) “Mr Robot” (<- Amazon affiliate link) is very close to how I would imagine a sequence of real-world “Red Team” attacks might look, and experts seem to agree!

Image Credit: “hacker” by Dani Latorre licensed under a Creative Commons, By-Attribution, Share-Alike license.


Additional note, 2018-12-12: Following me posting this to the #security channel on the McrTech Slack group, one of the other members of that group (Jay Harris from Digital Interruption) mentioned that I’d conflated a black box test and a red team test. A black box test is like a white box test, but with no documentation or access to the implementing team. It’s much slower than a white box test, because you don’t have access to the people to ask “why did you do this” or “what threat are you trying to mitigate here”. He’s right. This post was based on a previous experience I had with a red team test, but they’d referred to it as a black box test, because that’s what the engagement started out as. Well spotted Jay!

He also suggested reading his post in a similar vein.

Podcast Summary – “TechSNAP Episode 384: Interplanetary Peers”

Last night I was a guest host on TechSNAP, a systems, network and administration podcast.

The episode I recorded was: TechSNAP Episode 384: Interplanetary Peers

In this episode, I helped cover the news items, mostly talking about the breach over at NewEgg by the Magecart group and a (now fixed) vulnerability in Alpine Linux, and then did a bit of a dive into IPFS.

It’s a good listen, but the audio right at the end was quite noisy as a storm settled in just as I was recording my outro.

One to read: A Beginner’s Guide to IPFS


Ever wondered about IPFS (the “InterPlanetary File System”)? It’s a new way to share and store content which doesn’t rely on a central server (e.g. Facebook, Google, Digital Ocean, or your home NAS) but instead uses a system like BitTorrent combined with published records to keep the content in the system.

If your host (where the original content is stored) goes down, the content is also cached on other nodes which have visited your site.

These caches are cleared over time, so they’re suitable for covering short outages, or you can have other nodes “pin” your content (and this can be offered as a paid solution to fund hosts).
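
As a rough sketch of how that looks in practice (assuming the go-ipfs command-line client; the CID below is a placeholder):

  # Add a file to your local node; this prints its content ID (CID)
  ipfs add index.html
  # added QmExampleCid... index.html

  # On another node, "pin" that CID so it's never garbage-collected from the cache
  ipfs pin add QmExampleCid...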

IPFS is great at hosting static content, but how do you deal with dynamic content? That’s where PubSub comes into play (which isn’t covered in this article). There’s a database service called Orbit-DB which sits on IPFS and uses PubSub to sync data content across the network.

It’s looking interesting, especially in light of the announcement from Cloudflare about the introduction of their IPFS gateway.

It’s looking good for IPFS!

This was automatically posted from my RSS Reader, and may be edited later to add commentary.

“You can’t run multiple commands in sudo” – and how to work around this

At work, we share tips and tricks, and one of my colleagues recently called me out on the following stanza I posted:

I like this [ansible] one for Debian based systems:
  - name: "Apt update, Full-upgrade, autoremove, autoclean"
    become: yes
    apt:
      upgrade: full
      update_cache: yes
      autoremove: yes
      autoclean: yes

And if you’re trying to figure out how to do that in Shell:
apt-get update && apt-get dist-upgrade -y && apt-get autoremove -y && apt-get autoclean -y

His response was “Surely you’re not logging into bash as root”. I said “I normally sudo -i as soon as I’ve logged in. I can’t recall offhand how one does a sudo for a string of command && command statements”

Well, as a result of this, I looked into it. Here’s one comment from the first Stack Overflow page I found:

You can’t run multiple commands from sudo – you always need to trick it into executing a shell which may accept multiple commands to run as parameters

So here are a few options on how to do that:

  1. sudo -s whoami \; whoami
  2. sudo sh -c "whoami ; whoami"
  3. But, my favourite is from another answer:

    An alternative using eval, so avoiding the use of a subshell: sudo -s eval 'whoami; whoami'

Why do I prefer the last one? Well, I already use eval for other purposes – mostly for starting my ssh-agent over SSH, like this: eval `ssh-agent` ; ssh-add
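
To see why the trick is needed at all, here’s a quick comparison (output shown as comments; “youruser” stands in for whichever account you’re logged in as):

  # Only the first command runs under sudo; your shell runs the second as you
  sudo whoami; whoami
  # root
  # youruser

  # Wrapping both commands in a single shell runs the whole string as root
  sudo sh -c "whoami; whoami"
  # root
  # root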

One to listen to: “Jason Scott is Breaking Out in Archives”

http://sysadministrivia.com/episodes/S3E13

This is a fascinating episode. Jason Scott works for the Internet Archive, personally hosts a large archive of historic BBS text files, and is an engaging interviewee (plus, he talks a lot :) )

If you’re interested in how archive.org works (including how they choose disks for their storage, how content is processed and managed, whether they keep spam, how many hours of “sermons” have been stored, etc), or what happens when you get sued for USD2,000,000,000, it’s well worth a listen.

One to read/watch: IPsec and IKE Tutorial

Ever been told that IPsec is hard? Maybe you’ve seen it yourself? Well, Paul Wouters and Sowmini Varadhan recently co-delivered a talk at the NetDev conference, and it’s really good.

Sowmini’s and Paul’s slides are available here: https://www.files.netdevconf.org/d/a18e61e734714da59571/

A complete recording of the tutorial is here. Sowmini’s part of the tutorial (which starts first in the video) is quite technically complex, looking specifically at the way that Linux handles the packets through the kernel. I’ve focused more on Paul’s part of the tutorial (starting at 26m23s)… but my interest was piqued from 40m40s, when he starts to actually show how “easy” configuration is. There are two quick run-throughs of typical host-to-host IPsec and subnet-to-subnet IPsec tunnels.

A key message for me, which previously hadn’t been at all clear in IPsec using {free,libre,open}swan, is that they refer to Left and Right as being one party and the other… but the node itself works out whether it’s “left” or “right”, so the *SAME CONFIG* can be used on both machines. GENIUS.

Also, when you’re looking at the config files, anything prefixed with an @ symbol is an identifier that is used as-is, and doesn’t need resolving (e.g. via DNS) to something else.
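
To illustrate both points, here’s a rough sketch of what a host-to-host connection in a libreswan-style ipsec.conf might look like (the addresses and IDs are placeholders, and a matching pre-shared key in ipsec.secrets is assumed):

  conn host-to-host
      # The same file works on both machines: each node checks which of
      # these addresses it owns to decide whether it is "left" or "right"
      left=192.0.2.1
      right=198.51.100.2
      # The @ prefix marks an ID that is used as-is, never resolved via DNS
      leftid=@server1
      rightid=@server2
      authby=secret
      auto=start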

It’s well worth a check-out, and it’s inspired me to take another look at IPsec for my personal VPNs :)

I should note that towards the end, Paul tried to run a selection of demonstrations of Opportunistic Encryption (which basically is a way to enable encryption between two nodes, even if you don’t have a pre-established VPN with them). Because of issues with the conference wifi, plus the fact that what he’s demoing isn’t exactly production-grade yet, it doesn’t really work right, and much of the rest of the video (from around 1h10m) is him trying to get that working while attendees are running through the lab, and having conversations about those labs with the attendees.

TCPDump Made Easier Parody Book Cover, with the subtitle "Who actually understands all those switches?"

One to use: tcpdump101.com

I’m sure that anyone doing operational work has been asked at some point if they can run a “TCPDump” on something, or if they can get a “packet capture” – if you have, this tool (as spotted on the Check Point community sites) might help you!

https://tcpdump101.com

Using simple drop-down fields for filters and options, this tool tells you how to run each of the packet-capturing commands for common firewall products (FortiGate, ASA, Check Point) and the more generic tcpdump tool (indicated by a Linux penguin, but it runs on all major desktop and server OSs, as well as rooted Android devices).
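
For the generic tcpdump case, the sort of command it walks you towards looks something like this (the interface, host and port are placeholders):

  # Capture full packets (-s 0) to a file (-w), without name resolution (-nn)
  tcpdump -i eth0 -nn -s 0 -w /tmp/capture.pcap host 192.0.2.10 and port 443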

Well worth a check out!