One to read: “Export Check Point firewall logs to a readable format”
This was automatically posted from my RSS Reader, and may be edited later to add commentary.
I was very fortunate yesterday to spend some time with two Check Point engineering staff. Check Point make the high-end firewall products that I’m using at work. During the conversation, I mentioned two issues I’ve had during automated builds of Check Point appliances…
mgmt_cli -r true add host name "New Host 1" ip-address "192.0.2.1"
mgmt_cli -r true login domain "System Data" > id.txt
mgmt_cli -s id.txt set api-settings accepted-api-calls-from "all ip addresses that can be used for gui clients" automatic-start true
mgmt_cli -s id.txt publish
api restart
My sincere thanks to Javier and Uri for their guidance. For those wondering about those API calls – see these links: Using the -r flag and configuring the API for remote access.
This week sees the publication of The Admin Admin Podcast episode #55, in which I guest present (and guest introduce!) about network infrastructure. I also answer some questions about using certbot (the client for the free TLS certificate provider Let’s Encrypt), about where to put script files on Linux, and a bit about MTU (Maximum Transmission Unit) – although that’s a bit outside my area of expertise, so if I got it wrong, let them know!
The guys who host the Admin Admin podcast are really nice, and cover a really great range of subjects about working as a server or network administrator. They have a chat room on Telegram, so if you’re interested in being an admin, it’s worth having a listen, and then maybe joining the chat room!
Do you use this pattern in your sh/bash/zsh/etc-sh scripts?
cat somefile | grep 'some string' | awk '{print $2}'
If so, you can replace that as follows:
cat somefile | awk '/some string/ {print $2}'
Or how about this?
grep -v 'something' < somefile | awk '{print $0}'
Try this:
awk '! /something/ {print $0}' < somefile
Ooo OK, how about if you want to get all the actions performed by users where the ISO-formatted date (Y-m-d) falls on the first day of the month, but where you don’t also want to match January dates (unless you’re talking about the first of January)…
# echo 'BLOGGSF 2001-01-23 SOME_ACTION' | awk '$2 ~ /-01$/ {print $1, $3}'
(no output)
# echo 'BLOGGSF 2002-02-01 SOME_ACTION' | awk '$2 ~ /-01$/ {print $1, $3}'
BLOGGSF SOME_ACTION
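To try both tricks side by side, here’s a quick sketch using made-up sample data (the file path and log lines are hypothetical):

```shell
# Two hypothetical audit-log lines: user, ISO date, action
printf 'BLOGGSF 2001-01-23 SOME_ACTION\nJONESA 2002-02-01 OTHER_ACTION\n' > /tmp/audit.log

# grep + awk collapsed into one awk call: only dates whose day part is 01
awk '$2 ~ /-01$/ {print $1, $3}' /tmp/audit.log
# → JONESA OTHER_ACTION

# grep -v replaced by a negated awk pattern
awk '! /OTHER/ {print $0}' /tmp/audit.log
# → BLOGGSF 2001-01-23 SOME_ACTION
```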
This is so cool! Thanks to the tutorials “SKIP grep, use AWK” and the follow-up tutorials starting here…
Because of templates I was building in my post “Today I learned… Ansible Include Templates”, I thought you could repeat the same sections over again. Here’s a snippet of something like what I’d built (after combining lots of templates together):
#cloud-config
packages:
  - iperf
  - git
write_files:
  - content: {% include 'files/public_key.j2' %}
    path: /root/.ssh/authorized_keys
    owner: root:root
    permissions: '0600'
  - content: {% include 'files/private_key.j2' %}
    path: /root/.ssh/id_rsa
    owner: root:root
    permissions: '0600'
packages:
  - byobu
write_files:
  - content: |
      #!/bin/bash
      git clone {{ test_scripts }} /root/iperf_scripts
      bash /root/iperf_scripts/run_test.sh
    path: /root/run_test
    owner: root:root
    permissions: '0700'
runcmd:
  - /root/run_test
I’d get *bits* of it to run – basically, the last file, the last package and the last runcmd… but not all of it.
Turns out, cloud-init doesn’t reassemble repeated fragments for you. Instead, you need to merge them yourself, so that all the write_files items and all the packages items live under a single instance of each key.
Which makes sense when you think about what it’s doing: each of those parent lines defines a top-level key, and if you repeat a key, the YAML parser only keeps the last definition.
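So the snippet above needs collapsing into single packages, write_files and runcmd keys, something like this (note that cloud-init’s write_files key for the mode is permissions):

```yaml
#cloud-config
packages:
  - iperf
  - git
  - byobu
write_files:
  - content: {% include 'files/public_key.j2' %}
    path: /root/.ssh/authorized_keys
    owner: root:root
    permissions: '0600'
  - content: {% include 'files/private_key.j2' %}
    path: /root/.ssh/id_rsa
    owner: root:root
    permissions: '0600'
  - content: |
      #!/bin/bash
      git clone {{ test_scripts }} /root/iperf_scripts
      bash /root/iperf_scripts/run_test.sh
    path: /root/run_test
    owner: root:root
    permissions: '0700'
runcmd:
  - /root/run_test
```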
Today I have been debugging why my cloud-init scripts weren’t triggering in my OpenStack environment.
I realised that something was wrong when I tried to use the noVNC console[1] with a password I’d set… no luck. So, next I ran a command to review the console logs[2], and saw a message (now, sadly, long gone – so I can’t even include it here!) suggesting there was an issue parsing my YAML file. Uh oh!
I’m using Ansible’s os_server module, with templates to complete the userdata field, which in turn gets populated as cloud-init scripts… and so I had two ways to debug this: prefix my Ansible playbook with a few debug commands (but that can get messy), or SSH into the box and look through the logs. I knew I could SSH in, so cloud-init had partially fired – it just wasn’t parsing what I’d submitted. I had a quick look around, and found a post which mentioned debugging cloud-init. It mentioned that there’s a path (/var/lib/cloud/instances/$UUID/) you can mess around in, removing some files to “fool” cloud-init into thinking it hasn’t been run… but, I reasoned, why not just see what’s there?
And in there, was the motherlode – user-data.txt…. bingo.
In the Jinja2 template I was using to populate the userdata, I’d referenced another file, again using a template. It turns out that included template needs a trailing newline at the end; otherwise, its content and whatever follows it all runs together on one line.
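You can see the effect with a quick shell simulation (the fragment files and their contents here are made up for illustration):

```shell
# A fragment written WITHOUT a trailing newline, like my broken template:
printf 'ssh-rsa AAAAB3...key-material' > /tmp/fragment_a.txt
# The next fragment, which should start on its own line:
printf 'path: /root/.ssh/id_rsa\n' > /tmp/fragment_b.txt

# Concatenate them, as the template include effectively does:
cat /tmp/fragment_a.txt /tmp/fragment_b.txt
# → ssh-rsa AAAAB3...key-materialpath: /root/.ssh/id_rsa
```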
Whew!
This does concern me a little, as I had previously been using this stanza to “simply” change the default user password to something a little less complicated:
#cloud-config
ssh_pwauth: True
chpasswd:
  list: |
    ubuntu:{{ default_password }}
  expire: False
But now that I look at the documentation, I realise you can also specify that as a pre-hashed value (in which case, you would suffix the default_password item above with |password_hash('sha512')), which makes it all better again!
[1] Running openstack --os-cloud cloud_a console url show servername gives you a URL to visit that has an HTML5-based VNC-ish client. Note that “cloud_a” and “servername” should be replaced by your clouds.yml reference and the server name or server ID you want to connect to.
[2] Likewise, openstack --os-cloud cloud_a console log show servername gives you the output of the boot sequence (e.g. dmesg plus the normal startup commands and, finally, cloud-init). It can be useful. Equally, they’re logs… which means there’s a lot to wade through!
Thanks to my colleague Simon (@sipart on Twitter), I spotted this post (and its companion GitHub repository) which explains how to do test-driven development in Ansible.
Essentially, you create two roles – one to test (the author referred to it as “validate”) and one to actually do the thing you want done (in the author’s case, “add_vlan”).
In the testing role, you’d have the following layout:
/path/to/roles/testing/tasks/main.yml
/path/to/roles/testing/tasks/SOMEFEATUREtest.yml
In the main.yml file, you have a simple stanza:
---
- name: Include all the test files
  include: "{{ outer_item }}"
  with_fileglob: "/path/to/roles/validate/tasks/*test.yml"
  loop_control:
    loop_var: outer_item
I’m sure that “with_fileglob” line could be improved to not actually need a full path… anyway
Then in your YourFeature_test.yml file, you do things like this:
---
- name: "Pseudocode in here. Use real modules for your testing!!"
  get_vlan_config:
    filter_for: needle_vlan
  register: haystack_var

- assert:
    that: "'{{ needle_item }}' in haystack_var"
When you run the play of the role the first time, the response will be “failed” (because “needle_vlan” doesn’t exist). Next do the “real” play of the role (so, in the author’s case, add_vlan) which creates the vlan. Then re-run the test role, your response should now be “ok”.
I’d probably script this so that it goes:
1. Stand up the clean “set_testing” environment.
2. Run the test role (expecting failures).
3. Run the “real” role (e.g. add_vlan).
4. Re-run the test role (expecting everything to come back “ok”).
5. Tear the test environment back down.
The benefit to doing it that way is that you “know” your tests aren’t running if the environment doesn’t have the “set_testing” thing in place, you get to run all your tests in a “clean room”, and then you clear it back down again afterwards, leaving it clear for the next pass of your automated testing suite.
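As a sketch, that wrapper might look like this (the playbook names are hypothetical, and the run function just echoes what a real version would execute with ansible-playbook):

```shell
#!/bin/bash
# Hypothetical wrapper for the test/do/test cycle. The run function only
# echoes the command; swap it for a real ansible-playbook invocation.
run() { echo "ansible-playbook $1"; }

run set_testing.yml        # 1. build the clean-room environment
run test.yml               # 2. first pass - expect "failed"
run add_vlan.yml           # 3. do the real work
run test.yml               # 4. second pass - expect "ok"
run teardown_testing.yml   # 5. clear down for the next run
```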
Fun!
I am building OpenStack servers with the Ansible os_server module. One of its fields will accept a very long string (userdata). Typically, I end up with a giant blob of unreadable build script in this field…
Today I learned that I can use this:
---
- name: "Create Server"
  os_server:
    name: "{{ item.value.name }}"
    state: present
    availability_zone: "{{ item.value.az.name }}"
    flavor: "{{ item.value.flavor }}"
    key_name: "{{ item.value.az.keypair }}"
    nics: "[{%- for nw in item.value.ports -%}{'port-name': '{{ ProjectPrefix }}{{ item.value.name }}-Port-{{nw.network.name}}'}{%- if not loop.last -%}, {%- endif -%} {%- endfor -%}]" # Ignore this line - it's complicated for a reason
    boot_volume: "{{ ProjectPrefix }}{{ item.value.name }}-OS-Volume" # Ignore this line also :)
    terminate_volume: yes
    volumes: "{%- if item.value.log_size is defined -%}[{{ ProjectPrefix }}{{ item.value.name }}-Log-Volume]{%- else -%}{{ omit }}{%- endif -%}"
    userdata: "{% include 'templates/userdata.j2' %}"
    auto_ip: no
    timeout: 65535
    cloud: "{{ cloud }}"
  with_dict: "{{ Servers }}"
This file (/path/to/ansible/playbooks/servers.yml) is referenced by my play.yml (/path/to/ansible/play.yml) via an include, so the template reference there is in my templates directory (/path/to/ansible/templates/userdata.j2).
That template can also then reference other template files itself (using {% include 'templates/some_other_file.extension' %}), so you can have nicely complex userdata fields with loads and loads of detail, and not make the actual play complicated (or at least, no more than it already needs to be!)
Last night, I met up with my friend Tim Dobson to talk about Ansible. I’m not an expert, but I’ve done a lot of Ansible recently, and he wanted some pointers.
He already had some general knowledge, but wanted some pointers on “other things you can do with Ansible”, so here are a couple of the things we did.
---
- hosts: localhost
  tasks:
    - set_fact:
        my_run_state: "{% if lookup('env', 'runstate') == '' %}{{ default_run_state|default('prod') }}{% else %}{{ lookup('env', 'runstate')|lower() }}{% endif %}"

    - debug: msg="Doing prod"
      when: my_run_state == 'prod'

    - debug: msg="Doing something else"
      when: my_run_state != 'prod'
With this, you can define a default run state (prod), override it with a group or host var (if you have, for example, a staging service or proof of concept space), or use your Environment variables to do things. In the last case, you’d execute this as follows:
runstate=preprod ansible-playbook site.yml
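For the group or host var case, the override is just a one-line vars file, something like this (the “staging” group name is hypothetical, and assumes a standard inventory layout):

```yaml
# group_vars/staging.yml - overrides the default for hosts in the staging group
default_run_state: staging
```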
---
- name: Get facts from your hosts
  tags: configure
  hosts: all

- name: Tell me all the variable data you've collected
  tags: dump
  hosts: localhost
  tasks:
    - name: Show data
      tags: show
      debug:
        var: item
      with_items: hostvars
When you then run
ansible-playbook test.yml --list-tags
you get:
playbook: test.yml
play #1 (all): Get facts from your hosts TAGS: [configure]
TASK TAGS: [configure]
play #2 (localhost): Tell me all the variable data you've collected TAGS: [dump]
TASK TAGS: [dump, show]
Now you can run ansible-playbook test.yml -t configure
or ansible-playbook test.yml --skip-tags configure
To show how useful this can be, here’s the output from the “--list-tags” I’ve got on a project I’m doing at work:
playbook: site.yml
play #1 (localhost): Provision A-Side Infrastructure TAGS: [Functional_Testing,A_End]
TASK TAGS: [A_End, EXCLUDE_K5_FirewallManagers, EXCLUDE_K5_Firewalls, EXCLUDE_K5_Networks, EXCLUDE_K5_SecurityGroups, EXCLUDE_K5_Servers, Functional_Testing, K5_Auth, K5_FirewallManagers, K5_Firewalls, K5_InterProjectLinks, K5_Networks, K5_SecurityGroups, K5_Servers]
play #2 (localhost): Provision B-Side Infrastructure TAGS: [Functional_Testing,B_End]
TASK TAGS: [B_End, EXCLUDE_K5_Firewalls, EXCLUDE_K5_Networks, EXCLUDE_K5_SecurityGroups, EXCLUDE_K5_Servers, Functional_Testing, K5_Auth, K5_FirewallManagers, K5_Firewalls, K5_InterProjectLinks, K5_Networks, K5_SecurityGroups, K5_Servers]
play #3 (localhost): Provision InterProject Links - Part 1 TAGS: [Functional_Testing,InterProjectLink]
TASK TAGS: [EXCLUDE_K5_InterProjectLinks, Functional_Testing, InterProjectLink, K5_InterProjectLinks]
play #4 (localhost): Provision InterProject Links - Part 2 TAGS: [Functional_Testing,InterProjectLink]
TASK TAGS: [EXCLUDE_K5_InterProjectLinks, Functional_Testing, InterProjectLink, K5_InterProjectLinks]
play #5 (localhost): Provision TPT environment TAGS: [Performance_Testing]
TASK TAGS: [EXCLUDE_K5_FirewallManagers, EXCLUDE_K5_Firewalls, EXCLUDE_K5_Networks, EXCLUDE_K5_SecurityGroups, EXCLUDE_K5_Servers, K5_Auth, K5_FirewallManagers, K5_Firewalls, K5_InterProjectLinks, K5_Networks, K5_SecurityGroups, K5_Servers, Performance_Testing, debug]
This then means that if I get a build fail part-way through, or if I’m debugging just a particular part, I can run this: ansible-playbook site.yml -t Performance_Testing --skip-tags EXCLUDE_K5_Firewalls,EXCLUDE_K5_SecurityGroups,EXCLUDE_K5_Networks
http://www.radiolab.org/story/null-and-void/
This podcast from Radiolab is intriguing. The first half had me hoping for the underdog, then there’s an interview with a very cross older gentleman, who’s clearly had enough of not having his voice heard…. At which point, I realise what is proposed could “burn it [American Civilisation] all down”… And suddenly I don’t want the underdog to win.
And the reason I think this is “one to listen to” is because of that guy. Basically, if you fight so passionately about something that you’re ready to hurt someone over that thing, you need to take a step back and check it’s the right thing to be fighting for. Chances are, it probably isn’t.
This podcast talks about a concept in US (and probably UK) law called “Jury Nullification”, where even if the law clearly defines some act or inaction as prohibited, the jury can express their distaste for that law by returning a “Not Guilty” verdict. If that verdict comes down often enough, it “might” send a message to the law makers that there’s something wrong with that particular law, and perhaps it will be re-written.