"Accept a New SSH Host Key" by "Linux Screenshots" on Flickr

Purposefully reducing SSH security when performing builds of short-lived devices

I’ve recently been developing a few builds of things at home using throw-away virtual machine sessions, and I found myself repeatedly having to accept, and then later remove, SSH host keys for machines I knew wouldn’t be around for long. It’s not a huge disaster, but it’s an annoyance.

This annoyance comes from the fact that SSH uses “Trust On First Use” (or TOFU) to protect you against a “Man-in-the-Middle” attack (or against the host having been replaced with something malicious). For infrastructure with a long lifetime (anything more than a couple of days), that makes sense – you’re building something you want to trust hasn’t been compromised! That said, if you’re building new virtual machines, testing something and then rebuilding them to prove your script worked… well, it’s not so useful!

So, in this case, if you’ve got a designated build network, or if you trust, implicitly, your normal working network, this is a dead simple work-around.

In $HOME/.ssh/config or in $HOME/.ssh/config.d/local (if you’ve followed my previous advice to use separate ssh config files), add the following stanza:

# RFC1918
Host 10.* 172.16.* 172.17.* 172.18.* 172.19.* 172.20.* 172.21.* 172.22.* 172.23.* 172.24.* 172.25.* 172.26.* 172.27.* 172.28.* 172.29.* 172.30.* 172.31.* 192.168.*
        StrictHostKeyChecking no
        UserKnownHostsFile /dev/null

# RFC5737 and RFC2544
Host 192.0.2.* 198.51.100.* 203.0.113.* 198.18.* 198.19.*
        StrictHostKeyChecking no
        UserKnownHostsFile /dev/null

These stanzas disable host key checking for any IP address in the RFC1918 private ranges (10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16), the RFC5737 documentation ranges (192.0.2.0/24, 198.51.100.0/24 and 203.0.113.0/24), and the RFC2544 range (198.18.0.0/15), which is reserved for inter-network benchmarking.
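If you only need this occasionally, you can pass the same options on the command line for a one-off connection instead (the target address here is just an example):

ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null user@192.168.0.10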

Alternatively, if you always use a DDNS provider for short-lived assignments (for example, I use davd/docker-ddns) then instead, you can use this stanza:

Host *.ddns.example.com
        StrictHostKeyChecking no
        UserKnownHostsFile /dev/null

(Assuming, of course, you use ddns.example.com as your DDNS address!)

Featured image is “Accept a New SSH Host Key” by “Linux Screenshots” on Flickr and is released under a CC-BY license.

A screenshot of the Wordpress site, showing updates available

wp-upgrade.sh – A simple tool to update and upgrade WordPress components via cron

A simple tool to update and upgrade WordPress components

A few years ago, I hosted my blog on Dreamhost. They’ve customized something inside the blog which means it doesn’t automatically update itself. I’ve long since moved this blog off to my own hosting, but I can’t figure out which thing it was they changed, and I’ve got so much content and stuff in here, I don’t really want to mess with it.

Anyway, I like keeping my systems up to date, and I hate logging into a blog and finding updates are pending, so I wrote this script. It uses wp-cli which I have installed to /usr/local/bin/wp as per the install guide. This is also useful if you’re hosting your site in such a way that you can’t make changes to core or plugins from the web interface.

This script updates:

  1. All core files (lines core update-db, core update and language core update)
  2. All plugins (lines plugin update --all and language plugin update --all)
  3. All themes (lines theme update --all and language theme update --all)

To remove any part of this script, just delete those lines, including the /usr/local/bin/wp and --quiet && \ fragments!
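For reference, here’s a minimal sketch of the sort of thing wp-upgrade.sh does, reconstructed from the steps above – so treat it as illustrative rather than the exact script:

#!/bin/bash
# Sketch of wp-upgrade.sh - runs all WordPress updates via wp-cli.
# Usage: wp-upgrade.sh /path/to/wordpress
cd "$1" || exit 1
/usr/local/bin/wp core update-db --quiet && \
/usr/local/bin/wp core update --quiet && \
/usr/local/bin/wp language core update --quiet && \
/usr/local/bin/wp plugin update --all --quiet && \
/usr/local/bin/wp language plugin update --all --quiet && \
/usr/local/bin/wp theme update --all --quiet && \
/usr/local/bin/wp language theme update --all --quiet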

I then run sudo -u www-data crontab -e (replace www-data with the account that actually owns the blog files – you can find it with ls -l /var/www/html/, adjusting the path to wherever your blog lives) and add the bottom line to that crontab file (the rest is just comments to remind you what the fields are!)

#                                         day of month [1-31]
#                                             month [1-12]
#                                                 day of week [1-6 Mon-Sat, 0/7 Sun]
# minute   hour                                         command
1          1,3,5,7,9,11,13,15,17,19,21,23 *   *   *     /usr/local/bin/wp-upgrade.sh /var/www/jon.sprig.gs/blog

This means that every other hour, at 1 minute past the hour, every day, every month, I run the update :)

If you’ve got email set up for this host and user, you’ll get an email whenever it upgrades a component, too.

"Monitors" by "Jaysin Trevino" on Flickr

Using monit to monitor Docker Containers

As I only run a few machines with services that matter on them (notably, my home server and my web server), I don’t need a full-on monitoring service, so instead rely on a system called monit.

Monit is an open source piece of software, used to monitor (see, it’s easily named 😄) and, if possible, remediate issues with things it sees wrong.

I use this for watching whether particular services are running (and if not, restart them), for whether the ink in my printer is empty, and to monitor the free space and SMART status on my disks.

Today I noticed that a Docker container had stopped, and I’d not noticed. It wasn’t a big thing, but it gnawed at me, so I had a bit of a look around to see what I can find about this.

I found this blog post, titled “Monitoring Docker Containers with Monit”, from 2014, which suggested monitoring the result from docker top… and would you believe it, that’s a valid trick 🙂

So, here’s what I’m doing! Each container has its own file called /etc/monit/scripts/check_container_<container-name>.sh which has just this command in it:

#!/bin/bash
# Exits 0 if the container is running, non-zero if it isn't.
docker top "<container-name>"
exit $?

Note that you replace <container-name> in both the filename and the script itself with the name of the container – for example, the container hello-world would be monitored with the file check_container_hello-world.sh, and the line in that file would say docker top "hello-world".

I then have a file in /etc/monit/conf.d/ called check_container_<container-name> which has this content:

CHECK PROGRAM <container-name> WITH PATH /etc/monit/scripts/check_container_<container-name>.sh
  START PROGRAM = "/usr/bin/docker start <container-name>"
  STOP PROGRAM = "/usr/bin/docker stop <container-name>"
  IF status != 0 FOR 3 CYCLES THEN RESTART
  IF 2 RESTARTS WITHIN 5 CYCLES THEN UNMONITOR

I then ensure that in /etc/monit/monitrc the line “include /etc/monit/conf.d/*” is included and not commented out, and then restart monit with systemctl restart monit.
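As I have a few containers, I put together a quick helper to generate both files for every container Docker knows about. This is just a sketch under the assumptions above (same paths, run as root), so adjust to taste:

#!/bin/bash
# Generate a monit check script and config stanza for each container.
for container in $(docker ps --all --format '{{.Names}}'); do
  script="/etc/monit/scripts/check_container_${container}.sh"
  printf '#!/bin/bash\ndocker top "%s"\nexit $?\n' "${container}" > "${script}"
  chmod +x "${script}"
  cat > "/etc/monit/conf.d/check_container_${container}" <<EOF
CHECK PROGRAM ${container} WITH PATH ${script}
  START PROGRAM = "/usr/bin/docker start ${container}"
  STOP PROGRAM = "/usr/bin/docker stop ${container}"
  IF status != 0 FOR 3 CYCLES THEN RESTART
  IF 2 RESTARTS WITHIN 5 CYCLES THEN UNMONITOR
EOF
done
systemctl restart monit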

Featured image is “Monitors” by “Jaysin Trevino” on Flickr and is released under a CC-BY license.

"Field Notes - Sweet Tooth" by "The Marmot" on Flickr

Multi-OS builds in AWS with Terraform – some notes from the field!

Late edit: 2020-05-22 – Updated with better search criteria from colleague conversations

I’m building a proof of concept for … well, a product that needs testing on several different Linux and Windows variants on AWS and Azure. I’m building this environment with Terraform, and it’s thrown me a few curve balls, so I thought I’d document the issues I’ve had!

The versions of distributions I have tested are the latest releases of each of these images at, or near, the time of writing. The major version listed is the earliest I have tested, so no assumptions are made about earlier versions; equally, releases that come out after this post may not match these notes!

(Fujitsu Staff – please contact me on my work email address for details on how to get the internal AMIs of our builds of these images 😄)

Linux Distributions

On the whole, I tend to be much more confident and knowledgeable about Linux distributions. I’ve also done far more installs of each of these!

Almost all of these installs are Free of Charge, with the exception of Red Hat Enterprise Linux, which requires a subscription fee, and this can be “Pay As You Go” or “Bring Your Own License”. These sorts of things are arranged for me, so I don’t know how easy or hard it is to organise these licenses!

These builds all use cloud-init, via either a cloud-init yaml script, or some shell scripting language (usually accepted to be bash). If this script fails to execute, you will find your user-data file in /var/lib/cloud/instance/scripts/part-001. If this is a shell script then you will be able to execute it by running that script as your root user.
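For example, assuming your user-data was a bash script, you could re-run it with tracing like this:

sudo bash -x /var/lib/cloud/instance/scripts/part-001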

Amazon Linux 2 or Amzn2

Amazon Linux 2 is the “preferred” distribution for Amazon Web Services (AWS) (surprisingly enough). It is based on Red Hat Enterprise Linux (RHEL), and many of the instructions you’ll want to run to install software will use RHEL-based instructions. This platform is not available outside the AWS ecosystem, as far as I can tell, although you might be able to run it on-prem.

Software packages are limited in this distribution, so any “extra” features require the installation of the “EPEL” repository, by executing the command sudo amazon-linux-extras install epel and then using the yum command to install further packages. I needed nginx for part of my build, and this was only in EPEL.
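In my case, that looked something like this:

sudo amazon-linux-extras install epel
sudo yum install nginx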

Amzn2 AMI Lookup

data "aws_ami" "amzn2" {
  most_recent = true

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-2.0.*-gp2"]
  }

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }

  filter {
    name   = "state"
    values = ["available"]
  }

  owners = ["amazon"] # Canonical
}
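Each of these AMI lookups is then consumed the same way; a minimal illustrative sketch (the resource name and instance type are my choices, not from the original build):

resource "aws_instance" "example" {
  ami           = data.aws_ami.amzn2.id
  instance_type = "t3.micro"
}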

Amzn2 User Account

Amazon Linux 2 images under AWS have a default “ec2-user” user account. sudo will allow escalation to Root with no password prompt.

Amzn2 AWS Interface Configuration

The primary interface is called eth0. Network Manager is not installed. To manage the interface, you need to edit /etc/sysconfig/network-scripts/ifcfg-eth0 and apply changes with ifdown eth0 ; ifup eth0.

Amzn2 user-data / Cloud-Init Troubleshooting

I’ve found the output from user-data scripts appearing in /var/log/cloud-init-output.log.

CentOS 7

For starters, AWS doesn’t have an official CentOS8 image, so I’m a bit stymied there! In fact, as far as I can make out, CentOS is only releasing ISOs for builds now, and not any cloud images. There’s an open issue on their bug tracker which seems to suggest that it’s not going to get any priority any time soon! Blimey.

This image may require you to “subscribe” to the image (particularly if you have a “private marketplace”), but this will be requested of you (via a URL provided on screen) when you provision your first machine with this AMI.

Like Amzn2, CentOS7 does not have nginx available out of the box, and like Amzn2, installing the EPEL repository is not a difficult task: CentOS7 bundles a package for it, installed by running yum install epel-release. After this is installed, you have the “full” range of software in EPEL available to you.

CentOS AMI Lookup

data "aws_ami" "centos7" {
  most_recent = true

  filter {
    name   = "name"
    values = ["CentOS Linux 7*"]
  }

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }

  filter {
    name   = "state"
    values = ["available"]
  }

  owners = ["aws-marketplace"]
}

CentOS User Account

CentOS7 images under AWS have a default “centos” user account. sudo will allow escalation to Root with no password prompt.

CentOS AWS Interface Configuration

The primary interface is called eth0. Network Manager is not installed. To manage the interface, you need to edit /etc/sysconfig/network-scripts/ifcfg-eth0 and apply changes with ifdown eth0 ; ifup eth0.

CentOS Cloud-Init Troubleshooting

I’ve run several different user-data bash scripts against this system, and the logs from these scripts appear in the default syslog file (/var/log/messages) or by running journalctl -xefu cloud-init. They do not appear in /var/log/cloud-init-output.log.

Red Hat Enterprise Linux (RHEL) 7 and 8

Red Hat has both RHEL7 and RHEL8 images in the AWS marketplace. The Proof Of Value (POV) I was building was only looking at RHEL7, so I didn’t extensively test RHEL8.

Like Amzn2 and CentOS7, RHEL7 needs EPEL installing to have additional packages installed. Unlike Amzn2 and CentOS7, you need to obtain the EPEL package from the Fedora Project. Do this by executing these two commands:

wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install epel-release-latest-7.noarch.rpm

After this is installed, you’ll have access to the broader range of software that you’re likely to require. Again, I needed nginx, and this was not available to me with the stock install.

RHEL7 AMI Lookup

data "aws_ami" "rhel7" {
  most_recent = true

  filter {
    name   = "name"
    values = ["RHEL-7*GA*Hourly*"]
  }

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }

  filter {
    name   = "state"
    values = ["available"]
  }

  owners = ["309956199498"] # Red Hat
}

RHEL8 AMI Lookup

data "aws_ami" "rhel8" {
  most_recent = true

  filter {
    name   = "name"
    values = ["RHEL-8*HVM-*Hourly*"]
  }

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }

  filter {
    name   = "state"
    values = ["available"]
  }

  owners = ["309956199498"] # Red Hat
}

RHEL User Accounts

RHEL7 and RHEL8 images under AWS have a default “ec2-user” user account. sudo will allow escalation to Root with no password prompt.

RHEL AWS Interface Configuration

The primary interface is called eth0. Network Manager is installed, and the eth0 interface has a profile called “System eth0” associated to it.

RHEL Cloud-Init Troubleshooting

In RHEL7, as per CentOS7, logs from user-data scripts appear in the general syslog file (in this case, /var/log/messages) or by running journalctl -xefu cloud-init. They do not appear in /var/log/cloud-init-output.log.

In RHEL8, logs from user-data scripts now appear in /var/log/cloud-init-output.log.

Ubuntu 18.04

At the time of writing this, the vendor whose product I was testing categorically stated that the newest Ubuntu LTS, Ubuntu 20.04 (Focal Fossa), would not be supported until some time after our testing was complete. As such, I spent no time at all researching or planning to use this image.

Ubuntu is the only non-RPM based distribution in this test, instead being based on the Debian project’s DEB packages. As such, its range of packages is much wider. That said, for the project I was working on, I required a later version of nginx than was available in the Ubuntu repositories, so I had to use the nginx Personal Package Archive (PPA). To do this, I found the official PPA for the nginx project and followed the instructions there. Generally speaking, doing this could risk your support from the distribution vendor, as the PPA isn’t certified or supported by them… but I needed that version, so I had to do it!
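From memory, the process boiled down to something like the following, although the exact PPA path here is an assumption on my part, so follow the PPA’s own instructions:

sudo add-apt-repository ppa:nginx/stable
sudo apt update
sudo apt install nginx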

Ubuntu 18.04 AMI Lookup

data "aws_ami" "ubuntu1804" {
  most_recent = true

  filter {
    name   = "name"
    values = ["*ubuntu*18.04*"]
  }

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }

  filter {
    name   = "state"
    values = ["available"]
  }

  owners = ["099720109477"] # Canonical
}

Ubuntu 18.04 User Accounts

Ubuntu 18.04 images under AWS have a default “ubuntu” user account. sudo will allow escalation to Root with no password prompt.

Ubuntu 18.04 AWS Interface Configuration

The primary interface is called eth0. Network Manager is not installed, and instead Ubuntu uses Netplan to manage interfaces. The file to manage the interface defaults is /etc/netplan/50-cloud-init.yaml. If you struggle with this method, you may wish to install ifupdown and define your configuration in /etc/network/interfaces.
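For reference, a minimal netplan file for a DHCP-configured eth0 looks something like this (illustrative only – your cloud-init generated file will have more in it), applied with sudo netplan apply:

# e.g. /etc/netplan/50-cloud-init.yaml
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: true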

Ubuntu 18.04 Cloud-Init Troubleshooting

In Ubuntu 18.04, logs from user-data scripts appear in /var/log/cloud-init-output.log.

Windows

This section is far more likely to have its data consolidated here!

Windows has a common “standard” username – Administrator – and a common way of creating a password: it is generated on first boot and transferred to the AWS Metadata Service, from which it is retrieved and decrypted using the SSH key you used when building the instance. Terraform handles all of this quite nicely.

The network device is referred to as “AWS PV Network Device #0”. It can be managed with PowerShell, netsh (although apparently Microsoft are rumbling about deprecating this tool), or from the GUI.

Windows 2012R2

This version is very old now, and should be compared to Windows 7 in terms of age. It is only supported by Microsoft with an extended maintenance package!

Windows 2012R2 AMI Lookup

data "aws_ami" "w2012r2" {
  most_recent = true

  filter {
    name = "name"
    values = ["Windows_Server-2012-R2_RTM-English-64Bit-Base*"]
  }

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }

  filter {
    name   = "state"
    values = ["available"]
  }

  owners = ["801119661308"] # AWS
}

Windows 2012R2 Cloud-Init Troubleshooting

Logs from the Metadata Service can be found in C:\Program Files\Amazon\Ec2ConfigService\Logs\Ec2ConfigLog.txt. You can also find the userdata script in C:\Program Files\Amazon\Ec2ConfigService\Scripts\UserScript.ps1. This can be launched and debugged using PowerShell ISE, which is in the “Start” menu.

Windows 2016

This version is reasonably old now, and should be compared to Windows 8 in terms of age. It is supported until 2022 in “mainline” support.

Windows 2016 AMI Lookup

data "aws_ami" "w2016" {
  most_recent = true

  filter {
    name = "name"
    values = ["Windows_Server-2016-English-Full-Base*"]
  }

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }

  filter {
    name   = "state"
    values = ["available"]
  }

  owners = ["801119661308"] # AWS
}

Windows 2016 Cloud-Init Troubleshooting

The metadata service has moved from Windows 2016 onwards. Logs are stored in a partially hidden directory tree, so you may need to click in the “Address” bar of the Explorer window and type in part of this path. The path to these files is: C:\ProgramData\Amazon\EC2-Windows\Launch\Log. I say “files” as there are two of them – an “Ec2Launch.log” file which reports on the boot process, and “UserdataExecution.log” which shows the output from the userdata script.

Unlike with the Windows 2012R2 version, you can’t get hold of the actual userdata script on the filesystem; you need to browse to a special path in the metadata service (technically, you can do this with any of the metadata services – OpenStack, Azure, and so on), which is: http://169.254.169.254/latest/user-data/
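From a PowerShell prompt on the instance itself, you can fetch that with something like (the output filename is my choice):

Invoke-RestMethod -Uri "http://169.254.169.254/latest/user-data/" | Out-File -FilePath userdata.ps1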

This will contain userdata between a <powershell> and </powershell> pair of tags. This would need to be copied out of this URL and pasted into a new file on your local machine to determine why issues are occurring. Again, I would recommend using PowerShell ISE from the Start Menu to debug your code.

Windows 2019

This version is the most recent released version of Windows Server, and should be compared to Windows 10 in terms of age.

Windows 2019 AMI Lookup

data "aws_ami" "w2019" {
  most_recent = true

  filter {
    name = "name"
    values = ["Windows_Server-2019-English-Full-Base*"]
  }

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }

  filter {
    name   = "state"
    values = ["available"]
  }

  owners = ["801119661308"] # AWS
}

Windows 2019 Cloud-Init Troubleshooting

Functionally, the same as Windows 2016, but to recap: the metadata service has moved from Windows 2016 onwards. Logs are stored in a partially hidden directory tree, so you may need to click in the “Address” bar of the Explorer window and type in part of this path. The path to these files is: C:\ProgramData\Amazon\EC2-Windows\Launch\Log. Again, there are two files – “Ec2Launch.log” which reports on the boot process, and “UserdataExecution.log” which shows the output from the userdata script.

Unlike with the Windows 2012R2 version, you can’t get hold of the actual userdata script on the filesystem; you need to browse to a special path in the metadata service (technically, you can do this with any of the metadata services – OpenStack, Azure, and so on), which is: http://169.254.169.254/latest/user-data/

This will contain userdata between a <powershell> and </powershell> pair of tags. This would need to be copied out of this URL and pasted into a new file on your local machine to determine why issues are occurring. Again, I would recommend using PowerShell ISE from the Start Menu to debug your code.

Featured image is “Field Notes – Sweet Tooth” by “The Marmot” on Flickr and is released under a CC-BY license.

“New shoes” by “Morgaine” from Flickr

Making Windows Cloud-Init Scripts run after a reboot (Using Terraform)

I’m currently building a Proof Of Value (POV) environment for a product, and one of the things I needed in my environment was an Active Directory domain.

To do this in AWS, I had to do the following steps:

  1. Build my Domain Controller
    1. Install Windows
    2. Set the hostname (Reboot)
    3. Promote the machine to being a Domain Controller (Reboot)
    4. Create a domain user
  2. Build my Member Server
    1. Install Windows
    2. Set the hostname (Reboot)
    3. Set the DNS client to point to the Domain Controller
    4. Join the server to the domain (Reboot)

To make this work, I had to find a way to trigger build steps after each reboot. I was working with Windows 2012R2, Windows 2016 and Windows 2019, so the solution had to be cross-version. Fortunately I found this script online! That version was great for Windows 2012R2, but didn’t cover Windows 2016 or later… So let’s break down what I’ve done!

In your userdata field, you need to have two sets of XML strings, as follows:

<persist>true</persist>
<powershell>
$some = "powershell code"
</powershell>

The first block says to Windows 2016+ “keep trying to run this script on each boot” (note that you need to stop it from doing non-relevant stuff on each boot – we’ll get to that in a second!), and the second bit is the PowerShell commands you want it to run. The rest of this now will focus just on the PowerShell block.

  $path = 'HKLM:\Software\UserData'
  # Assumption (not shown in the original snippet): $ver is used further down
  # to spot 2012R2 hosts, so it needs populating somewhere, for example:
  $ver = (Get-CimInstance Win32_OperatingSystem).Caption
  
  if(!(Get-Item $Path -ErrorAction SilentlyContinue)) {
    New-Item $Path
    New-ItemProperty -Path $Path -Name RunCount -Value 0 -PropertyType dword
  }
  
  $runCount = Get-ItemProperty -Path $path -Name RunCount -ErrorAction SilentlyContinue | Select-Object -ExpandProperty RunCount
  
  if($runCount -ge 0) {
    switch($runCount) {
      0 {
        $runCount = 1 + [int]$runCount
        Set-ItemProperty -Path $Path -Name RunCount -Value $runCount
        if ($ver -match 2012) {
          #Enable user data
          $EC2SettingsFile = "$env:ProgramFiles\Amazon\Ec2ConfigService\Settings\Config.xml"
          $xml = [xml](Get-Content $EC2SettingsFile)
          $xmlElement = $xml.get_DocumentElement()
          $xmlElementToModify = $xmlElement.Plugins
          
          foreach ($element in $xmlElementToModify.Plugin)
          {
            if ($element.name -eq "Ec2HandleUserData") {
              $element.State="Enabled"
            }
          }
          $xml.Save($EC2SettingsFile)
        }
        $some = "PowerShell Script"
      }
    }
  }

Whew, what a block! Well, again, we can split this up into a couple of bits.

In the first few lines, we build a pointer – a note which says “We got up to here on our previous boots”. We then read that into a variable, find that number, and execute any steps in the block with that number. That’s this block:

  $path= 'HKLM:\Software\UserData'
  
  if(!(Get-Item $Path -ErrorAction SilentlyContinue)) {
    New-Item $Path
    New-ItemProperty -Path $Path -Name RunCount -Value 0 -PropertyType dword
  }
  
  $runCount = Get-ItemProperty -Path $path -Name RunCount -ErrorAction SilentlyContinue | Select-Object -ExpandProperty RunCount
  
  if($runCount -ge 0) {
    switch($runCount) {

    }
  }

The next part (and you’ll repeat it for each “number” of reboot steps you need to perform) says “increment the number” then “If this is Windows 2012, remind the userdata handler that the script needs to be run again next boot”. That’s this block:

      0 {
        $runCount = 1 + [int]$runCount
        Set-ItemProperty -Path $Path -Name RunCount -Value $runCount
        if ($ver -match 2012) {
          #Enable user data
          $EC2SettingsFile = "$env:ProgramFiles\Amazon\Ec2ConfigService\Settings\Config.xml"
          $xml = [xml](Get-Content $EC2SettingsFile)
          $xmlElement = $xml.get_DocumentElement()
          $xmlElementToModify = $xmlElement.Plugins
          
          foreach ($element in $xmlElementToModify.Plugin)
          {
            if ($element.name -eq "Ec2HandleUserData") {
              $element.State="Enabled"
            }
          }
          $xml.Save($EC2SettingsFile)
        }
        
      }

In fact, it’s fair to say that in my userdata script, this looks like this:

  $path= 'HKLM:\Software\UserData'
  
  if(!(Get-Item $Path -ErrorAction SilentlyContinue)) {
    New-Item $Path
    New-ItemProperty -Path $Path -Name RunCount -Value 0 -PropertyType dword
  }
  
  $runCount = Get-ItemProperty -Path $path -Name Runcount -ErrorAction SilentlyContinue | Select-Object -ExpandProperty RunCount
  
  if($runCount -ge 0) {
    switch($runCount) {
      0 {
        ${file("templates/step.tmpl")}

        ${templatefile(
          "templates/rename_windows.tmpl",
          {
            hostname = "SomeMachine"
          }
        )}
      }
      1 {
        ${file("templates/step.tmpl")}

        ${templatefile(
          "templates/join_ad.tmpl",
          {
            dns_ipv4 = "192.0.2.1",
            domain_suffix = "ad.mycorp",
            join_account = "ad\someuser",
            join_password = "SomePassw0rd!"
          }
        )}
      }
    }
  }

Then, after each reboot, you need a new block. I have a block to change the computer name, a block to join the machine to the domain, and a block to install any software that I need.
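As an entirely hypothetical illustration (the real template contents aren’t shown in this post), a rename step like templates/rename_windows.tmpl might boil down to something like this, where ${hostname} is interpolated by Terraform’s templatefile() before upload:

# Hypothetical sketch of templates/rename_windows.tmpl
Rename-Computer -NewName "${hostname}" -Force
Restart-Computer -Force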

Featured image is “New shoes” by “Morgaine” on Flickr and is released under a CC-BY-SA license.

"Fishing line and bobbin stuck on tree at Douthat State Park" by "Virginia State Parks" on Flickr

Note to self: Linux shell scripts don’t cope well with combined CRLF + LF files… Especially in User-Data / Custom Data / Cloud-Init scripts

This one is more a nudge to myself. On several occasions when building Infrastructure As Code (IAC), I’ve split out code sections into one or more files, for readability and reusability purposes. What I tended to do (and this was more apparent with the Linux builds than the Windows builds) was to forget to set the line terminator from CRLF to LF.

While this doesn’t really impact Windows builds too much (they’re kinda designed to support people being idiots with line endings now), Linux still really struggles with CRLF endings, and you’ll only notice you’ve broken things when the user-data script completely fails to run.

How do you determine this is your problem? Well, actually it’s a bit tricky, as none of cat, less, more or nano spot this issue. The only two things I found that identified it were file and vi.
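For example (the filename is mine; the output is what file reports for a file with mixed endings), and while it’s not covered in the screenshots below, a quick sed one-liner is another way to strip the carriage returns:

$ file userdata.sh
userdata.sh: ASCII text, with CRLF, LF line terminators
$ sed -i 's/\r$//' userdata.sh
$ file userdata.sh
userdata.sh: ASCII text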

The first part of the combined file with mixed line endings. This part has LF termination.
The second part of the combined file with mixed line endings. This part has CRLF termination.
What happens when we cat these two parts into one file? A file with CRLF, LF line terminators obviously!
What the combined file looks like in Vi. Note the blue ^M at the ends of the lines.

So, how to fix this? Assuming you’re using Visual Studio Code:

A failed line-ending clue in Visual Studio Code

You’ll notice this line showing “CRLF” in the status bar at the bottom of Code. Click on that, which brings up a discreet box near the top, as follows:

Oh no, it’s set to “CRLF”. That’s not what we want!

Selecting LF in that box changes the line feeds into LF for this file, but it’s not saved. Make sure you save this file before you re-run your terraform script!

Notice, we’re now using LF endings, but the file isn’t saved.

Fantastic! It’s all worked!

In Nano, I’ve opened the part with the invalid line endings.

Oh no! We have a “DOS Format” file. Quick, let’s fix it!

To fix this, we need to write the file out. Hit Ctrl+O. This tells us that we’re in DOS Format, and also gives us the keyboard combination to toggle “DOS Format” off – it’s Alt+D (In Unix/Linux world, the Alt key is referred to as the Meta key – hence M not A).

This is how we fix things

So, after hitting Alt+D, the “File Name to write” line changes, see below:

Yey, no pesky “DOS Format” warning here!

Using either editor (or any others, if you know how to solve line ending issues in other editors), you still need to combine your script back together before you can run it, so… do that, and your file will be fine to run! Good luck!

Featured image is “Fishing line and bobbin stuck on tree at Douthat State Park” by “Virginia State Parks” on Flickr and is released under a CC-BY license.

"Root" by "llee_wu" on Flickr

A quick note on using Firefox in Windows in a Corporate or Enterprise environment

I’ve been using Firefox as my “browser of choice” for around 15 years. I tend to prefer to use it for all sorts of reasons, but the main thing I expect is support for extensions. Not many of them, but … well, there’s a few!

There are two stumbling blocks for using Firefox in a corporate or enterprise setting. These are:

  1. NTLM or Kerberos Authentication for resources like Sharepoint and ADFS.
  2. Enterprise TLS certificates (usually deployed via GPO as part of the domain)

These are both trivially fixed in the about:config screen, but first you need to get past a scary looking warning page!

In the address bar, where it probably currently says jon.sprig.gs, click in there and type about:config.

Getting to about:config

This brings you to a scary page!

Proceed with caution! (of course!!)

Click the “Accept the Risk and Continue” button (note: this is with Firefox 76; wording with later or earlier versions may differ).

As if it wasn’t obvious enough from the previous screen, this “may impact performance and security”…

And then you get a search box.

In the “Search preference name” type in ntlm and find the line that says network.automatic-ntlm-auth.trusted-uris.

The “NTLM Options page”

Type in there the suffixes of any TRUSTED domains. For example, if your company uses the domain names of bigcompany.com, bigco.local and big.company then you’d type in:

bigcompany.com,bigco.local,big.company

Any pages that you browse to, where they request NTLM authentication, will receive an NTLM set of credentials if prompted (same as IE, Edge, and Chrome already do!). NTLM is effectively a way to pass your trusted domain credentials (a bit like a Kerberos ticket) into a web page.

Next up, let’s get those pesky certificate errors removed!

This assumes that you have a centrally managed TLS Root Certificate, and the admins in your network haven’t just been dumping self signed certificates everywhere (nothing gets around that… just sayin’).

Still in about:config, clear the search box and type enterprise, like this!

Enterprise Roots are here!

Find the line security.enterprise_roots.enabled and make sure it says true. If it doesn’t, double-click it so it does.

Now you can close your preferences page, and you should be fine to visit your internal source code repository, time sheeting system or sharepoint site, with almost no interruptions!

If you’ve been tasked with turning this stuff on in your estate of managed desktop environment machines, then you might find this article (on autoconfiguration of Firefox) of use (but I’ve not tried it!)

Featured image is “Root” by “llee_wu” on Flickr and is released under a CC-BY-ND license.

"Debian" by "medithIT" on Flickr

One to read: Installing #Debian on #QNAP TS-219P

A couple of years ago, a very lovely co-worker gave me his QNAP TS-219P which he no longer required. I’ve had it sitting around storing data, but not making the most use of it since he gave it to me.

After a bit of tinkering in my home network, I decided that I needed a more up-to-date OS on this device, so I found this page that tells you how to install Debian Buster. This will wipe the device, so make sure you’ve got a full backup of your content!

https://www.cyrius.com/debian/kirkwood/qnap/ts-219/install/

Essentially, you back up the existing firmware with these commands:

cat /dev/mtdblock0 > mtd0
cat /dev/mtdblock1 > mtd1
cat /dev/mtdblock2 > mtd2
cat /dev/mtdblock3 > mtd3
cat /dev/mtdblock4 > mtd4
cat /dev/mtdblock5 > mtd5

These files need to be transferred off the device and kept somewhere safe, in case of emergency.
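For example, from another machine on your network, you could pull them over SSH (the username, hostname and destination path here are illustrative):

mkdir -p ~/qnap-firmware-backup
scp admin@qnap:'mtd?' ~/qnap-firmware-backup/

Then, download the installation files on the device: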

mkdir /tmp/debian
cd /tmp/debian
busybox wget http://ftp.debian.org/debian/dists/buster/main/installer-armel/current/images/kirkwood/network-console/qnap/ts-21x/{initrd,kernel-6281,kernel-6282,flash-debian,model}
sh flash-debian
reboot

The flash-debian command takes around 3-5 minutes, apparently, although I did start the job and walk away, so it might have taken anywhere up to 30 minutes!

Then, SSH to the IP of the device, and log in with installer as the username and install as the password.

Complete the installation steps in the SSH session, then let it reboot.

Be aware that the device is likely to have at least one swap volume it will try to load, so it might be worth opening a shell and running the command “swapoff -a” before removing the swap partitions. It’s also worth removing all the partitions and then rebooting and starting again if you have any problems with the partitioning.

When it comes back, you have a base installation of Debian, which doesn’t have sudo installed, so use su - and put in the root password.

Good luck!

"the home automation system designed by loren amelang himself" by "Nicolás Boullosa" on Flickr

One to read: Ansible for Networking – Part 3: Cisco IOS

One to read: “Ansible for Networking – Part 3: Cisco IOS”

One of the guest hosts and stalwart members of the Admin Admin Telegram group has been documenting how he has built his Ansible Networking lab.

Stuart has done three posts so far, but this is the first one actually dealing with the technology. It’s a mammoth read, so I’d recommend doing it on a computer, and not on a tablet or phone!

Posts one and two were about what the series would cover and how the lab has been constructed.

Featured image is “the home automation system designed by loren amelang himself” by “Nicolás Boullosa” on Flickr and is released under a CC-BY license.