Using Python-OpenstackClient and Ansible with K5

Recently, I have used K5, which is an instance of OpenStack, run by Fujitsu (my employer). To do some of the automation tasks, I have played with both python-openstackclient and Ansible. This post is going to cover how to get those tools to work with K5.

I have access to a Linux virtual machine (Ubuntu 16.04) and to the Windows Subsystem for Linux in Windows 10 (“Bash on Ubuntu on Windows”), and both accept the same set of commands.

In order to run these commands, you need a couple of dependencies. Your mileage might vary with other Linux distributions, but for Ubuntu-based distributions, run this command:

sudo apt install python-pip build-essential libssl-dev libffi-dev python-dev

Next, use pip to install the python modules you need:

sudo -H pip install shade==1.11.1 ansible cryptography python-openstackclient

If you’re only ever going to be working with a single project, you can define a handful of environment variables prefixed with OS_, like this:

export OS_USERNAME=BloggsF
export OS_PASSWORD=MySuperSecretPasswordIsHere
export OS_REGION_NAME=uk-1
export OS_USER_DOMAIN_NAME=YourProjectName
export OS_PROJECT_NAME=YourProjectName-prj
export OS_PROJECT_ID=baddecafbaddecafbaddecafbaddecaf
export OS_AUTH_URL=https://identity.uk-1.cloud.global.fujitsu.com/v3
export OS_VOLUME_API_VERSION=2
export OS_IDENTITY_API_VERSION=3

But, if you’re working with a few projects, it’s probably worth separating these out into a clouds.yml file. This is stored in ~/.config/openstack/clouds.yml with the credentials for the environments you’re using:

---
clouds:
  root:
    identity_api_version: 3
    regions:
    - uk-1
    auth:
      auth_url: https://identity.uk-1.cloud.global.fujitsu.com/v3
      password: MySuperSecretPasswordIsHere
      project_id: baddecafbaddecafbaddecafbaddecaf
      project_name: YourProjectName-prj
      username: BloggsF
      user_domain_name: YourProjectName

Optionally, you can separate out the password, username or any other “sensitive” information into a secure.yml file stored in the same location (removing those lines from the clouds.yml file), like this:

---
clouds:
  root:
    auth:
      password: MySuperSecretPasswordIsHere

Now, you can use the Python-based OpenStack client, using this invocation:

openstack --os-cloud root server list

Alternatively, you can use the Ansible OpenStack (and K5) modules, like this:

---
tasks:
- name: "Authenticate to K5"
  k5_auth:
    cloud: root
  register: k5_auth_reg
- name: "Create Network"
  k5_create_network:
    name: "Public"
    availability_zone: "uk-1a"
    state: present
    k5_auth: "{{ k5_auth_reg.k5_auth_facts }}"
- name: "Create Subnet"
  k5_create_subnet:
    name: "Public"
    network_name: "Public"
    cidr: "192.0.2.0/24"
    gateway_ip: "192.0.2.1"
    availability_zone: "uk-1a"
    state: present
    k5_auth: "{{ k5_auth_reg.k5_auth_facts }}"
- name: "Create Router"
  k5_create_router:
    name: "Public"
    availability_zone: "uk-1a"
    state: present
    k5_auth: "{{ k5_auth_reg.k5_auth_facts }}"
- name: "Attach private network to router"
  os_router:
    name: "Public"
    state: present
    network: "inf_az1_ext-net02"
    interfaces: "Public"
    cloud: root
- name: "Create Servers"
  os_server:
    name: "Server"
    availability_zone: "uk-1a"
    flavor: "P-1"
    state: present
    key_name: "MyFirstKey"
    network: "Public-Network"
    image: "Ubuntu Server 14.04 LTS (English) 02"
    boot_from_volume: yes
    terminate_volume: yes
    security_groups: "Default"
    auto_ip: no
    timeout: 7200
    cloud: root

Working with complicated template data (UserData) in Ansible

My new job means I’m currently building a lot of test boxes with Ansible, particularly OpenStack guests. This means I’m trying to script as much as possible, without actually getting my hands dirty by logging into the boxes and running things by hand.

This week, I hit a problem standing up a popular firewall vendor’s machine with Ansible, because I was trying to bypass the first-time-wizard… anyway, it wasn’t working, and I couldn’t figure out why. I talked to my colleague [mohclips] and he eventually told me that I needed to use a template, because what I was trying to do was too complicated.

But, damn him, I knew that wasn’t the answer :)

Anyway, I found this comment on a ticket, which led me to the following: if you’re finding that your userdata: variable in the os_server module of Ansible isn’t working, you might need to wrap it up like this:

userdata: |
  {%- raw -%}#!/bin/bash
  # Kill script if the pipe fails
  set -euf -o pipefail
  # Write everything from this point on to Syslog
  echo " == Set admin credentials == "
  clish -c 'set user admin password-hash {% endraw -%}{{ default_password|password_hash('sha512') }}{%- raw -%}' -s
  {% endraw %}

Note that, if you have a space before your variable, use {% endraw -%}, and if you have a space after it, use {%- raw %}; the hyphen means “ditch all the spaces before/after this tag”.

Use your Debian System as an iBeacon for Home Automation

I have been using the Home-Assistant application at home to experiment with Home Automation.

One thing I’ve found is that the Raspberry Pi is perfect for a few of the monitoring things that I wanted it to do (see also https://github.com/JonTheNiceGuy/home-assistant-configs for more details of what I’m doing there!).

I’m using the OwnTracks application to talk to an MQTT server, but I could also do with it knowing where I am in the house, so I looked around for some details on iBeacons.

iBeacon is an Apple standard, but it’s very easy to configure on Linux systems. I took some pointers from this article and wrote up a script to turn on the iBeacon on my Raspbian Raspberry Pi 3.

Configuring the Script

When you first run it as root, it will pre-populate a config file in /etc/iBeacon.conf. Edit it and run the script again.

Running the script

This script needs to be run as root, so to test it, or to reconfigure the beacon, run sudo /root/iBeacon.sh (or wherever you put it!)

Making it persistent

To be honest, at this point, I’d probably just stick this into my root Crontab file by adding this line:

@reboot /root/iBeacon.sh | logger

Again, replace /root/iBeacon.sh with wherever you put it!

Please visit this link to see the script and make suggestions on improvements.

Using Expect to SFTP a file

Because of technical limitations on a pair of platforms I’m using at work, I am unable to set up key-based SFTP or SCP to transfer files between them, so I knocked together this short script using the Tcl-based Expect language.


#!/usr/bin/expect
# upload.exp: push a single file over SFTP, answering the host-key and
# password prompts as they appear.
# Usage: upload.exp <file-to-upload> <user@host> <password>
set arg1 [lindex $argv 0]
set arg2 [lindex $argv 1]
set arg3 [lindex $argv 2]
set timeout 1000
spawn sftp "$arg2"
expect {
    yes {
        # "Are you sure you want to continue connecting (yes/no)?"
        send "yes\r"
        exp_continue
    }
    ass {
        # Matches the "password:" prompt
        send "$arg3\r"
        exp_continue
    }
    sftp {
        # We're at the "sftp>" prompt, so send the file
        send "put $arg1\r"
        expect {
            100% {
                # sftp prints "100%" when the transfer completes
                send "quit\r"
                exp_continue
            }
        }
    }
}

There’s no error checking here, which isn’t great, but as a quick-and-dirty script to SFTP files to a box which needs the password each run… it works! :)

GPG Encrypting files using a keyserver

Another “at work” post!

I’ve been generating files which need to be distributed via a file server, but which need to be encrypted using GPG (the open source PGP application). Rather than managing keys for a large number of users, I have a text file with the user names in it, and a batch file. Please see the below gist for details :)

Starting EC2 instances using PHP

I run a small podcast website called CCHits.net. It runs on Dreamhost because they offer unlimited storage and bandwidth, but while it’s a great service for storage, it’s not really useful for running a batch process, because long-running processes are killed regularly (in my case, building the CCHits podcasts on a daily basis).

As a result, I built an EC2 instance which I trigger every day using a cronjob. Previously, I used the “AWS CLI tools”, but as these use a Java Virtual Machine, they were taking an awful lot of resources just to spin up an instance, and Dreamhost kept killing the task off. So, instead, I found the AWS PHP SDK and coded up a little snippet to spin up the EC2 instance.
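
The general shape of such a script, using the AWS SDK for PHP (version 2 style), is something like the sketch below; the credentials, region and instance ID are placeholders for illustration, not my real values:

<?php
// Sketch only: start a stopped EC2 instance with the AWS SDK for PHP (v2).
// The credentials, region and instance ID here are placeholders.
require 'vendor/autoload.php';

use Aws\Ec2\Ec2Client;

$ec2 = Ec2Client::factory(array(
    'key'    => 'YOUR_ACCESS_KEY',
    'secret' => 'YOUR_SECRET_KEY',
    'region' => 'us-east-1',
));

// Ask EC2 to start the instance, then wait until it reports "running".
$ec2->startInstances(array('InstanceIds' => array('i-12345678')));
$ec2->waitUntil('InstanceRunning', array('InstanceIds' => array('i-12345678')));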

Development Environment Replication with Vagrant and Puppet

This week, I was fortunate enough to meet up with the Cheadle Geeks group. I got talking to a couple of people about Vagrant and Puppet, and explaining how it works, and I thought the best thing to do would be to also write that down here, so that I can point anyone who missed any of what I was saying to it.

Essentially, Vagrant is a program which reads a config file that defines how to initialize a pre-built virtual machine. It has several virtual machine engines which it can invoke (see [1] for more details on that), but the default virtual machine provider is VirtualBox.

To actually find a box to load, there’s a big list over at vagrantbox.es which has most standard cloud servers available to you. Personally, I use the Ubuntu Precise 32-bit image from VagrantUp.com for my open source projects (which means more developers can get involved). Once you’ve picked an image, use the following command to get it installed on your development machine (you only need to do this step once per box!):

vagrant box add {YourBoxName} {BoxURL}

After you’ve done that, you need to set up the Vagrant configuration file.

cd /path/to/your/dev/environment
mkdir Vagrant
cd Vagrant
vagrant init {YourBoxName}

This will create a file called Vagrantfile in /path/to/your/dev/environment/Vagrant. It looks overwhelming at first, but if you trim out some of the notes (and tweak one or two of the lines), you’ll end up with a file which looks a bit like this:

Vagrant.configure("2") do |config|
  config.vm.box = "{YourBoxName}"
  config.vm.hostname = "{fqdn.of.your.host}"
  config.vm.box_url = "{BoxURL}"
  config.vm.network :forwarded_port, guest: 80, host: 8080
  # config.vm.network :public_network
  config.vm.synced_folder "../web", "/var/www"
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "manifests"
    puppet.manifest_file  = "site.pp"
  end
end

This assumes you’ve replaced anything with {}’s in it with a real value, and that you want to forward TCP/8080 on your machine to TCP/80 on that box (there are other workarounds, using more Vagrant plugins, different network types, or other services such as pagekite, but this will do for now).

Once you’ve got this file, you could start up your machine and get a bare box, but that’s not much use to you, as you’d have to tell people how to configure your development environment every time they started up a new box. Instead, we’ll be using a Provisioning service, and we’re going to use Puppet for that.

Puppet was originally designed as a way of defining configuration across all an estate’s servers, and a lot of tutorials I’ve found online explain how to use it for that, but when we’re setting up Puppet for a development environment, we just need a simple file. This is the site.pp manifest, and in here we define the extra files and packages we need, plus any commands we need to run. So, let’s start with a basic manifest file:

node default {

}

Wow, isn’t that easy? :) We need some more detail than that though. First, let’s make sure the timezone is set. I live in the UK, so my timezone is “Europe/London”. Let’s put that in. We also need to make sure that any commands we run have the right path in them. So here’s our revised, Debian-based, manifest file:

node default {
    Exec {
        path => '/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/sbin:/usr/sbin'
    }

    package { "tzdata":
        ensure => "installed"
    }

    file { "/etc/timezone":
        content => "Europe/London\n",
        require => Package["tzdata"]
    }

    exec { "Set Timezone":
        unless => "diff /etc/localtime /usr/share/zoneinfo/`cat /etc/timezone`",
        command => "dpkg-reconfigure -f noninteractive tzdata",
        require => File["/etc/timezone"]
    }
}

OK, so we’ve got some pretty clear examples of code to run here. The first Exec block must always be in there (it sets the default search path for any commands we run), but after that, we’re making sure the package tzdata is installed; we then make sure that, once the tzdata package is installed, we create or update the /etc/timezone file with the value we want; and then we use the dpkg-reconfigure command to set the timezone, but only if the timezone isn’t already set to that.

Just to be clear, this file describes what the system should look like at the end of it running, not a step-by-step guide to getting it running, so you might find that some of these packages install out of sequence, or something else might run before or after when you were expecting it to run. As a result, you should make good use of the “require” and “unless” statements if you want a proper sequence of events to occur.

Now, so far, all this does is set the timezone for us, it doesn’t set up anything like Apache or MySQL… perhaps you want to install something like WordPress here? Well, let’s see how we get other packages installed.

In the following lines of code, we’ll assume you’re just adding this text above the last curly bracket (the “}” at the end).

First, we need to ensure our packages are up to date:

exec { "Update packages":
    command => "sudo apt-get update && sudo apt-get dist-upgrade -y",
}

Here’s Apache getting installed:

package { "apache2":
    ensure => "installed",
    require => Exec['Update packages']
}

And, maybe you’ll want to set up something that needs mod_rewrite and a custom site? Add this to your Vagrantfile:

config.vm.synced_folder "../Apache_Site", "/etc/apache2/shared_config"

Create a directory called /path/to/your/dev/environment/Apache_Site, which should contain your Apache site configuration file called “default”. Then add this to your site.pp:

exec { "Enable rewrite":
    command => 'a2enmod rewrite',
    onlyif => 'test ! -e /etc/apache2/mods-enabled/rewrite.load',
    require => Package['apache2']
}

file { "/etc/apache2/sites-enabled/default":
  ensure => link,
  target => "/etc/apache2/shared_config/default",
}

So, at the end of all this, we have the following file structure:

/path/to/your/dev/environment
+ -- /Apache_Site
|    + -- default
+ -- /web
|    + -- index.html
+ -- /Vagrant
     + -- /manifests
     |    + -- site.pp
     + -- Vagrantfile

And now, you can add all of this to your Git repository [2], and off you go! To bring up your Vagrant machine, type (from the Vagrant directory):

vagrant up

And then to connect into it:

vagrant ssh

And finally to halt it:

vagrant halt

Or if you just want to kill it off…

vagrant destroy

If you’re tweaking the provisioning code, you can run this instead of destroying it and bringing it back up again:

vagrant provision

You can do some funky stuff with running several machines, and using the same puppet file for all of those, but frankly, that’s a topic for another day.

[1] Vagrant is extended using plugins. There is a list of plugins on this Github Wiki Page. The plugins here can include additional virtual machine back ends (called Providers in Vagrant terminology) and methods of configuring the OS after bootup (called Provisioners), but also anything from defining where to find resources, to defining network addresses, even to handling caches and proxies.

[2] If you’re not using Git, you should be! However, you might want to add some stuff to your .gitignore – in particular, Vagrant adds a directory called /path/to/your/dev/environment/Vagrant/.vagrant where it puts the VMs it creates.

A quick note on autoloaders for PHP

Over the past few days, as you may have noticed, I’ve been experimenting with PHPUnit, and writing up notes on what I’ve learned. Here’s a biggie, but it’s such a small actual change, I didn’t want to miss it.

So, when you have your autoloader written, you’ll have a function like this (probably):

<?php
function __autoload($classname)
{
    if (file_exists(dirname(__FILE__) . '/classes/' . $classname . '.php')) {
        require_once dirname(__FILE__) . '/classes/' . $classname . '.php';
    }
}

Load this from your test, or in a bootstrap file (more to come on that particular subject, I think!), like this:

<?php
require_once dirname(__FILE__) . '/../autoloader.php';
class SomeClassTest extends ........

And you’ll probably notice the autoloader doesn’t do anything… but why is this? Because PHPUnit has its own autoloader, and you need to chain your autoloader to the end. So, in your autoloader file, add this line to the end:

<?php
function __autoload($classname)
{
    if (file_exists(dirname(__FILE__) . '/classes/' . $classname . '.php')) {
        require_once dirname(__FILE__) . '/classes/' . $classname . '.php';
    }
}

spl_autoload_register('__autoload');

And it all should just work, which is nice :)

A quick word on salting your hashes.

If you don’t know what hashing is in relation to coding, the long version is here: Cryptographic Hash Function, but the short version is that it applies a mathematical formula to components of the file, string or data, and returns a much shorter number with a slim chance of “collisions”.

I don’t know whether it’s immediately clear to anyone else, but I used to think this was a good idea.

<?php
$password = sha1($_POST['password']);

Then I went to a PHPNW session, and asked someone to take a look at my code, and got a thorough drubbing for not adding a cryptographic salt (wikipedia).

For those who don’t know, a salt is a set of characters you add before or after the password (or both!) to make it so that a simple “rainbow table analysis” doesn’t work (essentially a brute-force attack against the authentication data by hashing lots and lots of strings looking for another hash which matches the stored hash). In order to make it possible to actually authenticate with that string again in the future, the string should be easily repeatable, and a way to do that is to use other data that’s already in the user record.

For example, this is a simple salt:

<?php
$password = sha1('salt' . $_POST['password']);

I read in the April 2012 edition of 2600 magazine something that I should have been doing with my hashes all along. How’s this for more secure code?

<?php
$site_salt = 'pepper';
$SQL = "SELECT intUserID FROM users WHERE strUsername = ?";
$DB = new PDO($dsn);
$query = $DB->prepare($SQL);
$query->execute(array(strtolower($_POST['username'])));
$userid = $query->fetchColumn(); // fetchColumn() gives us just the intUserID value, rather than a whole row
if ($userid == false) {
  return false;
}
$prefix = '';
$suffix = '';
if ($userid % 2 == 0) {
  $prefix = $site_salt;
} else {
  $suffix = $site_salt;
}
if ($userid % 3 == 0) {
  $prefix .= strtolower($_POST['username']);
} else {
  $suffix .= strtolower($_POST['username']);
}
if ($userid % 4 == 0) {
  $prefix = strrev($prefix);
}
if ($userid % 5 == 0) {
  $suffix = strrev($suffix);
}
$hashedPassword = sha1($prefix . $_POST['password'] . $suffix);

So, this gives you an easily repeatable string, that’s relatively hard to calculate without easy access to the source code :)
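
To actually authenticate with it later, you’d compute the hash in exactly the same way and compare it with the hash you stored when the user registered. Carrying on from the variables above, here’s a minimal sketch (the strPasswordHash column name is an assumption for illustration):

// Sketch only: carries on from the snippet above, and assumes the hash was
// stored in a strPasswordHash column when the user registered.
$SQL = "SELECT strPasswordHash FROM users WHERE intUserID = ?";
$query = $DB->prepare($SQL);
$query->execute(array($userid));
$storedHash = $query->fetchColumn();

if ($storedHash !== false && $hashedPassword === $storedHash) {
  // The hashes match, so the password was correct.
}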

Getting started with Unit Testing for PHP

Unit testing seems like a bit of a dark art when you’re first introduced to it. “Create this new file. Tell it what is supposed to be the result when you run a test, and it’ll tell you if you’re right or not.”

Let’s start with a pseudocode example:

test->assertTrue(1+1 == 2); // Test returns true, huzzah!
test->assertFalse(1+1 == 3); // Test returns false. Those integers must not have been large enough

I want to use PHPUnit, and for me the easiest way to get this and the rest of the tools I’ll be referring to in this collection of posts is to install “The PHP Quality Assurance Toolchain“. On my Ubuntu install, this was done as follows:

sudo pear upgrade PEAR
sudo pear config-set auto_discover 1
sudo pear install --all-deps pear.phpqatools.org/phpqatools

Now we’ve got the tools in place, let’s set up the directory structure.

/
+ -- Classes
|    + -- Config.php
+ -- Tests
     + -- ConfigTest.php

In here, you see we’ve created two files, one contains the class we want to use, and the other contains the tests we will be running.

So, let’s slap on the minimal scaffolding that these two files need to be valid for testing.

/Classes/Config.php

<?php
class Config
{
}

/Tests/ConfigTest.php

<?php

include dirname(__FILE__) . '/../Classes/Config.php';

class ConfigTest extends PHPUnit_Framework_TestCase
{
}

So, just to summarise, here we have two, essentially empty classes.

Let’s put some code into the test file.

<?php

include dirname(__FILE__) . '/../Classes/Config.php';

class ConfigTest extends PHPUnit_Framework_TestCase
{
  public function testCreateObject()
  {
    $config = new Config();
    $this->assertTrue(is_object($config));
  }
}

We can now run this test from the command line as follows:

phpunit Tests/ConfigTest.php

PHPUnit 3.6.10 by Sebastian Bergmann.

.

Time: 1 second, Memory: 3.00Mb

OK (1 test, 1 assertion)

That was nice and straightforward!

Let’s add some more code!

In ConfigTest, let’s tell it to load some configuration, using a config file.

<?php

include dirname(__FILE__) . '/../Classes/Config.php';

class ConfigTest extends PHPUnit_Framework_TestCase
{
  public function testCreateObject()
  {
    $config = new Config();
    $this->assertTrue(is_object($config));
  }

  public function testLoadConfig()
  {
    $config = new Config();
    $config->load();
  }
}

And now when we run it?

PHP Fatal error:  Call to undefined method Config::load() in /var/www/PhpBetterPractices/Tests/ConfigTest.php on line 16

Ah, perhaps we need to write some code into /Classes/Config.php

<?php
class Config
{
  public function load()
  {
    include dirname(__FILE__) . '/../Config/default_config.php';
  }
}

But, running this, again, we get an error message!

PHPUnit 3.6.10 by Sebastian Bergmann.

.E

Time: 0 seconds, Memory: 3.00Mb

There was 1 error:

1) ConfigTest::testLoadConfig
include(/var/www/PhpBetterPractices/Config/default_config.php): failed to open stream: No such file or directory

/var/www/PhpBetterPractices/Classes/Config.php:7
/var/www/PhpBetterPractices/Classes/Config.php:7
/var/www/PhpBetterPractices/Tests/ConfigTest.php:16

FAILURES!
Tests: 2, Assertions: 1, Errors: 1.

So, we actually need to check that the file exists first, perhaps we should throw an error if it doesn’t? We could also pass the name of the config file to pass to the script, which would let us test more and different configuration options, should we need them.

class Config
{
    public function load($file = null)
    {
        if ($file == null) {
            $file = 'default.config.php';
        }

        $filename = dirname(__FILE__) . '/../Config/' . $file;

        if (file_exists($filename)) {
            include $filename;
        } else {
            throw new InvalidArgumentException("File not found");
        }
    }
}

So, here’s the new UnitTest code:

class ConfigTest extends PHPUnit_Framework_TestCase
{
    public function testCreateObject()
    {
        $config = new Config();
        $this->assertTrue(is_object($config));
    }

    public function testLoadConfig()
    {
        $config = new Config();
        $config->load();
    }

    /**
     * @expectedException InvalidArgumentException
     */
    public function testFailLoadingConfig()
    {
        $config = new Config();
        @$config->load('A file which does not exist');
    }
}

This assumes the file /Config/default.config.php exists, albeit as an empty file.

So, let’s run those tests and see what happens?

PHPUnit 3.6.10 by Sebastian Bergmann.

...

Time: 0 seconds, Memory: 3.25Mb

OK (3 tests, 2 assertions)

Huzzah! That’s looking good. Notice that, to handle a test of something which should throw an exception, you can either wrap the call in a try/catch block and, in the try block, have $this->assertTrue(false) to prevent false positives, then in the catch block do your $this->assertBlah() on the exception. Alternatively (and much more simply), use the documentation notation @expectedException NameOfException and then prefix the call you are testing with the @ symbol. This is how I did it with the test “testFailLoadingConfig()”.
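
For completeness, the try/catch version of that same test would look roughly like this (just a sketch; it’s not in the repository):

    /**
     * The same check as testFailLoadingConfig, but written with try/catch
     * instead of the @expectedException annotation.
     */
    public function testFailLoadingConfigWithTryCatch()
    {
        $config = new Config();
        try {
            @$config->load('A file which does not exist');
            // If we reach this line, no exception was thrown, so fail the test.
            $this->assertTrue(false);
        } catch (InvalidArgumentException $e) {
            $this->assertEquals('File not found', $e->getMessage());
        }
    }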

This obviously doesn’t handle setting and getting configuration values, so let’s add those.

Here’s the additions to the Config.php file:

    public function set($key = null, $value = null)
    {
        if ($key == null) {
            throw new BadFunctionCallException("Key not set");
        }
        if ($value == null) {
            unset ($this->arrValues[$key]);
            return true;
        } else {
            $this->arrValues[$key] = $value;
            return true;
        }
    }

    public function get($key = null)
    {
        if ($key == null) {
            throw new BadFunctionCallException("Key not set");
        }
        if (isset($this->arrValues[$key])) {
            return $this->arrValues[$key];
        } else {
            return null;
        }
    }

And the default.config.php file:

<?php
$this->set('demo', true);

And lastly, the changes to the ConfigTest.php file:

    public function testLoadConfig()
    {
        $config = new Config();
        $this->assertTrue(is_object($config));
        $config->load('default.config.php');
        $this->assertTrue($config->get('demo'));
    }

    /**
     * @expectedException BadFunctionCallException
     */
    public function testFailSettingValue()
    {
        $config = new Config();
        @$config->set();
    }

    /**
     * @expectedException BadFunctionCallException
     */
    public function testFailGettingValue()
    {
        $config = new Config();
        @$config->get();
    }

We’ve not actually finished testing this yet. Not sure how I can tell? Let’s look at the code coverage:

phpunit --coverage-text Tests/ConfigTest.php
PHPUnit 3.6.10 by Sebastian Bergmann.

....

Time: 0 seconds, Memory: 3.75Mb

OK (4 tests, 5 assertions)

Generating textual code coverage report, this may take a moment.

Code Coverage Report
  2012-05-08 18:54:16

 Summary:
  Classes: 0.00% (0/1)
  Methods: 0.00% (0/3)
  Lines:   76.19% (16/21)

@Config::Config
  Methods: 100.00% ( 3/ 3)   Lines:  76.19% ( 16/ 21)

Notice that there are 5 lines outstanding – probably around the unsetting values and using default values. If you use an IDE (like NetBeans) you can actually get the editor to show you, using coloured lines, exactly which lines you’ve not yet tested! Nice.
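
In this case, a test along these lines would probably mop up most of those remaining lines, since it exercises the default filename in load(), the “unset” branch of set() and the missing-key branch of get():

    public function testUnsetAndMissingValues()
    {
        $config = new Config();
        $config->load();                  // the default filename branch of load()
        $config->set('demo', 'something');
        $config->set('demo');             // no value supplied, so the key is unset
        $this->assertNull($config->get('demo'));
        $this->assertNull($config->get('a key which was never set'));
    }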

So, the last thing to talk about is Containers and Dependency Injection. We’ve already started with the Dependency Injection here – that $config->load(‘filename’); function handles loading config files, or you could just bypass that with $config->set(‘key’, ‘value’); but once you get past a file or two, you might just end up with a lot of redundant re-loading of config files, or worse, lots of database connections open.

So, this is where Containers come in (something I horrifically failed to understand before).

Here’s a container:

class ConfigContainer
{
  protected static $config = null;

  public static function Load()
  {
    if (self::$config == null) {
      self::$config = new Config();
      self::$config->load();
    }
    return self::$config;
  }
}

Its purpose (in this case) is to load the config class, including any dependencies that you may need for that class, and then return that class to you. You could conceivably create a Database container, or a Request container or a User container with very little extra work, and with a few short calls, have a single function for each of your regular and routine sources of processing data, but without preventing you from being able to easily and repeatably test that data – by not going through the container.
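
In use, that looks something like this; every caller shares the same, already loaded, Config object:

<?php
// Anywhere that needs configuration just asks the container for it.
$config = ConfigContainer::Load();
if ($config->get('demo')) {
  // ... do whatever the configuration tells you to do
}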

Of course, there’s nothing to stop you just having these created in a registry class, or store them in a global from the get-go, but, I am calling these “Better Practices” after all, and these are considered to be not-so-good-practices.

Just as a note, the code from this section can be seen at GitHub, if you want to use it at all.

Update 2012-05-11: Added detail to the try/catch exception catching as per frimkron’s comment. Thanks!