One to read: Using HEREDOCs in bash to grep exit code details


One of my little gripes is that when you do a bit of noddy automation with bash and capture the exit code from your app (as you always should), you don’t have an easy way of looking up the meaning of that exit code if things go wrong. ’Cause you are going to alert/report that non-zero exit code to someone or something, right?

Then you start to play with hashes et al. and write something a little fuller, which is possible but not very nice. And this is bash, so you should really be able to just grep something.
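
(For the record, that ‘fuller’ version looks something like this; a minimal sketch using a bash 4+ associative array, with the codes and messages copied in by hand from the rsync man page.)

# declare an associative array (bash 4+) and fill it by hand
declare -A RSYNC_ERRORS=(
        [0]="Success"
        [1]="Syntax or usage error"
        [2]="Protocol incompatibility"
        [23]="Partial transfer due to error"
        # ...and so on, one line per exit code you care about
)

echo "${RSYNC_ERRORS[23]}"    # -> Partial transfer due to error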

So, here is my workaround…

Take rsync for example. Occasionally you might get a network error and your remote copy fails. Of course, being a good engineer you have captured the non-zero exit code and at least dropped this into the logs, report or an alert. But then you need to look up that code in the manuals.

e.g.

$ man rsync

and scroll down through the text to get to the exit codes.

EXIT VALUES
       0      Success

       1      Syntax or usage error

       2      Protocol incompatibility

       3      Errors selecting input/output files, dirs

       4      Requested action not supported: an attempt was made to manipulate 64-bit files on a platform that cannot support them; or an option was specified that is supported by the client and not by the server.

       5      Error starting client-server protocol

       6      Daemon unable to append to log-file

       10     Error in socket I/O
Manual page rsync(1) line 2734/2862 98% (press h for help or q to quit)

This is fine and dandy. Plus, there are only a few times when you may even need to do this.

But, being who I am, this is not good enough for me. The exit codes are there, so why can I not look them up programmatically (à la grep)?

Now, we all know you can create hashes and lists (Python speak) in bash. But it’s cumbersome for this, and hey, the values are right there and this is bash; I should be able to grep them, right?

Well, it seems you can…

$ cat << EODOC | grep "dog"
cat
fish
dog
rabbit
EODOC

returns

$ cat << EODOC | grep "dog"
> cat
> fish
> dog
> rabbit
> EODOC
dog

Woo hoo! ‘Simples’, as the meerkats say.

So with a little bit of bash-fu we can just pull the raw text out of any man page and drop it into a templated function to look up the exit codes.
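
If you don’t fancy scrolling at all, you can grab that raw text with a one-liner too. A rough sketch, assuming man writes plain text when piped (col -bx strips any overstrike characters) and the section is headed exactly “EXIT VALUES”; it will drag the next heading along with it, but close enough:

$ man rsync | col -bx | sed -n '/^EXIT VALUES/,/^[A-Z]/p'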

Here is an example:

get_rsync_error() {
        ERR_NUM=$1

        # anchor the code on both sides so e.g. "1" doesn't also match "10", "11"...
        # then use the xargs hack (bash-fu) to trim the surrounding white space
        cat << EODOC | grep -P "^\s+$ERR_NUM\s" | xargs

EXIT VALUES
       0      Success
       1      Syntax or usage error
       2      Protocol incompatibility
       3      Errors selecting input/output files, dirs
       4      Requested action not supported: an attempt was made to manipulate 64-bit files on a platform that cannot support them; or an option was specified that is supported by the client and not by the server.
       5      Error starting client-server protocol
       6      Daemon unable to append to log-file
       10     Error in socket I/O
       11     Error in file I/O
       12     Error in rsync protocol data stream
       13     Errors with program diagnostics
       14     Error in IPC code
       20     Received SIGUSR1 or SIGINT
       21     Some error returned by waitpid()
       22     Error allocating core memory buffers
       23     Partial transfer due to error
       24     Partial transfer due to vanished source files
       25     The --max-delete limit stopped deletions
       30     Timeout in data send/receive
       35     Timeout waiting for daemon connection
EODOC

}


And here is how it slots into the backup script ($RSYNC_OPTS and $DEST are set elsewhere):

time rsync $RSYNC_OPTS --out-format="%t %i %f sent:%b" /backups/ "$DEST"
ERR=$?

echo ""
if [ "$ERR" -ne 0 ] ; then
        # look up the error details
        err_text=$(get_rsync_error "$ERR")
        # display the error
        echo "ERROR: [$ERR] = $err_text"
else
        echo "***Successful"
fi
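
And once the function is defined (or sourced into your shell), you can look a code up by hand too:

$ get_rsync_error 23
23 Partial transfer due to error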

How useful this is to anyone else, I’m not sure. But it was an itch I had to scratch.

Did you also notice that ‘xargs’ will trim a string for you?

$ echo "   leading and trailing whitespace    " | xargs
leading and trailing whitespace

Cool huh?
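
One caveat: xargs also eats quotes and backslashes (and will complain about an unmatched quote), so for arbitrary strings a pure-bash trim via parameter expansion is safer. A minimal sketch:

trim() {
        local s=$1
        s="${s#"${s%%[![:space:]]*}"}"   # strip leading whitespace
        s="${s%"${s##*[![:space:]]}"}"   # strip trailing whitespace
        printf '%s\n' "$s"
}

$ trim "   leading and trailing whitespace    "
leading and trailing whitespace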

Anyway, I’ll get back to my Python and Ansible now. But Granddaddy Bash still rocks in my book!

This was automatically posted from my RSS Reader, and may be edited later to add commentary.

One to read: A Beginner’s Guide to IPFS


Ever wondered about IPFS (the “InterPlanetary File System”)? It’s a new way to share and store content that doesn’t rely on a central server (e.g. Facebook, Google, Digital Ocean, or your home NAS) but instead uses a BitTorrent-like system combined with published records to keep the content in the system.

If your host (where the original content is stored) goes down, the content is also cached on other nodes that have visited your site.

These caches are cleared over time, so they are suited to short outages; alternatively, you can have other nodes “pin” your content (and pinning can be offered as a paid service that funds hosts).
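
Pinning, for example, is just a couple of commands with the ipfs CLI. A sketch, assuming you have a node up and running; the CID is the hash that ipfs add prints:

$ ipfs add mypage.html      # adds the file and prints its CID
$ ipfs pin add <CID>        # pin it so cache clean-up won’t drop it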

IPFS is great at hosting static content, but how do you deal with dynamic content? That’s where PubSub comes into play (which isn’t covered in this article). There’s a database service called Orbit-DB that sits on IPFS and uses PubSub to sync data content across the network.
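
The PubSub side is still experimental in go-ipfs (the daemon wants the --enable-pubsub-experiment flag), but the basic shape is just topics you publish and subscribe on. A sketch, with a made-up topic name:

$ ipfs pubsub sub mytopic           # on one node: listen on a topic
$ ipfs pubsub pub mytopic "hello"   # on another: publish to it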

It’s looking interesting, especially in light of CloudFlare’s announcement that they are introducing an IPFS gateway.

It’s looking good for IPFS!

This was automatically posted from my RSS Reader, and may be edited later to add commentary.

One to read: Overview of TLS v1.3


Wondering what TLS v1.3 means for your web browsing? OWASP break down the differences between TLS 1.2 and TLS 1.3. It’s a really good set of slides, and would be great if you need to show someone some of the moving pieces without reading the RFC (RFC 8446). It’s good :)
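
If you want to check whether a server already speaks TLS 1.3, OpenSSL 1.1.1+ can tell you. A quick sketch, with example.com standing in for your host:

$ openssl s_client -tls1_3 -connect example.com:443 < /dev/null 2> /dev/null | grep "Protocol"
    Protocol  : TLSv1.3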

This was automatically posted from my RSS Reader, and may be edited later to add commentary.

One to read: Automating backups on a Raspberry Pi NAS



In the first part of this three-part series on using a Raspberry Pi for network-attached storage (NAS), we covered the fundamentals of the NAS setup, attached two 1TB hard drives (one for data and one for backups), and mounted the data drive on a remote device via the Network File System (NFS). In part two, we will look at automating backups. Automated backups allow you to continually secure your data and recover from a hardware defect or accidental file removal.
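
(The automation in question typically boils down to a cron job wrapping rsync. A hypothetical crontab entry, with paths made up to match the setup described, one data drive and one backup drive:)

# mirror the data drive onto the backup drive every night at 02:00
0 2 * * * rsync -a --delete /nas/data/ /nas/backup/ >> /var/log/nas-backup.log 2>&1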


This was automatically posted from my RSS Reader, and may be edited later to add commentary.