A tip for users who SSH to a system running ecryptfs and byobu

I’ve been an Ubuntu user for a while (on and off), and a few versions back, Ubuntu added two great installed-by-default options (both of which are turned off by default), called Byobu (a pimp-my-GNU-Screen app) and ECryptFS (an “Encrypt my home directory” extension).

Until just recently, if you enabled both and then SSHed to the box using public/private keys, it would use the fact that you’d connected and authenticated with keys to unlock the ECryptFS module and then start Byobu. A few months back, I noticed that if I rebooted, it wouldn’t automatically unlock the ECryptFS module, so I’d be stuck without either having started. A few login attempts later it was all sorted, but just recently this has got worse, and now every SSH session leaves me at a box with an unmounted ECryptFS module and no Byobu.

So, how does one fix such a pain? With a .profile file of course :)

SSH in, and before you unlock your ECryptFS module run this:

sudo nano .profile

You need to run the above using sudo, as the directory you land in before you unlock ECryptFS is owned by root, and you have no permission to write to it.

In that editor, paste this text.

#! /bin/bash
# Prompt for the passphrase and mount the encrypted home directory
`which ecryptfs-mount-private`
# Change into the (now decrypted) home directory
cd
# Start Byobu
`which byobu-launcher`

Then use Ctrl+X to exit the editor and save the file.

The next time you log in, it’ll ask you for your passphrase to unlock the ECryptFS module. Once that’s in, it’ll start Byobu. Job’s a good’n.
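
If you also log in at the console from time to time (where the ECryptFS module may already be unlocked), you might want to guard against being prompted twice. This is only a sketch, and it assumes an encrypted-home setup where $HOME itself ends up as the eCryptfs mount point; adjust the grep if your private directory mounts somewhere else.

#! /bin/bash
# Only prompt for the passphrase if $HOME isn't already an eCryptfs mount
if ! grep -qs " $HOME ecryptfs " /proc/mounts
then
  `which ecryptfs-mount-private`
fi
cd
`which byobu-launcher`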

Watching for file changes on a shared linux web server

$NEWPROJECT has a script which runs daily to produce a file which will be available for download, but aside from that one expected daily task, there shouldn’t be any changes to the content on the website.

As I’m hosting this on a shared webhost, I can’t install Tripwire or anything like that, and to be honest, for what I’m using it for, I probably don’t need it. So, instead, I wrote my own really simple file change monitor, which runs as a cron job.

Here’s the code:

#! /bin/bash
# This file is called scan.sh

function sha512sum_files() {
    # Hash every file under this directory, appending to the current status list
    find "$HOME/$DIR"/* -type f -exec sha512sum '{}' \; >> "$SCAN_ROOT/current_status"
}

SCAN_ROOT=$HOME/scan

# Keep the previous run's results so there is something to compare against
mv "$SCAN_ROOT/current_status" "$SCAN_ROOT/old_status"

for DIR in site_root media/[A-Za-z]*
do
    sha512sum_files
done

# Any output here gets mailed to me by cron (see the crontab below)
diff -U 0 "$SCAN_ROOT/old_status" "$SCAN_ROOT/current_status"

And here’s my crontab:


MAILTO="my.email@add.ress"
# Minute  Hour    Day of Month  Month              Day of Week       Command
# (0-59)  (0-23)  (1-31)        (1-12 or Jan-Dec)  (0-6 or Sun-Sat)
0,15,30,45 * * * * /home/siteuser/scan/scan.sh

And lastly, a sample of the output

--- /home/siteuser/scan/old_status 2010-10-25 14:30:03.000000000 -0700
+++ /home/siteuser/scan/current_status 2010-10-25 14:45:06.000000000 -0700
@@ -4 +4 @@
-baeb2692403619398b44a510e8ca0d49db717d1ff7e08bf1e210c260e04630606e9be2a3aa80f7db3d451e754e189d4578ec7b87db65e6729697c735713ee5ed /home/siteuser/site_root/LIBRARIES/library.php
+c4d739b3e0a778009e0d53315085d75cf8380ac431667c31b23e4b24d4db273dfc98ffad6842a1e5f59d6ea84c33ecc73bed1437e6105475fefd3f3a966de118 /home/siteuser/site_root/LIBRARIES/library.php
@@ -71 +71 @@
-88ddd746d70073183c291fa7da747d7318caa697ace37911db55afce707cd1634f213f340bb4870f1194c48292f846adaf006ad61b4ff1cb245972c26962b42d /home/siteuser/site_root/api.php
+d79e8a6e6c3db39e07c22e7b7485050007fd265ad7e9bdda728866f65638a8aa534f8cb51121a68e9287f384e8694a968b48d840d37bcd805c117ff871e7c618 /home/siteuser/site_root/api.php

While this isn’t the most technically sound way of checking for file changes (I’m sure), at least it tells me, to within 15 minutes or so, which files have been changed, and gives me a point in time to start hunting from.

Weirdness with Bash functions and Curl

I’m writing a script (for $NEW_PROJECT) which, due to my inability to figure out how to compile a certain key library on Dreamhost, uses SSH to a remote box (with public/private keys and a limitation on what that key can *actually* achieve) to perform some off-box processing of the data.

After it’s all done, I am using curl to call the API of the project like this:

curl --fail -F "file=@`pwd`/file" -F "other=form" -F "options=are_set" http://user:password@server/api/function

Because I’m making a few calls against the API, I wrote a function like this:

function callAPI() {
    API=$1
    if [ "$2" != "" ]
    then
        API=$API/$2
    fi
    if [ "$3" != "" ]
    then
        API=$API/$3
    fi
    if [ "${OPTION}" != "" ]
    then
        FORM="${OPTION}"
    else
        FORM=""
    fi
    if [ "$DEBUG" == "1" ]
    then
        echo "curl --fail ${FORM} http://${USER}:**********@${SITE}/api/${API}"
    fi
    eval `curl --fail ${FORM} http://${USER}:${PASS}@${SITE}/api/${API} 2>/dev/null`
}

and then call it like this:

OPTION="-F \"file=@filename\" -F \"value=one\" -F \"value=two\""
callAPI function

All the rest of my API calls (those which ask for data, rather than supply it) work *fine*, but as soon as I tell it to post a form containing a file, it throws this error:

curl: (26) failed creating form post data

I did some digging around, and found that this error means curl can’t read the file it’s been given. The debug line, when run outside of the script, processed the command perfectly, so what’s going on?

To be honest, in the end I just copied the command into the body of the script, and I’m praying that I can figure out how to compile this library on Dreamhost before I need to work out why running that curl line doesn’t work from inside a function.
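
For what it’s worth, my best guess (untested here, so treat it as a sketch rather than the actual fix) is that the escaped quotes in $OPTION survive word-splitting as literal characters, so curl ends up looking for a file whose name starts with a double quote. Building the options as a bash array avoids the re-quoting problem entirely; this is a simplified version of callAPI with the eval and debug bits left out:

function callAPI() {
    API=$1
    if [ "$2" != "" ]
    then
        API=$API/$2
    fi
    # Each -F flag and its value is a separate array element,
    # so no escaping is needed and word boundaries are preserved
    curl --fail "${OPTION[@]}" "http://${USER}:${PASS}@${SITE}/api/${API}" 2>/dev/null
}

OPTION=(-F "file=@filename" -F "value=one" -F "value=two")
callAPI function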

Like the idea of GMail’s Priority Inbox, but you’ve already got “Multiple Inboxes” and you don’t want to lose them?

That’s the position I’m in. Because I use my Android phone for e-mail a lot, and I don’t want my phone to beep every 5 minutes, I set up a huge bundle of filters to shunt my e-mail into various labels: for the social groups I belong to, for my SVN commits and ticket tracking, and to prioritize emails from friends and family.

OK, so technically, GMail’s Priority Inbox should automagically do some of this for me, but, well, I wanted more!

So, I thought I’d write up some short notes on how to use Priority Inbox in a way that might actually be useful.

First, turn on Priority Inbox. It’s a simple radio button, found under “Settings” -> “Priority Inbox” -> “Show Priority Inbox”. This will probably make you reload your GMail session.

Next, go back to the “Priority Inbox” settings page, and set your “Default Inbox” to “Inbox”. I like as much information as possible in my GMail screen, so I’ve got the indicators turned on and I’m overriding the filters (I don’t know if this is useful or not, but, why not, eh?)

Save your changes. Again, I’m guessing this will reload your GMail session.

Go into “Settings” -> “Multiple Inboxes” (feel free to turn it on under Labs first, if it’s not already there).

Before Priority Inbox, I had two “new” inboxes – “All Unread” and “Muted” (so that I could mark-all-as-read those mails I’d already muted but that kept on being noisy!). These two inboxes sat underneath my main inbox, but as Priority Inbox is supposed to go above all that lot, it wouldn’t be much use sitting after the main Inbox. So, I’ve changed my Multiple Inboxes as follows:

  1. (in:important OR is:starred) AND is:unread [Called “Priority Inbox”]
  2. in:inbox AND -in:important AND is:unread [Called “Inbox Unread”]
  3. -in:inbox AND -in:important AND -is:muted AND is:unread [Called “Unread Other”]
  4. is:muted is:unread [Called “Muted”]

These are all configured to show 20 messages, and to sit above the Inbox. I’ll admit there is some waste in having the Inbox at the bottom of the screen and again part way up, but at least now my messages are sorted (nearly) the way Google intended them to be ;)

Oh, and one nice feature of doing it this way: if an “Important” message isn’t quite important enough to disturb you on your phone (and is therefore archived before being filed into your e-mail folders), it’ll still show up in that top section… it just won’t be disturbing your sleep until you check your mailbox when you get up.

Need to quickly integrate some IRC into your app? Running Linux? Try ii

I know, it looks like a typo, but the tiny program ii makes IRC all better for small applications which don’t need their own re-implementation of an IRC client.

I know it’s available under Ubuntu and Debian (apt-get install ii), but I don’t know what other platforms it’s available for.

It’s not much use as a user-focused IRC client (although it would vaguely work like that with a little scripting!), but for scripts it works like a charm.
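
If you’ve not met it before, ii simply creates a directory tree of FIFOs and log files: you write raw IRC commands into an “in” file and read the replies from the matching “out” file. A rough sketch (the server name, nick and channel are just examples, and you’ll want to give it a moment to connect between steps):

# Connect to a server; this creates ~/irc/irc.example.net/in and .../out
ii -s irc.example.net -n mybot &

# Join a channel, then say something, by writing to the FIFOs
echo "/j #mychannel" > ~/irc/irc.example.net/in
echo "hello from a script" > ~/irc/irc.example.net/'#mychannel'/in

# Watch what is being said
tail -f ~/irc/irc.example.net/'#mychannel'/out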


Some notes on OpenSSH

At the hackspace recently, I was asked for a brief rundown of what SSH can do, and how to do it.

Just as an aside: for one-off connections to hosts, you probably don’t need to use a public/private key pair, but for regular access it’s probably best to have a key pair, if not per host, then per group of hosts (for example: home servers, work servers, friends’ machines, web servers, code repositories). We’ll see how to keep these straight later in this entry. For some reason, you may even want to have multiple keys for one host!

If you want to create a public/private key pair, you run a very simple command. There are some tweaks you can make, but here’s the basic command:

ssh-keygen

Generating public/private rsa key pair.
Enter the file in which to save the key (/home/bloggsf/.ssh/id_rsa): /home/bloggsf/.ssh/hostname
Enter passphrase (empty for no passphrase): A Very Complex Passphrase
Enter same passphrase again: A Very Complex Passphrase
Your identification has been saved in /home/bloggsf/.ssh/hostname.
Your public key has been saved in /home/bloggsf/.ssh/hostname.pub.
The key fingerprint is:
00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff bloggsf@ur-main-machine

See, that wasn’t too hard, was it? Transfer the PUBLIC portion (the .pub file) to your destination box as securely as possible, whether that’s by SFTP, putting it on a pen drive and posting it to your remote server, or something else… those .pub files should be appended to the end of /home/USERNAME/.ssh/authorized_keys

You achieve that by typing:

cat /path/to/file.pub >> /home/username/.ssh/authorized_keys

Note that, if you don’t spell it the American way (authoriZed), it’ll completely fail to work, and you’ll stress out!
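
If your machine ships the ssh-copy-id helper (most Linux distributions include it alongside OpenSSH), it will do the appending for you over an ordinary password login, for example:

ssh-copy-id -i /home/bloggsf/.ssh/hostname.pub user@host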

So, now that key is on your remote host, how do we do stuff with it?

1) SSH to a console (this won’t try to use the public/private key pair, unless you left the default filename when you made your key)

ssh user@host

2) SSH to a host on an unusual port

ssh user@host -p 12345

3) SSH using a private key (see towards the end of the document about public and private keys)

ssh user@host -i /path/to/private_key

4) SSH on a new port and with a private key

ssh user@host -p 54321 -i /home/user/.ssh/private_key

5) Pulling a port (e.g. VNC service) back to your local machine

ssh user@host -L 5900:127.0.0.1:5900

The format of the portion starting -L is local-port:destination-host:destination-port.

Note, I would then connect to localhost on port 5900. If you are already running a VNC service on port 5900, you would make the first port number something not already in use – I’ll show an example of this next.
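
For example, once the tunnel is up, you’d point a VNC viewer at your own machine rather than the remote one (the exact syntax varies a little between clients):

vncviewer localhost:0    # display 0, i.e. port 5900, with most viewers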

6) Pulling multiple ports from different remote hosts to your local machine.
This one I do for my aunt! It forwards the VNC service to a port I’m not using at home, and also gives me access to her router from her laptop.

ssh user@host -L 1443:192.168.1.1:443 -L 5901:localhost:5900

Here I’ve used two formats for selecting which host the ports are forwarded to. I’ve asked the SSH server to transfer connections I make to my local port 1443 to the host 192.168.1.1 on port 443. I’ve also asked it to transfer connections I make on port 5901 to whatever the remote machine resolves the name “localhost” as (probably 127.0.0.1, the loopback address of the machine I’ve SSHed to) and to its port 5900.

7) Reverse Port Forwarding… offering services from the client end to the server end.

ssh user@host -R 1080:localhost:80

I’ve described here the most common reason you’d do a reverse port forward: you’re not permitted to run SFTP (in case you transfer files out of the system), but you need to transfer a file to the target host. In that case, you’d run a web server on your local machine (port 80) and access it from the destination host over port 1080.
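
As a concrete (if rough) example, you could serve the file from your local machine with Python’s built-in web server and pull it down from the far end through the tunnel. The port numbers and filename here are only for illustration:

# On your local machine: serve the current directory over HTTP on port 8080
python -m SimpleHTTPServer 8080 &
ssh user@host -R 1080:localhost:8080

# Then, in the SSH session on the remote host
wget http://localhost:1080/the-file-you-need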

8) Running a command instead of a shell on the remote host

ssh user@host run-my-very-complex-script --with-options

9) If you only want your user to be able to use a specific command when they SSH to your host, edit their authorized_keys file, and add at the beginning:

command="/the/only/command/that/key/can/run $SSH_ORIGINAL_COMMAND" ssh-rsa ……

This command will be run instead of anything they try to run, with the command they tried to run passed to it as arguments.
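
A sketch of what that forced command might look like, with the path and the permitted commands made up for the example. It inspects the arguments (which come from $SSH_ORIGINAL_COMMAND) and only runs things it recognises:

#! /bin/bash
# /the/only/command/that/key/can/run - hypothetical wrapper for a restricted key
case "$1" in
  backup) exec /usr/local/bin/run-backup ;;
  status) exec uptime ;;
  *)      echo "Command not permitted" >&2; exit 1 ;;
esac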

10) Create a config file so you can connect to lots of different machines without needing to remember all this lot!

The file I’m talking about is called config and is stored in /home/bloggsf/.ssh/config

If it’s not already there, create it and then start putting lines into it. Here’s what mine looks like (hosts and files changed to protect the innocent!)

Host home external.home.server.name
    Hostname external.home.server.name
    User jon
    Port 12345
    LocalForward 1080 localhost:1080
    LocalForward 9080 router:80
    LocalForward 9443 router:443

Host github github.com
    Hostname github.com
    User git
    IdentityFile /home/jon/.ssh/github_key

Host main.projectsite.com
    User auser
    RemoteForward 1080 localhost:80

Host *.projectsite.com
    User projectowner
    IdentityFile /home/jon/.ssh/supersecretproject

Host *
    IdentityFile /home/jon/.ssh/default_ssh_key
    Compression yes

The config file parser steps through it from top to bottom and, for each option, uses the first value it finds (with the exception of LocalForward and RemoteForward, which accumulate). So if I SSH to a box and no more specific key has been set, it’ll use default_ssh_key; likewise, it’ll always try to use compression when connecting to the remote server.
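
With that file in place, the long command lines from earlier collapse down to the host aliases. For example, this connects to my home server as jon, on its odd port, with all three port forwards set up (plus the default key and compression from the “Host *” block):

ssh home
# equivalent to:
# ssh jon@external.home.server.name -p 12345 -L 1080:localhost:1080 -L 9080:router:80 -L 9443:router:443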

A warning about the evils of Facebook

Facebook is one of the current breed of “social networking” websites, which means it lets you exchange information, pictures and videos with other people… sounds good so far, right?

Here’s where the problem is. Facebook is a company which is trying to make money. Your profile (the collection of all your information) on their website belongs to them. They can market that information to anyone and do whatever they want with it. If you put any pictures on there, then they own those photos too. On top of that, every “application” (a service that isn’t written by Facebook) knows everything about you and the people you are friends with, which means that if you’ve decided not to install an application that collects e-mail addresses, but your friend does, then that application knows your e-mail address anyway. Wonderful!

Facebook have a real problem with their “privacy policy” and the pages which let you control what you share with the rest of the world. Every few months they write a new version of both, giving themselves even more chances to sell off your information and to use your photos and videos in new and interesting ways. It got so bad that about a year ago their CEO (Chief Executive Officer, the person who makes all the day-to-day decisions about where the company goes next) had all his details shared publicly, because he forgot that the new privacy settings page came into force that day and he hadn’t set his details to the most private settings. This happens all the time, to the extent that another website, http://youropenbook.org, was created to show what people are making publicly available!

A few months back, Facebook changed their privacy policy again to let you log into other websites using your Facebook details. That sounds like a great idea, but it means the other website then (again) knows your e-mail address, all your friends, your birthday and (if you entered it) your phone number… not good!

Realistically, it is possible to use Facebook in a vaguely safe way if you take a lot of precautions about what you share and do on their website, but I really wouldn’t recommend using it; in fact, I’d recommend that whoever suggested you use it be forwarded a link to this page, warning them not to use it! Sadly, there’s nothing else available right now that does the same thing in a way that still maintains your privacy. I’m watching a few projects, and once something safe and easy to use comes out, I’ll let you know!

(Just as a disclaimer, I do use Facebook, but I don’t like it and I want to move away from it, PRONTO!)

Using the recursive_import.php script for importing photos to the Horde module Ansel with subdirectories

I have a problem with the excellent Horde module “Ansel” (their photo display and manipulation application) which I’m documenting-until-I-fix-it.

If you have a lot of photos and you want to import the lot in one go, there’s a script called recursive_import.php. You’ll find it under /path/to/your/horde/install/ansel/scripts/recursive_import.php and it takes the following arguments: -d /path/to/directory -u USERNAME -p PASSWORD

I’d been using it thinking it would handle directory navigation a bit better than it did, by running it as follows:

php recursive_import.php -d import_dir -u fred -p bloggs

In fact, I needed to do it like this:

php recursive_import.php -d `pwd`/import_dir -u fred -p bloggs

This is because the script navigates up and down the directory structure as it works out the contents of each directory, instead of handling the relative paths properly. I plan to look at this properly tomorrow when I’ve got a day off, but if I don’t, or if the patch doesn’t get accepted, at least you know how to work around it now! :)


Use GMail’s SMTP gateway from the command line on Ubuntu without lots of config

I’m writing a few little scripts at the moment, and one of them needed to be able to send an e-mail. I’d not got around to sorting out what my ISP’s SMTP gateway was, but I do tend to use GMail’s SMTP gateway for non-essential stuff.

I thought I could easily set up sendmail, but no, that’s SCARY stuff, and then I thought of Postfix, but that needs an awful lot of configuration for a TLS-based SMTP connection, so I did a bit of digging.

Thanks to this post over at the Ubuntu Forums, I worked out how to get a local relay listening on port 10025, but PHP kept complaining, so I next looked for a “sendmail replacement”. In comes nullmailer.

So, thankfully this is all rather easy.

  • sudo apt-get install openssl xinetd nullmailer
  • sudo tee /usr/bin/gmail-smtp <<EOF >/dev/null
    #!/bin/sh
    # Thanks to http://ubuntuforums.org/showthread.php?t=918335 for this install guide
    /usr/bin/openssl s_client -connect smtp.gmail.com:465 -quiet 2>/dev/null
    EOF
    sudo chmod +x /usr/bin/gmail-smtp
  • sudo tee /etc/xinetd.d/gmail-smtp <<EOF >/dev/null
    # default: on
    # description: Gmail SMTP wrapper for clients without SSL support
    # Thanks to http://ubuntuforums.org/showthread.php?t=918335 for this install guide
    service gmail-smtp
    {
        disable         = no
        bind            = localhost
        port            = 10025
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = root
        server          = /usr/bin/gmail-smtp
        type            = unlisted
    }
    EOF
    sudo /etc/init.d/xinetd reload
  • sudo tee /etc/nullmailer/remotes <<EOF >/dev/null
    127.0.0.1 smtp --port=10025 --user=your@user.tld --pass=Y0urC0mp3xGM@ilP@ssw0rd
    EOF
    sudo /etc/init.d/nullmailer reload

Setting all this lot up was pretty easy with these guides. There’s no reason why it wouldn’t work on any other version of Linux (provided you can install all these packages).
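
To check it’s all working, I’d send myself a quick test message through nullmailer’s sendmail replacement, something along these lines (the address is a placeholder, and the path may differ on your distribution):

printf "Subject: nullmailer test\n\nHello from my new relay\n" | /usr/sbin/sendmail you@example.com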

Good luck with your project!


Supporting multiple machines in GNOME using VNC

I was recently asked how to configure VNC for user support across a series of machines running GNOME. I’m in the process of trying out a few different platforms at the moment and didn’t have my GNOME machine to hand and working, so I decided to work it out from what I’ve done in the past. Here’s the bulk of the e-mail I sent him to try and help him out. Maybe it will help you at some point.

If you find any errors (especially around the option names in the actual dialogue boxes) please post a note so I can correct this!

Thanks!

On most GNOME-based systems (which includes Fedora), you can activate “Remote Desktop Sharing” for users.

Go to System -> Preferences -> Remote Desktop Sharing (or something similar). I’m afraid I’ve just recently moved my systems to KDE, so I don’t know the exact options, but I believe it’ll say something like “Enable remote connections” (tick that), and “User is prompted to permit connection” (this will be down to policy) and “Remote user needs to enter a password” (this will need some text to be entered).

Once you have these settings on one system, you can automatically apply them to all the other computers.

From the command line, type
  gconftool-2 -R /desktop/gnome/remote_access

This will return all the settings you have made. Here’s mine:

 view_only = false
 alternative_port = 5900
 prompt_enabled = false
 icon_visibility = client
 lock_screen_on_disconnect = false
 disable_xdamage = false
 mailto =
 use_alternative_port = false
 enabled = true
 disable_background = false
 network_interface =
 require_encryption = false
 authentication_methods = [vnc]
 vnc_password = &&&&&&&&&&&&
 use_upnp = false

(I’ve removed the password for my box)

You can use this gconftool to set the same variables on your computers you’ve already deployed, either per-user, as a default policy for each machine, or as a mandatory policy for each machine.
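
For example (hedging slightly on the exact key names, which can differ between GNOME versions), setting a value for the current user, and setting it as a mandatory policy for a whole machine, look something like this:

# Per-user (run as that user)
gconftool-2 --type bool --set /desktop/gnome/remote_access/enabled true

# Mandatory policy for the whole machine (run as root)
gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.mandatory \
    --type bool --set /desktop/gnome/remote_access/enabled true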

This article from Sun’s GNOME configuration guide explains how to set variables: http://docs.sun.com/app/docs/doc/806-6878/6jfpqt2t5?a=view while this is an overview of the whole GNOME configuration tool (including that article): http://docs.sun.com/app/docs/doc/806-6878/6jfpqt2sv?a=view and lastly, this is how “Vino”, the VNC server for GNOME, works: http://www.gnome.org/~markmc/remote-desktop.html

I hope this helps you!
